The SCENE project aimed to develop a paradigm of novel scene representations for richer networked media and was carried out by a consortium of nine leading European research institutes and companies. The three-year project was funded by the European Commission within the Seventh Framework Programme for research and technological development (FP7) to integrate cutting-edge research across all stages of the video production process. SCENE ended in October 2014, having enhanced data acquisition, visualization, interaction, and numerous forms of data post-processing.

The project set out to overcome the bottleneck imposed by conventional camera systems and by the physical world itself. Computational photography alters image content by computational means to create visually appealing and artistically interesting results; applying its ideas successfully requires high-quality data and information about the scene content. The same holds for computational videography, which transfers the ideas of computational photography to motion pictures. In the early days, using limited depth of field, reduced lighting, or motion blur for artistic purposes meant deciding which information to keep: the spatial or the temporal. With novel acquisition methods and visualization algorithms, such camera effects, which would otherwise degrade the captured data, can instead be synthesized and deferred to post-production, which in turn enables other post-processing methods and new approaches.

However, constructing a 3D world synthetically requires information beyond the traditionally captured color. The Motion SCENE Camera developed in the SCENE project is built with a time-of-flight sensor and records color and depth simultaneously, capturing complete spatial and temporal data.
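The project's own processing pipeline is not reproduced here, but the basic step from a joint color-and-depth stream towards 3D content can be sketched with the standard pinhole back-projection model. The function names and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) below are illustrative assumptions, not part of the SCENE camera's actual interface:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into camera-space 3D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

def fuse_rgbd(color, depth, fx, fy, cx, cy):
    """Pair each pixel's color with its 3D position -> (N, 6) array."""
    pts = depth_to_points(depth, fx, fy, cx, cy).reshape(-1, 3)
    cols = color.reshape(-1, 3)
    valid = pts[:, 2] > 0  # discard pixels without a depth reading
    return np.hstack([pts[valid], cols[valid]])
```

A pixel at the principal point `(cx, cy)` maps to a point on the optical axis, i.e. `X = Y = 0` at its measured depth; everything else fans out proportionally to its image-plane offset.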
Novel segmentation algorithms and surface-tracking techniques help create a spatially and temporally consistent full 3D model from this information. The Scene Representation Architecture (SRA) was developed to go beyond what either sample-based (video) or model-based (CGI) methods can deliver on their own in terms of rich media experiences. The SRA is a layer-based architecture oriented towards movie production, intended to merge real and generated content at the lowest possible level to facilitate post-processing and enhance the consumer experience.
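The SRA's internal design is not specified in this summary; as a rough illustration of the layer-based idea, the sketch below holds sampled (captured) and model (CGI) content in a common layer list and composites them back-to-front. All class and field names are hypothetical, not the SRA's actual data model:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Layer:
    name: str
    kind: str                        # "sampled" (captured video) or "model" (CGI)
    depth: float                     # nominal distance from camera, for ordering
    render: Callable[[float], str]   # time -> rendered content (stub)

@dataclass
class Scene:
    layers: List[Layer] = field(default_factory=list)

    def add(self, layer: Layer) -> None:
        self.layers.append(layer)

    def composite(self, t: float) -> List[str]:
        # Back-to-front: farthest layers first, nearer layers drawn over them,
        # so real and generated layers merge at the same level.
        ordered = sorted(self.layers, key=lambda l: -l.depth)
        return [l.render(t) for l in ordered]
```

Because captured and generated layers share one representation, a post-production step can reorder, replace, or re-light a single layer without touching the rest.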

The SCENE renderer is a fundamental tool for visualizing content from the scene representation at high quality. It is designed to allow easy integration of future rendering modules that employ developments in computational videography to render the scene content realistically.
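One common way to make a renderer open to future modules is a plug-in registry, where each rendering technique is registered by name behind a uniform call interface. This is only a minimal sketch of that pattern, not the SCENE renderer's actual API:

```python
from typing import Callable, Dict

class Renderer:
    """Minimal plug-in registry: rendering modules are registered under a
    name and invoked uniformly, so new techniques can be added without
    changing the core renderer."""

    def __init__(self) -> None:
        self._modules: Dict[str, Callable[[dict], str]] = {}

    def register(self, name: str, module: Callable[[dict], str]) -> None:
        self._modules[name] = module

    def render(self, name: str, scene: dict) -> str:
        if name not in self._modules:
            raise KeyError(f"no rendering module named {name!r}")
        return self._modules[name](scene)
```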

Project team

Principal Investigator
Prof. Dr.-Ing. Thorsten Herfet