Mobile Dynamic Scene Reconstruction

The goal of this project is to investigate scalable algorithms for reconstructing general dynamic scenes from only a few handheld mobile cameras. This is relevant for a variety of exciting applications, such as human-computer interaction, 3D video reconstruction, augmented reality, video editing, and telepresence. A particular focus lies on developing novel inter-device, intra-device, and space-time correspondence methods, and on finding approaches that succeed for casual community videos, i.e., multiple unsynchronized handheld videos recorded with varying camera types. Ultimately, we will research how to combine these correspondence-finding approaches with stronger model-based priors in order to enable the reconstruction of general deformable scenes with lightweight mobile sensors. In this context, we will also further investigate concepts of inverse rendering, i.e., methods that estimate illumination and appearance models from general videos, which promise to be important tools for making correspondence finding and general motion reconstruction from community videos of general scenes much more robust.

The main goal is to develop approaches that work with ordinary RGB cameras, but we will make use of additional hardware, such as depth cameras, where appropriate. All in all, this will entail entirely new algorithmic challenges, but also conceptual challenges in making the best use of the distributed and possibly unbalanced processing power offered jointly by mobile and cloud resources.

Project Team

Principal Investigators
Prof. Dr. Christian Theobalt