Unconstrained Performance Capture (Optical Performance Capture)

"One of the first methods to reconstruct modifiable animation models of people in general apparel through performance capture from video"

News coverage: http://www.newscientist.com/article/dn19617. Carsten Stoll, Jürgen Gall, Edilson de Aguiar, Sebastian Thrun and Christian Theobalt, "Video-based Reconstruction of Animatable Human Characters", ACM Transactions on Graphics (Proc. SIGGRAPH ASIA 2010), 29(6), pp. 139-149, 2010, Seoul, Korea. Project page: http://www.mpi-inf.mpg.de/resources/perfcap/index_vrhc.html

"One of the first approaches for marker-less capture of skeleton motion and detailed surface geometry of multiple interacting people from video"

Motion capture algorithms reconstruct a mathematical description of the motion of a human, an animal, or a moving mechanical device from sensor measurements. Motion capture has many applications across different areas, including medicine and biomechanics, computer animation, computer games and special effects, surveillance, 3D video, and human-computer interaction, to name just a few.

Unfortunately, most existing motion capture approaches have a strongly restricted application range, for instance because they intrusively interfere with the scene. As an example, most optical systems require the subjects to wear special suits with fiducial markers on them. In practice, this is very constraining, and such approaches can thus only capture skeletal motion under controlled lab conditions. Marker-less motion capture approaches overcome some of these limitations and enable skeleton reconstruction without fiducials. However, they still cannot capture detailed dynamic scene geometry or people in general apparel.

In our research, we investigate the algorithmic foundations of the next generation of motion capture, which we call performance capture. Performance capture methods reconstruct much more complete and detailed dynamic scene models that contain not only a coarse motion description but also detailed dynamic scene geometry, detailed motion models, detailed surface appearance, and potentially more advanced scene descriptions, such as physical properties. Our aim is to reconstruct performances of arbitrary moving devices, as well as humans in general apparel, from a handful of unmodified video streams.
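To make the distinction between coarse motion capture output and the richer performance-capture models described above concrete, the sketch below shows a minimal, hypothetical data model for one captured frame: a skeletal pose (the classic motion-capture result) augmented with dense surface geometry and per-vertex appearance. This is purely illustrative and not the authors' actual representation; all names and conventions here are assumptions.

```python
# Illustrative sketch (not the authors' implementation): a minimal data
# model for per-frame performance-capture output, combining a coarse
# skeletal pose with dense dynamic geometry and surface appearance.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class JointPose:
    name: str       # e.g. "left_elbow" (hypothetical naming)
    rotation: Vec3  # axis-angle rotation, an assumed convention

@dataclass
class PerformanceFrame:
    time: float                  # timestamp in seconds
    skeleton: List[JointPose]    # coarse motion description
    vertices: List[Vec3]         # detailed dynamic scene geometry
    vertex_colors: List[Vec3]    # detailed surface appearance (RGB)

    def num_vertices(self) -> int:
        return len(self.vertices)

# Example: a toy frame with one joint and two surface vertices.
frame = PerformanceFrame(
    time=0.0,
    skeleton=[JointPose("root", (0.0, 0.0, 0.0))],
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    vertex_colors=[(1.0, 1.0, 1.0), (0.5, 0.5, 0.5)],
)
print(frame.num_vertices())  # 2
```

A real pipeline would of course store this far more compactly (e.g. mesh connectivity plus per-frame deformation parameters), but the split into skeleton, geometry, and appearance mirrors the model components listed above.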

The goals of this project are two-fold:

  • Enlarging the application range of high-quality marker-less performance capture methods such that they can be applied under more unconstrained conditions and such that the level of detail in the reconstructions is greatly enhanced.
  • Enhancing the runtime performance of marker-less reconstruction methods such that performances can be captured in real time from simpler sensor setups.


Project team

Principal Investigator
Prof. Dr. Christian Theobalt

Nadia Robertini
Ahmed Elhayek
Pablo Garrido
Levi Valgaerts