SIGGRAPH 2015

This year, the Intel Visual Computing Institute (Intel VCI) is again presenting many of its results at SIGGRAPH 2015, the premier computer graphics conference.

Find a complete list of our contributions below.

 

Courses

User-Centric Computational Videography
Thursday, 13 August 2:00 pm - 5:15 pm  |   Los Angeles Convention Center, Room 502AB
INTERMEDIATE

Christian Richardt (Universität des Saarlandes, Max-Planck-Institut für Informatik, Intel Visual Computing Institute), James Tompkin (Harvard University), Jiamin Bai (Light, Co.), Christian Theobalt (Max-Planck-Institut für Informatik)

Digital video is ubiquitous: virtually every mobile device comes with at least one high-resolution video camera, and users often upload video to community web sites. One hundred hours of video are uploaded to YouTube alone every minute. Yet many commercial solutions for capturing, editing, and browsing videos are difficult to use, and so they constrain user creativity.

Video capture requires framing techniques and shot planning to convey the intended message effectively and to be comfortable to watch. Handheld capture with mobile devices in particular often results in shaky and wobbly footage. In addition, traditional video-editing tools are not evolving to support the proliferation of video content, and some provide little more than an image-editing interface with a timeline. Unfortunately, this rather trivial addition of a temporal axis to images does not enable users to perform complex editing tasks that change the content of videos, such as adding or removing objects. Existing community video collections, in turn, typically treat videos like photos: they require users to navigate static thumbnails instead of visualizing spatial or temporal overlaps between videos.

This course aims to improve the quality and flexibility of capturing, editing, and exploring consumer videos. Topics include recent techniques in computer vision and graphics and their evolution. By finding and exploiting inter- and intra-video content connections, these techniques make video editing easier for amateur users (for example, by enabling dynamic object removal in videos) and provide new, empowering video experiences such as content-based video browsing. The course summarizes current trends in the software industry and in research, and proposes directions for future research.

 

Posters

The XML3D Architecture
Poster Session: 13 August, 12:40 pm

XML3D research aims to make 3D graphics accessible to a broader audience of web developers. Despite its high level of abstraction, it offers mechanisms such as data-flow processing and programmable shading that exploit low-level GPU capabilities.

Kristian Sons (Deutsches Forschungszentrum für Künstliche Intelligenz, Universität des Saarlandes), Felix Klein (Universität des Saarlandes), Jan Sutter (Universität des Saarlandes, Deutsches Forschungszentrum für Künstliche Intelligenz), Philipp Slusallek (Universität des Saarlandes, Deutsches Forschungszentrum für Künstliche Intelligenz, Intel Visual Computing Institute)

 

Technical Papers

Rendering Complex Appearance
Monday, 10 August 3:45 pm - 5:35 pm   |   Los Angeles Convention Center, Room 152

The SGGX Microflake Distribution

The SGGX microflake distribution represents spatially varying properties of anisotropic microflake participating media. It allows for robust linear interpolation and prefiltering, and provides closed-form expressions for all operations used in the microflake framework.

Eric Heitz (Karlsruhe Institute of Technology, NVIDIA Research), Jonathan Dupuy (Université de Montréal, Université de Lyon 1), Cyril Crassin (NVIDIA Research), Carsten Dachsbacher (Karlsruhe Institute of Technology)
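
For readers curious about the closed-form expressions mentioned above, here is a brief sketch of the central quantities (our own summary in LaTeX notation, not text taken from the paper): the SGGX distribution is parameterized by a symmetric positive-definite 3x3 matrix S. The projected microflake area along a direction \omega is

    \sigma(\omega) = \sqrt{\omega^{T} S \,\omega},

and the distribution of microflake normals has the closed form

    D(\omega_m) = \frac{1}{\pi \sqrt{|S|}\,\bigl(\omega_m^{T} S^{-1} \omega_m\bigr)^{2}}.

Because the representation consists of the six unique entries of S, linear interpolation and prefiltering amount to averaging those entries, which is the intuition behind the robust interpolation and prefiltering mentioned in the abstract.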

Multi-Scale Modeling and Rendering of Granular Materials

A multi-scale framework for modeling and rendering granular media that enables efficient rendering of millions of complex grains with specular surfaces.

Johannes Meng (Karlsruher Institut für Technologie, Disney Research Zürich), Marios Papas (Disney Research Zürich, ETH Zürich), Ralf Habel (Disney Research Zürich), Carsten Dachsbacher (Karlsruher Institut für Technologie), Steve Marschner (Cornell University), Markus Gross (Disney Research Zürich, ETH Zürich), Wojciech Jarosz (Disney Research Zürich, Dartmouth College)


 

Geometry Zoo
Wednesday, 12 August 9:00 am - 10:30 am   |   Los Angeles Convention Center, Room 153A-C

Shading-Based Refinement on Volumetric Signed-Distance Functions

A novel method to obtain fine-scale detail in 3D reconstructions generated with low-budget RGB-D cameras or other commodity scanning devices. This approach leverages RGB data to refine these reconstructions through shading cues, as color input is typically of much higher resolution than the depth data.

Michael Zollhöfer (Friedrich-Alexander-Universität Erlangen-Nürnberg), Angela Dai (Stanford University), Matthias Innmann (Friedrich-Alexander-Universität Erlangen-Nürnberg), Chenglei Wu (ETH Zürich), Marc Stamminger (Friedrich-Alexander-Universität Erlangen-Nürnberg), Christian Theobalt (Max-Planck-Institut für Informatik), Matthias Nießner (Stanford University)


 

Perception & Color
Thursday, 13 August 10:45 am - 12:15 pm   |   Los Angeles Convention Center, Room 152

Data-Driven Color Manifolds

A technique to extract low-dimensional color manifolds with varying density from labeled internet examples. The proposed manifolds contain colors of a specific context (skin, for example) and can be used to improve color-picking performance, color stylization, compression, or white balancing.

Chuong Nguyen (Max-Planck-Institut für Informatik), Tobias Ritschel (Max-Planck-Institut für Informatik, Universität des Saarlandes), Hans-Peter Seidel (Max-Planck-Institut für Informatik)
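
To make the general idea more concrete, here is a deliberately simplified, hypothetical sketch, not the paper's actual method (which, as the abstract notes, also models the varying density of the manifold): collect labeled example colors for one context, such as skin pixels, fit a low-dimensional subspace to them with plain PCA, and map colors back and forth between RGB and the learned coordinates, e.g. for a 2D color picker.

    # Illustrative sketch only: fit a low-dimensional color "subspace" (plain PCA)
    # to labeled example colors. NOT the paper's method; it merely illustrates
    # learning a color representation from labeled examples.
    import numpy as np

    def fit_color_subspace(colors_rgb, n_dims=2):
        """colors_rgb: (N, 3) array of RGB colors from labeled examples (e.g. skin pixels)."""
        mean = colors_rgb.mean(axis=0)
        centered = colors_rgb - mean
        # PCA via SVD: rows of vt are the principal directions in RGB space.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_dims]                      # (n_dims, 3)
        return mean, basis

    def to_manifold(colors_rgb, mean, basis):
        """Project RGB colors onto the low-dimensional coordinates."""
        return (colors_rgb - mean) @ basis.T

    def from_manifold(coords, mean, basis):
        """Map low-dimensional coordinates back to RGB (e.g. for a 2D color picker)."""
        return coords @ basis + mean

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Stand-in for labeled skin-tone pixels gathered from example images.
        skin = np.clip(rng.normal([0.8, 0.6, 0.5], 0.05, size=(1000, 3)), 0.0, 1.0)
        mean, basis = fit_color_subspace(skin, n_dims=2)
        coords = to_manifold(skin, mean, basis)
        recon = from_manifold(coords, mean, basis)
        print("mean reconstruction error:", np.abs(recon - skin).mean())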

 

Demos

Live Demo at Intel Booth #701 | Tuesday, 11 August
A live demo of lighting estimation and editing tools developed in the Dreamspace Project will be presented at the exhibition at Intel booth #701.

 

Intel Exhibitor Sessions

Wednesday, 12 August | 3:15 pm - 4:15 pm | Room 406B

Let There Be Light: On-Set Light Capture and Rendering for Virtual Production in the Dreamspace Project

 

Thursday, 13 August | 4:30 pm - 5:30 pm | Room 406B

Let There Be Light: On-Set Light Capture and Rendering for Virtual Production in the Dreamspace Project