Hybrid Rendering

“Ubiquitous multi-terabyte distributed visualization”

  • “Efficient I/O for Parallel Visualization”, Thomas Fogal and Jens Krüger, ACM Eurographics Symposium on Parallel Graphics and Visualization 2011
  • “ZAPP - A management framework for distributed visualization systems”, Georg Tamm, Alexander Schiewe and Jens Krüger, CGVCVIP 2011

In this project we are developing a hybrid rendering and visualization system that combines the strengths of different rendering algorithms, hardware models, and display technologies while avoiding their weaknesses. “Hybrid” in this context has three meanings. First, our system implements multiple rendering methods, such as parallel ray casting and GPU-based rasterization, dynamically decides which approach to use for each part of the scene, and composites these parts into the final image. Second, the system uses both client and server hardware and software, adapting even to rapidly changing environments. Finally, hybrid rendering means rendering on a variety of devices, ranging from high-end multi-touch VR equipment, through workstation and commodity hardware, down to handheld devices such as tablets, pads, or smartphones. These devices are used in concert, empowering users to make the most of their hardware and software environments.
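To make the first aspect more concrete, the C++ sketch below shows one possible way to model interchangeable rendering back ends behind a common interface; all names (RenderBackend, RayCaster, Rasterizer, ScenePart, PartialImage, composite) are illustrative assumptions and not taken from the actual system. A second sketch further below shows how a supervisor could choose among such back ends per scene partition.

#include <vector>

// One partition of the scene (geometry, volume bricks, transfer functions, ...).
struct ScenePart { /* ... */ };

// A partial image with per-pixel depth, so results produced by different
// back ends can be depth-composited into one final frame.
struct PartialImage {
  std::vector<float> rgba;
  std::vector<float> depth;
};

// Common interface that every rendering back end implements.
class RenderBackend {
public:
  virtual ~RenderBackend() = default;
  virtual PartialImage render(const ScenePart& part) = 0;
};

// Two of the back ends mentioned above; the actual rendering code is omitted.
class RayCaster : public RenderBackend {
public:
  PartialImage render(const ScenePart&) override { /* ... */ return {}; }
};

class Rasterizer : public RenderBackend {
public:
  PartialImage render(const ScenePart&) override { /* ... */ return {}; }
};

// Depth-based compositing of all partial images into the final frame.
PartialImage composite(const std::vector<PartialImage>& parts);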

In the last decade, graphics processing units (GPUs) have dominated the field of interactive computer graphics and visualization. GPUs have kept up with the growing model and shading demands of computer games by implementing highly parallel SIMD processing models. While the raw processing power of GPUs continues to grow at a rapid pace, the data transfer rate to the graphics subsystem as well as the rasterization step in the middle of the pipeline increasingly become the bottlenecks that limit the ability of this hardware to interactively render very dense data. Alternative rendering approaches, such as parallel ray casting, have proven to scale better for large models; yet even though ongoing research has achieved massive speedups, for primary effects ray casting systems are still outperformed by rasterization hardware for practically all but the densest scenes.

The goal of this research project is the development of novel algorithms as well as the improvement and combination of existing techniques for the interactive visualization of large-scale data. In particular, methods and algorithms will be integrated into a modular system that allows for the interactive rendering and visualization of scientific data on a wide range of devices. A “device” in the scope of this project is any piece of input/output equipment that can be connected to the system. Devices range from small mobile equipment such as smartphones or pads/tablets, to commodity PCs with one or multiple screens, up to clusters driving powerwalls, stereo projections, and entire VR systems.

The hybrid concept is motivated by the observation that, on both the algorithmic and the hardware side, no single solution delivers optimal rendering performance under all conditions. While the parallel computing power of GPU-based rasterization delivers unmatched performance for small to medium-sized data sets, its performance degrades for very large models. Highly optimized real-time ray casting, on the other hand, has the potential to outperform rasterization only for dense and mostly static scenes. While it has been shown that dynamic scenes can be handled efficiently with sample-based representations, for small to medium-sized datasets GPU-based rasterization still wins over ray casting. Thus, for optimal performance a combination of these and other approaches is desirable. Note that some implementations of these rendering approaches also require specific hardware, such as GPUs, multicore architectures, or specialized vector processors. In particular, on mobile devices or in clusters of computers some of these resources may not be available, or at least not directly; however, resources with the necessary features may be reachable over a network link. This link, in turn, is often not of constant quality or may only offer transient connectivity.

Therefore, we conduct research and development of a distributed, hybrid rendering and visualization methodology that makes use of multiple local and remote rendering subsystems to dynamically utilize all available rendering, display, and interaction resources, delivering an optimal visualization experience. The basic idea is to develop and implement a collection of highly optimized rendering solutions together with an oracle function that, given a description of a partition of the scene, estimates the rendering time for each approach. A supervisor then, based on the oracle’s predictions, redirects the partition rendering subtasks to the most efficient rendering subsystems and composites the final image. Note that the collection of rendering subsystems may include remote server-based algorithms that transmit their images via the network.
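Building on the hypothetical interface sketched earlier, the following fragment illustrates, under the same assumptions, how an oracle-guided supervisor could work: each back end exposes an estimateCost function (the oracle), and the supervisor dispatches every scene partition to the back end with the lowest predicted cost before compositing. This is only a minimal single-threaded sketch; the real system would also have to account for network transmission of remotely rendered images.

#include <limits>
#include <memory>
#include <vector>

// Minimal stand-ins for the types from the previous sketch.
struct ScenePart {};
struct PartialImage {};

class RenderBackend {
public:
  virtual ~RenderBackend() = default;
  // Oracle: predicted rendering time (e.g. in milliseconds) for one partition.
  virtual double estimateCost(const ScenePart& part) const = 0;
  virtual PartialImage render(const ScenePart& part) = 0;
};

PartialImage composite(const std::vector<PartialImage>& parts);

// Supervisor: query every back end's oracle for each partition, dispatch the
// partition to the cheapest back end (which may be a remote subsystem), and
// composite the partial images into the final frame.
// Assumes at least one back end is available.
PartialImage renderFrame(const std::vector<ScenePart>& parts,
                         const std::vector<std::unique_ptr<RenderBackend>>& backends) {
  std::vector<PartialImage> partials;
  partials.reserve(parts.size());
  for (const ScenePart& part : parts) {
    RenderBackend* best = nullptr;
    double bestCost = std::numeric_limits<double>::infinity();
    for (const auto& backend : backends) {
      const double cost = backend->estimateCost(part);  // oracle prediction
      if (cost < bestCost) { bestCost = cost; best = backend.get(); }
    }
    partials.push_back(best->render(part));             // possibly remote
  }
  return composite(partials);                           // assemble final image
}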


Project team

Principal Investigator
Prof. Dr. Jens Krüger

Researchers
M.Sc. Alexander Schiewe
Andrey Krekhov
Andre Waschk
Michael Michalski