Device-Independent Multi-Reality Interfaces
This project introduces a browser-based approach to multi-reality interfaces in the context of e-maintenance in modular factory environments, specifically remote monitoring and maintenance scenarios. Through their rich visualization capabilities (stereoscopic video and 3D graphics), multi-reality interfaces offer a way to aggregate and structure the widespread variety of information from different sources that is required to analyze such complex scenarios. Additionally, they make collaboration with on-site personnel easier and more efficient in both time and cost. Finally, the browser-based implementation fits naturally into the web-based world that a modular factory environment represents and makes the presented solution highly adaptable to industrial and non-industrial HMIs.
The multi-reality paradigm is heavily influenced by the principle of dual reality, in which a real and a virtual world are interconnected by several means. In contrast to augmented reality, this approach presents the real and virtual worlds side by side rather than overlaid. The target use cases for this type of interface are remote expert scenarios, in which a remote specialist supports on-site workforces in solving problems on physical objects.
Multi-reality interfaces provide several means to improve remote collaboration on physical tasks. On the one hand, they present the task space in a high-quality (stereoscopic) video stream to allow a detailed remote visual analysis. On the other hand, the task space is visualized as a 3D model that provides a rich set of interaction possibilities, such as selections, exploded views, and visualization of data. Additionally, our implementation offers features that support remote collaboration, e.g., video conferencing, data-set visualization, view synchronization, and remote painting.
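The view-synchronization feature described above can be illustrated with a minimal sketch: the presenter's camera pose is serialized into a message and re-applied on each peer's local 3D view. The field names (`position`, `target`, `fov`) and the message `type` tag are assumptions for illustration, not taken from the actual implementation.

```javascript
// Serialize the presenter's camera pose into a compact sync message.
// Field names here are hypothetical, chosen for illustration only.
function encodeViewSync(camera) {
  return JSON.stringify({
    type: "viewSync",
    position: camera.position, // [x, y, z] eye point
    target: camera.target,     // [x, y, z] look-at point
    fov: camera.fov,           // vertical field of view in degrees
  });
}

// Apply a received sync message to a peer's local camera object,
// leaving any unrelated local camera state untouched.
function applyViewSync(raw, camera) {
  const msg = JSON.parse(raw);
  if (msg.type !== "viewSync") return camera;
  return { ...camera, position: msg.position, target: msg.target, fov: msg.fov };
}
```

In a real deployment the encoded message would be broadcast to the other peers (e.g., over the WebSocket channel described below) whenever the presenter's camera moves.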
The current implementation is completely browser-based and runs in a standard Chrome browser. The video visualization of the task space is implemented using a custom video plugin, which enables a flexible approach to the delivery of the video stream. We apply the XML3D technology to create an interactive 3D model. For the video-conferencing feature, in contrast, we use the WebRTC framework. For the data exchange between different peers, we use a set of WebSocket servers with which we communicate through the browser's native WebSocket API.
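The peer data exchange over WebSocket servers could be organized as sketched below: messages are wrapped in a channel-tagged JSON envelope and dispatched to per-channel handlers on arrival. The envelope shape (`{ channel, payload }`), the channel names, and the server URL are hypothetical assumptions, not details from the actual system.

```javascript
// Build the JSON envelope sent to a WebSocket server.
// The { channel, payload } shape is an assumption for illustration.
function makeEnvelope(channel, payload) {
  return JSON.stringify({ channel, payload });
}

// A small router that dispatches incoming envelopes to the handler
// registered for their channel (e.g., "viewSync", "paint", "signaling").
function makeRouter() {
  const handlers = new Map();
  return {
    on(channel, fn) { handlers.set(channel, fn); },
    dispatch(raw) {
      const { channel, payload } = JSON.parse(raw);
      const fn = handlers.get(channel);
      if (fn) fn(payload);
    },
  };
}

// Browser-side wiring using the native WebSocket API (not executed here;
// the URL is a placeholder).
function connect(url, router) {
  const ws = new WebSocket(url);
  ws.onmessage = (event) => router.dispatch(event.data);
  return ws;
}
```

Routing all collaboration traffic through one envelope format keeps the number of server connections small while still separating concerns per feature.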