PhD Defense: Fusing Multimedia Data Into Dynamic Virtual Environments
Ruofei Du
Friday, October 19, 2018, 10:45 am-12:45 pm
Abstract

In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments.

First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatio-temporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos.
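
To make the placement strategy concrete, below is a minimal Python sketch of dart-throwing Poisson-disc sampling; the function name, canvas size, and spacing radius are hypothetical, and the actual system additionally accounts for visual saliency and spatio-temporal filtering when choosing sites.

    import math
    import random

    def poisson_disc_placement(candidates, radius):
        """Greedy dart throwing: accept a candidate only if it lies at
        least `radius` away from every point accepted so far. Exhausting
        a dense candidate pool approximates a maximal disc packing."""
        accepted = []
        pool = list(candidates)
        random.shuffle(pool)
        for x, y in pool:
            if all(math.hypot(x - px, y - py) >= radius
                   for px, py in accepted):
                accepted.append((x, y))
        return accepted

    # Example: place media markers on a 2000 x 1000 equirectangular
    # canvas, keeping any two markers at least 150 pixels apart.
    candidates = [(random.uniform(0, 2000), random.uniform(0, 1000))
                  for _ in range(5000)]
    print(len(poisson_disc_placement(candidates, radius=150)))

A placement is maximal when no further point can be added without violating the minimum-distance constraint, which is what exhausting the candidate pool approximates.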

We further present Geollery, a mixed-reality platform that renders an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. We conduct a user study with 20 participants to qualitatively evaluate Social Street View and Geollery. The study identifies several use cases for these systems, including immersive social storytelling, experiencing culture, and crowd-sourced tourism.

We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects, which explore the applications of gestures, haptic feedback, and visual cryptography in virtual and augmented reality environments.
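
As a concrete illustration of overlaying videos on 3D scenes, the following NumPy sketch shows projective texture sampling under a pinhole camera model; the function and parameter names are hypothetical, and the sketch omits occlusion handling and cross-camera blending. Sampling every camera per surface point up front corresponds to early texture sampling, while a deferred variant postpones sampling until the visible screen-space fragments are known.

    import numpy as np

    def project_and_sample(points_world, frame, K, R, t):
        """Project 3D points into a pinhole camera (intrinsics K,
        rotation R, translation t) and sample the current video frame
        at the resulting pixels. Points behind the camera or outside
        the frame get an alpha of zero."""
        cam = points_world @ R.T + t       # world -> camera coordinates
        pix = cam @ K.T                    # camera -> homogeneous pixels
        z = pix[:, 2]
        uv = pix[:, :2] / np.maximum(z[:, None], 1e-9)
        h, w = frame.shape[:2]
        valid = ((z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w)
                 & (uv[:, 1] >= 0) & (uv[:, 1] < h))
        rgba = np.zeros((len(points_world), 4))
        ui = uv[valid].astype(int)         # nearest-neighbor lookup
        rgba[valid, :3] = frame[ui[:, 1], ui[:, 0]] / 255.0
        rgba[valid, 3] = 1.0               # alpha marks in-frame samples
        return rgba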

Finally, we present our work on Montage4D, a real-time system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency. Our experiments show significant enhancement in rendering quality, especially for salient regions such as faces. We believe that Social Street View, Geollery, Video Fields, and Montage4D will facilitate applications such as virtual tourism, immersive telepresence, and remote education.
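
The following is a minimal Python sketch, with hypothetical names and shapes, of how view-dependent blending with a geodesic seam fade and temporal smoothing might be expressed; it simplifies the approach above to per-vertex, per-camera weights.

    import numpy as np

    def view_dependent_weights(cam_dirs, view_dir, seam_dist, falloff=0.05):
        """Per-camera texture weights at one vertex. `cam_dirs` holds unit
        vectors from the vertex to each camera, `view_dir` the unit vector
        to the viewer, and `seam_dist` the geodesic distance (per camera)
        from the vertex to the nearest occlusion seam in that camera."""
        alignment = np.clip(cam_dirs @ view_dir, 0.0, None) ** 2
        seam_fade = np.clip(seam_dist / falloff, 0.0, 1.0)  # 0 at a seam
        w = alignment * seam_fade
        total = w.sum()
        return w / total if total > 0 else w

    def temporal_smooth(prev_w, new_w, rate=0.2):
        """Exponential moving average of weights across frames, keeping
        the fused texture consistent as the viewpoint and mesh move."""
        return (1.0 - rate) * prev_w + rate * new_w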

Examining Committee:

  Chair:                 Dr. Amitabh Varshney
  Dean's Representative: Dr. Joseph JaJa
  Members:               Dr. Matthias Zwicker
                         Dr. Furong Huang
                         Dr. Ming Chuang
This talk is organized by Tom Hurst.