PhD Proposal: Fusing Multimedia Data Into Dynamic Virtual Environments
Ruofei Du
Monday, October 16, 2017, 2:00-3:30 pm

Despite the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this proposal, we present our research in automatically fusing multimedia data, including text, photos, panoramas, point clouds, and multi-view videos, to create rich and compelling virtual environments.

First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas, such as Google Street View. Our system takes into account visual saliency and uses maximal Poisson placement with spatio-temporal filters to render social multimedia in an immersive setting. We explore several potential use cases, including immersive social storytelling, experiencing culture, and crowd-sourced tourism.
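
The abstract does not spell out the placement algorithm beyond naming maximal Poisson placement, so the following is only an illustrative sketch: greedy dart throwing that accepts a candidate marker position on the panorama only if it keeps a minimum separation from every placed marker, stopping once many consecutive candidates fail (an approximation of maximality). The function name and parameters are hypothetical, not taken from the Social Street View system.

```python
import math
import random

def poisson_disk_placement(width, height, min_dist, max_misses=2000, seed=0):
    """Greedy dart throwing: accept a candidate only if it lies at least
    `min_dist` from every placed point (the Poisson-disk property).
    Stopping after `max_misses` consecutive rejections approximates a
    maximal sample set over the width x height panorama region."""
    rng = random.Random(seed)
    points = []
    misses = 0
    while misses < max_misses:
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if all(math.hypot(x - px, y - py) >= min_dist for px, py in points):
            points.append((x, y))
            misses = 0  # reset the failure streak on every success
        else:
            misses += 1
    return points
```

In a real system, the spatio-temporal filters described in the abstract would additionally restrict which media items are eligible for each accepted position.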


Second, we present Video Fields, a novel web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos, using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments.
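
At the core of overlaying calibrated video on a 3D scene is projecting a world-space point through a camera's intrinsics and extrinsics to find where to sample the video texture. The abstract does not give the Video Fields implementation, so this is a minimal pinhole-projection sketch with assumed names (`project_point`, `mat_vec`); both early and deferred sampling ultimately evaluate a mapping like this, differing in whether it happens before or after geometry is composited.

```python
def mat_vec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def project_point(K, R, t, p_world):
    """Project a world-space point into pixel coordinates using a
    calibrated pinhole camera: intrinsics K, rotation R, translation t.
    Returns None for points behind the camera (no valid video sample)."""
    p_cam = [c + t[i] for i, c in enumerate(mat_vec(R, p_world))]
    if p_cam[2] <= 0:
        return None
    u, v, w = mat_vec(K, p_cam)
    return (u / w, v / w)
```

In a deferred scheme, the renderer would store each visible surface point (or its video-space coordinates) in a buffer and perform this sampling once per pixel at shading time.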


Third, we present our work on Montage4D, an interactive system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency. We believe that Montage4D will be critical for several applications such as immersive telepresence, immersive training, and live entertainment.
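
One common way to use geodesics for seam mitigation, sketched here under assumptions (the abstract does not detail Montage4D's exact formulation), is to compute each vertex's geodesic distance to the nearest seam over the mesh's edge graph, then ramp a texture blend weight from 0 at the seam to 1 beyond a fade radius, so a camera's contribution fades out smoothly instead of cutting off. The names `geodesic_distances` and `seam_fade_weight` are illustrative.

```python
import heapq

def geodesic_distances(adj, seam_vertices):
    """Multi-source Dijkstra over the mesh edge graph: `adj` maps each
    vertex to a list of (neighbor, edge_length) pairs. Returns the
    graph-geodesic distance from every vertex to its nearest seam vertex."""
    dist = {v: float("inf") for v in adj}
    pq = []
    for s in seam_vertices:
        dist[s] = 0.0
        heapq.heappush(pq, (0.0, s))
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue  # stale queue entry
        for u, w in adj[v]:
            nd = d + w
            if nd < dist[u]:
                dist[u] = nd
                heapq.heappush(pq, (nd, u))
    return dist

def seam_fade_weight(d, fade_radius):
    """Blend weight that rises linearly from 0 at a seam to 1 at
    `fade_radius` away, suppressing visible texture discontinuities."""
    return min(d / fade_radius, 1.0)
```

Temporal consistency would then come from updating these distances (and the resulting weights) smoothly as the mesh and seams evolve frame to frame.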


We next plan to work on efficient processing and rendering of 360° videos, geo-spatial registration of social media with immersive maps, and using multi-view video data for reconstruction of dynamic 3D models.


Examining Committee:

Chair:      Dr. Amitabh Varshney
Dept. rep.: Dr. Furong Huang
Members:    Dr. Matthias Zwicker

This talk is organized by Jennifer Story