PhD Defense: Enhancing Visual and Gestural Fidelity for Effective Virtual Environments
Xiaoxu Meng
Remote
Friday, October 30, 2020, 2:00-4:00 pm
Abstract
Despite being an immense technological achievement with a potentially revolutionary impact on the world, virtual reality (VR) has yet to gain acceptance from the general public beyond technology enthusiasts. One challenge facing the VR industry is that virtual reality is not immersive enough to feel real: low frame rates cause dizziness, and the lack of human body visualization limits human-computer interaction. In this dissertation, I present our research on enhancing visual and gestural fidelity in the virtual environment.

First, I present a foveated rendering technique: Kernel Foveated Rendering (KFR), which parameterizes foveated rendering by embedding polynomial kernel functions in log-polar space. This GPU-driven technique uses parameterized foveation that mimics the distribution of photoreceptors in the human retina. I present a two-pass kernel foveated rendering pipeline that maps well onto modern GPUs. I have carried out user studies to empirically identify the KFR parameters and have observed a 2.8X-3.2X speedup in rendering on 4K displays.
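To give a feel for the transformation at the heart of KFR, here is a minimal sketch, assuming a single power kernel in place of the dissertation's fitted polynomial kernels; the function name, buffer sizes, and the alpha parameter below are illustrative, not the published formulation:

```python
import numpy as np

def to_kernel_log_polar(px, py, gaze, screen_w, screen_h,
                        buf_w, buf_h, alpha=4.0):
    """Map a screen pixel to coordinates in a reduced log-polar buffer.

    alpha is a hypothetical power-kernel parameter: alpha > 1 stretches
    the foveal (small-radius) region across more buffer columns, so the
    area around the gaze point keeps more detail. KFR proper uses
    polynomial kernels; a single power term stands in for them here.
    """
    dx, dy = px - gaze[0], py - gaze[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)                      # angle in [-pi, pi]
    r_max = np.hypot(max(gaze[0], screen_w - gaze[0]),
                     max(gaze[1], screen_h - gaze[1]))
    u = np.log(1.0 + r) / np.log(1.0 + r_max)       # normalized log radius
    u = u ** (1.0 / alpha)                          # kernel: favor the fovea
    bx = u * (buf_w - 1)                            # radial buffer axis
    by = (theta + np.pi) / (2.0 * np.pi) * (buf_h - 1)  # angular axis
    return bx, by

# Example: a 4K frame foveated into a quarter-resolution buffer.
bx, by = to_kernel_log_polar(2500.0, 1200.0, gaze=(1920.0, 1080.0),
                             screen_w=3840, screen_h=2160,
                             buf_w=960, buf_h=540)
print(f"buffer coords: ({bx:.1f}, {by:.1f})")
```

In the two-pass pipeline, the scene is first shaded into this reduced buffer and the inverse mapping then restores a full-resolution image, so shading cost scales with the buffer size rather than the display size.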

Second, I explore rendering acceleration through foveation for 4D light fields, which capture both spatial and angular rays, enabling free-viewpoint rendering and custom selection of the focal plane. I optimize the KFR algorithm by adjusting the weight of each slice in the light field so that it automatically selects the optimal foveation parameters for different images according to the gaze position. I have validated our approach on the rendering of light fields by carrying out both quantitative experiments and user studies. Our method achieves speedups of 3.47X-7.28X for different levels of foveation and different rendering resolutions.
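As a rough illustration of gaze-dependent per-slice foveation, the sketch below assigns a foveation strength to each light-field slice using a hypothetical Gaussian falloff around the gaze; the dissertation's weight optimization is more involved, and every name and constant here is an assumption:

```python
import numpy as np

def slice_foveation_params(slice_centers, gaze_dir, sigma_min=1.0,
                           sigma_max=3.0, falloff=8.0):
    """Assign a foveation parameter to each light-field slice.

    slice_centers: (N, 2) angular coordinates of each slice's view.
    gaze_dir: (2,) angular coordinate the viewer is gazing toward.
    Slices whose views contribute most near the gaze get gentle
    foveation (sigma_min); distant slices get aggressive foveation
    (sigma_max). The Gaussian weighting is a stand-in for the
    dissertation's optimized per-slice weights.
    """
    d = np.linalg.norm(slice_centers - gaze_dir, axis=1)
    w = np.exp(-falloff * d ** 2)        # weight: 1 near gaze, -> 0 far away
    return sigma_max - (sigma_max - sigma_min) * w

# Example: a 2x2 grid of angular slices, gaze near the first one.
centers = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(slice_foveation_params(centers, gaze_dir=np.array([0.1, 0.0])))
```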

Third, I present a simple yet effective technique for further reducing the cost of foveated rendering by leveraging ocular dominance, the tendency of the human visual system to prefer scene perception from one eye over the other. Our new approach, eye-dominance-guided foveated rendering (EFR), renders the scene at a lower foveation level (with higher detail) for the dominant eye than for the non-dominant eye. Compared with traditional foveated rendering, EFR can be expected to provide superior rendering performance while preserving the same level of perceived visual quality.
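A minimal sketch of the EFR idea, assuming a renderer that accepts a per-eye foveation strength; the render_eye callback and the sigma values are placeholders, not the calibrated parameters from the user studies:

```python
def render_stereo_frame(render_eye, dominant_eye, sigma_dominant=1.8,
                        sigma_delta=0.6):
    """Render both eyes with eye-dominance-guided foveation (sketch).

    render_eye(eye, sigma) stands in for the application's own foveated
    render call, where sigma is the foveation strength. The dominant
    eye gets a lower sigma (more detail); the non-dominant eye tolerates
    a stronger foveation, which is where the extra savings come from.
    """
    for eye in ("left", "right"):
        if eye == dominant_eye:
            sigma = sigma_dominant                 # more detail
        else:
            sigma = sigma_dominant + sigma_delta   # stronger foveation
        render_eye(eye, sigma)

# Example with a stub renderer:
render_stereo_frame(lambda eye, s: print(f"{eye}: sigma={s}"),
                    dominant_eye="right")
```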

Finally, I present an end-to-end convolutional autoencoder to reconstruct a 3D human hand from a single RGB image. To train the networks with full supervision, we fit a parametric hand model to the 3D annotations and train on the RGB images with the fitted parametric model as supervision. Our approach leads to significantly improved quality compared to state-of-the-art hand mesh reconstruction techniques.
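As a hedged sketch of this supervision setup, the snippet below regresses MANO-like hand-model parameters from an RGB image with a small CNN and an L2 loss against the fitted parameters; the actual network architecture, parameter counts, and losses in the dissertation differ:

```python
import torch
import torch.nn as nn

class HandParamEncoder(nn.Module):
    """CNN that regresses parametric hand-model coefficients from one
    RGB image (sketch). Assumes a MANO-like model with 48 pose and 10
    shape parameters; these counts are illustrative assumptions."""
    def __init__(self, n_pose=48, n_shape=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_pose + n_shape)

    def forward(self, img):                 # img: (B, 3, H, W)
        z = self.features(img).flatten(1)   # (B, 128)
        return self.head(z)                 # (B, n_pose + n_shape)

# Training step outline: regress toward the parameters obtained by
# fitting the hand model to the 3D annotations (full supervision).
model = HandParamEncoder()
img = torch.randn(2, 3, 224, 224)
fitted_params = torch.randn(2, 58)          # from the fitted hand model
loss = torch.nn.functional.mse_loss(model(img), fitted_params)
loss.backward()
```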

Examining Committee:
  Chair: Dr. Amitabh Varshney
  Dean's Representative: Dr. Joseph F. JaJa
  Members: Dr. Matthias Zwicker, Dr. Furong Huang, Dr. Roger Eastman
Bio

Xiaoxu Meng is a Ph.D. candidate at the University of Maryland, College Park, working with Dr. Amitabh Varshney. Her research focuses on computer graphics and computer vision, including efficient high-quality rendering (foveated rendering in virtual reality; deep-learning-based denoising for Monte Carlo rendering) and 3D reconstruction (hand surface reconstruction from RGB images).

This talk is organized by Tom Hurst