In this talk, I will present my research on using neural fields to represent immersive visual data. First, I will discuss how the memorization capacity of neural fields significantly reduces storage and transmission costs for high-quality light fields in immersive viewing applications. Next, I will introduce a novel approach to 3D scene geometry modeling that combines the representational power of neural fields with the efficiency of ray-based light field principles. I will then present our work uncovering the surprising ability of image-based neural fields to render convincing, photorealistic novel views, even without any camera poses or explicit 3D structure in their formulation. I will conclude by discussing the future of neural fields in visual computing and exploring how we can harness their potential to push the fundamental boundaries of imaging and visualization, ultimately extending the realm of visible reality for humans.
Brandon Feng is a Ph.D. candidate in Computer Science at the University of Maryland. His dissertation research focuses on developing novel machine learning algorithms for image and 3D data processing. He has worked broadly on topics in computational photography (light field and image-based rendering), computational imaging (imaging through scattering and turbulence), and general deep learning applications (protein folding and docking, depth estimation, 3D MRI segmentation). His current research interest is extending the realm of visible reality for humans with physics-inspired machine learning algorithms. He received a B.A. in Computer Science and Statistics and an M.S. in Statistics, both from the University of Virginia. His personal website is https://brandonyfeng.github.