PhD Proposal: Depth Sensing and Photorealistic 3D Mapping of Real-World Scenes
Jaehoon Choi
IRB-4109
Abstract
Understanding and reconstructing the 3D world is essential for robotics, Augmented Reality (AR), and Virtual Reality (VR) applications. However, accurately estimating depth and reconstructing photorealistic maps from images captured in real-world environments presents significant challenges. While traditional computer vision and graphics methods form the foundation for these tasks, many difficult cases still arise in real-world settings. Recently, neural methods for depth estimation, 3D reconstruction, and rendering have shown promise in complementing, or even overcoming, the limitations of these traditional approaches.
Our research focuses on three key areas: single-view depth estimation, multi-view surface reconstruction, and neural rendering. These areas align with our broader interest in using learning-based techniques to address the limitations of traditional depth sensing, 3D reconstruction, and rendering approaches. We specifically tackle the challenges of single-view depth estimation, which often suffers from scale ambiguity and requires large amounts of training data. Additionally, we address the reconstruction of textured meshes, from small objects to large-scale scenes, by integrating traditional and neural methods. We also explore neural data generation techniques to advance UAV perception algorithms, contributing to the practical deployment of neural rendering.
Bio
Jaehoon Choi is a PhD student in the Department of Computer Science at the University of Maryland, College Park, working under the supervision of Professor Dinesh Manocha. His research spans Computer Vision, Robotics, and Computer Graphics. He holds both a Bachelor's and a Master's degree in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), with a focus on Computer Vision. His current work centers on reconstructing high-fidelity 3D geometry and achieving photorealistic rendering.
This talk is organized by Migo Gui