PhD Proposal: Generating Novel Synthetic Photorealistic Data For Dynamic UAV Scenes Using Neural Radiance Fields
Christopher Maxey
Tuesday, November 19, 2024, 1:00-3:00 pm
Abstract

Although perception algorithms have advanced considerably across a variety of computing platforms, training these algorithms still requires large amounts of data. For them to perform well in specialized domains, such as search-and-rescue imagery from Unmanned Aerial Vehicles (UAVs), appropriate training datasets are needed. In such domains, available datasets are scarce for reasons including scene novelty, flight regulations, and the overall difficulty of collecting varied data.

To address gaps in available UAV datasets, our research focuses on synthetic data generation to augment real-world data when training perception algorithms. In particular, we use Neural Radiance Fields (NeRF) to capture the three-dimensional structure of a scene and render novel views with photorealistic fidelity. Accomplishments include developing a pipeline for rendering novel data along with ground-truth labels such as bounding boxes, extending existing state-of-the-art dynamic NeRF algorithms to handle difficult UAV scenes, and developing a new NeRF technique, Tiered K-planes, that increases the fidelity of small dynamic portions of a scene compared with previous state-of-the-art neural rendering methods. Ongoing and future work includes a NeRF algorithm that incorporates “shared feature vectors” to leverage mutual information within a scene and extrapolate viable novel-view imagery beyond the available training camera pose trajectories. It also includes a “codex” of feature vectors shared across independent scenes, which can accelerate the training of new scenes and further extend the novelty of rendered camera poses for scenes with extremely limited training trajectories, e.g., fixed camera positions.
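As general background for the NeRF-based rendering mentioned above, the sketch below illustrates the standard NeRF volume-rendering step: a radiance field is queried at sample points along each camera ray and the results are alpha-composited into a pixel color. This is a minimal, generic illustration only; it is not the speaker's pipeline or the Tiered K-planes method, and the model `nerf_mlp`, its output format, the sampling bounds, and all parameter names are assumptions made for this example.

```python
import torch

def render_rays(nerf_mlp, ray_origins, ray_directions, near=2.0, far=6.0, n_samples=64):
    """Minimal NeRF-style volume rendering sketch (illustrative assumptions only).

    nerf_mlp: hypothetical model mapping 3D points [..., 3] -> [..., 4],
              where the last dimension holds (r, g, b, density).
    ray_origins, ray_directions: [n_rays, 3] tensors in world coordinates.
    """
    # Sample depths uniformly between the near and far planes.
    t_vals = torch.linspace(near, far, n_samples)                       # [n_samples]

    # 3D sample points along each ray: o + t * d.
    points = (ray_origins[:, None, :]
              + t_vals[None, :, None] * ray_directions[:, None, :])     # [n_rays, n_samples, 3]

    # Query the radiance field at every sample point.
    raw = nerf_mlp(points)                                               # [n_rays, n_samples, 4]
    rgb = torch.sigmoid(raw[..., :3])                                    # color in [0, 1]
    sigma = torch.relu(raw[..., 3])                                      # non-negative density

    # Distances between adjacent samples (last interval repeated as padding).
    deltas = t_vals[1:] - t_vals[:-1]
    deltas = torch.cat([deltas, deltas[-1:]])                            # [n_samples]

    # Alpha compositing: alpha_i = 1 - exp(-sigma_i * delta_i),
    # transmittance T_i = prod_{j < i} (1 - alpha_j).
    alpha = 1.0 - torch.exp(-sigma * deltas)                             # [n_rays, n_samples]
    transmittance = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]
    weights = alpha * transmittance                                      # [n_rays, n_samples]

    # Expected color along each ray (the rendered pixel).
    pixel_colors = (weights[..., None] * rgb).sum(dim=-2)                # [n_rays, 3]
    return pixel_colors
```

In practice, a full NeRF model also conditions on viewing direction and uses hierarchical sampling; those details are omitted here to keep the compositing step readable.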
Bio

Christopher Maxey is a PhD student in the Department of Computer Science at the University of Maryland, College Park, working under the supervision of Professor Dinesh Manocha. His research spans computer vision, robotics, and neural rendering methods. He holds both a Bachelor's and a Master's degree in Mechanical Engineering, as well as a Bachelor's degree in Computer Science, from the University of Maryland. His current work centers on using neural radiance fields to augment existing but limited real-world data for training UAV perception algorithms.

This talk is organized by Migo Gui