PhD Proposal: Level-Set Fusion: Real-time Reconstruction of Dynamic Scenes from a Single, Moving Camera
Gregory Kramida
Thursday, June 27, 2019, 10:00 am-2:00 pm
Abstract
Concurrent reconstruction and tracking of multiple surfaces in unconstrained, dynamic environments and in real time has applications in augmented reality and robotics. In AR, it is necessary to register virtual overlays with moving surfaces effectively. In robotics, it is critical for automating navigation in natural settings and manipulation of deformable objects. Existing algorithms that address the problem are scarce compared to static-scene reconstruction methods. Moreover, most include in their frame-by-frame processing a step that converts the scene into a mesh and deforms it according to previously estimated motion, which can produce sheared, misaligned geometry and leaves the algorithms not robust to topological changes. A few others rely on level-set evolution toward a single, initial scene state; they cannot reconstruct surface motion and cannot handle multiple objects. The remaining methods attempt to pre-segment the scene using 2D segmentation and handle each chunk separately with static-scene reconstruction; they are therefore poorly suited to non-rigid surfaces and exhibit cumulative errors because the pre-segmentation is based on noisy, incomplete data.

We propose Level-Set Fusion, a method for dense 3D reconstruction of arbitrary dynamic scenes and surface motion from a single, moving RGB-D camera without any high-level prior knowledge about scene contents. The method incorporates voxel hashing, advanced resampling methods, rigid camera motion optimization, and hierarchical optimization of surface motion for every frame of the input sequence into a single fusion pipeline. Since it operates on voxel grids directly, it can potentially be used for tracking both rigid and non-rigid surfaces and is robust to topological changes.
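For a concrete sense of the voxel-hashing component, the following is a minimal sketch (in Python, with illustrative block and voxel sizes) of how a truncated signed-distance (TSDF) volume can be stored as sparse, hash-indexed voxel blocks and updated with a standard weighted running average. The class and function names are ours for illustration, not the proposal's implementation.

```python
# Minimal sketch of the voxel-hashing idea: the TSDF volume is stored as sparse
# 8x8x8 voxel blocks indexed by a hash of their integer block coordinates, so
# only space near observed surfaces is ever allocated. Sizes are illustrative.
import numpy as np

BLOCK_SIZE = 8          # voxels per block edge (assumed)
VOXEL_SIZE = 0.004      # meters per voxel (assumed)

class HashedTSDFVolume:
    def __init__(self):
        # block coordinate (ix, iy, iz) -> dense block of (tsdf, weight) values
        self.blocks = {}

    def _block_coord(self, point):
        """Integer block coordinate containing a 3D point (in meters)."""
        voxel = np.floor(point / VOXEL_SIZE).astype(int)
        return tuple(voxel // BLOCK_SIZE)

    def _get_or_allocate(self, block_coord):
        """Allocate a block lazily the first time it is touched."""
        if block_coord not in self.blocks:
            tsdf = np.ones((BLOCK_SIZE,) * 3, dtype=np.float32)    # init to +truncation
            weight = np.zeros((BLOCK_SIZE,) * 3, dtype=np.float32)
            self.blocks[block_coord] = (tsdf, weight)
        return self.blocks[block_coord]

    def fuse_sample(self, point, sdf_obs, w_obs=1.0):
        """Fuse one truncated signed-distance observation at a 3D point
        using the standard weighted running average."""
        tsdf, weight = self._get_or_allocate(self._block_coord(point))
        voxel = np.floor(point / VOXEL_SIZE).astype(int) % BLOCK_SIZE
        i, j, k = voxel
        w_old = weight[i, j, k]
        tsdf[i, j, k] = (tsdf[i, j, k] * w_old + sdf_obs * w_obs) / (w_old + w_obs)
        weight[i, j, k] = w_old + w_obs
```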

We demonstrate how integrating voxel hashing into existing level-set evolution methods for reconstruction yields a significant speedup and enables their use on volumes well in excess of a cubic meter on present-day hardware, and we suggest how they can be made more memory-efficient. We discuss the use of ellipsoidal weighted-averaging filters to combat aliasing artifacts in reconstructed volumes and show preliminary results. Finally, we propose a complete reconstruction pipeline that would enable tracking and reconstructing multiple surfaces simultaneously, without any a priori 2D segmentation, and that can be extended to accrue and refine motion-segmentation information over time.
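The ellipsoidal filter mentioned above can be pictured as an EWA-style resampling of the depth image: each voxel's projected footprint is treated as an elliptical Gaussian, and nearby depth pixels are averaged with the corresponding weights. The sketch below is a hypothetical illustration under that reading; the Jacobian J, the cutoff, and the stabilizing term are assumptions, not the proposal's exact filter.

```python
# Hypothetical sketch of an ellipsoidal weighted-averaging (EWA-style) depth lookup:
# instead of a nearest-neighbor read, average depth pixels under an elliptical
# Gaussian footprint whose covariance comes from the local projection Jacobian.
import numpy as np

def ewa_sample_depth(depth_image, u, v, J, cutoff=3.0):
    """Sample depth_image at continuous pixel (u, v) with an elliptical Gaussian
    footprint of covariance J @ J.T, where J is a 2x2 local affine approximation
    of the voxel-to-image mapping (an assumption of this sketch)."""
    cov = J @ J.T + np.eye(2) * 0.25          # small isotropic term for stability
    cov_inv = np.linalg.inv(cov)

    # Bounding box of the elliptical footprint in pixel coordinates.
    radius = cutoff * np.sqrt(np.max(np.linalg.eigvalsh(cov)))
    u0, u1 = int(np.floor(u - radius)), int(np.ceil(u + radius)) + 1
    v0, v1 = int(np.floor(v - radius)), int(np.ceil(v + radius)) + 1

    acc, w_sum = 0.0, 0.0
    h, w = depth_image.shape
    for py in range(max(v0, 0), min(v1, h)):
        for px in range(max(u0, 0), min(u1, w)):
            d = np.array([px - u, py - v], dtype=float)
            md2 = d @ cov_inv @ d             # squared Mahalanobis distance
            if md2 > cutoff ** 2:
                continue
            weight = np.exp(-0.5 * md2)
            acc += weight * depth_image[py, px]
            w_sum += weight
    return acc / w_sum if w_sum > 0 else 0.0
```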

Examining Committee:
Chair:     Dr. Matthias Zwicker
Dept. rep: Dr. Ramani Duraiswami
Members:   Dr. David Jacobs
This talk is organized by Tom Hurst