PhD Defense: Motion Segmentation and Egomotion Estimation with Event-Based Cameras
Anton Mitrokhin
Virtual
Thursday, May 21, 2020, 12:00-2:00 pm
Abstract
Recent advances in imaging sensor development have outpaced the development of algorithms for processing image data. In a quest to increase the reliability of computer vision methods while reducing sensor latency and power consumption, the computer vision community has begun to draw inspiration from neuromorphic technologies, which attempt to mimic biological systems. For classical computer vision, the most challenging problems involve very fast motion combined with real-time control of a system, as often encountered in autonomous navigation. Although the computer vision and robotics communities have put forward solid mathematical frameworks and developed many practical solutions, these solutions are not yet sufficient for scenes with very high-speed motion, high dynamic range, and changing lighting conditions. These are the scenarios where event-based frameworks excel.

Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), are ideally suited for real-time motion analysis. Their readings offer high temporal resolution, superior light sensitivity, and low latency. These properties make it possible to estimate motion extremely reliably in the most challenging scenarios, but they come at a price: compared to classical cameras, modern event-based vision sensors have lower resolution, produce sparse output, and generate a significant amount of noise. Moreover, the asynchronous nature of the event stream is incompatible with traditional frame-based processing techniques, which calls for the development of novel algorithms and the introduction of new paradigms in event-based computer vision.
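To make the data format concrete, here is a minimal Python sketch (an illustration, not code from the dissertation; the field names and layout are assumptions) of a DVS event stream: each event is an asynchronous tuple (x, y, t, p) giving a pixel location, a timestamp, and the polarity of the brightness change. Accumulating a short time slice into a per-pixel count image is one common way to bridge the gap to frame-based processing.

import numpy as np

def events_to_count_image(events, width, height, t_start, t_end):
    """Accumulate events with timestamps in [t_start, t_end) into a count image."""
    mask = (events["t"] >= t_start) & (events["t"] < t_end)
    img = np.zeros((height, width), dtype=np.int32)
    # np.add.at handles repeated pixel coordinates correctly.
    np.add.at(img, (events["y"][mask].astype(int), events["x"][mask].astype(int)), 1)
    return img

# Hypothetical stream of four asynchronous events (x, y, t in microseconds, polarity).
events = np.array(
    [(10, 5, 100, 1), (10, 5, 150, -1), (3, 7, 180, 1), (8, 2, 300, 1)],
    dtype=[("x", "u2"), ("y", "u2"), ("t", "u8"), ("p", "i1")],
)
frame = events_to_count_image(events, width=16, height=16, t_start=0, t_end=200)

Note that, unlike a camera frame, the stream itself has no global shutter time: each event carries its own timestamp, which is why a time window must be chosen explicitly.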

In this dissertation, we develop methods and frameworks for motion segmentation and egomotion estimation on event-based data, starting with a simple optimization-based approach for camera motion compensation and object tracking, and then building several deep learning pipelines, all while exploring the connection between the shapes of event clouds and scene motion. We collect EV-IMO, the first pixelwise-annotated motion segmentation dataset for event cameras, and propose a 3D graph-based learning approach for motion segmentation in the (x, y, t) domain. Finally, we derive a set of mathematical constraints for event streams that leverage their temporal density and connect the shape of the event cloud with camera and object motion.
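As an illustration of optimization-based motion compensation of the kind mentioned above, the sketch below implements a contrast-maximization-style objective, a well-known formulation in the event-based vision literature; this is a simplified illustration under assumed constant image-plane velocity, not necessarily the dissertation's exact objective. Events are warped back along a candidate velocity, and the velocity that yields the sharpest (highest-variance) event image is selected.

import numpy as np

def warped_image_variance(xs, ys, ts, vx, vy, width, height):
    t0 = ts.min()
    # Warp each event back to the reference time under constant velocity.
    wx = np.round(xs - vx * (ts - t0)).astype(int)
    wy = np.round(ys - vy * (ts - t0)).astype(int)
    keep = (wx >= 0) & (wx < width) & (wy >= 0) & (wy < height)
    img = np.zeros((height, width))
    np.add.at(img, (wy[keep], wx[keep]), 1.0)
    return img.var()  # the correct motion "deblurs" the cloud -> higher variance

# Synthetic events from a single point moving at 30 px/s in x.
rng = np.random.default_rng(0)
ts = np.sort(rng.uniform(0.0, 1.0, 500))
xs = 50 + 30.0 * ts + rng.normal(0, 0.5, 500)
ys = np.full(500, 40.0) + rng.normal(0, 0.5, 500)

# Coarse grid search over candidate velocities; a real pipeline would use
# a gradient-based optimizer instead.
candidates = [(vx, 0.0) for vx in np.linspace(0.0, 60.0, 31)]
vx_best, vy_best = max(
    candidates, key=lambda v: warped_image_variance(xs, ys, ts, v[0], v[1], 240, 180)
)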

Examining Committee:

  Chair:                 Dr. Yiannis Aloimonos
  Dean's Representative: Dr. Timothy Horiuchi
  Members:               Dr. Cornelia Fermuller
                         Dr. Ramani Duraiswami
                         Dr. Matthias Zwicker
Bio

Anton Mitrokhin is a Ph.D. candidate at the University of Maryland, advised by Prof. Yiannis Aloimonos and Dr. Cornelia Fermuller. He received his B.S. in Computer Science from the Moscow Institute of Physics and Technology in 2016. His research focuses on high-speed motion estimation, motion segmentation, and tracking with neuromorphic event-based cameras. He has interned at Intel and Nvidia, and his thesis work was awarded the Prophesee fellowship in 2019.

This talk is organized by Tom Hurst