https://umd.zoom.us/j/
Abstract:
In computer vision, camera egomotion is typically estimated with visual odometry techniques that rely on extracting features from a sequence of images and computing the optical flow. This usually requires point-to-point correspondences between consecutive frames, which can be costly to compute, and their varying accuracy greatly affects the quality of the estimated motion. Attempts have been made to bypass the difficulties arising from the correspondence problem by adopting line features and by fusing other sensors (event cameras, IMUs) to improve performance, but many of these approaches still rely heavily on feature detectors. If the camera observes a straight line as it moves, the image of that line sweeps out a surface; this is a ruled surface, and analyzing its shape gives information about the egomotion. Inspired by event cameras' capabilities in edge detection, this research presents a novel algorithm that reconstructs 3D scenes with ruled surfaces while simultaneously computing the camera egomotion. By constraining the egomotion with inertial measurements from an onboard IMU sensor, the dimensionality of the solution space is greatly reduced.
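To make the geometry concrete (the notation below is illustrative, not necessarily that of the thesis): a ruled surface is a surface swept out by a moving straight line, and in standard parametric form it can be written as

\[
  S(t, s) \;=\; \mathbf{c}(t) + s\,\mathbf{d}(t), \qquad s \in \mathbb{R},
\]

where, at each time t, the ruling through the point \(\mathbf{c}(t)\) with direction \(\mathbf{d}(t)\) is the line being swept. The camera's rotation and translation determine how \(\mathbf{c}(t)\) and \(\mathbf{d}(t)\) evolve over time, and an IMU's gyroscope readings constrain the rotational component, which is one way the dimensionality of the solution space can be reduced as described above.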
Bio:
Chenqi Zhu is a graduating master's student in computer science at the University of Maryland, College Park, where he is advised by Professor Yiannis Aloimonos. He received his Bachelor of Science in Computer Science, with a minor in Mathematics, in 2023 as part of the combined BS/MS program at the University of Maryland. Chenqi's research in computer vision focuses on theoretical foundations, leveraging his strong mathematical background to tackle complex challenges. He has worked on normal flow and egomotion estimation with event cameras and IMU sensors, culminating in his master's thesis, Inertially Constrained Ruled Surfaces for Egomotion Estimation. Following graduation, Chenqi will join Panasonic Connect, contributing to their AI Research & Development team.