Embracing minimalism is paramount to unlocking the full potential of tiny robots and enhancing their perception systems. By streamlining their design and functionality, these compact robots can maximize efficiency and overcome the limitations imposed by their size. In this work, I propose a Minimal Perception framework that enables onboard autonomy in resource-constrained robots at scales (as small as a credit card) that were previously not possible. Minimal perception refers to a simplified, efficient, and selective approach, from both hardware and software perspectives, to gathering and processing sensory information. Adopting a task-centric perspective allows further refinement of this minimalist framework for tiny robots. For instance, animals such as jumping spiders, measuring just half an inch in length, demonstrate minimal perception through sparse vision facilitated by multiple eyes, which enables them to efficiently perceive their surroundings and capture prey with remarkable agility. The contributions of this work can be summarized as follows:
- Utilizing minimal quantities, such as the previously untapped uncertainty in optical flow, to enable autonomous drone navigation, static and dynamic obstacle avoidance, and flight through unknown gaps.
- Leveraging the principles of interactive perception to segment novel objects in cluttered environments, eliminating the reliance on neural-network training for object recognition.
- Introducing WorldGen, a generative simulator that can produce countless cities and petabytes of high-quality annotated data, minimizing the need for laborious 3D modeling and annotation and unlocking unprecedented possibilities for perception and autonomy tasks.
- Proposing a method to predict metric dense depth maps in unseen, out-of-domain environments by fusing information from a standard RGB camera with a sparse 64-pixel depth sensor.
- Demonstrating the autonomous capabilities of tiny robots on both aerial and ground platforms: (a) an autonomous car smaller than a credit card (70 mm), and (b) a bee drone 120 mm in length, showcasing navigation, depth perception in all four cardinal directions, and effective avoidance of both static and dynamic obstacles.
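As a loose illustration of the first contribution, one common way to obtain a per-pixel uncertainty for optical flow (the talk does not specify the exact formulation, so this is only an assumed sketch) is forward-backward consistency: warp the backward flow by the forward flow and measure the round-trip error, with large error marking unreliable flow that a navigation policy could discount.

```python
import numpy as np

def flow_uncertainty(forward_flow, backward_flow):
    """Hypothetical flow-uncertainty sketch via forward-backward consistency.

    forward_flow, backward_flow: (H, W, 2) arrays of (dx, dy) per pixel.
    Returns an (H, W) map of round-trip error; higher = less reliable flow.
    """
    h, w = forward_flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination pixel of each source pixel under the forward flow.
    xd = np.clip(np.round(xs + forward_flow[..., 0]).astype(int), 0, w - 1)
    yd = np.clip(np.round(ys + forward_flow[..., 1]).astype(int), 0, h - 1)
    # Round trip: forward displacement plus backward flow sampled at the target.
    round_trip = forward_flow + backward_flow[yd, xd]
    return np.linalg.norm(round_trip, axis=-1)
```

For perfectly consistent flow fields (backward flow exactly undoes the forward flow) the map is zero everywhere; occlusions, dynamic objects, and texture-poor regions show up as large values.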
Chahat Deep Singh is a fifth-year Ph.D. candidate in the Perception and Robotics Group (PRG), advised by Professor Yiannis Aloimonos and Associate Research Scientist Cornelia Fermüller. He graduated with a Master's in Robotics from the University of Maryland in 2018 and later joined the Department of Computer Science as a Ph.D. student. Singh's research focuses on developing bio-inspired minimalist cognitive architectures to enable onboard autonomy on robots as small as a credit card. He was awarded the Ann G. Wylie Fellowship for outstanding dissertation for 2022-2023, the Future Faculty Fellowship for 2022-2023, and UMD's Dean's Fellowship in 2020. Recently, his work has been featured by BBC, IEEE Spectrum, Voice of America, NVIDIA, and Futurism, among others. He has served as the PRG Seminar Series organizer since 2018 and as a Maryland Robotics Center Student Ambassador from 2021 to 2023. Chahat is also a reviewer for RA-L, T-ASE, CVPR, ICRA, IROS, ICCV, and RSS, among other top journals and conferences. He will join UMD as a Postdoctoral Associate at the Maryland Robotics Center in July 2023 under the supervision of Prof. Yiannis Aloimonos and Prof. Pratap Tokekar.
For more, please visit http://chahatdeep.github.io/.