Machine learning has revolutionized many subfields of robotics, from visual perception to task planning. However, the fundamental challenge of low-level motor control for object manipulation from raw sensory observations remains unresolved, primarily due to the lack of robot state and action data recorded during manipulation. This issue is particularly pronounced in tasks that require multi-finger coordination and fine tactile sensing.
Addressing this data problem is essential, as many modalities of robotic data, such as tactile and proprioceptive signals, are not readily available online. The key scientific questions in this domain are (i) how to collect data, (ii) what data to collect, and (iii) how to learn effectively from such data.
In this talk, I will (i) introduce a new paradigm for data collection based on the design of novel interaction interfaces, (ii) show how identifying the key dimensions along which to scale data can enable human-level robotic grasping, and (iii) present methods and insights for learning efficiently from heterogeneous robotic data.