PhD Defense: Learning Robot Policies from Intuitive Human Guidance
Amisha Bhaskar
IRB 4105 https://umd.zoom.us/j/6319739372?pwd=1aDsXO4mgX91LPQotgBGgDCCNBsvav.1&omn=99254268313&jst=2
Tuesday, June 23, 2026, 12:00-1:30 pm
Abstract

Robots are increasingly being deployed in everyday environments, assisting with household tasks, warehouse logistics, and healthcare applications such as rehabilitation and assistive feeding. However, learning reliable manipulation policies for these settings remains a fundamental challenge. Classical planning and control methods rely on accurate models that are often unavailable in diverse real-world environments. Imitation learning typically requires large collections of expert demonstrations, while reinforcement learning often depends on extensive interaction data and carefully designed reward functions. This dissertation studies how robot learning can be made more data-efficient and practical by leveraging structured human guidance, multimodal feedback, and compact trajectory representations.

The dissertation investigates several complementary forms of structure for robot learning. First, it develops methods that transform 2D human trajectory sketches into feasible 3D robot motions, enabling intuitive and scalable forms of demonstration. Second, it introduces hierarchical learning frameworks that decompose long-horizon manipulation into planning, parameter selection, and low-level execution, while also learning when to invoke classical motion planning versus learned policies. Third, it explores preference-based supervision through contextual comparisons, enabling adaptive feedback that improves as robot capabilities evolve. The dissertation further studies multisensory policy learning, integrating vision, touch, audio, and proprioception to capture rich action distributions while supporting real-time control. Finally, it investigates structured trajectory parameterizations based on control points, showing that compact action representations can accelerate learning and enable strong performance with smaller models.

Across healthcare, household, and warehouse-inspired manipulation tasks, these methods improve learning efficiency, reduce the burden of demonstrations and supervision, and support robust policy learning in complex environments. Taken together, this dissertation shows that incorporating structure into both supervision and policy representation provides a practical path toward scalable robot learning for real-world manipulation.

Bio

Amisha Bhaskar is a PhD student in Computer Science at the University of Maryland, College Park, whose research focuses on robot learning for complex manipulation. She studies how robots can learn from intuitive and scalable forms of human guidance, such as sketches, demonstrations, preferences, and multimodal sensory feedback, with the goal of making robot learning more efficient, robust, and practical in real-world settings. Her work brings together imitation learning, reinforcement learning, generative policies, and multisensory learning across applications in healthcare, household assistance, and warehouse robotics. She has collaborated with Mitsubishi Electric Research Laboratories and Amazon Robotics, and her research has been recognized through best paper awards at ICRA workshops, university and media features, a top-10 Amazon thesis presentation, and her role as an Entrepreneurial Lead in the NSF I-Corps program.

Examining Committee Chair: Dr. Pratap Tokekar

Dean's Representative: Dr. Shuvra S. Bhattacharyya

Members:

Dr. Dinesh Manocha

Dr. Ruohan Gao

Dr. Tapomayukh Bhattacharjee

This talk is organized by Migo Gui