Human-Interpretable Multi-Agent Reinforcement Learning
Tuesday, March 28, 2023, 4:00-5:00 pm
Abstract

Advances in AI enable systems that collaborate with humans on a variety of tasks. However, many challenges limit their deployment in complex, real-world settings. Performant agents often rely on deep neural networks, which are not considered human-interpretable. Furthermore, agents may act in ways that are unintuitive to people, further hindering their applicability. We propose to address these problems by learning interpretable and more human-like policies. By constructing interpretable policies, we enable domain experts to inspect the resulting behavior before deployment. By understanding what constitutes human likeness, we can adjust agent behavior to better align it with people’s expectations.

Bio

Stephanie Milani is a fourth-year PhD student in the Machine Learning Department at Carnegie Mellon University, where she is advised by Fei Fang. Her research focuses on the deployment challenges of multi-agent reinforcement learning, including interpretability, human understanding of agent behavior, and environment modeling. She also co-organized the MineRL BASALT and MineRL Diamond competitions on learning from people and sample-efficient reinforcement learning, respectively. Previously, she graduated from UMBC with a B.S. in Computer Science and a B.A. in Psychology. Her personal website is https://stephmilani.github.io/

This talk is organized by Saptarashmi Bandyopadhyay.