Advances in AI have enabled systems that collaborate with humans on a variety of tasks. However, many challenges limit their deployment in complex, real-world settings. Performant agents often rely on deep neural networks, which are not considered human-interpretable. Furthermore, agents may act in ways that are unintuitive to people, further hindering their applicability. We propose to address these problems by learning interpretable and more human-like policies. By constructing interpretable policies, we enable domain experts to inspect the resulting behavior before deployment. By understanding what constitutes human likeness, we can adjust agent behavior to better align it with people's expectations.
Stephanie Milani is a fourth-year PhD student in the Machine Learning Department at Carnegie Mellon University, where she is advised by Fei Fang. Her research focuses on the deployment challenges of multi-agent reinforcement learning, including interpretability, human understanding of agent behavior, and environment modeling. She also co-organized the MineRL BASALT and MineRL Diamond competitions, on learning from people and sample-efficient reinforcement learning, respectively. Previously, she graduated from UMBC with a B.S. in Computer Science and a B.A. in Psychology. Her personal website is https://stephmilani.github.io/.