Leveraging Social Theories to Enhance Human-AI Interaction
Harmanpreet Kaur
Zoom Link: https://umd.zoom.us/j/92977540316?pwd=NVF2WTc5SS9RSjFDOGlzcENKZnNxQT09
Thursday, March 30, 2023, 11:00 am-12:00 pm
Abstract

Human-AI partnerships are increasingly commonplace. Yet, systems that rely on these partnerships often fail to capture people's dynamic needs or to explain complex AI reasoning and outputs. The resulting socio-technical gap has led to harmful outcomes, such as the propagation of biases against marginalized populations and missed edge cases in sensitive domains. My work is grounded in the belief that for human-AI interaction to be effective and safe, technical development in AI must come in concert with an understanding of human-centric cognitive, social, and organizational phenomena. Using human-AI interaction with ML-based decision-support systems as a case study, in this talk I will discuss my work explaining why interpretability tools do not work in practice: these tools exacerbate the bounded nature of human rationality, encouraging people to apply cognitive and social heuristics. Such heuristics serve as mental shortcuts that speed up decision-making by letting people avoid carefully reasoning about the information presented. Looking ahead, I will share my research agenda, which incorporates social theories to design human-AI systems that not only take advantage of the complementarity between people and AI but also account for the incompatibilities in how (much) they understand each other.

Bio

Harman Kaur is a PhD candidate in both the Department of Computer Science and the School of Information at the University of Michigan, where she is advised by Eric Gilbert and Cliff Lampe. Her research interests lie in human-AI collaboration and interpretable ML. Specifically, she designs and evaluates human-AI systems so that they effectively incorporate what people and AI are each good at, while also mitigating harms by accounting for the incompatibilities between the two. She has published several papers at top-tier human-computer interaction venues, including CHI, CSCW, IUI, UIST, and FAccT. She has completed several internships at Microsoft Research and the Allen Institute for AI, and is a recipient of a Google PhD Fellowship. Prior to Michigan, Harman received a BS in Computer Science from the University of Minnesota.

This talk is organized by Richa Mathur.