Human-centered Explainable AI: Expanding Explainable & Responsible AI
Upol Ehsan
IRB 4105 or https://umd.zoom.us/j/95853135696?pwd=VVEwMVpxeElXeEw0ckVlSWNOMVhXdz09
Thursday, April 25, 2024, 1:00-2:00 pm
Abstract

If AI systems are going to inform consequential decisions such as deciding whether you should get a loan, they must be explainable to everyone, not just software engineers. While there has been commendable progress in Explainable AI (XAI) around “opening” the AI’s black box, there is an overlooked insight: who opens the black box matters just as much as opening it. As a result, many popular XAI interventions are ineffective and even harmful in real-world settings. In this talk, I will address this intellectual blind spot by introducing and operationalizing Human-centered XAI (HCXAI), a holistic sociotechnical paradigm for explainable and responsible AI. With a focus on non-AI experts, the talk will take the audience on a journey involving three “turns”: to the machine, the human, and the sociotechnical. I will (a) exhibit how Social Transparency can encode socio-organizational context to augment AI explainability without changing the internal model, (b) demonstrate how we can make Explainable AI systems seamful, and (c) illustrate how algorithmic harms persist in the algorithm’s afterlife and what we should do about it. I will share contributions of my work on (1) expanding the XAI design space, (2) deepening our knowledge of “who” the humans are and their explainability needs, and (3) enabling resourceful ways to do Responsible AI (RAI). I will discuss the impact of my work on informing Responsible AI policies and industry adoption, as well as pioneering the domain of HCXAI and nurturing a vibrant research community. Finally, I will cover my future research agenda on designing for recourse in AI, minimizing the sociotechnical gap in Large Language Model-powered healthcare technology, and imprint-aware algorithmic impact assessments. The work presented in this talk serves my vision of creating a future where anyone, regardless of their background, can interact with AI systems in an explainable, accountable, and dignified manner.

Bio

Upol Ehsan makes AI systems responsible and explainable so that people who aren't at the table don't end up on the menu. He is a Doctoral Candidate at Georgia Tech and an affiliate at Data & Society. His work has pioneered the domain of Human-centered Explainable AI (HCXAI), receiving awards at top-tier venues like ACM CHI, HCII, and AAAI AIES. Serving on multiple government-level AI task forces, his work is regularly covered in major media outlets such as MIT Tech Review, Vice, and VentureBeat. His work has informed Responsible AI policies at multiple organizations like the United Nations, Mozilla Foundation, and Center for Democracy and Society. Throughout his PhD, he has researched at Microsoft Research (FATE), IBM Research, and Google. His work is generously supported by the NSF, DARPA, A2I, IBM, and the World Bank, along with prestigious fellowships like the Prime Minister’s Innovator Award. He led the editorial efforts for the inaugural ACM TiiS Journal issue on HCXAI and has spearheaded the organization of the HCXAI workshops at CHI every year since 2021. Outside research, he has led high-performing teams in management consulting and serves on the boards of startups. He is also an advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor.

This talk is organized by Samuel Malede Zewdu