PhD Defense: Towards Effective and Inclusive AI: Aligning AI Systems with User Needs and Stakeholder Values Across Diverse Contexts
Yang Cao
Hornbake 0108
Wednesday, March 27, 2024, 11:30 am-1:00 pm
Abstract
Inspired by the Turing test, a long line of AI research has focused on technical improvements on tasks thought to require human-like comprehension. However, this focus has often produced models with impressive technical capabilities but uncertain real-world applicability. Despite advances in large pre-trained models, we still see failures that harm marginalized groups and shortcomings when these models are applied to specific applications.
A major problem here is the detached model development process: these models are designed, developed, and evaluated with limited consideration of their users and stakeholders. My dissertation addresses this detachment by examining how artificial intelligence (AI) systems can be more effectively aligned with the needs of users and the values of stakeholders across diverse contexts. This work aims to close the gap between the current state of AI technology and its meaningful application in the lives of real-world stakeholders.
Bio

Yang (Trista) Cao is a Ph.D. candidate in Computer Science at the University of Maryland, working with Prof. Hal Daumé III. She is a member of the CLIP Lab. Her research focuses on trustworthy and fair natural language processing (NLP) systems. Currently, she is working on human-centered NLP and visual question answering (VQA) for Blind people.

This talk is organized by Migo Gui