What does accessible AI look like in Augmentative and Alternative Communication (AAC)?
Wednesday, March 11, 2026, 11:00 am-12:00 pm
Abstract
AI language models are transforming how people communicate, offering unprecedented speed and efficiency. For users of Augmentative and Alternative Communication (AAC), who rely on typed messages to speak, these technologies hold particular promise but also pose unique challenges.

In this talk, I explore how AI can support, reshape, and sometimes complicate communication for AAC users. What happens when systems begin to anticipate or generate what someone might say? How do we ensure that AI supports expression without overriding the user’s voice? Drawing on human-centered design, participatory research, and systems developed in our lab, I share examples of emerging AI-supported AAC tools and design explorations created with people with speech and motor disabilities. These projects surface key tensions between speed and control, assistance and authorship, automation and identity.

I conclude by outlining open design directions for building AI systems that work with AAC users rather than simply for them, prioritizing transparency, adaptability, and user steerability. Ultimately, the goal is not only more efficient communication, but technologies that expand people’s ability to express themselves on their own terms.
Bio
Stephanie Valencia is an Assistant Professor in the College of Information at the University of Maryland, where she teaches and conducts research on Human-Computer Interaction, Accessible Computer-mediated Communication, and User Experience and Design. Stephanie is dedicated to promoting equitable access to information and accessible technology, such as communication systems for people with speech and motor disabilities. She received her Ph.D. in Human-Computer Interaction from Carnegie Mellon University and her B.Sc. in Biomedical Engineering from EIA and CES university.
This talk is organized by Wei Ai