Synthesizing Syntactic Facial Expressions in ASL Animations
Hernisa Kacorri - University of Maryland
Wednesday, October 4, 2017, 11:00 am-12:00 pm
Abstract
Deaf adults who use sign language as a primary means of communication tend to have low literacy skills in written languages due to limited spoken language exposure and other educational factors. Technology that automatically synthesizes linguistically accurate and natural-looking sign language animations can increase information accessibility for this population. State-of-the-art sign language animation focuses mostly on the accuracy of manual signs rather than on facial expressions. I'll describe work we've done to synthesize syntactic ASL facial expressions, which are grammatically required and essential to the meaning of ASL animations.
Bio

Hernisa Kacorri is an assistant professor in the iSchool (College of Information Studies) at the University of Maryland, College Park. She received her Ph.D. in computer science in 2016 from The Graduate Center at the City University of New York, and has conducted research at the University of Athens, IBM Research-Tokyo, Lawrence Berkeley National Lab, and Carnegie Mellon University. Her research focuses on data-driven technologies that address human challenges faced due to health or disability, with an emphasis on rigorous, user-based experimental methodologies to assess impact. Hernisa is a recipient of a Mina Rees Dissertation Fellowship in the Sciences, an ACM ASSETS best paper finalist, and a CHI honorable mention award. She has been recognized by the Rising Stars in EECS program of CMU/MIT.

This talk is organized by Marine Carpuat.