PhD Defense: Affective Human Motion Detection and Synthesis
Uttaran Bhattacharya
Wednesday, September 21, 2022, 11:00 am-1:00 pm
Abstract
Human emotion perception is an integral component of intelligent systems being designed for a wide range of socio-cultural applications, including video content understanding such as highlight detection, behavior prediction, social robotics, medical therapy and rehabilitation, and animation of virtual humans. These emotions can be perceived from various cues or modalities, including faces, audio, speech, and body expressions. Studies in affective computing indicate that emotions perceived from body expressions are extremely consistent across observers because humans tend to have less conscious control over their body expressions. Our work focuses on this aspect of emotion perception. Our goals include developing predictive methods for automated emotion recognition from body expressions, and building generative methods for synthesizing digital characters with appropriate affective body expressions.

We present two approaches for designing and training partially supervised methods for emotion recognition from body expressions, specifically gaits. We leverage existing gait datasets annotated with emotions to generate large-scale synthetic gaits corresponding to the emotion labels. We also utilize large-scale unlabeled gait datasets together with smaller annotated gait datasets to learn meaningful latent representations for emotion recognition. We design an autoencoder coupled with a classifier to learn latent representations for simultaneously reconstructing all input gaits and classifying the labeled gaits into emotion classes.
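
As a rough illustration of this second approach (a minimal sketch, not the thesis implementation; all module names and dimensions are assumptions), the following shows an autoencoder whose latent code also feeds an emotion classifier, so that unlabeled gaits contribute a reconstruction loss while labeled gaits add a classification loss:

    # Hypothetical sketch: gait autoencoder with a latent-space emotion classifier.
    import torch
    import torch.nn as nn

    class GaitAutoencoderClassifier(nn.Module):
        def __init__(self, pose_dim=48, hidden_dim=128, latent_dim=32, num_emotions=4):
            super().__init__()
            self.encoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
            self.to_latent = nn.Linear(hidden_dim, latent_dim)
            self.decoder = nn.GRU(latent_dim, hidden_dim, batch_first=True)
            self.to_pose = nn.Linear(hidden_dim, pose_dim)
            self.classifier = nn.Linear(latent_dim, num_emotions)

        def forward(self, gaits):                      # gaits: (batch, frames, pose_dim)
            _, h = self.encoder(gaits)                 # final hidden state per sequence
            z = self.to_latent(h.squeeze(0))           # latent code
            z_seq = z.unsqueeze(1).expand(-1, gaits.size(1), -1)
            dec, _ = self.decoder(z_seq)
            recon = self.to_pose(dec)                  # reconstructed pose sequence
            logits = self.classifier(z)                # emotion logits from the latent code
            return recon, logits

    def semi_supervised_loss(model, labeled, labels, unlabeled, alpha=1.0):
        # Reconstruction on all gaits; cross-entropy only on the labeled subset.
        recon_l, logits = model(labeled)
        recon_u, _ = model(unlabeled)
        loss_recon = nn.functional.mse_loss(recon_l, labeled) + \
                     nn.functional.mse_loss(recon_u, unlabeled)
        loss_cls = nn.functional.cross_entropy(logits, labels)
        return loss_recon + alpha * loss_cls

The key design choice this sketch captures is that a single latent representation is trained to serve both objectives, so the large unlabeled corpus shapes the space in which the smaller labeled corpus is classified.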

We also present novel generative methods to synthesize emotionally expressive body motion, specifically gaits and gestures. The first method performs asynchronous generation, where we synthesize a single modality of the digital characters' affective expressions. We design an autoregressive network that takes in a history of the characters' pose sequences and the intended emotions, and generates future pose sequences with the desired affective expressions. The second method performs synchronous generation, where the affective content of two modalities, such as body gestures and speech, must be synchronized. Our approach uses machine translation techniques to translate from speech to body gestures, and adversarial discrimination to distinguish original from synthesized gestures in terms of affective expression, ultimately producing state-of-the-art affective body gestures synchronized with speech. The final method extends synchronous generation to three modalities by synthesizing both facial expressions and body gestures synchronized with speech. To the best of our knowledge, this is the first multimodal synthesis method that can simultaneously incorporate emotional expressions in more than one modality, and it leverages affordable, consumer-grade devices such as RGB video cameras to enable democratized usage.
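
To make the asynchronous (single-modality) setting concrete, here is a minimal sketch, under assumed names and dimensions rather than the thesis architecture, of an autoregressive network that conditions a pose-history encoder on a target emotion and rolls predictions forward to produce a future pose sequence:

    # Hypothetical sketch: emotion-conditioned autoregressive pose generation.
    import torch
    import torch.nn as nn

    class EmotionConditionedPoseGenerator(nn.Module):
        def __init__(self, pose_dim=48, emotion_dim=4, hidden_dim=256):
            super().__init__()
            self.rnn = nn.GRU(pose_dim + emotion_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, pose_dim)

        def forward(self, pose_history, emotion_onehot):
            # pose_history: (batch, frames, pose_dim); emotion_onehot: (batch, emotion_dim)
            emo = emotion_onehot.unsqueeze(1).expand(-1, pose_history.size(1), -1)
            out, _ = self.rnn(torch.cat([pose_history, emo], dim=-1))
            return self.head(out[:, -1])               # predicted next pose

        @torch.no_grad()
        def generate(self, pose_history, emotion_onehot, steps=60):
            poses = [pose_history]
            for _ in range(steps):
                next_pose = self.forward(torch.cat(poses, dim=1), emotion_onehot)
                poses.append(next_pose.unsqueeze(1))   # feed the prediction back in
            return torch.cat(poses[1:], dim=1)         # generated future sequence

The synchronous methods add further components not shown here, notably a speech encoder and an adversarial discriminator that judges whether a gesture sequence carries the intended affect.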

Examining Committee:
Chair: Dr. Dinesh Manocha
Dean's Representative: Dr. Jae Shim
Members: Dr. Ming Lin, Dr. Huaishu Peng, Dr. Aniket Bera, Dr. Viswanathan Swaminathan (Adobe Research)
Bio

Uttaran Bhattacharya joined the Ph.D. program in Computer Science at the University of Maryland, College Park, in August 2018. He is advised by Dr. Dinesh Manocha in the GAMMA Lab, and his research focuses on affective human motion recognition and synthesis. He has worked on automated techniques to detect emotions from 3D human body expressions such as gaits and gestures, and to generate animated 3D body expressions corresponding to different emotions in a variety of social contexts. His work has gained particular recognition in the fields of VR and multimedia, with a best paper award at the IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR) 2021 and a best paper nomination at the ACM International Conference on Multimedia (ACMMM) 2021. Throughout his Ph.D., Uttaran's work has been supported in part by two fellowships: the Dean's Fellowship from the University of Maryland in 2018 and the Adobe Research Fellowship in 2021.

This talk is organized by Tom Hurst.