Understanding Human-Centric Properties of Deep AI Models
Bolei Zhou
https://umd.zoom.us/j/94543765116?pwd=clY3MVV5Z1g4T2xpdnJMdjFiMFhYdz09
Wednesday, March 24, 2021, 1:00-2:00 pm
Abstract

Over the past few years, data-driven AI models such as deep networks have made significant progress in a wide range of real-world applications, from self-driving cars to protein structure prediction. In order to deploy these models in high-stakes applications such as self-driving and medical diagnosis, it is essential to ensure the model output is interpretable and trustworthy to humans. Meanwhile, humans should be able to quickly examine the models and identify potential biases and blind spots. Such interpretable human-AI interaction is crucial for building reliable collaboration between humans and intelligent machines. In this talk, I will present our effort to examine and improve deep AI models' human-centric properties beyond performance, such as explainability, steerability, generalization, and fairness.

First, I will introduce Class Activation Mapping, a simple yet effective approach that leverages the internal activations of a deep network to explain its classification output. Then, I will talk about improving the steerability of deep generative models to facilitate human-in-the-loop visual content creation. Lastly, I will briefly discuss improving the generalization of self-driving agents through the procedural generation of reinforcement learning environments. I will conclude my talk with ongoing and future work toward effective human-AI interaction and its broad applications to machine perception and autonomy.
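
For concreteness, here is a minimal sketch of the Class Activation Mapping idea in PyTorch: the linear classifier's weights for a target class are reused to weight the final convolutional feature maps, producing a spatial heatmap of class evidence. The ResNet-18 backbone, hook, and normalization below are illustrative assumptions, not the speaker's reference implementation.

import torch
from torchvision import models

# Illustrative backbone; CAM applies to any network ending in
# global average pooling followed by a linear classifier.
model = models.resnet18(pretrained=True).eval()

features = {}
def save_activation(module, inputs, output):
    features["conv"] = output  # (1, C, H, W) final conv activations

model.layer4.register_forward_hook(save_activation)

def class_activation_map(img, target_class):
    """Weight the final conv feature maps by the classifier weights
    for target_class and sum over channels."""
    with torch.no_grad():
        model(img)                                # forward pass fills features["conv"]
        fmap = features["conv"].squeeze(0)        # (C, H, W)
        weights = model.fc.weight[target_class]   # (C,) weights for the target class
        cam = torch.einsum("c,chw->hw", weights, fmap)
        cam -= cam.min()                          # normalize to [0, 1] for display
        cam /= cam.max() + 1e-8
    return cam

img = torch.randn(1, 3, 224, 224)                 # stand-in for a preprocessed image
heatmap = class_activation_map(img, target_class=243)
print(heatmap.shape)                              # 7x7 for ResNet-18 at 224x224 input

Upsampling the coarse heatmap to the input resolution and overlaying it on the image highlights the regions the network relied on for its prediction.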

Bio

Bolei Zhou is an Assistant Professor in the Information Engineering Department at the Chinese University of Hong Kong. He earned his Ph.D. in Computer Science from the Massachusetts Institute of Technology in June 2018. His research interest lies at the intersection of machine perception and autonomy, focusing on enabling interpretable human-AI interactions. He received the MIT Technology Review Innovators Under 35 Asia-Pacific Award, a Facebook Fellowship, a Microsoft Research Asia Fellowship, and an MIT Greater China Fellowship. His research has been featured in media outlets such as TechCrunch, Quartz, and MIT News. More about his research is at http://bzhou.ie.cuhk.edu.hk/.

This talk is organized by Richa Mathur