Fairness and Privacy in AI Applied to Healthcare
Philippe Burlina and Yinzhi Cao - Johns Hopkins University
Monday, May 10, 2021, 11:00 am-12:00 pm
Abstract

Current AI frameworks using deep learning (DL) have met or exceeded human performance on tasks such as image classification, object recognition (e.g., ImageNet), and facial expression recognition. Medical AI is also approaching clinicians’ performance on diagnostic tasks. This success masks real issues in AI assurance that will impede the deployment of such algorithms in real-life production systems. Two of the most critical concerns affecting AI assurance are privacy and bias. Privacy leakage can invalidate HIPAA or HITECH compliance, barring the use of DL diagnostic models in healthcare. Additionally, the performance of DL systems depends strongly on the availability of large, diverse, and representative annotated training datasets, which are often unbalanced with respect to factors such as gender, ethnicity, and/or disease type, resulting in diagnostic bias. In this talk, we will present our recent progress on algorithms that address bias, on approaches for assessing the risk of privacy/membership attacks against existing algorithms, and on ways to defend effectively against such attacks.

Passcode: FairAI-T8
Bios
Philippe Burlina is an associate research professor in Computer Science at Johns Hopkins University. He earned his M.S. and Ph.D. in Electrical Engineering at the University of Maryland, College Park, and a Diplôme d’Ingénieur at the Université de Technologie de Compiègne in France. His research focuses on computer vision and machine learning challenges that impact autonomy and healthcare, with emphasis on problems including fairness in AI, robustness and domain generalization, low-shot and zero-shot learning, anomaly detection, and semantic approaches to generative modeling.

Yinzhi Cao is an assistant professor in Computer Science at Johns Hopkins University. He earned his Ph.D. in Computer Science at Northwestern University and worked as a postdoc at Columbia University. Before that, he obtained his B.E. in Electronics Engineering at Tsinghua University in China. His research focuses on the security and privacy of the Web, smartphones, and machine learning. His work has been widely covered by over 30 media outlets, including NSF Science Now (Episode 38), CCTV News, IEEE Spectrum, Yahoo! News, and ScienceDaily. He received best paper awards at SOSP’17 and IEEE CNS’15, and is a recipient of the 2017 Amazon ARA award and the 2021 NSF CAREER Award.

This talk is organized by Leonidas Tsepenekas.