Current AI frameworks based on deep learning (DL) have met or exceeded human capabilities on tasks such as image classification, object recognition (e.g., on ImageNet), and facial-expression recognition. Medical AI is also approaching clinicians' performance on diagnostic tasks. This success masks real issues in AI assurance that will impede the deployment of such algorithms in real-life production systems. Two of the most critical concerns affecting AI assurance are privacy and bias. Privacy leakage can violate HIPAA or HITECH compliance, precluding the use of DL diagnostic models in healthcare. Additionally, DL systems' performance depends strongly on the availability of large, diverse, and representative annotated training datasets, which are often unbalanced with regard to factors such as gender, ethnicity, and/or disease type, resulting in diagnostic bias. In this talk, we will introduce our recent progress in developing algorithms to address bias, present approaches to assess the risk that existing algorithms face from privacy/membership-inference attacks, and propose ways to effectively defend against such privacy attacks.
Yinzhi Cao is an assistant professor of Computer Science at Johns Hopkins University. He earned his Ph.D. in Computer Science at Northwestern University and worked at Columbia University as a postdoc. Before that, he obtained his B.E. degree in Electronics Engineering at Tsinghua University in China. His research focuses mainly on the security and privacy of the Web, smartphones, and machine learning. His past work has been featured by more than 30 media outlets, including NSF Science Now (Episode 38), CCTV News, IEEE Spectrum, Yahoo! News, and ScienceDaily. He received best paper awards at SOSP '17 and IEEE CNS '15, and is a recipient of a 2017 Amazon Research Award and a 2021 NSF CAREER Award.