PhD Defense: Robust Learning under Distributional Shifts
Yogesh Balaji
Remote
Thursday, June 10, 2021, 2:00-4:00 pm
Abstract
Designing robust models is critical for the reliable deployment of artificial intelligence systems. Deep neural networks perform exceptionally well on test samples that are drawn from the same distribution as the training set. However, they perform poorly when there is a mismatch between training and test conditions, a phenomenon called distributional shift. For instance, the perception system of a self-driving car can produce erratic predictions when it encounters a test sample with a different illumination or weather condition not seen during training. Such inconsistencies are undesirable, and can potentially create life-threatening conditions when these models are deployed in safety-critical applications.

In this dissertation, we develop several techniques for effectively handling distributional shifts in deep learning systems.

In the first part of the dissertation, we focus on detecting out-of-distribution shifts, which can be used to flag outlier samples at test time. We develop a likelihood estimation framework based on deep generative models for this task. In the second part, we study the domain adaptation problem, where the objective is to tune neural network models to a specific target distribution of interest. We design novel adaptation algorithms based on variants of optimal transport distances and analyze them under various settings. Finally, we focus on designing robust learning algorithms that can generalize to novel test-time distributional shifts. In particular, we study two types of distributional shifts: covariate and adversarial shifts. All developed algorithms are rigorously evaluated on several benchmark computer vision datasets.
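
For concreteness, the sketch below illustrates the first of these directions: flagging out-of-distribution samples by thresholding a likelihood score. It is only a simplified stand-in for the deep-generative-model framework studied in the dissertation; a multivariate Gaussian plays the role of the density model, and the feature dimensions, data, and threshold are hypothetical.

```python
# A minimal sketch of likelihood-based out-of-distribution detection.
# Illustration only: the dissertation's framework relies on deep generative
# models, whereas a multivariate Gaussian stands in here as the density model,
# and the feature dimensions and threshold are hypothetical.
import numpy as np
from scipy.stats import multivariate_normal


def fit_density(train_features):
    """Fit a Gaussian density to in-distribution feature vectors."""
    mean = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False) + 1e-6 * np.eye(train_features.shape[1])
    return multivariate_normal(mean=mean, cov=cov)


def flag_outliers(density, features, threshold):
    """Flag samples whose log-likelihood under the density falls below the threshold."""
    return density.logpdf(features) < threshold


# Usage: random vectors stand in for network embeddings.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 16))           # in-distribution features
shifted = rng.normal(loc=4.0, size=(50, 16))  # distributionally shifted features

density = fit_density(train)
# Calibrate the threshold on in-distribution data (e.g., its 1st-percentile score).
threshold = np.quantile(density.logpdf(train), 0.01)

print(flag_outliers(density, shifted, threshold).mean(),
      "fraction of shifted samples flagged as OOD")
```

In practice, the likelihood would come from a learned deep generative model and the threshold would be calibrated on held-out in-distribution data.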

Examining Committee:

  Chair:                 Dr. Rama Chellappa
  Co-Chair:              Dr. Soheil Feizi
  Dean's Representative: Dr. Wojciech Czaja
  Members:               Dr. Abhinav Shrivastava
                         Dr. Tom Goldstein
Bio

Yogesh Balaji is a PhD student in the Department of Computer Science at the University of Maryland, where he works with Prof. Rama Chellappa and Prof. Soheil Feizi. His research interests lie at the intersection of machine learning and computer vision. In particular, his research aims to understand and improve the robustness of neural networks to shifts in input distributions.

This talk is organized by Tom Hurst