PhD Proposal: Robust Learning under Distributional Shifts
Yogesh Balaji
Tuesday, November 12, 2019, 3:00-5:00 pm
Robustness to shifts in the input distribution is crucial for the reliable deployment of deep neural networks. Unfortunately, neural networks are extremely sensitive to distributional shifts, making them unsuitable for safety-critical applications. For instance, the perception system of a self-driving car trained in sunny weather conditions fails to perform well in snow. In this talk, I will present several algorithms for robust learning of deep neural networks under input distributional shifts.

First, I will present results on likelihood computation using generative models, and show how these likelihood estimates can be used to quantify distributional shifts. Then, I will discuss robust learning algorithms for two broad classes of distributional shifts: naturally occurring covariate shifts and artificially constructed adversarial shifts. For adapting to covariate shifts, I will present techniques based on Generative Adversarial Networks (GANs) and regularization strategies. For adversarial shifts, I will discuss why current robust training algorithms generalize poorly, and propose a technique for improving their generalization.
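The idea of using generative-model likelihoods to quantify distributional shift can be illustrated with a minimal sketch. This is not the talk's actual method: it stands in a diagonal Gaussian for a learned generative model, and all names and thresholds below are illustrative assumptions. Inputs whose log-likelihood under the training-data density falls below a low percentile of training likelihoods are flagged as shifted.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(5000, 2))  # in-distribution data

# Fit a diagonal Gaussian as a stand-in for a learned generative model.
mu, sigma = train.mean(axis=0), train.std(axis=0)

def log_likelihood(x):
    # Per-sample log-density under the fitted diagonal Gaussian.
    z = (x - mu) / sigma
    return (-0.5 * z**2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)).sum(axis=1)

# Flag inputs below the 1st percentile of training log-likelihoods.
threshold = np.percentile(log_likelihood(train), 1)

def is_shifted(x):
    return log_likelihood(x) < threshold

in_dist = rng.normal(0.0, 1.0, size=(100, 2))   # same distribution as training
shifted = rng.normal(5.0, 1.0, size=(100, 2))   # simulated covariate shift

print(is_shifted(in_dist).mean())  # small fraction flagged
print(is_shifted(shifted).mean())  # most shifted samples flagged
```

In practice the density model would be a deep generative model (e.g., a flow or GAN-based estimator) over images or features rather than a Gaussian, but the decision rule, thresholding the model's likelihood, is the same.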

Examining Committee:
    Chair:    Dr. Rama Chellappa
    Dept rep: Dr. Soheil Feizi
    Members:  Dr. Abhinav Shrivastava
This talk is organized by Tom Hurst