PhD Proposal: Reliable deep learning: a robustness perspective
Sahil Singla
Monday, November 22, 2021, 1:00-3:00 pm
Abstract
Deep learning models achieve impressive accuracy on many benchmark tasks, sometimes surpassing human-level performance. However, it remains unclear whether the visual attributes these models use for their predictions are relevant to the desired object of interest or are merely spurious artifacts that happen to co-occur with it. A related limitation of these models is their vulnerability to adversarial perturbations: input perturbations imperceptible to a human that can nonetheless arbitrarily change the model's prediction. In this talk, I will present several algorithms for addressing these challenges.
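For context only, here is a minimal sketch of how such an adversarial perturbation can be constructed, using the standard fast gradient sign method (FGSM) rather than any method from this talk; `model`, `x`, `y`, and the budget `eps` are illustrative placeholders, and inputs are assumed to lie in [0, 1].

```python
import torch

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x by eps in the direction that maximally
    increases the classification loss. `model`, `x`, `y`, and `eps` are
    illustrative placeholders; inputs are assumed to lie in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # A small-eps perturbation, imperceptible to a human, that can
    # nonetheless flip the model's prediction.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```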

First, to make models provably robust against adversarial perturbations, we introduce computationally efficient methods for both the robustness certification and adversarial attack problems that use second-order (i.e., Hessian) information and provide state-of-the-art provable robustness guarantees. We also give verifiable conditions under which our method computes points on the decision boundary that are provably closest to the input.
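A minimal sketch of the underlying idea, under simplifying assumptions: for a binary classifier f with positive margin m = f(x), gradient g = ∇f(x), and a global bound K > 0 on the spectral norm of the Hessian, the margin stays positive for any l2 perturbation of norm r satisfying m - ||g||·r - K·r²/2 > 0, which yields a closed-form certified radius. The function names and the binary setting are illustrative simplifications, not the talk's exact formulation.

```python
import math
import torch

def curvature_certificate(f, x, K):
    """Certified l2 radius for a binary classifier f (f(x) > 0 => class 1),
    assuming the Hessian's spectral norm is bounded by K > 0 everywhere.
    Illustrative simplification of a second-order (Hessian-based) certificate."""
    x = x.clone().detach().requires_grad_(True)
    margin = f(x)                       # assumed scalar output
    (grad,) = torch.autograd.grad(margin, x)
    g = grad.norm().item()
    m = margin.item()
    if m <= 0:
        return 0.0                      # already misclassified: no certificate
    # Largest r with m - g*r - K*r^2/2 = 0 (positive root of the quadratic).
    return (-g + math.sqrt(g * g + 2.0 * K * m)) / K
```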

Second, we introduce Skew Orthogonal Convolution, a convolution layer with an orthogonal Jacobian matrix, which achieves state-of-the-art standard and provably robust accuracy for deep convolutional neural networks on both the CIFAR-10 and CIFAR-100 datasets. We also derive provable guarantees on the error of approximating an orthogonal Jacobian.
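A minimal sketch of the construction, under stated assumptions: with circular padding, the adjoint of a convolution is a convolution with the channel-transposed, spatially flipped kernel, so skew-symmetrizing the kernel makes the layer's Jacobian skew-symmetric, and the matrix exponential of a skew-symmetric operator is orthogonal. The exponential is approximated here by a truncated Taylor series; the kernel shape and number of terms are illustrative choices.

```python
import torch
import torch.nn.functional as F

def skew_orthogonal_conv(x, W, terms=6):
    """Approximately orthogonal convolution via the exponential of a
    skew-symmetric convolution (sketch of the Skew Orthogonal Convolution
    idea). W has shape (C, C, k, k); circular padding is assumed."""
    # Skew-symmetrize: A = W - W^T, where ^T swaps in/out channels
    # and flips both spatial dimensions (the conv adjoint's kernel).
    A = W - W.transpose(0, 1).flip(2, 3)
    pad = A.shape[-1] // 2

    def conv_A(z):
        z = F.pad(z, (pad, pad, pad, pad), mode="circular")
        return F.conv2d(z, A)

    # Truncated Taylor series: exp(A) x ~= sum_k A^k x / k!
    out, term = x, x
    for k in range(1, terms + 1):
        term = conv_A(term) / k
        out = out + term
    return out
```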

Third, we introduce a scalable framework to discover the spurious visual attributes a general model uses in its inferences and to localize them on a large number of images with minimal human supervision. Using this methodology, we introduce the Salient ImageNet dataset, which contains core and spurious masks for a large set of ImageNet samples. We assess several popular ImageNet models and show that they rely heavily on various spurious features in their predictions.
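A minimal sketch of how such masks can be used to probe a model's reliance on a region: corrupt only the masked (core or spurious) pixels with Gaussian noise and measure how often the prediction flips. The mask format, noise scale, and batch-of-one assumption are illustrative, not the paper's exact protocol.

```python
import torch

def region_sensitivity(model, x, mask, sigma=0.25, trials=10):
    """Fraction of noisy trials in which the predicted class changes when
    Gaussian noise is added only inside `mask` (1 = region, 0 = elsewhere).
    A high value for a spurious mask suggests heavy reliance on that
    spurious region. Assumes a single image x of shape (1, C, H, W)."""
    with torch.no_grad():
        clean_pred = model(x).argmax(dim=1).item()
        flips = 0
        for _ in range(trials):
            noisy = (x + sigma * torch.randn_like(x) * mask).clamp(0, 1)
            flips += int(model(noisy).argmax(dim=1).item() != clean_pred)
    return flips / trials
```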

Examining Committee:
Chair: Dr. Soheil Feizi
Department Representative: Dr. David Jacobs
Members: Dr. Tom Goldstein
Bio

Sahil Singla is a fourth-year PhD student in the Department of Computer Science at the University of Maryland, College Park, advised by Prof. Soheil Feizi. His research focuses on provable defenses against adversarial examples and on explaining the failure modes of deep neural networks.

This talk is organized by Tom Hurst