Deep Learning Foundations: Interpretability, Robustness and Generative Models
Friday, February 8, 2019, 11:00 am-12:00 pm

Deep learning models have demonstrated excellent empirical performance across a range of application domains. However, a satisfactory understanding of deep learning foundations continues to elude us. In this talk, I will present our recent results on some fundamental problems in supervised and unsupervised deep learning.


In unsupervised learning, I will first establish a principled connection between two modern generative approaches, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs). Leveraging this connection, I will show how sample likelihoods can be computed in GANs, facilitating their use in statistical inference applications. Next, I will explain why the standard Wasserstein distance can lead to undesired results when applied to mixture distributions with imbalanced mixture proportions. To resolve this issue, I will present a new distance measure, the normalized Wasserstein distance, and show its effectiveness in GANs, domain adaptation, adversarial clustering, and hypothesis testing.
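The mixture-proportion issue mentioned above can be seen in a minimal numpy/scipy sketch (illustrative only, not the talk's construction): two mixtures built from the *same* two Gaussian components, differing only in mixture proportions, end up far apart under the standard 1-Wasserstein distance, because the proportion mismatch forces mass to be transported between components.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def mixture(p, n=20_000):
    """Sample a two-component Gaussian mixture: N(0,1) with prob p, N(10,1) otherwise."""
    k = rng.binomial(1, p, n)
    return np.where(k == 1, rng.normal(0.0, 1.0, n), rng.normal(10.0, 1.0, n))

# Same components, different mixture proportions (0.9 vs 0.5).
d = wasserstein_distance(mixture(0.9), mixture(0.5))
print(f"W1 distance: {d:.2f}")  # roughly 0.4 * 10 = 4: mass 0.4 travels distance 10
```

Even though the component distributions match exactly, the distance is large purely because of the proportion mismatch; per the abstract, the normalized Wasserstein distance is designed to factor mixture proportions out of this comparison.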


In supervised learning, I will explain the impact of higher-order loss approximations and input features on two related problems: deep learning interpretation and robustness. In particular, by obtaining a closed-form formula for the Hessian matrix of a deep ReLU network, I will characterize the differences between first- and second-order interpretation methods. Finally, I will explain our recent results on attack-agnostic robustness certificates for a multi-label classification problem using deep ReLU networks. In particular, I will present a certificate that has a closed form, is differentiable, and is an order of magnitude faster to compute than existing methods, even for deep networks.
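As background for why second-order structure is subtle here (a numpy sketch under illustrative shapes and weights, not the talk's closed-form result): a ReLU network is piecewise linear in its input, so the Hessian of the *network output* with respect to the input vanishes almost everywhere, and a finite-difference check confirms it. Any curvature exploited by second-order interpretation methods must therefore come from elsewhere, e.g. the loss composed on top of the logits.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))

def f(x):
    return (W2 @ np.maximum(W1 @ x, 0.0)).item()

def grad_f(x):
    # Within a fixed activation pattern the network is linear,
    # so the input gradient is W2 @ diag(pattern) @ W1.
    pattern = (W1 @ x > 0).astype(float)
    return ((W2 * pattern) @ W1).ravel()

x = rng.normal(size=4)
print("first-order saliency:", np.round(grad_f(x), 3))

# Finite-difference input Hessian of f at x: (numerically) zero,
# since no ReLU activation boundary is crossed at this small step size.
eps, I = 1e-4, np.eye(4)
H = np.array([[(f(x + eps * I[i] + eps * I[j]) - f(x + eps * I[i])
                - f(x + eps * I[j]) + f(x)) / eps**2
               for j in range(4)] for i in range(4)])
print("max |H_ij|:", np.max(np.abs(H)))
```

This is why, as the abstract notes, a closed-form Hessian of the full network loss is the natural object for comparing first- and second-order interpretation methods.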


Soheil Feizi is an assistant professor in the Computer Science Department at the University of Maryland, College Park. He is also a member of the University of Maryland's Institute for Advanced Computer Studies (UMIACS) and is affiliated with the Department of Electrical and Computer Engineering at UMD. He is the 2019 recipient of the Simons-Berkeley Research Fellowship on deep learning foundations and is a core faculty member of the Machine Learning Center at UMD. Before joining UMD, he was a post-doctoral research scholar at Stanford University. He received his Ph.D. in Electrical Engineering and Computer Science (EECS) with a minor degree in Mathematics from the Massachusetts Institute of Technology (MIT). His research interests are in the area of machine learning and statistical inference. He completed an M.Sc. in EECS at MIT, where he received the Ernst Guillemin award for his thesis, as well as the Jacobs Presidential Fellowship and the EECS Great Educators Fellowship. He also received the best student award at Sharif University of Technology, where he obtained his B.Sc.

This talk is organized by Brandi Adams.