Deconstructing models and methods in deep learning
Pavel Izmailov
IRB 4105; Zoom link: https://umd.zoom.us/j/92977540316?pwd=NVF2WTc5SS9RSjFDOGlzcENKZnNxQT09
Thursday, February 16, 2023, 11:00 am-12:00 pm Calendar

Machine learning models are ultimately used to make decisions in the real world, where mistakes can be incredibly costly. We still understand surprisingly little about neural networks and the procedures we use to train them; as a result, our models are brittle, often rely on spurious features, and generalize poorly under minor distribution shifts. Moreover, these models are often unable to faithfully represent uncertainty in their predictions, further limiting their applicability. In this talk, I will present work on neural network loss surfaces, probabilistic deep learning, uncertainty estimation, and robustness to distribution shifts. In each of these works, we aim to build a foundational understanding of models, training procedures, and their limitations, and then use this understanding to develop practically impactful, interpretable, robust, and broadly applicable methods and models.


I am a final-year PhD student in Computer Science at New York University, working with Andrew Gordon Wilson. I am primarily interested in understanding and improving deep neural networks. In particular, my interests include out-of-distribution generalization, probabilistic deep learning, representation learning, and large models. I am also excited about generative models, uncertainty estimation, semi-supervised learning, language models, and other topics. Recently, our work on Bayesian model selection was recognized with an Outstanding Paper Award at ICML 2022.

This talk is organized by Richa Mathur.