"How to make neural nets work, and how to make them not work"
Tom Goldstein
IRB 0318
Friday, September 17, 2021, 11:00 am-12:00 pm
Abstract

Also on Zoom: https://umd.zoom.us/j/96718034173?pwd=clNJRks5SzNUcGVxYmxkcVJGNDB4dz09

In this talk I'll survey recent work from my lab on two topics. First, I'll look at the mystery of generalization in neural nets and possible explanations for why generalization occurs. Then I'll discuss adversarial and poisoning attacks on neural networks that cause unexpected behaviors and security vulnerabilities.

Bio

My research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. I work at the boundary between theory and practice, leveraging mathematical foundations, complex models, and efficient hardware to build practical, high-performance systems. I design optimization methods for a wide range of platforms, from powerful cluster/cloud computing environments to resource-limited integrated circuits and FPGAs. Before joining the faculty at Maryland, I completed my PhD in Mathematics at UCLA and was a research scientist at Rice University and Stanford University. I have received several awards, including SIAM's DiPrima Prize, a DARPA Young Faculty Award, and a Sloan Fellowship.

This talk is organized by Richa Mathur.