Also on Zoom: https://umd.zoom.us/j/96718034
In this talk I'll survey recent work from my lab on two topics. First, I'll examine the mystery of generalization in neural networks and possible explanations for why it occurs. Then I'll discuss adversarial and poisoning attacks on neural networks that cause unexpected behaviors and security vulnerabilities.
My research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. I work at the boundary between theory and practice, leveraging mathematical foundations, complex models, and efficient hardware to build practical, high-performance systems. I design optimization methods for platforms ranging from powerful cluster/cloud computing environments to resource-limited integrated circuits and FPGAs. Before joining the faculty at Maryland, I completed my PhD in Mathematics at UCLA and was a research scientist at Rice University and Stanford University. I have been the recipient of several awards, including SIAM's DiPrima Prize, a DARPA Young Faculty Award, and a Sloan Fellowship.