PhD Defense: Towards Trustworthy AI: Methods for Enhancing Robustness and Attribution
Vasu Singla
IRB-5137 or https://umd.zoom.us/my/vsingla
Friday, May 9, 2025, 10:00 am-12:00 pm
Abstract

While deep learning achieves remarkable success in computer vision, its vulnerability to attacks and the difficulty of interpreting its predictions pose significant barriers to trustworthy AI. This talk focuses on enhancing two crucial aspects: robustness and attribution. In the first part, we examine adversarial robustness, revealing how activation-function geometry shapes adversarial training outcomes and demonstrating, both theoretically and empirically, that the shift-invariance of CNNs can harm robustness. We also introduce a novel, potent data poisoning technique ("autoregressive perturbations") designed to resist standard defenses. In the second part, we address attribution, presenting a computationally efficient method that uses self-supervision to understand how training data influences predictions. We then tackle memorization in generative AI, quantifying data replication in text-to-image diffusion models and introducing effective mitigation strategies that preserve output quality. Together, these contributions provide key insights and practical tools for building more robust, interpretable, and trustworthy AI.
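For readers unfamiliar with the adversarial-training setting the first part builds on, below is a minimal, generic sketch of PGD-based adversarial training (Madry et al., 2018) in PyTorch. It is not the speaker's method, only the standard baseline his robustness results relate to; the model, data, and hyperparameters (eps, alpha, steps) are hypothetical placeholders, and inputs are assumed to lie in [0, 1].

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Projected gradient descent inside an L-infinity ball of radius eps.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()   # step uphill on the loss
                delta.clamp_(-eps, eps)              # project back into the eps-ball
                delta.grad.zero_()
        return (x + delta).clamp(0, 1).detach()      # keep pixels in valid range

    def adversarial_training_step(model, optimizer, x, y):
        # One training step on adversarially perturbed inputs.
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()                        # discard grads from the attack
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Training on these worst-case perturbations is the usual baseline against which robustness findings, such as the activation-geometry and shift-invariance results above, are measured.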

Bio

Vasu Singla is a PhD student at the University of Maryland, co-advised by Prof. Tom Goldstein and Prof. David Jacobs. His research focuses on the robustness and privacy of ML systems.

This talk is organized by Migo Gui