Despite deep learning's wide range of applications, a satisfactory understanding of its fundamental properties, such as robustness, interpretability, generalizability, and, more broadly, the scope of its applicability, still eludes us. These properties are essential for characterizing performance guarantees and for identifying and preventing failure modes of deep models.
Focusing on model sensitivity to various types of adversarial and natural distributional shifts, a common approach is to treat these issues as separate “diseases” and mitigate them independently. Pushing back on this widely used approach, I will show that explicit tradeoffs in fact exist between adversarial and natural distributional robustness. I will instead present evidence advocating a new school of thought that treats these issues as “symptoms” of a common disease: in their predictions, current models rely heavily on spurious and noisy features rather than on meaningful, core ones. This is partly due to the lack of diverse samples and proper supervision in training. I will then present potential ways to tackle this root cause by developing new learning paradigms based on novel data formulations.
Soheil Feizi is an assistant professor in the Computer Science Department at the University of Maryland, College Park. Before joining UMD, he was a postdoctoral research scholar at Stanford University. He received his Ph.D. from the Massachusetts Institute of Technology (MIT) in EECS with a minor in mathematics. He received the ONR Young Investigator Award in 2022 and the NSF CAREER Award in 2020. He is the recipient of several other awards, including two best paper awards, a teaching award, a Simons-Berkeley Research Fellowship on deep learning foundations, and multiple faculty awards from industry partners such as IBM, AWS, and Qualcomm. He received the Ernst Guillemin Award for his M.Sc. thesis, as well as the Jacobs Presidential Fellowship and the EECS Great Educators Fellowship at MIT.