PhD Proposal: Adversarial Machine Learning in The Wild
Parsa Saadatpanah
Monday, November 25, 2019, 10:00 am-12:00 pm
Abstract
Deep neural networks are making their way into our everyday lives at an increasing rate. While the adoption of these models has brought great benefits, it has also opened the door to new vulnerabilities in real-world systems. More specifically, in the scope of this work we are interested in one class of vulnerabilities: adversarial attacks. Given the importance and sensitivity of some of the tasks that these models are responsible for, it is vital to study such vulnerabilities in real-world systems. In this work, we look at examples of real-world systems based on deep neural networks, the vulnerabilities of such systems, and approaches for making them more robust.

Firstly, we study an example of leveraging a deep neural network in a business-critical real-world system. We discuss how deep neural networks are being used to improve the quality of smart voice assistants. More specifically, we introduce how collaborative filtering models can be used to automatically detect and resolve the errors of a voice assistant, and we discuss the success of this approach in improving the quality of a real-world voice assistant.
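
As a rough illustration (not the system described in the talk), a collaborative filtering model of this kind can be as simple as matrix factorization over an interaction matrix; the matrix contents, dimensions, and hyperparameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy interaction matrix: rows = misrecognized queries, columns = candidate
# corrections, entries = how often users accepted a correction (0 = unobserved).
R = np.array([
    [5, 0, 0, 1],
    [4, 0, 0, 1],
    [0, 3, 4, 0],
    [0, 4, 5, 0],
], dtype=float)

k, lam, lr, epochs = 2, 0.1, 0.05, 500          # latent dim, L2 penalty, step size
P = 0.1 * rng.standard_normal((R.shape[0], k))  # query factors
Q = 0.1 * rng.standard_normal((R.shape[1], k))  # correction factors

rows, cols = np.nonzero(R)                      # train only on observed entries
for _ in range(epochs):
    for i, j in zip(rows, cols):
        err = R[i, j] - P[i] @ Q[j]
        P[i] += lr * (err * Q[j] - lam * P[i])  # SGD update for query factors
        Q[j] += lr * (err * P[i] - lam * Q[j])  # SGD update for correction factors

# Predicted scores for unobserved pairs suggest likely corrections.
print(np.round(P @ Q.T, 2))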

Secondly, we discuss an example of how adversarial attacks can be leveraged to manipulate a real-world system. We study how adversarial attacks can successfully manipulate YouTube's copyright detection model, and the financial implications of this vulnerability. In particular, we show how adversarial examples created for a copyright detection model that we implemented transfer to another, black-box model.
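
For intuition, a transfer attack of this kind is typically crafted with a white-box method such as projected gradient descent (PGD) against a locally implemented surrogate, and the perturbed input is then submitted to the black-box detector. The sketch below is a generic PGD routine, not the talk's attack; surrogate_model, the loss choice, and all parameters are assumptions.

import torch
import torch.nn as nn

def pgd_attack(surrogate_model, x, y, eps=0.05, alpha=0.01, steps=40):
    """Maximize the surrogate's loss within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(surrogate_model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # gradient ascent step
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project back into the ball
    return x_adv.detach()

# Illustrative usage: craft on the surrogate, then query the black-box detector.
# x_adv = pgd_attack(surrogate_model, audio_features, true_label)
# black_box_prediction = black_box_model(x_adv)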

Finally, we study the problem of transfer learning in an adversarially robust setting. We discuss how robust models contain robust feature extractors and how we can leverage them to train new classifiers that preserve the robustness of the original model. We then study the case of fine-tuning in the target domain while preserving robustness, and we show the success of our proposed solutions in preserving robustness in the target domain.
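
One common baseline for this setting, sketched below under stated assumptions (robust_backbone is an adversarially trained feature extractor producing feat_dim features; num_target_classes is the new task's label count), is to freeze the robust features and train only a new classification head on the target domain. This is a generic illustration, not necessarily the proposal's method.

import torch.nn as nn
import torch.optim as optim

def build_transfer_model(robust_backbone, feat_dim, num_target_classes):
    # Freeze the robust feature extractor so its robustness is not disturbed.
    for p in robust_backbone.parameters():
        p.requires_grad = False
    # New task-specific classifier trained on top of the fixed robust features.
    head = nn.Linear(feat_dim, num_target_classes)
    model = nn.Sequential(robust_backbone, head)
    # Only the head's parameters are updated during fine-tuning on the target domain.
    optimizer = optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    return model, optimizer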


Examining Committee:

Chair: Dr. Tom Goldstein
Dept rep: Dr. John Dickerson
Members: Dr. Furong Huang
Bio

Parsa Saadatpanah is a PhD student in the Department of Computer Science at the University of Maryland, College Park. His research interests include machine learning, adversarial machine learning, and recommendation systems.

This talk is organized by Tom Hurst