PhD Proposal: The Limitations of Deep Learning Methods in Realistic Adversarial Settings
Yigitcan Kaya
Wednesday, May 4, 2022, 3:00-5:00 pm
Abstract
The study of adversarial examples has evolved from a niche phenomenon into a well-established branch of machine learning (ML). In the conventional view of an adversarial attack, the adversary takes an input sample, e.g., an image of a dog, and applies a deliberate transformation to it, e.g., a rotation, which causes the victim model to abruptly change its prediction, e.g., classifying the rotated image as a cat. Most prior work has adapted this view to different applications and provided powerful attack algorithms as well as defensive strategies that improve robustness.
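
For concreteness, the sketch below illustrates this conventional view with the standard fast gradient sign method (FGSM); it is a generic textbook example rather than a technique from this talk, and `model`, `image`, and `label` are placeholders.

```python
# Minimal sketch of the conventional adversarial-example view: perturb an
# input so the victim model changes its prediction. Uses the standard FGSM
# step; `model`, `image`, and `label` are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial version of `image` within an L-inf ball of radius epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the victim model's loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```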

The progress in this domain has been influential for both research and practice, and it has produced a perception of better security. Yet the security literature tells us that adversaries often do not follow a specific threat model and that adversarial pressure can arise in unanticipated ways. In this dissertation, I will start from the threats studied in the security literature to highlight the limitations of the conventional view and extend it to capture realistic adversarial scenarios.

First, I will discuss how adversaries can pursue goals other than hurting the predictive performance of the victim. In particular, an adversary can wield adversarial examples to mount a denial-of-service attack against emerging ML systems that rely on input-adaptiveness for efficient predictions. Our attack algorithm, DeepSloth, transforms inputs to offset the computational benefits of these systems. Moreover, an existing conventional defense is ineffective against DeepSloth and introduces a trade-off between efficiency and security.
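
The abstract does not spell out the DeepSloth objective, so the following is only an illustrative sketch of a slowdown attack on an input-adaptive (early-exit) model: perturb the input so that no internal exit becomes confident enough to stop computation early. The `multi_exit_model` interface (returning one logits tensor per exit) and the hyperparameters are assumptions.

```python
# Illustrative sketch (not the DeepSloth algorithm itself): craft a bounded
# perturbation that keeps every early exit of an input-adaptive model below
# its confidence threshold, so the full network must run on each input.
import torch
import torch.nn.functional as F

def slowdown_attack(multi_exit_model, x, epsilon=0.03, steps=10):
    """Perturb `x` within an L-inf ball so every early exit stays low-confidence."""
    delta = torch.zeros_like(x, requires_grad=True)
    step_size = 2.5 * epsilon / steps
    for _ in range(steps):
        exit_logits = multi_exit_model(x + delta)   # list: one logits tensor per exit
        # Cross-entropy to the uniform distribution at every exit; minimizing it
        # flattens each exit's softmax so no confidence threshold is crossed.
        loss = sum(-F.log_softmax(logits, dim=1).mean() for logits in exit_logits)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
    return (x + delta).detach()
```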

Second, I will show how the conventional view leads to a false sense of security for anomalous-input detection methods. These methods build modern statistical tools around deep neural networks and have been shown to be successful at detecting conventional adversarial examples. As a general-purpose analogue of blending attacks from the security literature, we introduce the Statistical Indistinguishability Attack (SIA). SIA bypasses a range of published detection methods by producing anomalous samples that are statistically similar to normal samples.
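
As a rough illustration of the blending idea (not the SIA formulation presented in the talk), the sketch below adds a penalty that pulls an anomalous input's internal feature statistics toward those of normal data, which is the kind of statistic many detectors threshold on; `feature_extractor`, `attack_loss`, and the normal-data statistics are assumed placeholders.

```python
# Illustrative blending-style step: keep the anomalous input effective
# (via `attack_loss`) while making its features statistically similar to
# normal data. All inputs here are assumptions for illustration.
import torch

def blend_step(x_anom, attack_loss, feature_extractor,
               normal_mean, normal_cov_inv, lam=1.0, lr=0.01):
    """One gradient step trading off attack effectiveness against statistical similarity."""
    x = x_anom.clone().detach().requires_grad_(True)
    feats = feature_extractor(x)                 # shape (batch, d)
    diff = feats - normal_mean                   # normal_mean: shape (d,)
    # Mahalanobis-style distance to the normal-data feature distribution.
    stat_term = (diff @ normal_cov_inv * diff).sum(dim=1).mean()
    loss = attack_loss(x) + lam * stat_term
    loss.backward()
    with torch.no_grad():
        x -= lr * x.grad.sign()
    return x.detach()
```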

To complete the dissertation, I will focus on malware detection with ML, a domain where adversarial examples are not only crafted deliberately, as in the conventional view, but can also arise inherently as a domain challenge. Security vendors often rely on ML to automate malware detection due to the large volume of new malware. A standard approach to detection is to collect the runtime behaviors of a program and feed them to an ML model. We first observe that the variability of program behaviors across different environments places adversarial pressure on detectors that are not robust to it. In our ongoing work, this observation leads us to study the challenges behavior variability poses to ML, the real-world security implications of these challenges, and potential countermeasures.
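
Below is a minimal sketch of the standard behavioral-detection pipeline described above, assuming the runtime behaviors are API-call traces featurized as n-grams; the trace strings, labels, and model choice are illustrative only. A trace collected for the same program in a different environment can differ, which is the source of the variability discussed above.

```python
# Minimal sketch of a behavioral malware detector: featurize runtime
# API-call traces as n-grams and train a classifier. Traces and labels
# here are toy placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Each trace is the sequence of API calls observed while running a sample
# in one environment; other environments may yield different traces.
traces = [
    "CreateFileW WriteFile RegSetValueExW CreateProcessW",  # run labeled malicious
    "CreateFileW ReadFile CloseHandle",                     # run labeled benign
]
labels = [1, 0]

detector = make_pipeline(
    CountVectorizer(token_pattern=r"\S+", ngram_range=(1, 2)),  # API-call n-grams
    RandomForestClassifier(n_estimators=100, random_state=0),
)
detector.fit(traces, labels)
print(detector.predict(["CreateFileW WriteFile RegSetValueExW CreateProcessW"]))
```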


Examining Committee:
Chair: Dr. Tudor Dumitras
Department Representative: Dr. Tom Goldstein
Members: Dr. Leonidas Lampropoulos, Dr. Furong Huang, Dr. David Wagner (UC Berkeley), Dr. Krishnaram Kenthapadi (Fiddler.AI)
Bio

Yigitcan Kaya is a fifth-year PhD student in the Computer Science department, advised by Tudor Dumitras. His research broadly focuses on studying the properties of machine learning methods in adversarial settings. He has published his work in top machine learning conferences (ICML, NeurIPS, ICLR) and has interned twice as an applied scientist at AWS. He is also a Fellow in the Future Faculty program organized by The Clark School.

This talk is organized by Tom Hurst