PhD Defense: Improving Reliability of Machine Learning Models
Ping-yeh Chiang
Monday, September 11, 2023, 10:30 am-12:30 pm
Abstract
Neural networks have consistently showcased exceptional performance in various applications. Yet, their deployment in adversarial settings is limited due to concerns about reliability. In this talk, we'll first explore methods to verify a model’s reliability in diverse scenarios, including classification, detection, auctions, and watermarking. We'll then discuss the challenges and limitations of these verification techniques in real-world situations and suggest potential remedies. We'll wrap up by examining the reliability of neural networks in the context of the model's implicit bias.

Our initial research investigated three areas where the reliability of deep learning models is critical: object detection, deep auctions, and model watermarking. We found that, without rigorous verification, these systems are vulnerable to accidents, manipulation of auction mechanisms, and intellectual property theft. To counteract this, we introduced verification algorithms tailored to each of these scenarios.

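To make the notion of verification concrete, below is a minimal interval-bound-propagation sketch for certifying a small ReLU classifier against l-infinity perturbations. It is a generic illustration of what a robustness certificate computes, not the detection-, auction-, or watermarking-specific algorithms from the thesis; the network, eps, and function names are assumptions made for the example.

import numpy as np

def interval_bounds(weights, biases, x, eps):
    """Propagate the box [x - eps, x + eps] through a ReLU MLP and return
    elementwise lower/upper bounds on the output logits."""
    lower, upper = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (lower + upper) / 2.0
        radius = (upper - lower) / 2.0
        center = W @ center + b          # affine layer applied to the interval center
        radius = np.abs(W) @ radius      # worst-case growth of the interval radius
        lower, upper = center - radius, center + radius
        if i < len(weights) - 1:         # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

def is_certified(weights, biases, x, label, eps):
    """True if every input within eps of x is provably classified as `label`."""
    lower, upper = interval_bounds(weights, biases, x, eps)
    rival_logits = np.delete(upper, label)
    return lower[label] > rival_logits.max()
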
However, while certificates affirm the resilience of our models within a predefined threat model, they don't guarantee real-world infallibility. Hence, in the second part of the talk, we explored strategies to improve a model's adaptability to domain shift. While pyramid adversarial training is effective at improving reliability under domain shift, it is computationally intensive. In response, we devised an alternative technique, universal pyramid adversarial training, which offers comparable benefits while being 30-70% more efficient.

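For intuition, the sketch below shows the shared-perturbation idea behind "universal" adversarial training: one perturbation is reused across batches and updated with the gradients already computed for the model, instead of rebuilding a multi-step per-example attack at every iteration, which is where the efficiency gain comes from. This is a single-scale PyTorch sketch under assumed names and hyperparameters, not the actual multi-scale pyramid formulation; in training, delta would start as zeros with the shape of one input and be passed back in on every batch.

import torch
import torch.nn.functional as F

def universal_adv_step(model, optimizer, x, y, delta, step_size=1e-2, eps=8 / 255):
    """One training step with a single shared ('universal') perturbation.

    `delta` broadcasts over the batch and persists between calls, so the
    adversary costs one extra update per step rather than a full multi-step
    attack per example; this amortization is the source of the speedup."""
    delta.requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)

    optimizer.zero_grad()
    loss.backward()                      # gradients for both the model and delta

    optimizer.step()                     # descent step on the model parameters
    with torch.no_grad():                # ascent step on the shared perturbation
        delta += step_size * delta.grad.sign()
        delta.clamp_(-eps, eps)
    return delta.detach()
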
Finally, we sought to understand the inherent non-robustness of neural networks through the lens of the model's implicit bias. Surprisingly, we found that the generalization ability of deep learning models comes almost entirely from the architecture rather than the optimizer, contrary to common belief. This architectural bias might be a crucial factor in explaining the inherent non-robustness of neural networks.

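One way to probe a claim like this, sketched below, is to remove the optimizer entirely with a naive "guess-and-check" search: sample randomly initialized models, keep only the samples that already fit the training data, and ask how well those optimizer-free solutions generalize. This is an illustrative experiment under assumed names (make_model, thresholds), not necessarily the exact protocol used in the thesis.

import torch
import torch.nn.functional as F

def guess_and_check(make_model, x_train, y_train, x_test, y_test,
                    n_guesses=100_000, fit_threshold=0.05):
    """Sample randomly initialized models, keep the ones that already fit the
    training set, and record their test loss. If these 'guessed' solutions
    generalize roughly as well as gradient-trained ones, the useful bias lies
    in the architecture rather than in the optimizer."""
    test_losses = []
    for _ in range(n_guesses):
        model = make_model()                           # fresh random weights, no training
        with torch.no_grad():
            train_loss = F.mse_loss(model(x_train), y_train)
            if train_loss.item() < fit_threshold:      # the "check" step
                test_losses.append(F.mse_loss(model(x_test), y_test).item())
    return test_losses
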
Looking ahead, we intend to probe deeper into how neural networks' innate biases can lead to their frailties. Moreover, we posit that refining these implicit biases could offer avenues to enhance model reliability.
 
Examining Committee

Chair: Dr. Tom Goldstein
Dean's Representative: Dr. Min Wu
Members: Dr. Rachel Rudinger, Dr. John Dickerson, Dr. Jia-Bin Huang

Bio

Ping-yeh Chiang is a PhD student in Computer Science, advised by Professor Tom Goldstein. His research focuses on the security of machine learning models, and he is broadly interested in the general robustness of neural networks.

This talk is organized by Tom Hurst.