PhD Defense: Expanding Robustness in Responsible AI for Novel Bias Mitigation
Samuel Dooley
Friday, June 30, 2023, 10:00 am-12:00 pm
Abstract
Conventional belief in the fairness community is that one should first find the highest-performing model for a given problem and then apply a bias mitigation strategy: one starts with an existing model architecture and hyperparameters, and then adjusts model weights, learning procedures, or input data to make the model fairer using a pre-, post-, or in-processing bias mitigation technique. While existing methods for de-biasing machine learning systems use a fixed neural architecture and hyperparameter setting, I instead ask a fundamental question that has received little attention: how much does model bias arise from the architecture and hyperparameters themselves, and how can we exploit the extensive research in neural architecture search (NAS) and hyperparameter optimization (HPO) to search for more inherently fair models?

By thinking of bias mitigation in this new way, we expand our conceptualization of robustness in responsible AI. Robustness is an emerging aspect of responsible AI that focuses on maintaining model performance in the face of uncertainties and variations for all subgroups of a data population. Robustness often deals with protecting models from intentional or unintentional manipulations of data, handling noisy or corrupted inputs, and preserving accuracy in real-world scenarios. In other words, robustness, as commonly defined, examines the output of a system under changes to its input data. I will broaden this idea of robustness in responsible AI in a way that defines new fairness metrics, yields insights into the robustness of deployed AI systems, and proposes an entirely new bias mitigation strategy.

This thesis explores the connection between robust machine learning and responsible AI. It introduces a fairness metric that quantifies disparities in susceptibility to adversarial attacks. It also audits face detection systems for robustness to common natural noises, revealing biases in these systems. Finally, it proposes using neural architecture search to find fairer architectures, challenging the conventional approach of starting with accurate architectures and applying bias mitigation strategies.
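To make the final idea concrete, below is a minimal, illustrative sketch (not taken from the thesis) of fairness-aware architecture search: candidate architectures and hyperparameters are sampled and scored jointly on accuracy and a group-disparity penalty, rather than training one fixed architecture and de-biasing it afterward. All names here (the search space, sample_config, train_and_evaluate, the disparity measure, and the trade-off weight) are hypothetical placeholders.

    import random

    # Hypothetical search space: a few architecture / hyperparameter choices.
    SEARCH_SPACE = {
        "depth": [2, 4, 8],
        "width": [64, 128, 256],
        "learning_rate": [1e-2, 1e-3, 1e-4],
    }

    def sample_config(rng):
        """Draw one candidate architecture/hyperparameter configuration."""
        return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

    def train_and_evaluate(config, rng):
        """Placeholder for training the candidate and measuring per-group
        accuracy; a real implementation would train and evaluate on data."""
        return {group: rng.uniform(0.6, 0.95) for group in ("group_a", "group_b")}

    def fairness_penalty(accuracy_by_group):
        """Group disparity: gap between the best- and worst-served subgroup."""
        return max(accuracy_by_group.values()) - min(accuracy_by_group.values())

    def search(num_trials=20, tradeoff=1.0, seed=0):
        """Random search scoring each candidate on mean accuracy minus a
        weighted disparity term, instead of accuracy alone."""
        rng = random.Random(seed)
        best_config, best_score = None, float("-inf")
        for _ in range(num_trials):
            config = sample_config(rng)
            acc = train_and_evaluate(config, rng)
            mean_acc = sum(acc.values()) / len(acc)
            score = mean_acc - tradeoff * fairness_penalty(acc)
            if score > best_score:
                best_config, best_score = config, score
        return best_config, best_score

    if __name__ == "__main__":
        config, score = search()
        print("selected configuration:", config, "score:", round(score, 3))

In this sketch, raising the hypothetical tradeoff weight pushes the search toward configurations with smaller subgroup gaps, at some cost in average accuracy.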
 
Examining Committee

Chair: Dr. John Dickerson

Dean's Representative: Dr. Philip Resnik

Members: Dr. Hal Daumé, Dr. Tom Goldstein, Dr. Furong Huang, Dr. Elissa Redmiles (MPI)

Bio

Samuel Dooley is a fifth-year graduate student at the University of Maryland, advised by John P. Dickerson, working at the intersection of machine learning and society. He is a human-centered machine learning researcher who develops large-scale, production systems and studies how they impact individuals. He works in areas such as Human-Computer Interaction (HCI), Neural Architecture Search (NAS), Hyperparameter Optimization (HPO), and computer vision. He has a Master's in Statistics from George Washington University and a Bachelor's in Mathematics from the University of Chicago.

This talk is organized by Tom Hurst