Making machine learning predictably reliable
Andrew Ilyas
IRB 4105 or https://umd.zoom.us/j/95853135696?pwd=VVEwMVpxeElXeEw0ckVlSWNOMVhXdz09
Wednesday, March 6, 2024, 1:00-2:00 pm
Abstract

Despite ML models' impressive performance, training and deploying them is currently a somewhat messy endeavor. But does it have to be? In this talk, I give an overview of my work on making ML "predictably reliable": enabling developers to know when their models will work, when they will fail, and why.

To begin, we use a case study of adversarial inputs to show that human intuition can be a poor predictor of how ML models operate. Motivated by this, we present a line of work that aims to develop a precise understanding of the ML pipeline, combining statistical tools with large-scale experiments to characterize the role of each individual design choice: from how data is collected, to what dataset the model is trained on, to what learning algorithm is used.

Bio

Andrew Ilyas is a PhD student in Computer Science at MIT, where he is advised by Aleksander Madry and Constantinos Daskalakis. His research aims to improve the reliability and predictability of machine learning systems. He was previously supported by an Open Philanthropy AI Fellowship.

This talk is organized by Samuel Malede Zewdu