How do we ensure that machine learning algorithms in high-stakes applications are fair, explainable, and lawful? Toward addressing this urgent question, this talk will provide some foundational perspectives rooted in information theory, causality, and statistics. In the first part of the talk, I will discuss an emerging problem in policy-compliant explanations in finance, known as robust counterfactual explanations: how do we guide a rejected applicant to change the model outcome while also remaining robust to potential model updates? In the second part of the talk, I will discuss related questions that bridge the fields of fairness, explainability, and policy, e.g., how do we quantify the contribution of different features to the disparity in a model? Can we check whether the disparity in a model is purely due to critical occupational necessities? Lastly, I will briefly discuss some of our other research interests in related topics in trustworthy ML.
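For readers unfamiliar with the setup, the minimal sketch below illustrates the basic idea, not the talk's actual method: a counterfactual explanation perturbs a rejected applicant's features until the model accepts, and one simple way to hedge against model updates is to require the counterfactual to clear the decision boundary with a margin. The logistic model, its weights, and the margin parameter `tau` are all hypothetical stand-ins for the robustness criteria developed in the actual work.

```python
import numpy as np

# Hypothetical fixed logistic-regression model: accept if sigmoid(w @ x + b) >= 0.5.
w, b = np.array([1.5, -2.0, 0.8]), -0.5

def score(x):
    """Probability of acceptance under the (fixed) logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def robust_counterfactual(x0, tau=0.15, lam=0.1, lr=0.05, steps=500):
    """Gradient ascent on the model score, penalized by distance to x0.

    Stops once score(x) >= 0.5 + tau, i.e. the counterfactual clears the
    decision boundary with a margin `tau`, so that a small model update
    is less likely to flip the outcome back to rejection. `lam` trades
    off robustness against staying close to the original applicant.
    """
    x = x0.copy()
    for _ in range(steps):
        p = score(x)
        if p >= 0.5 + tau:
            return x
        # Gradient of the logistic score w.r.t. x, minus the proximity pull.
        grad = p * (1.0 - p) * w - lam * (x - x0)
        x += lr * grad
    return x  # may not have reached the margin within `steps`

x_rejected = np.array([-0.2, 0.6, 0.1])
x_cf = robust_counterfactual(x_rejected)
print(f"original score: {score(x_rejected):.2f}, "
      f"counterfactual score: {score(x_cf):.2f}, change: {x_cf - x_rejected}")
```

The margin here is only a crude proxy: a counterfactual sitting just past the boundary is invalidated by the smallest retraining shift, whereas one that clears it with slack tends to survive; the talk's contribution lies in making that robustness notion precise.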
Her research interests broadly revolve around fair, explainable, and trustworthy machine learning, bringing in novel foundational perspectives rooted in information theory, statistics, causality, and optimization. In her prior work, she has also examined problems in reliable computing for large-scale distributed machine learning, using tools from coding theory (an emerging area called "coded computing").
She is a recipient of the 2024 NSF CAREER Award, the 2023 JP Morgan Faculty Award, the 2023 Northrop Grumman Seed Grant, the 2022 Simons Institute Fellowship for Causality, the 2021 A. G. Milnes Outstanding Thesis Award from CMU, and the 2019 K&L Gates Presidential Fellowship in Ethics and Computational Technologies. She has also pursued research internships at IBM Research and Dataminr.