While measures and mitigations for algorithmic bias have proliferated in the academic literature over the last several years, implementing them in practice is frequently not straightforward. This talk delves into the practical implementation of algorithmic bias metrics and mitigation techniques, exploring the challenges and solutions through three distinct case studies. The first case study explores the difficulty of measuring and communicating algorithmic bias when measurements are taken over many different groups. In this example, we develop a low-dimensional summary statistic of algorithmic bias and discuss its statistical properties. In the second example, we build upon existing fair clustering techniques and apply them in the context of polling location assignment. Here, we confront the challenges arising from measuring distance between voters and polling locations and demonstrate how "fair" clustering based on poorly designed distance metrics could exacerbate disparities in voter turnout. Finally, in the third case study, dissatisfied with existing measures of bias for LLMs, I explore the extent of bias the models exhibit under my own usage patterns.
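The distance-metric pitfall in the second case study lends itself to a small illustration. The sketch below is not from the talk: the voter coordinates, the transit-accessible poll, and the 3x travel penalty for voters without car access are all invented for illustration. It shows how a nearest-location assignment that looks even-handed under straight-line distance can concentrate travel burden on exactly the voters the metric ignores.

```python
# Toy sketch: the same voters get a different "nearest" polling place
# depending on whether distance means straight-line or travel cost.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 voters and 2 polling locations on a 10x10 grid.
voters = rng.uniform(0, 10, size=(200, 2))
polls = np.array([[2.0, 2.0], [8.0, 8.0]])

# Euclidean distance from every voter to every poll: shape (200, 2).
euclid = np.linalg.norm(voters[:, None, :] - polls[None, :, :], axis=2)

# Hypothetical travel cost: poll 0 sits on a transit line, poll 1 does
# not, so voters without car access (an arbitrary flag here) pay 3x per
# unit of distance to reach poll 1.
no_car = rng.random(200) < 0.5
travel = euclid.copy()
travel[no_car, 1] *= 3.0

assign_euclid = euclid.argmin(axis=1)  # "fair" on paper
assign_travel = travel.argmin(axis=1)  # reflects the actual burden

changed = assign_euclid != assign_travel
print(f"{changed.sum()} of 200 voters change polls under travel cost")
print(f"all changed voters lack car access: {no_car[changed].all()}")

# Realized burden of the Euclidean assignment, by group: the gap below
# is the disparity a poorly chosen metric hides.
for mask, label in [(~no_car, "car access"), (no_car, "no car")]:
    cost = travel[np.arange(200), assign_euclid][mask].mean()
    print(f"{label:>10}: mean realized travel cost {cost:.2f}")
```

The point carries over to the fair clustering setting: any assignment objective built on the wrong distances inherits their blind spots, however balanced its group-level guarantees look on paper.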
Kristian Lum is an Associate Research Professor at the University of Chicago Data Science Institute. She previously held positions at Twitter, as a Senior Staff ML Researcher and Research Lead for the Machine Learning Ethics, Transparency, and Accountability group; at the University of Pennsylvania Department of Computer and Information Science; and at the Human Rights Data Analysis Group, where she led work on the use of algorithms in the criminal justice system. She is also a co-founder of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). Her current research focuses on algorithmic bias and fairness and on social impact data science.