As machine learning models are increasingly deployed in high-stakes settings, the role of humans in the loop and the potential for human-AI collaboration have become subjects of debate. While many policymakers, domain experts, and other stakeholders advocate for humans' discretionary power as a guardrail against algorithmic harms, others have cast doubt on the effectiveness of such a sociotechnical design. In this talk, I will make a case for humans in the loop based on empirical findings, outline open questions in research on trustworthy systems that are well-suited for collaboration, and present recent findings that can inform the design of algorithms and interventions to facilitate human-AI collaboration.
Maria De-Arteaga is an Assistant Professor in the Information, Risk, and Operations Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty member of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and an M.Sc. in Machine Learning, both from Carnegie Mellon University, and a B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using machine learning to support experts' decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration. She currently serves on the Executive Committee of the ACM FAccT Conference.