Ex-Post Group Fairness and Individual Fairness in Ranking
IRB 3137 or Zoom: https://umd.zoom.us/j/6778156199?pwd=NkJKZG1Ib2Jxbmd5ZzNrVVlNMm91QT09
Tuesday, October 10, 2023, 12:15-1:15 pm
Abstract

Fair ranking tasks, which require ranking a set of items to maximize utility subject to group-fairness constraints, have attracted significant interest in the algorithmic fairness, information retrieval, and machine learning literatures. Recent works identify uncertainty in item utilities as a primary cause of unfairness and propose randomized rankings that achieve fairer exposure ex-ante and better robustness than deterministic rankings. However, such rankings still may not guarantee representation fairness to the groups ex-post. In this talk, we will first discuss algorithms to sample a random group-fair ranking from a distribution satisfying a set of natural axioms for randomized group-fair rankings. Our problem formulation applies even when there is implicit bias, relevance information is incomplete, or only an ordinal ranking is available instead of relevance scores or utility values. Next, we will look at its application to efficiently training stochastic learning-to-rank algorithms via in-processing for ex-post fairness. Finally, we will discuss an efficient algorithm that samples rankings from an individually fair distribution while ensuring ex-post group fairness.
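To make the ex-post notion concrete, here is a minimal illustrative sketch (an assumption for exposition, not the algorithm presented in the talk): one common reading of ex-post group fairness requires that in every top-k prefix of the sampled ranking, each group's item count stays within prescribed lower and upper bounds. The sketch below samples such a ranking by choosing, at each position, uniformly among the groups that can legally supply the next item; the bound functions `lower`/`upper` and the greedy sampling scheme are hypothetical choices for illustration.

```python
import random

def sample_group_fair_ranking(items_by_group, lower, upper, seed=None):
    """Sample a ranking where, for every prefix length k, the number of
    items from each group g lies in [lower[g](k), upper[g](k)].

    A common choice of bounds for target proportions p_g is
    lower[g](k) = floor(p_g * k) and upper[g](k) = ceil(p_g * k).
    This greedy sampler is illustrative only; it raises ValueError if
    it reaches a position where no group can be placed legally.
    """
    # Copy the per-group item lists so the caller's data is untouched;
    # within-group order is preserved (items are taken front to back).
    remaining = {g: list(items) for g, items in items_by_group.items()}
    n = sum(len(v) for v in remaining.values())
    rng = random.Random(seed)
    counts = {g: 0 for g in remaining}
    ranking = []
    for k in range(1, n + 1):
        # Collect groups that can legally supply the k-th item.
        feasible = []
        for g in remaining:
            if not remaining[g]:
                continue  # group exhausted
            if counts[g] + 1 > upper[g](k):
                continue  # would exceed g's upper bound at prefix k
            # Placing g must not leave any group below its lower bound.
            trial = dict(counts)
            trial[g] += 1
            if all(trial[h] >= lower[h](k) for h in remaining):
                feasible.append(g)
        if not feasible:
            raise ValueError(f"constraints infeasible at position {k}")
        g = rng.choice(feasible)  # uniform over feasible groups
        counts[g] += 1
        ranking.append(remaining[g].pop(0))
    return ranking
```

With two equal-size groups and proportional bounds (p_g = 1/2), every prefix of the sampled ranking stays balanced to within one item, which is the ex-post guarantee that purely ex-ante randomization does not provide.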


This talk is based on joint works with Eshaan Bhansali (UC Berkeley), Amit Deshpande (Microsoft Research Bengaluru), Anand Louis (Indian Institute of Science), and Anay Mehrotra (Yale University).

Bio

Sruthi is a final-year PhD candidate in the Department of Computer Science and Automation at the Indian Institute of Science, Bengaluru, where she is advised by Prof. Anand Louis. During her PhD, Sruthi has interned with Google Research Bengaluru on the "AI for Social Impact" team and with INRIA Saclay on the "FairPlay" team. She was a recipient of the 2021 Google PhD Fellowship in the area of "Algorithms, Markets, and Optimization". Her research focuses on algorithmic fairness, and her works on fairness in ranking and clustering have appeared in several peer-reviewed conferences.

This talk is organized by Kishen N Gowda