Rethinking Common Assumptions to Mitigate Racial Bias in Face Recognition Datasets
Alex Hanson - UMD
Wednesday, March 2, 2022, 1:00-1:30 pm
Abstract

In attempting to mitigate racial bias in face recognition, one common practice is for researchers to racially balance the training dataset. In fact, this is the motivation behind two popular datasets in this space: RFW and FairFace. In this work, we test the assumption that balancing improves fairness by training models on racially skewed subsets of RFW. Surprisingly, some heavily skewed subsets outperform their balanced counterparts in both accuracy and fairness across race.
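To make the experimental setup concrete, the following is a minimal sketch (not the authors' code) of how one might construct a racially skewed training subset from identity lists grouped by race. The race group names and the `skewed_subset` helper are illustrative assumptions; RFW's actual structure and the paper's sampling procedure may differ.

```python
import random

def skewed_subset(identities, proportions, total):
    """Sample a training subset whose racial makeup follows the given skew.

    identities: dict mapping race group -> list of identity IDs (hypothetical).
    proportions: dict mapping race group -> fraction of the subset.
    total: total number of identities to sample.
    """
    assert abs(sum(proportions.values()) - 1.0) < 1e-9
    subset = []
    for race, frac in proportions.items():
        k = round(total * frac)
        # Sample k identities from this race group without replacement.
        subset.extend(random.sample(identities[race], k))
    return subset

# Example: a heavily skewed 70/10/10/10 split across four race groups
# (group names here are placeholders, not necessarily RFW's labels).
ids = {r: [f"{r}_{i}" for i in range(1000)]
       for r in ["GroupA", "GroupB", "GroupC", "GroupD"]}
subset = skewed_subset(ids, {"GroupA": 0.7, "GroupB": 0.1,
                             "GroupC": 0.1, "GroupD": 0.1}, 400)
print(len(subset))  # 400
```

A balanced counterpart would simply use equal proportions (0.25 each), letting the two conditions be compared under the same total training budget.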

This work received the Best Paper Runner-Up award at the HTCV Workshop at ICCV 2021.

Those who would like to join remotely can use the following Zoom link:

https://umd.zoom.us/j/94332767827?pwd=RElnZElhNjNualliaG5pMG9zUHBKQT09

Bio

Alex Hanson is a third-year PhD student advised by Abhinav Shrivastava. He is funded by the NDSEG fellowship.

This talk is organized by Chris Metzler