In attempting to mitigate racial bias in face recognition, a common practice is to racially balance the training dataset. In fact, this is the motivation behind two popular datasets in this space, RFW and FairFace. In this work, we test this assumption by training face recognition models on racially skewed subsets of RFW. Surprisingly, some heavily skewed subsets outperform their balanced counterparts in both overall accuracy and fairness across races.
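The paper's exact subset-construction protocol is not reproduced here; as a rough sketch, assuming a pool of samples labeled by race, a skewed subset could be drawn like this (the function and variable names are hypothetical, not from the paper):

```python
import random


def skewed_subset(samples, proportions, subset_size, seed=0):
    """Draw a fixed-size subset whose racial makeup follows `proportions`.

    `samples` is a list of (item, race) pairs; `proportions` maps each
    race label to its target fraction of the subset. This is an
    illustrative sketch, not the authors' actual pipeline.
    """
    rng = random.Random(seed)
    # Group the pool by race label.
    by_race = {}
    for item, race in samples:
        by_race.setdefault(race, []).append(item)
    # Sample each group at its target proportion.
    subset = []
    for race, frac in proportions.items():
        k = round(frac * subset_size)
        subset.extend(rng.sample(by_race[race], k))
    return subset


# Example: a heavily skewed 80/10/5/5 split over RFW's four groups.
pool = [(f"{r}_{i}", r)
        for r in ("African", "Asian", "Caucasian", "Indian")
        for i in range(1000)]
skew = {"African": 0.80, "Asian": 0.10, "Caucasian": 0.05, "Indian": 0.05}
subset = skewed_subset(pool, skew, subset_size=200)
```

A balanced baseline is recovered by setting every proportion to 0.25 with the same `subset_size`, which is what makes the skewed-versus-balanced comparison controlled.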
This work received the Best Paper Runner-Up award at the HTCV Workshop at ICCV 2021.
Those who would like to join remotely can use the following Zoom link:
https://umd.zoom.us/j/94332767827?pwd=RElnZElhNjNualliaG5pMG9zUHBKQT09
Alex Hanson is a third-year PhD student advised by Abhinav Shrivastava. He is funded by the NDSEG fellowship.