When the Majority is Wrong: Modeling Annotator Disagreement for Language Tasks
Wednesday, November 29, 2023, 11:00 am-12:00 pm
Abstract

Machine learning methods have long used majority vote among annotators for ground truth labels, but annotator disagreement often reflects real differences in opinion, not noise. This issue is particularly key for training large language models, which perform a wide range of often sensitive tasks for a diverse population. For example, a crucial problem in hate speech detection is whether a statement is offensive to the demographic that it targets, which may constitute a small fraction of the annotator pool. In this talk, I’ll present a model that predicts individual annotators’ ratings on potentially offensive text and combines this information with the predicted group targeted by the text to model the opinions of relevant stakeholders. I’ll also discuss ongoing challenges and opportunities of designing large language models that incorporate human feedback from multiple perspectives.
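To make the idea of predicting individual annotators' ratings more concrete, here is a minimal sketch (not the speaker's implementation) of one common way such a model can be set up: a shared text representation combined with a learned per-annotator embedding. All names, dimensions, and design choices below are illustrative assumptions.

```python
# Minimal illustrative sketch: predict an individual annotator's rating of a text
# by combining text features with a learned annotator embedding.
# This is an assumed architecture for illustration, not the method from the talk.
import torch
import torch.nn as nn

class AnnotatorRatingModel(nn.Module):
    def __init__(self, text_dim=768, num_annotators=1000, annotator_dim=32):
        super().__init__()
        # One learned embedding per annotator captures that annotator's tendencies.
        self.annotator_embedding = nn.Embedding(num_annotators, annotator_dim)
        # Predict a single offensiveness rating from text features + annotator identity.
        self.head = nn.Sequential(
            nn.Linear(text_dim + annotator_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, text_features, annotator_ids):
        # text_features: (batch, text_dim) vectors from some text encoder
        # annotator_ids: (batch,) integer IDs of the annotators being modeled
        ann = self.annotator_embedding(annotator_ids)
        return self.head(torch.cat([text_features, ann], dim=-1)).squeeze(-1)

# Usage example with random features: two (text, annotator) pairs, two predicted ratings.
model = AnnotatorRatingModel()
feats = torch.randn(2, 768)
ids = torch.tensor([3, 17])
print(model(feats, ids))
```

Predictions of this kind could then be aggregated over the annotators judged most relevant (e.g., members of the demographic group targeted by the text) rather than over the full pool, which is the aggregation question the abstract raises.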

Bio

Eve Fleisig is a third-year PhD student at UC Berkeley, advised by Rediet Abebe and Dan Klein. Her research lies at the intersection of natural language processing and AI ethics, with a focus on preventing societal harms of generative models and incorporating human preferences into language models. Previously, she received a B.S. in computer science from Princeton University. She is a Berkeley Chancellor’s Fellow and recipient of the NSF Graduate Research Fellowship.

This talk is organized by Rachel Rudinger