Trust in the Absence of Verifiability
Hal Daumé
IRB 0318 or Zoom: https://umd.zoom.us/j/92721031800?pwd=dGhidU13dzl0cmI2eUM4SzJLNTZrZz09
Friday, September 8, 2023, 11:00-11:55 am
Abstract

When NLP systems, in particular large language models, generate claims that people can easily verify to be untrue, trust is irrelevant. What matters is when they generate claims that are not (easily) verified to be true. This raises two questions: (1) How much complementarity is there between what LLMs "know" and what people do? (2) Can LLMs themselves provide the missing complementary information? I'll discuss some good news and bad news that provide partial answers to these questions.

Bio

Hal Daumé is a Volpi-Cupal Endowed Professor of Computer Science at the University of Maryland, where he leads an NSF- and NIST-funded institute on Trustworthy AI, and a Senior Principal Researcher at Microsoft Research. His research focuses on developing natural language processing systems that interact naturally with people and promote their self-efficacy, while mitigating societal harms. He has received several awards, including best paper at ACL 2018, NAACL 2016, CEAS 2011, and ECML 2009, as well as best demo at NeurIPS 2015. He has been program chair for ICML 2020 (together with Aarti Singh) and for NAACL 2013 (together with Katrin Kirchhoff), and he was an inaugural diversity and inclusion co-chair at NeurIPS 2018 (with Katherine Heller).

This talk is organized by Samuel Malede Zewdu.