When NLP systems, in particular large language models, generate claims that people can easily verify to be untrue, trust is irrelevant. What matters is when they generate claims whose truth cannot be (easily) verified. This raises two questions: (1) how much complementarity is there between what LLMs "know" and what people know, and (2) can LLMs themselves provide the missing complementary information? I'll discuss some good news and bad news that provide partial answers to these questions.
Hal Daumé is a Volpi-Cupal Endowed Professor of Computer Science at the University of Maryland, where he leads an NSF- and NIST-funded institute on Trustworthy AI, as well as a Senior Principal Researcher at Microsoft Research. His research focuses on developing natural language processing systems that interact naturally with people and promote their self-efficacy, while mitigating societal harms. He has received several awards, including best paper at ACL 2018, NAACL 2016, CEAS 2011, and ECML 2009, as well as best demo at NeurIPS 2015. He was program chair for ICML 2020 (together with Aarti Singh) and for NAACL 2013 (together with Katrin Kirchhoff), and he was an inaugural diversity and inclusion co-chair at NeurIPS 2018 (with Katherine Heller).