Adapting Language Models to a Dynamic World
Wednesday, May 3, 2023, 11:00 am–12:00 pm

NLP systems increasingly rely on large pre-trained language models and their memorized world knowledge, which tends to go out of date. In this talk, I will describe our lab’s work on model adaptation. First, I will investigate targeted updates to LMs: whether LMs can make inferences based on injected facts. We find that most existing methods for updating knowledge (e.g., gradient-based fine-tuning) show little propagation of the injected knowledge, while prepending the same information at inference time works robustly. Given this challenge, in the second part of the talk, I will focus on improving calibration by predicting which facts are prone to rapid change and adjusting the model’s confidence accordingly. In the last part of the talk, I will present continuously updating models from user feedback, studying information-seeking scenarios where crowdworkers interact with deployed extractive QA systems. Overall, this talk will present challenges and progress in building models that can adapt to an evolving world.


Eunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Her research spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in real-world contexts. Prior to UT, she was a visiting scholar at Google AI. She received her Ph.D. from the University of Washington and her B.A. from Cornell University. She is a recipient of a Facebook Research Fellowship, a Google Faculty Research Award, and an Outstanding Paper Award at EMNLP.

This talk is organized by Rachel Rudinger