Making and measuring progress in long-form language processing
Mohit Iyyer
IRB 4105 or https://umd.zoom.us/j/95853135696?pwd=VVEwMVpxeElXeEw0ckVlSWNOMVhXdz09
Tuesday, March 26, 2024, 1:00-2:00 pm
Abstract

Recent advances in large language models (LLMs) have enabled them to process texts that are millions of words long, fueling demand for long-form language processing tasks such as the summarization or translation of books. However, LLMs struggle to take full advantage of the information within such long contexts, which contributes to factually incorrect and incoherent text generation. In this talk, I first demonstrate an issue that plagues even modern LLMs: their tendency to assign high probability to implausible long-form continuations of their input. I then describe a contrastive sequence-level ranking model that mitigates this problem at decoding time and can also be adapted to the RLHF alignment paradigm. Next, I consider the growing problem of long-form evaluation: as the length of the inputs and outputs of long-form tasks grows further, how do we even measure progress? I propose a high-level framework (applicable to both human and automatic evaluation) that first decomposes a long-form text into simpler atomic units before then evaluating each unit on a specific aspect. I demonstrate the framework's effectiveness at evaluating factuality and coherence on tasks such as biography generation and book summarization. Finally, I will discuss my future research vision, which aims to build collaborative, multilingual, and secure long-form language processing systems.

Bio

Mohit Iyyer is an associate professor in computer science at the University of Massachusetts Amherst, with a primary research interest in natural language generation. He is the recipient of best paper awards at NAACL (2016, 2018), an outstanding paper award at EACL 2023, and a best demo award at NeurIPS 2015, and he also received the 2022 Samsung AI Researcher of the Year award. He obtained his PhD in computer science from the University of Maryland, College Park in 2017 and spent the following year as a researcher at the Allen Institute for Artificial Intelligence.

This talk is organized by Samuel Malede Zewdu