Language models for science and science for language models
Friday, November 3, 2023, 1:15-2:00 pm
Abstract
General-purpose neural sequence models have become easy to train, easy to deploy, and (reasonably) accurate in a variety of settings. As a result, we can use "language models" to build black-box predictors for data of all types---not just human language. How can we use these models to better *understand* our data and the processes that generate it? I'll present recent work on two very different scientific problems arising from applications of LMs: doing science *with* LMs (to understand the structure of sperm whale communication) and doing science *on* LMs (to discover new algorithms for arithmetic and language learning).
Bio

Jacob Andreas is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to build intelligent systems that can communicate effectively using language and learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill Scholar), and his B.S. from Columbia. He has been named a National Academy of Sciences Kavli Fellow, and has received the NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML, and NAACL.

This talk is organized by Emily Dacquisto.