For decades, artificial intelligence and cognitive science have aimed to create systems that learn and use language like humans. Progress toward this goal has accelerated sharply in recent years, opening new avenues for using LLMs to advance linguistics and cognitive science, and for using cognitive science to advance and understand LLMs. In this talk, I will survey past, ongoing, and planned work from my group that pursues these directions. Particular projects I will focus on include training an LLM on synthetic data from formal languages to improve its sample efficiency, and using Bayesian models to evaluate and improve LLM assistants' ability to update their beliefs about users. I will also demonstrate how progress toward LLMs that are more human-like (for example, in their data efficiency and memory constraints) can advance not only the study of human language comprehension, but also interactive AI systems, where more realistic user simulators will help us develop AI assistants that better collaborate with and teach people.

