Human-Timescale Adaptation in an Open-Ended Task Space
Tuesday, November 28, 2023, 12:00-1:00 pm
Abstract

Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent's capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.


Paper link: https://arxiv.org/abs/2301.07608
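
The abstract above names an automated curriculum that prioritises tasks at the frontier of the agent's capabilities. The following is a minimal, hypothetical Python sketch of that general idea, not the paper's implementation: tasks whose recent success rate sits near 50% are sampled most often, while trivially solved or far-too-hard tasks are down-weighted. All names (FrontierCurriculum, record, sample_task) are illustrative.

    import math
    import random
    from collections import defaultdict, deque

    class FrontierCurriculum:
        """Hypothetical sketch of a frontier-prioritising task sampler."""

        def __init__(self, task_ids, window=50, temperature=0.1):
            self.task_ids = list(task_ids)
            self.temperature = temperature  # softmax sharpness for sampling
            # Keep only the most recent `window` episode outcomes per task.
            self.results = defaultdict(lambda: deque(maxlen=window))

        def record(self, task_id, success):
            """Store the outcome (0 or 1) of the latest episode on this task."""
            self.results[task_id].append(float(success))

        def _frontier_score(self, task_id):
            """Score peaks when the recent success rate is near 0.5, i.e. the
            task is neither trivially solved nor far beyond current ability."""
            history = self.results[task_id]
            if not history:
                return 1.0  # unseen tasks stay attractive
            rate = sum(history) / len(history)
            return 1.0 - abs(rate - 0.5) * 2.0  # 1 at rate=0.5, 0 at rate 0 or 1

        def sample_task(self):
            """Sample a task with probability increasing in its frontier score."""
            scores = [self._frontier_score(t) for t in self.task_ids]
            weights = [math.exp(s / self.temperature) for s in scores]
            return random.choices(self.task_ids, weights=weights, k=1)[0]

    # Example usage with a stand-in for running an episode on the sampled task.
    curriculum = FrontierCurriculum(task_ids=range(100))
    for _ in range(10):
        task = curriculum.sample_task()
        success = random.random() < 0.4  # placeholder episode outcome
        curriculum.record(task, success)
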

Bio

Dr. Edward Hughes is a scientific leader in the field of AI and an expert on fast adaptation. His teams have pioneered large-scale reinforcement learning, the paradigm of Cooperative AI, and ad-hoc collaboration between machines and humans. He draws inspiration from diverse sources, including cultural evolution, social psychology, economics, organisational design, and meta-learning. Edward is currently a Staff Research Engineer at Google DeepMind, where he leads the Game Theory and Multi-Agent research engineering team. He received his PhD in theoretical physics from Queen Mary University of London on applications of string theory to particle scattering.


Note: Please register using the Google Form on our website https://go.umd.edu/marl for access to the Google Meet and talk resources.

This talk is organized by Saptarashmi Bandyopadhyay.