PhD Proposal: Scaling Policy Gradient Methods to Open-Ended Domains
Ryan Sullivan
Wednesday, May 1, 2024, 2:00-4:00 pm
Abstract
Curriculum learning has been a quiet yet crucial component of many of the major successes of reinforcement learning. AlphaGo learned to play the board game Go using self-play, which produces an implicit curriculum of increasingly challenging opponents. OpenAI Five was trained to play Dota by progressively adding complexity to the environment and randomizing game features to encourage robustness. GT Sophy, an agent that plays the racing game Gran Turismo at a professional level, learned from a manually curated distribution of racing scenarios. Notably, with the use of curriculum learning, many of these milestones were achieved with simple policy gradient methods. Despite its near ubiquity in successful reinforcement learning applications, curriculum learning is rarely the focus of research and is often mentioned only as a minor implementation detail.

This began to change with the advent of open-endedness research. Open-ended environments have large, growing task spaces that present constantly evolving challenges to agents, similar to the real world. In these settings with countless tasks that agents may choose to devote time to, it is crucial to identify tasks that will teach transferable skills, and to learn those skills as efficiently as possible. Curriculum learning is therefore a required component of open-endedness research.

This work develops a stronger empirical understanding of policy gradient methods and curriculum learning in complex, multi-task environments. We propose a new method for plotting reward surfaces and use them to identify challenges for policy gradient methods in sparse-reward environments. We explore implementation tricks that have successfully improved the reward scale robustness of model-based RL algorithms and show that they are not effective when transferred to model-free PPO. Our findings demonstrate that direct policy optimization and clever implementation tricks are not enough for model-free policy gradient algorithms to solve challenging RL tasks. This motivates the use of curriculum learning, which circumvents these problems by training on easier subtasks.
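To make the reward-surface idea concrete, here is a minimal sketch of one common way to visualize a policy's reward landscape: perturb the policy parameters along two random directions and record the mean episodic return at each point on a grid. The function names (e.g. evaluate_return) and the normalization scheme are illustrative assumptions, not the specific method proposed in this work.

    import numpy as np
    import torch

    def reward_surface(policy, evaluate_return, steps=11, radius=1.0):
        """Sketch: mean return over a 2-D grid of parameter perturbations.

        policy is a torch.nn.Module; evaluate_return(policy) is a
        user-supplied placeholder that runs rollouts and returns the
        mean episodic return.
        """
        base = [p.detach().clone() for p in policy.parameters()]
        # Two random directions in parameter space, rescaled so each
        # perturbation is comparable in magnitude to the original weights.
        dirs = []
        for _ in range(2):
            d = [torch.randn_like(p) for p in base]
            d = [di * (pi.norm() / (di.norm() + 1e-8)) for di, pi in zip(d, base)]
            dirs.append(d)

        alphas = np.linspace(-radius, radius, steps)
        surface = np.zeros((steps, steps))
        for i, a in enumerate(alphas):
            for j, b in enumerate(alphas):
                with torch.no_grad():
                    for p, p0, d1, d2 in zip(policy.parameters(), base, *dirs):
                        p.copy_(p0 + float(a) * d1 + float(b) * d2)
                surface[i, j] = evaluate_return(policy)
        # Restore the original parameters before returning the grid.
        with torch.no_grad():
            for p, p0 in zip(policy.parameters(), base):
                p.copy_(p0)
        return alphas, surface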

We develop a general-purpose library for curriculum learning and reimplement several popular algorithms within that framework, identifying shared components between methods and evaluating their impact across algorithms and environments. This allows us to transfer improvements between methods, resulting in new algorithms and a stronger foundational understanding of automatic curriculum learning.
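As a rough illustration of the kind of shared structure such a framework can expose, the sketch below defines a minimal curriculum interface (sample the next task, update from training feedback) together with a toy learning-progress strategy. The class and method names are hypothetical and are not the actual API of the library described above.

    import random
    from abc import ABC, abstractmethod

    class Curriculum(ABC):
        """Illustrative curriculum interface (hypothetical, not a real API).

        A curriculum chooses which task the agent trains on next (sample)
        and updates its statistics from training feedback (update).
        """

        def __init__(self, tasks):
            self.tasks = list(tasks)

        @abstractmethod
        def sample(self):
            """Return the next task to train on."""

        @abstractmethod
        def update(self, task, score):
            """Record the agent's score (e.g. episodic return) on task."""

    class LearningProgressCurriculum(Curriculum):
        """Toy strategy: prefer tasks whose scores are changing fastest,
        a common proxy for learning progress in curriculum methods."""

        def __init__(self, tasks, eps=0.1):
            super().__init__(tasks)
            self.eps = eps                       # chance of sampling uniformly
            self.prev = {t: 0.0 for t in tasks}  # last score seen per task
            self.progress = {t: 0.0 for t in tasks}

        def sample(self):
            if random.random() < self.eps:
                return random.choice(self.tasks)
            return max(self.tasks, key=lambda t: abs(self.progress[t]))

        def update(self, task, score):
            self.progress[task] = score - self.prev[task]
            self.prev[task] = score

In a training loop, the agent would call sample() to choose the next task before each episode and update() with the episodic return afterward.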
 
Examining Committee

Chair: Dr. John Dickerson

Department Representative: Dr. Ming Lin

Members: Dr. Furong Huang
Bio

Ryan Sullivan is a 4th-year PhD student advised by Dr. John Dickerson. His work focuses on using reinforcement learning and curriculum learning to solve hard multi-task problems, with an emphasis on empirical results and open-source research libraries.

This talk is organized by Migo Gui