DM^2: Decentralized Multi-Agent Reinforcement Learning via Distribution Matching
Tuesday, April 11, 2023, 4:00-5:00 pm
Registration requested: The organizer of this talk requests that you register if you are planning to attend.

Abstract

Current approaches to multi-agent cooperation rely heavily on centralized mechanisms or explicit communication protocols to ensure convergence. This paper studies the problem of distributed multi-agent learning without resorting to centralized components or explicit communication. It examines the use of distribution matching to facilitate the coordination of independent agents. In the proposed scheme, each agent independently minimizes the distribution mismatch to the corresponding component of a target visitation distribution. The theoretical analysis shows that, under certain conditions, if each agent minimizes its individual distribution mismatch, the agents jointly converge to the joint policy that generated the target distribution. Further, if the target distribution comes from a joint policy that optimizes a cooperative task, then the policy that optimizes the combination of this task reward and the distribution matching reward is that same joint policy.
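Stated a bit more formally (the notation below is illustrative and is not taken from the paper): let d^{\pi_i} denote agent i's visitation distribution and d^{E_i} the i-th component of the target visitation distribution. Each agent independently solves roughly

    \min_{\pi_i} \; D\!\left( d^{\pi_i} \,\|\, d^{E_i} \right),

and, given a cooperative task reward R and some weighting coefficient \lambda, the combined objective referred to above has the form

    \max_{\pi_i} \; \mathbb{E}_{\pi_i}\!\left[ R \right] \;-\; \lambda \, D\!\left( d^{\pi_i} \,\|\, d^{E_i} \right).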

This insight is used to formulate a practical algorithm (DM^2), in which each individual agent matches a target distribution derived from concurrently sampled trajectories of a joint expert policy. Experimental validation in the StarCraft domain shows that combining (1) a task reward and (2) a distribution matching reward derived from expert demonstrations of the same task allows agents to outperform a naive distributed baseline. Additional experiments probe the conditions under which the expert demonstrations must be sampled to obtain the learning benefits.
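The per-agent reward combination described above can be sketched in a few lines of Python. This is only a rough illustration, assuming (as in GAIL-style imitation learning) that each agent trains its own discriminator to provide the distribution matching signal; the class, the function names, and the coefficient c_match below are hypothetical and are not taken from the paper.

    # Illustrative sketch: per-agent task reward plus a distribution-matching bonus.
    import torch
    import torch.nn as nn

    class Discriminator(nn.Module):
        """Scores whether a local observation looks like agent i's expert data."""
        def __init__(self, obs_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, obs):
            return self.net(obs)  # logit: expert vs. learner

    def matching_reward(disc, obs):
        # Higher when the discriminator believes the observation is expert-like.
        with torch.no_grad():
            p_expert = torch.sigmoid(disc(obs))
        return torch.log(p_expert + 1e-8)

    def combined_reward(task_reward, disc, obs, c_match=0.5):
        # Each agent optimizes the task reward plus its own matching bonus;
        # no centralized critic or inter-agent communication is used here.
        return task_reward + c_match * matching_reward(disc, obs)

    # Example usage (shapes illustrative):
    # disc_i = Discriminator(obs_dim=32)
    # r_i = combined_reward(torch.tensor(1.0), disc_i, torch.randn(32))

Each agent would keep its own discriminator and policy, trained only on its own observations and its component of the concurrently sampled expert trajectories, which is what keeps the scheme fully decentralized.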


Bio

Caroline is a Computer Science Ph.D. student at UT Austin, advised by Prof. Peter Stone of the Department of Computer Science at UT Austin, who is also Executive Director of Sony AI America and Director of Texas Robotics. Her research interests are decentralized cooperative multi-agent reinforcement learning and leveraging demonstration knowledge to improve the sample efficiency of reinforcement learning. She received a B.S. in Mathematics and Computer Science from Duke University in 2020, where she researched interpretable machine learning methods for criminal recidivism prediction with Prof. Cynthia Rudin.


This talk is organized by Saptarashmi Bandyopadhyay