RIIT: Rethinking the Importance of Implementation Tricks in Multi-Agent Reinforcement Learning
Jian Hu & Seth Austin Harding - National Taiwan University, Taipei
Remote
Friday, February 26, 2021, 10:00-11:00 am
Abstract

In recent years, Multi-Agent Deep Reinforcement Learning (MADRL) has been successfully applied to complex scenarios such as playing computer games and coordinating robot swarms. In this talk, we investigate the impact of "implementation tricks" on state-of-the-art (SOTA) cooperative MADRL algorithms, such as QMIX, and offer suggestions for tuning. In examining implementation settings and how they affect fairness in MADRL experiments, we reached conclusions contrary to previous work; we discuss why QMIX's monotonicity condition is critical for cooperative tasks. Finally, we propose RIIT, a new policy-based algorithm that achieves SOTA performance among policy-based algorithms.
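As background for the monotonicity condition mentioned above: in QMIX, per-agent values are combined into a joint value through a mixing network whose weights are constrained to be non-negative, so that the joint value can only increase when any individual agent's value increases. The sketch below is illustrative only (function and variable names are hypothetical, and the real mixing weights come from a hypernetwork conditioned on the global state); it shows the constraint itself, not the full algorithm.

```python
import numpy as np

def monotonic_mix(agent_qs, w, b):
    """One illustrative mixing layer in the style of QMIX.

    Taking the absolute value of the raw weights enforces the
    monotonicity condition: dQ_tot/dQ_i >= 0 for every agent i,
    so improving any single agent's value never lowers Q_tot.
    """
    return np.abs(w) @ agent_qs + b

# Hypothetical per-agent Q-values and raw (possibly negative) weights.
agent_qs = np.array([1.0, 2.0, 3.0])
w = np.array([-0.5, 0.2, 0.4])
b = 0.1

q_tot = monotonic_mix(agent_qs, w, b)

# Raising any one agent's Q-value can only raise the joint value.
q_tot_up = monotonic_mix(agent_qs + np.array([1.0, 0.0, 0.0]), w, b)
assert q_tot_up >= q_tot
```

This constraint is what makes greedy per-agent action selection consistent with greedy joint action selection, which is why the talk highlights it as critical for cooperative tasks.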

https://arxiv.org/pdf/2102.03479.pdf

This talk is organized by Justin Terry