Emergent Dominance Hierarchies in Reinforcement Learning Agents
Tuesday, March 19, 2024, 1:00-2:00 pm

Abstract

Modern Reinforcement Learning (RL) algorithms can outperform humans in a wide variety of tasks. Multi-agent reinforcement learning (MARL) settings present additional challenges, and successful cooperation in mixed-motive groups of agents depends on a delicate balancing act between individual and group objectives. Social conventions and norms, often inspired by human institutions, are used as tools for striking this balance.

In this paper, we examine a fundamental, well-studied social convention that underlies cooperation in both animal and human societies: dominance hierarchies.

We adapt the ethological theory of dominance hierarchies to artificial agents, borrowing the established terminology and definitions with as few amendments as possible. We demonstrate that populations of RL agents, operating without explicit programming or intrinsic rewards, can invent, learn, enforce, and transmit a dominance hierarchy to new populations. The dominance hierarchies that emerge are similar in structure to those observed in chickens, mice, fish, and other species.

Full-length paper: https://arxiv.org/abs/2401.12258

Link to the code: https://github.com/cool-RR/chicken-coop
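As an illustrative aside (not drawn from the paper or the repository above): the structure of a dominance hierarchy can be quantified from pairwise contest outcomes using David's score, a standard ethological ranking measure. The sketch below is a minimal, hypothetical Python example; the agents and win counts are invented for illustration.

import numpy as np

def davids_scores(wins: np.ndarray) -> np.ndarray:
    """Return David's scores given wins[i, j] = number of contests agent i won against agent j."""
    totals = wins + wins.T
    with np.errstate(divide="ignore", invalid="ignore"):
        p = np.where(totals > 0, wins / totals, 0.0)  # P_ij: proportion of i's contests against j that i won
    w = p.sum(axis=1)       # w_i: summed win proportions
    l = p.sum(axis=0)       # l_i: summed loss proportions
    w2 = p @ w              # w2_i: win proportions weighted by opponents' w
    l2 = p.T @ l            # l2_i: loss proportions weighted by opponents' l
    return w + w2 - l - l2  # higher score = higher rank in the hierarchy

# Hypothetical contest counts among four agents with a near-linear hierarchy.
wins = np.array([[0, 8, 9, 7],
                 [1, 0, 6, 8],
                 [0, 2, 0, 5],
                 [1, 0, 3, 0]], dtype=float)
print(davids_scores(wins))  # scores sorted in descending order give the dominance ranking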

Bio

Ram is an AI Safety researcher at Bar-Ilan University. He is interested in exploring the use of social behavior in MARL environments for AI Safety goals such as AI Interpretability and AI Corrigibility. Ram has spent most of his career working as a software engineer, most recently at Google. He is a recurring contributor to the open-source ecosystem, for which he was recognized as a Fellow by the Python Software Foundation. More information on Ram's research is available at https://r.rachum.com/

Note: Please register using the Google Form on our website, https://go.umd.edu/marl, for access to the Google Meet link, the Open-source Multi-Agent AI Research Community, and the talk resources.

This talk is organized by Saptarashmi Bandyopadhyay.