In this talk I will discuss improving Multi-Agent Reinforcement Learning (MARL) performance by explicitly incorporating geometric and physical inductive biases into neural network architectures. The neural networks typically used in MARL struggle with sample efficiency and generalization because they lack such inductive biases and the guarantees they provide. Equivariant Graph Neural Networks (EGNNs) offer a powerful solution: they guarantee equivariance to geometric transformations such as rotations and reflections, significantly improving learning speed and generalization in symmetric settings. This talk presents methods for integrating EGNNs into multi-agent RL, showing marked improvements in training performance and generalization on challenging MARL benchmarks. While powerful, EGNNs have limited practical applicability because real-world scenarios often contain asymmetries that break the strict symmetry assumptions of equivariant models. We therefore introduce Partially Equivariant Graph Neural Networks (PEnGUiN), which automatically adapt to such asymmetries while retaining the sample efficiency of EGNNs. Finally, we propose building on these works by designing Equivariant Memory Layers and continuing to explore structured neural networks for learning and control.
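As a concrete illustration of the equivariance guarantee the talk builds on, the sketch below implements a simplified E(n)-equivariant message-passing update in the style of an EGNN layer and verifies numerically that rotating the input coordinates rotates the output coordinates identically while leaving the invariant node features unchanged. The weight matrices stand in for the edge, coordinate, and node MLPs of a real EGNN and are illustrative assumptions, not the architecture from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the edge, coordinate, and node MLPs (single linear
# layers here, purely for illustration).
W_e = rng.standard_normal((9, 8))   # edge: [h_i, h_j, ||x_i - x_j||^2] -> message
W_x = rng.standard_normal((8, 1))   # coordinate: message -> scalar edge weight
W_h = rng.standard_normal((12, 4))  # node: [h_i, aggregated message] -> new features

def egnn_layer(h, x):
    """One simplified E(n)-equivariant update.

    h: (n, 4) rotation-invariant node features; x: (n, 3) coordinates.
    Messages depend on coordinates only through squared distances, and the
    coordinate update is a weighted sum of difference vectors, so rotating
    x rotates the output x and leaves the output h unchanged.
    """
    n = h.shape[0]
    diff = x[:, None, :] - x[None, :, :]            # (n, n, 3) pairwise differences
    d2 = np.sum(diff**2, axis=-1, keepdims=True)    # (n, n, 1) squared distances
    hi = np.repeat(h[:, None, :], n, axis=1)        # sender features
    hj = np.repeat(h[None, :, :], n, axis=0)        # receiver features
    m = np.tanh(np.concatenate([hi, hj, d2], axis=-1) @ W_e)  # (n, n, 8) messages
    mask = (1.0 - np.eye(n))[..., None]             # drop self-edges
    x_new = x + np.sum(diff * (m @ W_x) * mask, axis=1)
    h_new = np.tanh(np.concatenate([h, np.sum(m * mask, axis=1)], axis=-1) @ W_h)
    return h_new, x_new

# Random rotation via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

h = rng.standard_normal((5, 4))
x = rng.standard_normal((5, 3))

h1, x1 = egnn_layer(h, x)          # update, then rotate
h2, x2 = egnn_layer(h, x @ Q.T)    # rotate, then update

assert np.allclose(h1, h2)         # features are invariant to rotation
assert np.allclose(x1 @ Q.T, x2)   # coordinates are equivariant to rotation
```

Because equivariance here follows from the layer's structure rather than from training, the property holds exactly for any weights, which is the source of the sample-efficiency gains discussed in the talk.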
Joshua McClellan is an AI and autonomy researcher at Johns Hopkins University Applied Physics Laboratory and a Ph.D. student in Computer Science at the University of Maryland, College Park. His research focuses on reinforcement learning, multi-agent systems, and geometric deep learning, with an emphasis on improving sample efficiency and generalization.