42150@AAAI


Scaling Up Cooperative Multi-Agent Reinforcement Learning Through Hierarchical Heterogeneous Modular Architectures

Author: Minghong Geng

Multi-agent reinforcement learning enables sophisticated collaborative behaviors in autonomous systems, yet fundamental scalability barriers persist: existing methods struggle both to coordinate large agent populations and to handle extended decision-making horizons. This research develops hierarchical approaches to scaling up multi-agent learning systems along two complementary directions: structural scaling, which coordinates increasing numbers of agents, and temporal scaling, which extends decision-making horizons. It presents four integrated contributions: a taxonomic survey establishing hierarchical architectures as the theoretical foundation for scalable multi-agent learning systems; a benchmark for long-horizon, multi-objective multi-agent reinforcement learning; a framework integrating self-organizing neural networks with multiple reinforcement learning agents for hierarchical tri-level control; and a framework leveraging large language models for zero-shot multi-agent planning. Through comprehensive validation, this work demonstrates that hierarchical, heterogeneous, modular architectures provide unified, interpretable solutions to multi-agent scalability, bridging theoretical multi-agent reinforcement learning research with real-world deployment requirements.

Subject: AAAI.2026 - Doctoral Consortium Track