Safe Multi-Agent Reinforcement Learning for Behavior-Based Cooperative Navigation

Publication Authors M. Dawood; S. Pan; N. Dengler; S. Zhou; A. Schoellig; M. Bennewitz
Published in IEEE Robotics and Automation Letters (RA-L)
Year of Publication 2025
Abstract

In this paper, we address the problem of behavior-based cooperative navigation of mobile robots using safe multi-agent reinforcement learning (MARL).

Our work is the first to focus on cooperative navigation without individual reference targets for the robots, using a single target for the formation's centroid. This eliminates the complexity of running a separate path planner for each robot in the team.
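As a rough illustration of this idea, the sketch below computes one shared reward from the progress of the formation centroid toward a single goal; the function name, weighting, and 2D position representation are assumptions for illustration only, not the reward used in the paper.

```python
import numpy as np

def centroid_progress_reward(positions, prev_positions, goal, w_progress=1.0):
    """Hypothetical shared reward: reward the whole team for moving the
    formation centroid toward a single goal (no per-robot reference targets)."""
    centroid = positions.mean(axis=0)            # current formation centroid
    prev_centroid = prev_positions.mean(axis=0)  # centroid at the previous step
    # Positive when the centroid moved closer to the goal during this step.
    progress = np.linalg.norm(prev_centroid - goal) - np.linalg.norm(centroid - goal)
    return w_progress * progress

# Example: three robots, one shared goal for the formation centroid.
prev = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
curr = prev + np.array([0.1, 0.0])               # formation shifted toward +x
print(centroid_progress_reward(curr, prev, goal=np.array([5.0, 0.3])))
```

Because every robot receives the same scalar signal, the team is trained toward a single centroid goal rather than toward per-robot waypoints.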

To ensure safety, our MARL framework uses model predictive control (MPC) to prevent actions that could lead to collisions during training and execution.
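As a simplified illustration of the action-filtering idea (not the paper's MPC formulation), the sketch below forward-simulates the proposed velocities under an assumed single-integrator model and falls back to a stopping action if a collision is predicted within the horizon; all names, parameters, and the fallback rule are hypothetical.

```python
import numpy as np

def mpc_style_safety_filter(positions, proposed_vels, horizon=10, dt=0.1, d_safe=0.5):
    """Simplified stand-in for a predictive safety filter: roll the proposed
    velocities forward under a single-integrator model and, if any pair of
    robots is predicted to come closer than d_safe, replace the actions with
    a safe braking (zero-velocity) fallback."""
    pred = positions.astype(float).copy()
    for _ in range(horizon):
        pred = pred + dt * proposed_vels
        diffs = pred[:, None, :] - pred[None, :, :]       # pairwise offsets
        dists = np.linalg.norm(diffs, axis=-1)            # pairwise distances
        np.fill_diagonal(dists, np.inf)                   # ignore self-distances
        if dists.min() < d_safe:
            return np.zeros_like(proposed_vels)           # override unsafe actions
    return proposed_vels                                  # actions pass the check

# Example: two robots heading straight at each other get overridden.
pos = np.array([[0.0, 0.0], [1.5, 0.0]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(mpc_style_safety_filter(pos, vel))
```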

We demonstrate the effectiveness of our method in simulation and on real robots, achieving safe behavior-based cooperative navigation without individual reference targets, with zero collisions and faster target reaching than the baselines.

Finally, we study the impact of the MPC safety filter on the learning process, showing that it leads to faster convergence during training, and we demonstrate that our approach can be safely deployed on real robots even during early stages of training.

Type of Publication Preprint