Demonstration-Enhanced Adaptive Multi-Objective Robot Navigation

Publication Authors J. de Heuvel; T. Sethuraman; M. Bennewitz
Published in arXiv preprint
Year of Publication 2025
Abstract

Preference-aligned robot navigation in human environments is typically achieved through learning-based approaches that personalize behavior from user feedback or demonstrations.

However, personal preferences are subject to change and might even be context-dependent.

Traditional reinforcement learning (RL) approaches with static reward functions, however, fall short in adapting to varying user preferences: once training is complete, the policy inevitably reflects the demonstrations it was trained on.

This paper introduces a structured framework that combines demonstration-based learning with multi-objective reinforcement learning (MORL).

To ensure real-world applicability, our approach allows for dynamic adaptation of the robot navigation policy to changing user preferences without retraining.

It smoothly modulates the degree to which the demonstration data is reflected, alongside other preference-related objectives.

Through rigorous evaluations, including a baseline comparison and sim-to-real transfer on two robots, we demonstrate our framework's capability to adapt to user preferences accurately while achieving high navigational performance in terms of collision avoidance and goal pursuance.
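To illustrate how a MORL policy can trade demonstration reflection off against other objectives at run time, the sketch below uses linear scalarization, a common MORL technique: per-objective rewards are combined with user-supplied preference weights, and the policy is conditioned on those weights rather than retrained. The objective names, function, and weight values are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def scalarize(objective_rewards, preference_weights):
    """Linear scalarization: collapse a vector of per-objective rewards
    into one scalar using preference weights. The weights are normalized
    so they form a convex combination over objectives."""
    r = np.asarray(objective_rewards, dtype=float)
    w = np.asarray(preference_weights, dtype=float)
    w = w / w.sum()
    return float(np.dot(r, w))

# Hypothetical objectives for a navigation step:
# [demonstration imitation, collision avoidance, goal progress]
rewards = [0.8, 0.5, 0.9]

# Changing the weights re-prioritizes objectives without any retraining;
# a weight-conditioned policy would receive these weights as input.
print(scalarize(rewards, [0.6, 0.2, 0.2]))  # emphasize demonstrations
print(scalarize(rewards, [0.1, 0.6, 0.3]))  # emphasize collision avoidance
```

Because the scalar reward is a weighted sum, shifting weight onto the imitation objective directly controls how strongly the learned behavior tracks the demonstrations at deployment time.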

Type of Publication Preprint