Authors: Haru Kondoh, Asako Kanezaki
The field of multimodal robot navigation in indoor environments has garnered significant attention in recent years. However, as tasks and methods become more advanced, action decision systems tend to grow more complex and operate as black boxes. For a reliable system, the ability to explain or describe its decisions is crucial; yet there is often a trade-off, in that explainable systems tend to underperform non-explainable ones. In this paper, we propose incorporating the task of describing actions in language into the reinforcement learning of navigation as an auxiliary task. Existing studies have found it difficult to incorporate action description into reinforcement learning due to the absence of ground-truth data. We address this issue by leveraging knowledge distillation from pre-trained description generation models, such as vision-language models. We comprehensively evaluate our approach across various navigation tasks, demonstrating that it can describe actions while attaining high navigation performance. Furthermore, it achieves state-of-the-art performance on the particularly challenging multimodal navigation task of semantic audio-visual navigation.
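To illustrate the general idea, the sketch below shows one way an auxiliary description loss, distilled from a frozen pre-trained teacher, could be added to a policy-gradient objective. This is not the authors' implementation; the module names, feature sizes, the REINFORCE-style loss, and the `aux_weight` hyperparameter are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): RL policy loss plus an
# auxiliary description loss distilled from a frozen teacher model.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, N_ACTIONS = 1000, 256, 4

class Agent(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(512, HIDDEN)        # stand-in for the multimodal encoder
        self.policy_head = nn.Linear(HIDDEN, N_ACTIONS)
        self.desc_head = nn.Linear(HIDDEN, VOCAB)    # predicts description-token logits

    def forward(self, obs):
        h = torch.relu(self.encoder(obs))
        return self.policy_head(h), self.desc_head(h)

agent = Agent()
optimizer = torch.optim.Adam(agent.parameters(), lr=1e-4)

obs = torch.randn(8, 512)                    # batch of fused observation features
returns = torch.randn(8)                     # placeholder returns / advantages
with torch.no_grad():                        # frozen pre-trained description model (teacher)
    teacher_logits = torch.randn(8, VOCAB)   # in practice: a VLM's description distribution

action_logits, desc_logits = agent(obs)
dist = torch.distributions.Categorical(logits=action_logits)
actions = dist.sample()

# REINFORCE-style policy loss (stands in for the actual RL objective).
policy_loss = -(dist.log_prob(actions) * returns).mean()

# Auxiliary distillation loss: match the teacher's description distribution.
aux_loss = F.kl_div(F.log_softmax(desc_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")

aux_weight = 0.1                             # assumed weighting hyperparameter
loss = policy_loss + aux_weight * aux_loss

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The key design point the abstract suggests is that the teacher supplies supervision for the description head, sidestepping the lack of ground-truth action descriptions, while the shared encoder lets the auxiliary signal shape the navigation policy.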
Subject: ICCV.2025 - Poster