Authors: Siqi Zhang, Yanyuan Qiao, Qunbo Wang, Zike Yan, Qi Wu, Zhihua Wei, Jing Liu
Vision-and-Language Navigation (VLN) has gained prominence in artificial intelligence research due to its potential applications in fields such as home assistance. Many contemporary VLN approaches, while built on transformer architectures, increasingly incorporate additional components such as external knowledge bases or map information. These additions boost performance but also lead to larger models and higher computational costs. In this paper, to achieve both high performance and low computational cost, we propose a novel architecture with the **co**mbination of **s**elective **m**em**o**rization (COSMO), which integrates state-space modules (SSMs) with transformer modules. However, directly applying SSMs to VLN results in significant performance degradation. We therefore propose two VLN-customized selective state-space modules: the Round Selective Scan (RSS) and the Cross-modal Selective State Space Module (CS3). RSS enables comprehensive inter-modal interaction within a single scan, while CS3 adapts the selective state-space module into a dual-stream architecture, strengthening the capture of cross-modal interactions. Experiments on three mainstream VLN benchmarks, REVERIE, R2R, and R2R-CE, show that our model achieves competitive navigation performance while substantially reducing computational cost. Code is available at [VLN-COSMO](https://github.com/siqiZ805/VLN-COSMO.git).
Subject: ICCV.2025 - Poster
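The abstract describes the RSS and CS3 modules only at a high level. As a rough illustration of the kind of mechanism involved, below is a minimal selective state-space (Mamba-style) scan together with a dual-stream cross-modal variant in which each stream's selection parameters are projected from the other modality. All names (`selective_scan`, `CrossModalSSM`), shapes, and the specific way the two streams exchange their B/C projections are assumptions made for illustration; they are not the paper's released implementation, for which see the linked repository.

```python
# Minimal, illustrative sketch of a selective state-space scan and a cross-modal
# dual-stream variant. Names, shapes, and the cross-modal wiring are assumptions,
# not the authors' code.
import torch
import torch.nn as nn


def selective_scan(x, delta, A, B, C):
    """Sequential selective scan:
    h_t = exp(delta_t * A) * h_{t-1} + (delta_t * B_t) * x_t,   y_t = <h_t, C_t>.
    Shapes: x (T, D), delta (T, D), A (D, N), B (T, N), C (T, N)."""
    T, D = x.shape
    N = A.shape[1]
    h = torch.zeros(D, N)
    ys = []
    for t in range(T):
        dA = torch.exp(delta[t].unsqueeze(-1) * A)        # (D, N) input-dependent decay
        dB = delta[t].unsqueeze(-1) * B[t].unsqueeze(0)    # (D, N) input-dependent drive
        h = dA * h + dB * x[t].unsqueeze(-1)               # state update
        ys.append((h * C[t].unsqueeze(0)).sum(-1))         # (D,) readout
    return torch.stack(ys)                                 # (T, D)


class CrossModalSSM(nn.Module):
    """Dual-stream sketch: the selection parameters B and C of one stream are
    projected from the *other* modality, so the scan mixes information across
    modalities -- a rough analogue of the CS3 idea described in the abstract."""
    def __init__(self, dim=256, state=16):
        super().__init__()
        self.A = nn.Parameter(-torch.rand(dim, state))     # stable (negative) dynamics
        self.delta_proj = nn.Linear(dim, dim)
        self.B_proj = nn.Linear(dim, state)                 # B computed from the other stream
        self.C_proj = nn.Linear(dim, state)                 # C computed from the other stream

    def forward(self, x_self, x_other):
        delta = torch.nn.functional.softplus(self.delta_proj(x_self))
        B = self.B_proj(x_other)                            # cross-modal selection
        C = self.C_proj(x_other)
        return selective_scan(x_self, delta, self.A, B, C)


if __name__ == "__main__":
    vis = torch.randn(36, 256)   # e.g. panoramic view features
    txt = torch.randn(36, 256)   # instruction features aligned to the same length
    out = CrossModalSSM()(vis, txt)
    print(out.shape)             # torch.Size([36, 256])
```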