ECCV 2024

Total: 1

#1 Motion Mamba: Efficient and Long Sequence Motion Generation

Authors: Zeyu Zhang, Akide Liu, Ian Reid, Richard Hartley, Bohan Zhuang, Hao Tang

Human motion generation is a significant pursuit in generative computer vision, yet achieving efficient, long-sequence motion generation remains challenging. Recent advancements in state space models (SSMs), notably Mamba, have shown considerable promise for long-sequence modeling thanks to an efficient, hardware-aware design, making SSMs a promising foundation for motion generation models. Nevertheless, adapting SSMs to motion generation faces hurdles due to the lack of a specialized architecture for modeling motion sequences. To address these challenges, we make three key contributions. First, we propose Motion Mamba, a simple yet effective approach that presents a pioneering motion generation model built on SSMs. Second, we design a Hierarchical Temporal Mamba (HTM) block that processes temporal data by traversing a symmetric architecture aimed at preserving motion consistency between frames. We also design a Bidirectional Spatial Mamba (BSM) block that processes latent poses bidirectionally to enhance motion accuracy within each temporal frame. Lastly, the proposed method outperforms well-established methods on the HumanML3D and KIT-ML datasets, demonstrating strong capabilities for high-quality long-sequence motion modeling and real-time human motion generation.
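
To make the bidirectional-scan idea concrete, here is a minimal, hypothetical PyTorch sketch of scanning latent poses in both directions and fusing the results, loosely in the spirit of the BSM block described above. It is not the authors' implementation: the selective SSM scan is replaced by a toy linear recurrence, and all names (`LinearScan`, `BidirectionalSpatialBlock`), shapes, and the residual fusion are illustrative assumptions.

```python
# Illustrative sketch only: a toy linear recurrence stands in for the
# selective SSM scan used by Mamba; names and shapes are assumptions.
import torch
import torch.nn as nn


class LinearScan(nn.Module):
    """Toy stand-in for an SSM scan: h_t = a * h_{t-1} + u_t, left to right."""

    def __init__(self, dim: int):
        super().__init__()
        self.a = nn.Parameter(torch.full((dim,), 0.9))  # per-channel decay
        self.proj_in = nn.Linear(dim, dim)
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length, dim)
        u = self.proj_in(x)
        h = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.shape[1]):
            h = self.a * h + u[:, t]
            outs.append(h)
        return self.proj_out(torch.stack(outs, dim=1))


class BidirectionalSpatialBlock(nn.Module):
    """Scans the latent-pose dimension forward and backward, then fuses."""

    def __init__(self, dim: int):
        super().__init__()
        self.fwd = LinearScan(dim)
        self.bwd = LinearScan(dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, num_latent_poses, dim) for a single temporal frame
        forward_out = self.fwd(z)
        backward_out = self.bwd(torch.flip(z, dims=[1])).flip(dims=[1])
        return self.norm(z + forward_out + backward_out)  # residual fusion


if __name__ == "__main__":
    block = BidirectionalSpatialBlock(dim=64)
    z = torch.randn(2, 16, 64)   # 2 samples, 16 latent poses, 64 channels
    print(block(z).shape)        # torch.Size([2, 16, 64])
```

The point of the sketch is only the data flow: the same sequence of latent poses is scanned in both directions so that each position can aggregate context from both sides before the frame's pose representation is produced.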