Liu_Text-to-Any-Skeleton_Motion_Generation_Without_Retargeting@ICCV2025@CVF

Total: 1

#1 Text-to-Any-Skeleton Motion Generation Without Retargeting [PDF]

Authors: Qingyuan Liu, Ke Lv, Kun Dong, Jian Xue, Zehai Niu, Jinbao Wang

Text-driven motion generation has advanced rapidly in recent years. However, existing methods are typically limited to standardized skeletons and rely on a cumbersome retargeting process to adapt to the varying skeletal configurations of diverse characters. In this paper, we present OmniSkel, a novel framework that directly generates high-quality human motions for any user-defined skeleton without retargeting. Specifically, we introduce a skeleton-aware RVQ-VAE that uses Kinematic Graph Cross Attention (K-GCA) to effectively integrate skeletal information into motion encoding and reconstruction. Moreover, we propose a simple yet effective training-free approach, the Motion Restoration Optimizer (MRO), which ensures zero bone-length error while preserving motion smoothness. To facilitate our research, we construct SkeleMotion-3D, a large-scale text-skeleton-motion dataset built on HumanML3D. Extensive experiments demonstrate the excellent robustness and generalization of our method.
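The abstract describes MRO only at a high level (a training-free step that enforces exact bone lengths on generated motion). As a rough illustration of that general idea, the sketch below projects each frame's bones onto target lengths while keeping the generated bone directions; all names, the toy skeleton, and the direction-preserving strategy are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def restore_bone_lengths(joints, parents, target_lengths):
    """Minimal training-free bone-length restoration sketch (not the paper's MRO).

    joints:         (T, J, 3) generated joint positions
    parents:        list of parent indices; parents[root] == -1
    target_lengths: (J,) desired bone length for each non-root joint
    Returns joints adjusted so every bone matches its target length exactly,
    reusing each frame's generated bone direction so the motion stays close
    to (and as smooth as) the original prediction.
    """
    restored = joints.copy()
    # Visit joints so that every parent is processed before its children.
    order = sorted(range(len(parents)), key=lambda j: _depth(j, parents))
    for j in order:
        p = parents[j]
        if p < 0:
            continue  # the root keeps its generated trajectory
        bone = joints[:, j] - joints[:, p]                                   # (T, 3)
        direction = bone / (np.linalg.norm(bone, axis=-1, keepdims=True) + 1e-8)
        restored[:, j] = restored[:, p] + direction * target_lengths[j]      # exact length
    return restored

def _depth(j, parents):
    # Number of ancestors of joint j in the kinematic tree.
    d = 0
    while parents[j] >= 0:
        j = parents[j]
        d += 1
    return d

if __name__ == "__main__":
    # Toy example: a 3-joint chain over 4 frames with target bone lengths of 1.0.
    T, parents = 4, [-1, 0, 1]
    joints = np.cumsum(np.random.randn(T, 3, 3) * 0.5, axis=1)
    lengths = np.array([0.0, 1.0, 1.0])
    out = restore_bone_lengths(joints, parents, lengths)
    err = np.abs(np.linalg.norm(out[:, 1:] - out[:, [0, 1]], axis=-1) - 1.0)
    print("max bone-length error:", err.max())  # ~0 by construction
```

This per-frame projection achieves zero bone-length error by construction; the paper's MRO additionally optimizes for motion smoothness, which a simple direction-preserving pass like this only inherits from the generated motion rather than enforcing explicitly.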

Subject: ICCV.2025 - Poster