
UniPose: A Unified Multimodal Framework for Human Pose Comprehension, Generation and Editing

Authors: Yiheng Li, Ruibing Hou, Hong Chang, Shiguang Shan, Xilin Chen

Human pose plays a crucial role in the digital age. While recent works have achieved impressive progress in understanding and generating human poses, they often support only a single modality of control signals and operate in isolation, limiting their application in real-world scenarios. This paper presents UniPose, a framework employing Large Language Models (LLMs) to comprehend, generate, and edit human poses across various modalities, including images, text, and 3D SMPL poses. Specifically, we apply a pose tokenizer to convert 3D poses into discrete pose tokens, enabling seamless integration into the LLM within a unified vocabulary. To further enhance fine-grained pose perception, we equip UniPose with a mixture of visual encoders, among them a pose-specific visual encoder. Benefiting from a unified learning strategy, UniPose effectively transfers knowledge across different pose-relevant tasks, adapts to unseen tasks, and exhibits extended capabilities. This work is the first attempt at building a general-purpose framework for pose comprehension, generation, and editing. Extensive experiments highlight UniPose's competitive, and in some cases superior, performance across various pose-relevant tasks.
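The core mechanism the abstract describes, a pose tokenizer that maps continuous 3D poses to discrete tokens shared with the LLM's text vocabulary, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the codebook size, pose dimensionality, nearest-neighbor quantization, and the `<pose_i>` token naming are all assumptions standing in for the learned tokenizer described in the paper.

```python
import numpy as np

# Illustrative VQ-style pose tokenizer (hypothetical; the paper's tokenizer
# is learned, this codebook is random for demonstration only).
CODEBOOK_SIZE = 8   # assumed number of discrete pose tokens
POSE_DIM = 6        # assumed flattened pose-parameter dimensionality

rng = np.random.default_rng(0)
codebook = rng.normal(size=(CODEBOOK_SIZE, POSE_DIM))

def tokenize_pose(pose: np.ndarray) -> str:
    """Quantize a continuous pose vector to its nearest codebook entry
    and return the corresponding discrete pose token."""
    idx = int(np.argmin(np.linalg.norm(codebook - pose, axis=1)))
    return f"<pose_{idx}>"

# Unified vocabulary: ordinary text tokens plus the discrete pose tokens,
# so the LLM can read and emit poses like any other token.
text_vocab = ["a", "person", "raises", "an", "arm"]
vocab = text_vocab + [f"<pose_{i}>" for i in range(CODEBOOK_SIZE)]

pose = rng.normal(size=POSE_DIM)
token = tokenize_pose(pose)
print(token in vocab)  # the pose token is a first-class vocabulary entry
```

In practice such a tokenizer is trained (e.g., with a reconstruction objective) so that the decoder can map token sequences back to 3D poses; the sketch only shows how discrete pose tokens slot into a shared vocabulary.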

Subject: CVPR.2025 - Highlight