

VIPerson: Flexibly Generating Virtual Identity for Person Re-Identification

Authors: Xiao-Wen Zhang, Delong Zhang, Yi-Xing Peng, Zhi Ouyang, Jingke Meng, Wei-Shi Zheng

Person re-identification (ReID) aims to match images of the same person across different camera views. Training ReID models requires a substantial amount of labeled real-world data, leading to high labeling costs and privacy concerns. Although several methods for synthesizing ReID data have been proposed to address these issues, they fail to generate images with new identities or real-world camera styles. In this paper, we propose a novel pedestrian generation pipeline, VIPerson, to generate camera-realistic pedestrian images with flexible Virtual Identities for the Person ReID task. VIPerson focuses on three key factors in data synthesis: (I) Virtual identity diversity: enhancing the latent diffusion model with our proposed dropout text embedding, we flexibly generate random and hard identities. (II) Scalable cross-camera variations: VIPerson introduces scalable variations of scenes and poses within each identity. (III) Camera-realistic style: adopting an identity-agnostic approach to transfer realistic styles, we avoid exposing the privacy of real identities. Extensive experimental results across a broad range of downstream ReID tasks demonstrate the superiority of our generated dataset over existing ones. In addition, VIPerson can be adapted to the identity-expansion scenario, which broadens the applicability of our pipeline. The dataset and code of VIPerson are available at https://isee-laboratory.github.io/VIPerson/.
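The abstract only names the dropout text embedding idea without detailing it; the minimal PyTorch sketch below shows one plausible reading, not the authors' implementation: randomly dropping whole prompt-token embeddings before they condition a latent diffusion model, so weakened identity cues let each sample drift toward a new virtual identity. The function name dropout_text_embedding, the drop rate, and the conditioning call are hypothetical assumptions.

    # Sketch of dropping prompt-token embeddings before diffusion conditioning.
    # Assumptions: a CLIP-style text encoder output of shape (batch, tokens, dim)
    # and a U-Net denoiser conditioned via encoder hidden states (illustrative only).
    import torch

    def dropout_text_embedding(text_emb: torch.Tensor, p: float = 0.3) -> torch.Tensor:
        """Zero out whole token embeddings independently with probability p.

        Dropped tokens weaken the prompt's identity cues, so the diffusion
        prior fills in the remaining appearance details differently per sample.
        """
        keep = torch.rand(text_emb.shape[:2], device=text_emb.device) > p  # (B, T)
        return text_emb * keep.unsqueeze(-1)

    # Usage: each sample in the batch gets a different random subset of tokens.
    emb = torch.randn(4, 77, 768)             # e.g. a CLIP-style prompt embedding
    perturbed = dropout_text_embedding(emb)   # varied identity cues per sample
    # latents = unet(latents, t, encoder_hidden_states=perturbed)  # sketch only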

Subject: ICCV.2025 - Poster