sFyTsO2qO3@OpenReview

Total: 1

#1 Disentangled Cross-Modal Representation Learning with Enhanced Mutual Supervision

Authors: Lu Gao, Wenlan Chen, Daoyuan Wang, Fei Guo, Cheng Liang

Cross-modal representation learning aims to extract semantically aligned representations from heterogeneous modalities such as images and text. Existing multimodal VAE-based models often struggle to align heterogeneous modalities or lack sufficient structural constraints to cleanly separate the modality-specific and shared factors. In this work, we propose a novel framework, termed **D**isentangled **C**ross-**M**odal Representation Learning with **E**nhanced **M**utual Supervision (DCMEM). Specifically, our model disentangles the common and distinct information across modalities and regularizes the shared representation learned from each modality in a mutually supervised manner. Moreover, we incorporate the information bottleneck principle into our model to ensure that the shared and modality-specific factors encode exclusive yet complementary information. Notably, our model is designed to be trainable on both complete and partial multimodal datasets under a valid Evidence Lower Bound. Extensive experiments demonstrate that our model achieves significant improvements over existing methods on tasks including cross-modal generation, clustering, and classification.
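
The abstract describes a VAE-style model with shared and modality-specific latents, a mutual-supervision term aligning the shared representations across modalities, and information-bottleneck-style regularization. The sketch below is a minimal, hypothetical illustration of that general recipe for two modalities, not the authors' implementation: the encoder/decoder architectures, the symmetric KL used as the alignment term, and all dimensions and weights (`beta`, `gamma`) are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's code) of a two-modality VAE with
# disentangled shared/private latents and a cross-modal alignment term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """MLP encoder producing the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, latent_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def kl_to_standard_normal(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()

def kl_between_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(N(mu_q, var_q) || N(mu_p, var_p)), used as a cross-modal alignment term."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1)
    return kl.sum(dim=-1).mean()

class DisentangledMMVAE(nn.Module):
    """Two modalities, each with a shared and a private (modality-specific) latent."""
    def __init__(self, x_dim, y_dim, z_shared=32, z_private=16):
        super().__init__()
        self.enc_x_s = GaussianEncoder(x_dim, z_shared)   # shared factors from modality x
        self.enc_x_p = GaussianEncoder(x_dim, z_private)  # private factors of modality x
        self.enc_y_s = GaussianEncoder(y_dim, z_shared)
        self.enc_y_p = GaussianEncoder(y_dim, z_private)
        self.dec_x = nn.Sequential(nn.Linear(z_shared + z_private, 256), nn.ReLU(),
                                   nn.Linear(256, x_dim))
        self.dec_y = nn.Sequential(nn.Linear(z_shared + z_private, 256), nn.ReLU(),
                                   nn.Linear(256, y_dim))

    def loss(self, x, y, beta=1.0, gamma=1.0):
        mu_xs, lv_xs = self.enc_x_s(x); mu_xp, lv_xp = self.enc_x_p(x)
        mu_ys, lv_ys = self.enc_y_s(y); mu_yp, lv_yp = self.enc_y_p(y)
        zx = torch.cat([reparameterize(mu_xs, lv_xs), reparameterize(mu_xp, lv_xp)], dim=-1)
        zy = torch.cat([reparameterize(mu_ys, lv_ys), reparameterize(mu_yp, lv_yp)], dim=-1)
        recon = F.mse_loss(self.dec_x(zx), x) + F.mse_loss(self.dec_y(zy), y)
        # beta-weighted KL terms act as an information-bottleneck-style constraint.
        kl = (kl_to_standard_normal(mu_xs, lv_xs) + kl_to_standard_normal(mu_xp, lv_xp) +
              kl_to_standard_normal(mu_ys, lv_ys) + kl_to_standard_normal(mu_yp, lv_yp))
        # Symmetric KL aligning the two shared posteriors: each modality's shared
        # code supervises the other's (a stand-in for the paper's mutual supervision).
        align = (kl_between_gaussians(mu_xs, lv_xs, mu_ys.detach(), lv_ys.detach()) +
                 kl_between_gaussians(mu_ys, lv_ys, mu_xs.detach(), lv_xs.detach()))
        return recon + beta * kl + gamma * align

if __name__ == "__main__":
    model = DisentangledMMVAE(x_dim=784, y_dim=300)
    x, y = torch.randn(8, 784), torch.randn(8, 300)
    print(model.loss(x, y).item())
```

In this sketch, the private latents only feed their own modality's decoder while the shared latents are pushed toward each other, which is one simple way to encourage the shared/specific split the abstract describes; how the paper actually handles partial (missing-modality) data and its exact ELBO are not reflected here.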

Subject: NeurIPS.2025 - Poster