
#1 Semi-ViM: Bidirectional State Space Model for Mitigating Label Imbalance in Semi-Supervised Learning

Authors: Hongyang He, Hongyang Xie, Haochen You, Victor Sanchez

Semi-supervised learning (SSL) is often hindered by learning biases when imbalanced datasets are used for training, which limits its effectiveness in real-world applications. In this paper, we propose Semi-ViM, a novel SSL framework based on Vision Mamba, a bidirectional state space model (SSM) that serves as a superior alternative to Transformer-based architectures for visual representation learning. Semi-ViM effectively deals with label imbalance and improves model stability through two key innovations: LyapEMA, a stability-aware parameter update mechanism inspired by Lyapunov theory, and SSMixup, a novel mixup strategy applied at the hidden state level of bidirectional SSMs. Experimental results on ImageNet-1K and ImageNet-LT demonstrate that Semi-ViM significantly outperforms state-of-the-art SSL models, achieving 85.40% accuracy with only 10% of the labeled data, surpassing Transformer-based methods such as Semi-ViT.
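The abstract names two mechanisms without detailing them, so the following is only a minimal NumPy sketch of the ideas as described: SSMixup as a convex combination of hidden states (here a Beta-sampled coefficient, as in standard mixup), and LyapEMA as a teacher-student EMA whose decay reacts to a Lyapunov-style energy. The function names, the squared-distance energy, and the decay-adjustment rule are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssmixup(h_a, h_b, alpha=1.0):
    """Sketch of SSMixup: mix two hidden-state sequences of a
    bidirectional SSM with a Beta(alpha, alpha)-sampled coefficient.
    (Assumed form; the paper's exact mixing rule may differ.)"""
    lam = rng.beta(alpha, alpha)
    return lam * h_a + (1.0 - lam) * h_b, lam

def lyap_ema(teacher, student, base_decay=0.999, prev_energy=None):
    """Sketch of a stability-aware EMA update. The squared parameter
    distance plays the role of a Lyapunov energy; if it grew since the
    last step, the decay is raised so the teacher moves more slowly.
    (The energy and adjustment rule are illustrative assumptions.)"""
    energy = float(np.sum((teacher - student) ** 2))
    if prev_energy is not None and energy > prev_energy:
        decay = min(0.9999, base_decay + 0.0009)  # dampen unstable updates
    else:
        decay = base_decay
    return decay * teacher + (1.0 - decay) * student, energy

# Toy hidden states: (seq_len, dim) for the forward and backward scans.
h_fwd_a, h_fwd_b = rng.standard_normal((2, 8, 16))
mixed_fwd, lam = ssmixup(h_fwd_a, h_fwd_b)

# Toy parameter vectors for the EMA teacher/student.
teacher = np.zeros(4)
student = np.ones(4)
teacher, energy = lyap_ema(teacher, student)
```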

Subject: ICCV.2025 - Poster