Ap8OIosN8p@OpenReview

Total: 1

#1 Robust Label Proportions Learning

Authors: Jueyu Chen, Wantao Wen, Yeqiang Wang, Erliang Lin, Yemin Wang, Yuheng Jia

Learning from Label Proportions (LLP) is a weakly-supervised paradigm that uses bag-level label proportions to train instance-level classifiers, offering a practical alternative to costly instance-level annotation. However, the weak supervision makes effective training challenging, and existing methods often rely on pseudo-labeling, which introduces noise. To address this, we propose RLPL, a two-stage framework. In the first stage, we use unsupervised contrastive learning to pretrain the encoder and train an auxiliary classifier with bag-level supervision. In the second stage, we introduce an LLP-OTD mechanism to refine pseudo-labels and split them into high- and low-confidence sets. These sets are then used in LLPMix to train the final classifier. Extensive experiments and ablation studies on multiple benchmarks demonstrate that RLPL achieves performance comparable to the state of the art and effectively mitigates pseudo-label noise.
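The bag-level supervision described in the first stage is commonly realized as a proportion loss: the cross-entropy between a bag's mean predicted class distribution and its given label proportions. The sketch below illustrates this standard LLP objective with NumPy; it is an assumption about the general setup, not the paper's exact formulation, and the function names (`bag_proportion_loss`, `softmax`) are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def bag_proportion_loss(logits, proportions, eps=1e-12):
    """Cross-entropy between the bag's mean predicted class
    distribution and the bag's known label proportions.

    logits:      (num_instances, num_classes) classifier outputs for one bag
    proportions: (num_classes,) ground-truth label proportions for that bag
    """
    mean_pred = softmax(logits).mean(axis=0)  # (num_classes,)
    return -np.sum(proportions * np.log(mean_pred + eps))

# Toy bag: 4 instances, 3 classes, known proportions [0.5, 0.25, 0.25].
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
proportions = np.array([0.5, 0.25, 0.25])
loss = bag_proportion_loss(logits, proportions)
```

With all-zero logits the mean prediction is uniform, so the loss reduces to `-sum(p * log(1/3)) = log 3`, which is a convenient sanity check for an implementation.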

Subject: NeurIPS.2025 - Poster