jTCiQpV0Lx@OpenReview

#1 Unlocker: Disentangle the Deadlock of Learning between Label-noisy and Long-tailed Data

Authors: Chen Shu, HongJun Xu, Ruichi Zhang, Mengke Li, Yonggang Zhang, Yang Lu, Bo Han, Yiu-ming Cheung, Hanzi Wang

In the real world, the observed label distribution of a dataset often mismatches its true distribution due to noisy labels. In this situation, noisy label learning (NLL) methods directly combined with long-tail learning (LTL) methods tend to fail because of a dilemma: NLL methods normally rely on unbiased model predictions to recover the true distribution by selecting and correcting noisy labels, while LTL methods such as logit adjustment depend on the true distribution to correct biased predictions, leading to a deadlock of mutual dependency defined in this paper. To address this, we propose \texttt{Unlocker}, a bilevel optimization framework that integrates NLL and LTL methods to iteratively disentangle this deadlock. The inner optimization leverages NLL to train the model, incorporating LTL methods to fairly select and correct noisy labels. The outer optimization adaptively determines an adjustment strength, mitigating the model bias caused by over- or under-adjustment. We also theoretically prove that this bilevel optimization problem converges by transforming the outer optimization objective into an equivalent problem with a closed-form solution. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of our method in alleviating model bias and handling long-tailed noisy label data. Code is available at \url{https://anonymous.4open.science/r/neurips-2025-anonymous-1015/}.
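To make the LTL side of the deadlock concrete, the sketch below shows standard logit-adjusted cross-entropy with a tunable strength `tau`, the kind of knob the abstract's outer optimization would set adaptively. This is an illustrative sketch of generic logit adjustment, not the authors' implementation; the function name, the use of NumPy, and the `tau` parameterization are our assumptions.

```python
import numpy as np

def logit_adjusted_loss(logits, labels, class_priors, tau=1.0):
    """Cross-entropy with logit adjustment (illustrative sketch).

    logits:       (N, C) raw model scores
    labels:       (N,) integer class labels
    class_priors: (C,) estimated label distribution (sums to 1)
    tau:          adjustment strength; the paper's outer loop would
                  tune a quantity playing this role (name assumed here)
    """
    # Shift each logit by tau * log prior: head classes (large prior)
    # get a boost, so the model must score tail classes higher to win.
    adjusted = logits + tau * np.log(class_priors)

    # Numerically stable log-softmax over the adjusted logits.
    z = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))

    # Mean negative log-likelihood of the observed labels.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With `tau = 0` this reduces to plain cross-entropy; with `tau > 0` tail-class mistakes are penalized more heavily. The dilemma the paper targets is visible here: `class_priors` must come from the (noisy) observed labels, so the adjustment itself can be biased, which is why a fixed `tau` can over- or under-adjust.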

Subject: NeurIPS.2025 - Poster