2025.acl-long.469@ACL

#1 Curriculum Debiasing: Toward Robust Parameter-Efficient Fine-Tuning Against Dataset Biases

Authors: Mingyu Lee, Yeachan Kim, Wing-Lam Mok, SangKeun Lee

Parameter-efficient fine-tuning (PEFT) addresses the memory footprint issue of full fine-tuning by modifying only a subset of model parameters. However, on datasets exhibiting spurious correlations, we observe that PEFT slows the model’s convergence on unbiased examples, while convergence on biased examples remains fast. As a result, the model overfits to biased examples, causing significant performance degradation in out-of-distribution (OOD) scenarios. Traditional debiasing methods mitigate this issue by emphasizing unbiased examples during training, but often at the cost of in-distribution (ID) performance. To address this trade-off, we propose a curriculum debiasing framework that presents examples in a biased-to-unbiased order. Our framework initially limits the model’s exposure to unbiased examples, which are harder to learn, allowing it to first establish a foundation on the easier-to-converge biased examples. As training progresses, we gradually increase the proportion of unbiased examples in the training set, guiding the model away from reliance on spurious correlations. Compared to the original PEFT methods, our method accelerates convergence on unbiased examples by approximately twofold and improves ID and OOD performance by 1.2% and 8.0%, respectively.
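A minimal sketch of how a biased-to-unbiased curriculum schedule of this kind could be implemented, assuming the training data has already been partitioned into biased and unbiased subsets (e.g., by a separate bias-identification step); the linear pacing function, function names, and batch construction below are illustrative assumptions, not the authors' implementation.

```python
import random


def curriculum_batch(biased, unbiased, step, total_steps,
                     batch_size=32, start_frac=0.0):
    """Sample one training batch whose unbiased fraction grows over training.

    `biased` and `unbiased` are pre-partitioned pools of training examples
    (the partitioning itself is assumed to have been done beforehand).
    The unbiased fraction increases linearly from `start_frac` to 1.0.
    """
    frac_unbiased = min(1.0, start_frac + (1.0 - start_frac) * step / total_steps)
    n_unbiased = int(round(batch_size * frac_unbiased))
    n_biased = batch_size - n_unbiased

    # Draw from each pool, capped by pool size, then shuffle the mixed batch.
    batch = (random.sample(biased, min(n_biased, len(biased))) +
             random.sample(unbiased, min(n_unbiased, len(unbiased))))
    random.shuffle(batch)
    return batch


# Toy usage with synthetic example IDs: early batches are mostly biased,
# late batches are mostly unbiased.
biased_pool = [f"biased_{i}" for i in range(500)]
unbiased_pool = [f"unbiased_{i}" for i in range(500)]
for step in range(0, 1000, 250):
    batch = curriculum_batch(biased_pool, unbiased_pool, step,
                             total_steps=1000, batch_size=8)
    n_unb = sum(x.startswith("unbiased") for x in batch)
    print(f"step {step}: {n_unb}/{len(batch)} unbiased examples")
```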

Subject: ACL.2025 - Long Papers