
Triplets Better Than Pairs: Towards Stable and Effective Self-Play Fine-Tuning for LLMs

Authors: Yibo Wang, Hai-Long Sun, Guangda Huzhang, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang, Lijun Zhang

Recently, self-play fine-tuning (SPIN) has been proposed to adapt large language models to downstream applications with scarce expert-annotated data by iteratively generating synthetic responses from the model itself. However, SPIN is designed to optimize the current reward advantage of annotated responses over the synthetic responses at hand, which may gradually vanish over iterations, leading to \textit{unstable optimization}. Moreover, the use of a reference policy induces a \textit{misalignment} between the reward formulation used for training and the metric used for generation. To address these limitations, we propose a novel \textbf{T}riplet-based \textbf{S}elf-\textbf{P}lay f\textbf{I}ne-tu\textbf{N}ing (TSPIN) method that integrates two key designs. First, beyond current advantages, TSPIN additionally incorporates historical advantages between iteratively generated responses and the proto-synthetic responses produced by the initial policy. Even if the current advantages diminish, the historical advantages remain effective, stabilizing the overall optimization. Second, TSPIN introduces an entropy constraint into the self-play framework, which is theoretically shown to support reference-free fine-tuning, eliminating the training-generation discrepancy. Empirical results on various tasks demonstrate not only the superior performance of TSPIN over SPIN but also its stable evolution across iterations. Remarkably, compared to supervised fine-tuning, TSPIN achieves comparable or even better performance with only $25\%$ of the samples, highlighting its effectiveness when annotated data are scarce.
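
As a rough illustration (not the paper's exact formulation), let $x$ be a prompt, $y^+$ an annotated response, $y_t$ a synthetic response from the current policy $\pi_t$, and $y_0$ a proto-synthetic response from the initial policy $\pi_0$. SPIN optimizes a pairwise, reference-anchored advantage in the style of DPO,

\[ \mathcal{L}_{\mathrm{SPIN}} = -\,\mathbb{E}\Big[\log\sigma\Big(\beta\log\tfrac{\pi_t(y^+\mid x)}{\pi_{\mathrm{ref}}(y^+\mid x)} - \beta\log\tfrac{\pi_t(y_t\mid x)}{\pi_{\mathrm{ref}}(y_t\mid x)}\Big)\Big], \]

whose gradient vanishes as $\pi_t(y_t\mid x)\to\pi_t(y^+\mid x)$. A triplet objective of the kind the abstract describes would add a historical term over the pair $(y_t, y_0)$ and replace the reference policy with an entropy regularizer $\mathcal{H}(\pi_t)$, e.g.

\[ \mathcal{L}_{\mathrm{TSPIN}} \approx -\,\mathbb{E}\Big[\log\sigma\big(\beta\log\pi_t(y^+\mid x) - \beta\log\pi_t(y_t\mid x)\big) + \log\sigma\big(\beta\log\pi_t(y_t\mid x) - \beta\log\pi_t(y_0\mid x)\big)\Big] - \lambda\,\mathcal{H}(\pi_t). \]

The second (historical) term stays informative even when the first saturates, which matches the claimed stabilization; $\beta$, $\lambda$, and the exact way the terms combine are illustrative assumptions here, not the authors' definitions.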

Subject: NeurIPS.2025 - Poster