stpe7UeETz@OpenReview


#1 Corrector Sampling in Language Models

Authors: Itai Gat, Neta Shaul, Uriel Singer, Yaron Lipman

Autoregressive language models accumulate errors due to their fixed, irrevocable left-to-right token generation. To address this, we propose a new sampling method called Resample-Previous-Tokens (RPT). RPT mitigates error accumulation by iteratively revisiting and potentially replacing tokens in a window of previously generated text. Fine-tuning a pretrained 8B-parameter model with RPT for only 100B tokens resulted in ~10% relative improvements on reasoning and coding benchmarks compared to standard sampling.
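To make the idea concrete, below is a minimal toy sketch of window-based token resampling during generation. It is not the authors' exact algorithm: the abstract does not specify the proposal distribution, acceptance rule, or how the fine-tuned model scores replacements, and this sketch simply resamples each token in the window from a stand-in model conditioned on its left context (ignoring right context). The function `next_token_logits` and all parameter choices here are hypothetical placeholders.

```python
# Toy sketch of revisiting a window of previously generated tokens (assumption:
# this is NOT the paper's RPT procedure, only an illustration of the idea).
import torch

VOCAB_SIZE = 32  # toy vocabulary for illustration


def next_token_logits(prefix: list[int]) -> torch.Tensor:
    """Stand-in for a language model: returns logits over the vocabulary
    given the left context. A real (RPT fine-tuned) model would go here."""
    g = torch.Generator().manual_seed(hash(tuple(prefix)) % (2**31))
    return torch.randn(VOCAB_SIZE, generator=g)


def sample(logits: torch.Tensor) -> int:
    """Draw one token id from the softmax distribution over logits."""
    return int(torch.multinomial(torch.softmax(logits, dim=-1), 1))


def generate_with_resampling(prompt: list[int], max_new_tokens: int, window: int = 4) -> list[int]:
    """Generate left-to-right, but after each new token revisit the last
    `window` generated tokens and potentially replace each one by
    resampling from the model's distribution at that position."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tokens.append(sample(next_token_logits(tokens)))
        # Revisit a window of previously generated tokens (never the prompt).
        start = max(len(prompt), len(tokens) - window)
        for pos in range(start, len(tokens) - 1):
            logits = next_token_logits(tokens[:pos])
            tokens[pos] = sample(logits)  # may replace the earlier choice
    return tokens


if __name__ == "__main__":
    print(generate_with_resampling(prompt=[1, 2, 3], max_new_tokens=8))
```

In the actual method, the model is fine-tuned so that revisited positions can be corrected using information that was unavailable when they were first sampled; the stand-in above only illustrates the sliding-window revisit loop.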

Subject: NeurIPS.2025 - Poster