rXFzVRZsbt@OpenReview


#1 Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods

Authors: Oussama Zekri, Nicolas Boullé

Discrete diffusion models have recently gained significant attention due to their ability to process complex discrete structures for language modeling. However, fine-tuning these models with policy gradient methods, as is commonly done in Reinforcement Learning from Human Feedback (RLHF), remains a challenging task. We propose an efficient, broadly applicable, and theoretically justified policy gradient algorithm, called Score Entropy Policy Optimization (SEPO), for fine-tuning discrete diffusion models over non-differentiable rewards. Our numerical experiments across several discrete generative tasks demonstrate the scalability and efficiency of our method.
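The abstract's key ingredient, fine-tuning against a non-differentiable reward with a policy gradient, rests on the score-function (REINFORCE) estimator: the gradient of the expected reward can be written as an expectation of reward times the gradient of the log-probability, so no gradient of the reward itself is needed. The following is a minimal toy sketch of that estimator, not the SEPO algorithm: the "model" is a single categorical distribution over tokens rather than a discrete diffusion model, and the reward is an arbitrary black box chosen for illustration.

```python
# Toy sketch of policy-gradient fine-tuning with a non-differentiable reward.
# NOT the paper's SEPO method: the "policy" here is one softmax categorical.
import math
import random

random.seed(0)

VOCAB = 4                 # toy vocabulary size
logits = [0.0] * VOCAB    # trainable parameters of the categorical policy

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def reward(token):
    # Black-box, non-differentiable reward: prefers token 2 (illustrative).
    return 1.0 if token == 2 else 0.0

# REINFORCE: grad E[R] = E[R(a) * grad log pi(a)].
# For a softmax categorical, d log p(a) / d logit_i = 1[i == a] - p_i.
lr, batch = 0.5, 64
for step in range(200):
    probs = softmax(logits)
    grads = [0.0] * VOCAB
    for _ in range(batch):
        a = sample(probs)
        r = reward(a)
        for i in range(VOCAB):
            grads[i] += r * ((1.0 if i == a else 0.0) - probs[i])
    logits = [w + lr * g / batch for w, g in zip(logits, grads)]

print(f"P(token=2) after training = {softmax(logits)[2]:.3f}")
```

After training, the probability mass concentrates on the reward-preferred token even though the reward was never differentiated, which is exactly the property that makes policy gradients attractive for fine-tuning discrete generative models.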

Subject: NeurIPS.2025 - Poster