SFsGKZU61H@OpenReview

Total: 1

#1 Learning to Integrate Diffusion ODEs by Averaging the Derivatives

Authors: Wenze Liu, Xiangyu Yue

In accelerating diffusion model inference, numerical solvers perform poorly at extremely small step counts, while distillation techniques often introduce complexity and instability. This work presents an intermediate strategy that balances performance and cost: learning the ODE integration directly, using loss functions derived from the derivative-integral relationship and inspired by Monte Carlo integration and Picard iteration. From a geometric perspective, these losses operate by gradually extending the tangent to the secant, and are therefore named secant losses. The regression target of the secant losses is either the same as that of diffusion models or the diffusion model itself, which yields strong training stability. Via fine-tuning or distillation, the secant version of EDM achieves a $10$-step FID of $2.14$ on CIFAR-10, while the secant version of SiT-XL/2 attains a $4$-step FID of $2.27$ and an $8$-step FID of $1.96$ on ImageNet-$256\times256$. Code is available at https://github.com/poppuppy/secant_expectation.
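To make the abstract's core idea concrete, below is a minimal sketch of a secant loss under the following assumptions: a velocity-prediction diffusion model `teacher_v(x, t)` defining the ODE dx/dt = v(x, t), and a secant model `secant_model(x, t, s)` trained to predict the average velocity over [t, s]. All names, shapes, and the exact bootstrapping scheme are illustrative guesses based only on the abstract, not the authors' precise formulation.

```python
# Hypothetical sketch of a Monte Carlo secant loss; not the authors' exact method.
import torch

def secant_loss(secant_model, teacher_v, x_t, t, s):
    """Since (x_s - x_t) / (s - t) = (1 / (s - t)) * integral_t^s v(x_r, r) dr,
    the secant slope equals E_{u ~ U[t, s]}[v(x_u, u)]. We regress the secant
    model's prediction onto a one-sample Monte Carlo estimate of that
    expectation, with the diffusion model as the derivative (the target)."""
    # Draw an intermediate time u uniformly in [t, s] (one MC sample).
    w = torch.rand_like(t)
    u = t + w * (s - t)
    dt = (u - t).view(-1, 1, 1, 1)  # broadcast over image dimensions
    with torch.no_grad():
        # Picard-style bootstrap: step to time u with the current secant
        # model, then query the teacher's instantaneous velocity there.
        x_u = x_t + dt * secant_model(x_t, t, u)
        target = teacher_v(x_u, u)
    pred = secant_model(x_t, t, s)
    return torch.mean((pred - target) ** 2)
```

In this reading, "gradually extending the tangent to the secant" would correspond to scheduling |s - t| from near zero, where the target reduces to the ordinary diffusion objective, up to large integration intervals as training progresses.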

Subject: NeurIPS.2025 - Poster