CSj72Rr2PB@OpenReview


#1 Bias Mitigation in Graph Diffusion Models

Authors: Meng Yu, Kun Zhan

Most existing graph generative diffusion models suffer from significant exposure bias during graph sampling, a bias that typically accumulates and propagates throughout the sampling process. We further observe that in most models the forward diffusion's maximum-perturbation distribution deviates from the standard normal distribution, while reverse sampling consistently starts from a standard normal distribution. This mismatch introduces a reverse starting bias that, together with the exposure bias, degrades generation quality. In this paper, we effectively address both biases. To mitigate the reverse starting bias, we employ a newly designed Langevin sampling algorithm that aligns the starting point with the forward maximum-perturbation distribution. To address the exposure bias, we introduce a fraction correction mechanism based on a newly defined score difference. Our approach requires no network modifications and is validated across multiple models, datasets, and tasks, achieving state-of-the-art results.
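The core of the starting-point fix can be illustrated with plain Langevin dynamics: given the score of the forward process's true maximum-perturbation distribution, refine samples drawn from a standard normal until they match that distribution. The sketch below is a minimal, generic illustration of this idea, not the paper's actual algorithm; the function name `langevin_align`, the step size, and the Gaussian target used in the usage example are all assumptions for demonstration.

```python
import numpy as np

def langevin_align(x, score_fn, step=0.01, n_steps=500, rng=None):
    """Refine initial samples x toward the distribution whose score is
    score_fn, via unadjusted Langevin dynamics (illustrative sketch):
        x_{k+1} = x_k + step * score_fn(x_k) + sqrt(2 * step) * z_k,
    where z_k ~ N(0, I). After enough steps, x approximately follows
    the target distribution instead of its initial one.
    """
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(n_steps):
        z = rng.standard_normal(x.shape)
        x = x + step * score_fn(x) + np.sqrt(2.0 * step) * z
    return x

# Hypothetical usage: the forward process's maximum perturbation is
# assumed (for illustration only) to be N(2, 1) rather than N(0, 1),
# so its score is (2 - x). Starting from standard-normal noise, the
# chain drifts toward the shifted target before reverse sampling begins.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(10_000)          # naive reverse starting point
x_aligned = langevin_align(x0, lambda x: 2.0 - x, rng=rng)
print(x_aligned.mean())                   # close to the target mean 2.0
```

In practice the target score would come from the model's forward process rather than a closed-form Gaussian; the point is only that a short Langevin chain can move the reverse starting distribution to match the forward endpoint without touching the network.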

Subject: ICLR.2025 - Poster