5xwyxupsLL@OpenReview

Total: 1

#1 PipeFusion: Patch-level Pipeline Parallelism for Diffusion Transformers Inference

Authors: Jiarui Fang, Jinzhe Pan, Aoyu Li, Xibo Sun, WANG Jiannan

This paper presents PipeFusion, a parallel method that tackles the high latency of generating high-resolution images with diffusion transformer (DiT) models. PipeFusion partitions images into patches and distributes the model layers across multiple GPUs, employing a patch-level pipeline parallel strategy to orchestrate communication and computation efficiently. By exploiting the high similarity between inputs from successive diffusion steps, PipeFusion reuses one-step stale feature maps to provide context for the current pipeline step, which markedly reduces communication cost compared with existing DiT inference parallelisms, including tensor parallelism, sequence parallelism, and DistriFusion. PipeFusion also improves memory efficiency by distributing parameters across devices, making it well suited to large DiTs such as Flux.1. Experimental results show that PipeFusion achieves state-of-the-art performance on 8×L40 PCIe GPUs for the Pixart, Stable Diffusion 3, and Flux.1 models. The source code is available at https://github.com/xdit-project/xDiT.
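
To make the schedule concrete, below is a minimal toy sketch of patch-level pipeline parallelism with one-step stale activation reuse, simulated in a single NumPy process rather than on real GPUs. All names here (`stage_forward`, `stale`, the mean-pooled context standing in for cross-patch attention) are illustrative assumptions, not the actual xDiT implementation.

```python
import numpy as np

NUM_STAGES = 4    # pipeline stages = GPUs, each holding a slice of DiT layers
NUM_PATCHES = 8   # the image latent is split into this many patches
NUM_STEPS = 3     # diffusion timesteps (toy)
HIDDEN = 16

rng = np.random.default_rng(0)
# Each stage owns only its own layer parameters (parameter distribution across devices).
weights = [rng.standard_normal((HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)
           for _ in range(NUM_STAGES)]

def stage_forward(stage, x, context):
    # Stand-in for a slice of DiT layers: mixes the current patch with the
    # (one-step stale) context from the other patches, then applies this
    # stage's weight. A real DiT block would use attention for the mixing.
    return np.tanh((x + 0.1 * context) @ weights[stage])

# stale[s][p]: activations of patch p entering stage s, saved from the previous
# diffusion step. PipeFusion's key observation is that these change little
# between adjacent steps, so they can stand in for the not-yet-computed fresh
# activations of the other patches. Zeros at step 0 model a cold start.
stale = [[np.zeros(HIDDEN) for _ in range(NUM_PATCHES)] for _ in range(NUM_STAGES)]

latents = [rng.standard_normal(HIDDEN) for _ in range(NUM_PATCHES)]

for step in range(NUM_STEPS):
    fresh = [[None] * NUM_PATCHES for _ in range(NUM_STAGES)]
    # Patch-level pipeline: patch p can enter stage 0 while patch p-1 runs on
    # stage 1, and so on, so GPUs never wait for a full-image activation.
    # This sequential loop mimics that schedule on one process.
    for p in range(NUM_PATCHES):
        x = latents[p]
        for s in range(NUM_STAGES):
            # Context = mean of the other patches' stale activations at this stage.
            others = [stale[s][q] for q in range(NUM_PATCHES) if q != p]
            context = np.mean(others, axis=0)
            fresh[s][p] = x          # record this step's input for reuse next step
            x = stage_forward(s, x, context)
        latents[p] = x
    stale = fresh  # what was fresh this step becomes the stale context next step

print("final latent norms:", [round(float(np.linalg.norm(v)), 3) for v in latents])
```

In this sketch, only the single active patch moves between stages each tick, which is why the communication volume scales with the patch size rather than the full activation, mirroring the cost argument in the abstract.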

Subject: NeurIPS.2025 - Poster