#1 NoiseController: Towards Consistent Multi-view Video Generation via Noise Decomposition and Collaboration

Authors: Haotian Dong, Xin Wang, Di Lin, Yipeng Wu, Qin Chen, Ruonan Liu, Kairui Yang, Ping Li, Qing Guo

High-quality video generation is crucial for many fields, including the film industry and autonomous driving. However, generating videos with spatiotemporal consistency remains challenging. Current methods typically rely on attention mechanisms or modified noise to produce consistent videos, neglecting the global spatiotemporal information that could help enforce both spatial and temporal consistency during generation. In this paper, we propose ***NoiseController***, consisting of **Multi-Level Noise Decomposition**, **Multi-Frame Noise Collaboration**, and **Joint Denoising**, to enhance spatiotemporal consistency in video generation. In multi-level noise decomposition, we first decompose the initial noises into scene-level foreground/background noises, capturing distinct motion properties to model multi-view foreground/background variations. Each scene-level noise is then further decomposed into individual-level shared and residual components: the shared noise preserves consistency, while the residual component maintains diversity. In multi-frame noise collaboration, we introduce an inter-view spatiotemporal collaboration matrix and an intra-view impact collaboration matrix, which capture mutual cross-view effects and historical cross-frame impacts, respectively, to enhance video quality. Joint denoising employs two parallel denoising U-Nets, one for each scene-level noise, which mutually enhance video generation. We evaluate ***NoiseController*** on public datasets for video generation and downstream tasks, demonstrating state-of-the-art performance.
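The abstract describes these components only at a high level. As a reading aid, below is a minimal PyTorch sketch of how the decomposition and collaboration steps *might* be wired together; the tensor layout, the foreground mask `fg_mask`, the mixing weight `alpha`, and the collaboration matrices `w_view`/`w_frame` are all illustrative assumptions, not the authors' implementation.

```python
import torch

def multi_level_decompose(init_noise, fg_mask, alpha=0.5):
    """Split initial noise into scene-level foreground/background parts,
    then split each into a shared component (identical across views and
    frames, for consistency) and a residual (per-view/per-frame diversity).

    init_noise: (V, T, C, H, W) Gaussian noise for V views and T frames.
    fg_mask:    (1, 1, 1, H, W) soft foreground mask in [0, 1] (assumed).
    alpha:      shared/residual mixing weight (assumed hyperparameter).
    """
    fg_noise = init_noise * fg_mask            # scene-level foreground noise
    bg_noise = init_noise * (1.0 - fg_mask)    # scene-level background noise

    def mix_shared_residual(noise):
        # Shared component: averaged over views and frames, then broadcast.
        shared = noise.mean(dim=(0, 1), keepdim=True).expand_as(noise)
        residual = noise - shared
        # Re-weight shared vs. residual; the exact weighting is an assumption.
        return alpha * shared + (1.0 - alpha) * residual

    return mix_shared_residual(fg_noise), mix_shared_residual(bg_noise)

def multi_frame_collaborate(noise, w_view, w_frame):
    """Mix noise across views and across historical frames.

    noise:   (V, T, C, H, W)
    w_view:  (V, V) inter-view collaboration matrix (rows sum to 1).
    w_frame: (T, T) intra-view impact matrix; lower-triangular so each
             frame draws only on its history.
    """
    out = torch.einsum('uv,vtchw->utchw', w_view, noise)   # cross-view mixing
    out = torch.einsum('st,vtchw->vschw', w_frame, out)    # cross-frame mixing
    return out

# Toy usage: 2 views, 4 frames, 4 latent channels, 8x8 latents.
V, T, C, H, W = 2, 4, 4, 8, 8
noise = torch.randn(V, T, C, H, W)
mask = torch.rand(1, 1, 1, H, W)
fg, bg = multi_level_decompose(noise, mask)
w_view = torch.full((V, V), 1.0 / V)
w_frame = torch.tril(torch.ones(T, T))
w_frame = w_frame / w_frame.sum(dim=1, keepdim=True)      # normalize history weights
fg = multi_frame_collaborate(fg, w_view, w_frame)
bg = multi_frame_collaborate(bg, w_view, w_frame)
print(fg.shape, bg.shape)  # torch.Size([2, 4, 4, 8, 8]) each
```

In the paper's pipeline, `fg` and `bg` would then each be handed to one of the two parallel denoising U-Nets in the joint-denoising stage; that stage is not sketched here.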

Subject: ICCV.2025 - Poster