Wd9Y1C3KXs@OpenReview

Total: 1

#1 TC-Light: Temporally Coherent Generative Rendering for Realistic World Transfer

Authors: Yang Liu, Chuanchen Luo, Zimo Tang, Yingyan Li, Yuran Yang, Yuanyong Ning, Lue Fan, Junran Peng, Zhaoxiang Zhang

Illumination and texture re-rendering are critical dimensions of world-to-world transfer, which is valuable for applications such as sim2real and real2real visual data scaling for embodied AI. Existing techniques realize the transfer by generatively re-rendering the input video, e.g., with video relighting models or conditioned world generation models. Nevertheless, these models are largely confined to the domain of their training data (e.g., portraits) or struggle with temporal consistency and computational efficiency, especially when the input video involves complex dynamics and long durations. In this paper, we propose **TC-Light**, a novel paradigm characterized by a two-stage post-optimization mechanism. Starting from a video preliminarily relit by an inflated video relighting model, the first stage optimizes an appearance embedding to align global illumination. The second stage then optimizes the proposed canonical video representation, the **Unique Video Tensor (UVT)**, to align fine-grained texture and lighting. To comprehensively evaluate performance, we also establish a benchmark of long and highly dynamic videos. Extensive experiments show that our method produces physically plausible re-rendering results with superior temporal coherence and low computational cost. Code and video demos are available at our [Project Page](https://dekuliutesla.github.io/tclight/).
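As a rough illustration of the two-stage post-optimization paradigm, the minimal PyTorch sketch below treats the appearance embedding as a per-frame channel-wise affine correction and the Unique Video Tensor as a directly optimized pixel tensor with a frame-difference coherence penalty. All shapes, losses, and parameterizations here are hypothetical simplifications for exposition, not the authors' implementation.

```python
import torch

# Hypothetical shapes: T frames of an H x W RGB video.
T, H, W = 16, 64, 64
relit = torch.rand(T, 3, H, W)  # stand-in for the inflated relighting model's output

# Stage 1: optimize a per-frame appearance embedding so global illumination
# is consistent across frames. Here the embedding is a channel-wise affine
# correction (scale + offset), purely for illustration.
scale = torch.ones(T, 3, 1, 1, requires_grad=True)
offset = torch.zeros(T, 3, 1, 1, requires_grad=True)
opt1 = torch.optim.Adam([scale, offset], lr=1e-2)
target = relit.mean(dim=(0, 2, 3), keepdim=True)  # video-level illumination statistic
for _ in range(200):
    adjusted = relit * scale + offset
    # Pull each frame's mean color toward the video-level mean.
    loss = ((adjusted.mean(dim=(2, 3), keepdim=True) - target) ** 2).mean()
    opt1.zero_grad()
    loss.backward()
    opt1.step()
stage1 = (relit * scale + offset).detach()

# Stage 2: optimize a canonical video tensor (a toy stand-in for the paper's
# Unique Video Tensor). Temporal coherence is encouraged with a simple
# frame-difference penalty; the real method's representation and losses differ.
uvt = stage1.clone().requires_grad_(True)
opt2 = torch.optim.Adam([uvt], lr=1e-2)
for _ in range(200):
    recon = ((uvt - stage1) ** 2).mean()           # stay close to the stage-1 result
    temporal = ((uvt[1:] - uvt[:-1]) ** 2).mean()  # fine-grained temporal smoothness
    loss = recon + 0.1 * temporal
    opt2.zero_grad()
    loss.backward()
    opt2.step()

output = uvt.detach()  # temporally coherent re-rendered video
```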

Subject: NeurIPS.2025 - Poster