AdaDCP: Learning an Adapter with Discrete Cosine Prior for Clear-to-Adverse Domain Generalization (ICCV 2025, CVF)

Total: 1

#1 AdaDCP: Learning an Adapter with Discrete Cosine Prior for Clear-to-Adverse Domain Generalization

Authors: Qi Bi, Yixian Shen, Jingjun Yi, Gui-Song Xia

Vision Foundation Models (VFMs) offer an inherent ability to generalize to unseen domains in downstream tasks. However, fine-tuning a VFM to parse various adverse scenes (e.g., fog, snow, night) is particularly challenging, as such samples are difficult to collect. Using easy-to-acquire clear scenes as the source domain is a feasible alternative, but a large domain gap exists between clear and adverse scenes due to their dramatically different appearances. To address this challenge, this paper proposes AdaDCP, a VFM adapter with a discrete cosine prior. The innovation originates from the observation that, after the discrete cosine transform, the frequency components of VFM features exhibit either variant or invariant properties under adverse weather conditions. Technically, weather-invariant property learning perceives most of the scene content that is invariant to the adverse condition. Weather-variant property learning, in contrast, perceives the weather-specific information from different types of adverse conditions. Finally, weather-invariant property alignment implicitly enforces the weather-variant components to incorporate the weather-invariant information, thereby mitigating the clear-to-adverse domain gap. Experiments on eight unseen adverse-scene segmentation datasets demonstrate its state-of-the-art performance.
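The abstract does not detail how the frequency decomposition is implemented; as a rough illustration only, the sketch below shows one way to split a feature map into frequency bands with a 2D discrete cosine transform. The function name, the block-cutoff criterion, and the specific band split are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): decompose a VFM feature map into
# frequency components with a 2D DCT, then reconstruct a low-frequency and a
# high-frequency band. Which bands behave as weather-variant vs.
# weather-invariant is an empirical question in the paper; the cutoff here is
# a hypothetical choice for demonstration.
import numpy as np
from scipy.fft import dctn, idctn

def dct_band_split(feat: np.ndarray, cutoff: int = 4):
    """feat: (C, H, W) feature map; returns (low_band, high_band) in the spatial domain."""
    coeffs = dctn(feat, axes=(-2, -1), norm="ortho")       # per-channel 2D DCT
    mask = np.zeros_like(coeffs)
    mask[..., :cutoff, :cutoff] = 1.0                       # keep the low-frequency block
    low = idctn(coeffs * mask, axes=(-2, -1), norm="ortho")
    high = idctn(coeffs * (1.0 - mask), axes=(-2, -1), norm="ortho")
    return low, high

# Example: split a dummy 256-channel, 32x32 feature map into two bands.
feat = np.random.randn(256, 32, 32).astype(np.float32)
low_band, high_band = dct_band_split(feat, cutoff=4)
```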

Subject: ICCV.2025 - Poster