
#1 IntroStyle: Training-Free Introspective Style Attribution using Diffusion Features

Authors: Anand Kumar, Jiteng Mu, Nuno Vasconcelos

Text-to-image (T2I) models have recently gained widespread adoption. This has spurred concerns about safeguarding intellectual property rights and a growing demand for mechanisms that prevent the generation of specific artistic styles. Existing methods for style extraction typically require collecting custom datasets and training specialized models, which is resource-intensive, time-consuming, and often impractical for real-time applications. We present a novel, training-free framework for the style attribution problem that uses only the features produced by a diffusion model, without any external modules or retraining. We denote this approach Introspective Style Attribution (IntroStyle) and show that it outperforms state-of-the-art models for style attribution. We also introduce a synthetic dataset, Artistic Style Split (ArtSplit), to isolate artistic style and evaluate fine-grained style attribution performance. Our experimental results on the WikiArt and DomainNet datasets show that IntroStyle is robust to the dynamic nature of artistic styles, outperforming existing methods by a wide margin.
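The abstract does not spell out how the diffusion features are extracted or compared, so the sketch below only illustrates the general recipe it describes: pull an intermediate feature map from a pretrained diffusion U-Net and score style similarity between two images from simple feature statistics, with no training. The Stable Diffusion checkpoint, the hooked mid-block, the noising timestep, and the channel-wise mean/std descriptor are all illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of training-free style scoring with diffusion features.
# NOT the paper's IntroStyle method; model id, layer, timestep, and the
# mean/std descriptor are assumptions chosen for illustration.
import torch
from PIL import Image
from torchvision import transforms
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler

MODEL = "runwayml/stable-diffusion-v1-5"  # assumed backbone
device = "cuda" if torch.cuda.is_available() else "cpu"
torch.manual_seed(0)  # fix the noise so scores are comparable across runs

vae = AutoencoderKL.from_pretrained(MODEL, subfolder="vae").to(device).eval()
unet = UNet2DConditionModel.from_pretrained(MODEL, subfolder="unet").to(device).eval()
tokenizer = CLIPTokenizer.from_pretrained(MODEL, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(MODEL, subfolder="text_encoder").to(device).eval()
scheduler = DDPMScheduler.from_pretrained(MODEL, subfolder="scheduler")

# Capture the U-Net mid-block activation with a forward hook.
feats = {}
unet.mid_block.register_forward_hook(lambda m, i, o: feats.__setitem__("mid", o))

# Unconditional (empty-prompt) text embedding, reused for every image.
tok = tokenizer("", padding="max_length",
                max_length=tokenizer.model_max_length, return_tensors="pt")
with torch.no_grad():
    uncond = text_encoder(tok.input_ids.to(device))[0]

to_tensor = transforms.Compose([
    transforms.Resize(512), transforms.CenterCrop(512),
    transforms.ToTensor(), transforms.Normalize([0.5] * 3, [0.5] * 3),
])

@torch.no_grad()
def style_descriptor(path, t=200):
    """Channel-wise mean/std of one diffusion feature map (assumed descriptor)."""
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    latents = vae.encode(x).latent_dist.sample() * vae.config.scaling_factor
    timestep = torch.tensor([t], device=device)
    noisy = scheduler.add_noise(latents, torch.randn_like(latents), timestep)
    unet(noisy, timestep, encoder_hidden_states=uncond)  # populates feats["mid"]
    f = feats["mid"].flatten(2)                          # (1, C, H*W)
    desc = torch.cat([f.mean(-1), f.std(-1)], dim=-1)    # (1, 2C) style statistics
    return torch.nn.functional.normalize(desc, dim=-1)

# Cosine similarity between a query and a reference image as the style score.
score = (style_descriptor("query.png") * style_descriptor("reference.png")).sum()
print(f"style similarity: {score.item():.3f}")
```

In practice one would likely average descriptors over several timesteps or layers to stabilize the score; the single mid-block/timestep choice here is only to keep the sketch short.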

Subject: ICCV.2025 - Poster