9513@2024@ECCV

Total: 1

#1 MONTAGE: Monitoring Training for Attribution of Generative Diffusion Models

Authors: Jonathan Brokman, Omer Hofman, Roman Vainshtein, Amit Giloni, Toshiya Shimizu, Inderjeet Singh, Oren Rachmil, Alon Zolfi, Asaf Shabtai, Yuki Unno, Hisashi Kojima

Diffusion models, which have revolutionized image generation, face challenges related to intellectual property. These challenges arise when a generated image is influenced by one or more copyrighted images from the training data. Hence, pinpointing the influential images in the training dataset, a task known as data attribution, becomes crucial for clarifying content origins. We introduce MONTAGE, a pioneering data attribution method. Unlike existing approaches, which overlook the internal workings of the training process, MONTAGE integrates a novel technique that monitors generations throughout training via internal model representations. It is tailored to customized diffusion models, where access to the training process is a practical assumption. This approach, coupled with a new loss function, improves both the accuracy and the granularity of the attributions. The advantage of MONTAGE is evaluated at two levels of granularity: the semantic concept (including mixed-concept images) and the individual image, showing promising results. This underlines MONTAGE's role in addressing copyright concerns in AI-generated digital art and media while enriching our understanding of the generative process.
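
The abstract does not detail the attribution mechanism, so the following is only a minimal illustrative sketch of the general idea of data attribution via internal representations: scoring training images against a generated image by comparing features assumed to have been captured from the model's internals during training. The function name `attribute` and the feature tensors are hypothetical assumptions for illustration, not the MONTAGE implementation or its loss function.

```python
# Hypothetical sketch (NOT the MONTAGE algorithm): rank training images by
# cosine similarity between an internal feature of a generated image and
# internal features of training images assumed to be logged ("monitored")
# during training of a customized diffusion model.
import torch
import torch.nn.functional as F


def attribute(gen_feature: torch.Tensor,
              train_features: torch.Tensor,
              top_k: int = 5):
    """Return (training-image index, similarity score) pairs, most similar first.

    gen_feature:    (d,) feature of the generated image.
    train_features: (n, d) features of the n training images.
    """
    sims = F.cosine_similarity(gen_feature.unsqueeze(0), train_features, dim=1)
    scores, indices = torch.topk(sims, k=min(top_k, train_features.shape[0]))
    return list(zip(indices.tolist(), scores.tolist()))


if __name__ == "__main__":
    torch.manual_seed(0)
    train_feats = torch.randn(100, 512)                    # stand-in for logged features
    gen_feat = train_feats[7] + 0.05 * torch.randn(512)    # synthetic image close to training image 7
    print(attribute(gen_feat, train_feats))                # index 7 should rank first
```

In practice, per the abstract, MONTAGE couples such monitoring of internal representations with a dedicated loss function and reports attribution at both the concept and individual-image level; the similarity ranking above is only a stand-in for that pipeline.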

Subject: ECCV.2024 - Poster