
Total: 1

#1 Efficient Input-level Backdoor Defense on Text-to-Image Synthesis via Neuron Activation Variation

Authors: Shengfang Zhai, Jiajun Li, Yue Liu, Huanran Chen, Zhihua Tian, Wenjie Qu, Qingni Shen, Ruoxi Jia, Yinpeng Dong, Jiaheng Zhang

In recent years, text-to-image (T2I) diffusion models have gained significant attention for their ability to generate high-quality images reflecting text prompts. However, their growing popularity has also led to the emergence of backdoor threats, posing substantial risks. Currently, effective defense strategies against such threats are lacking due to the diversity of backdoor targets in T2I synthesis. In this paper, we propose NaviT2I, an efficient input-level backdoor defense framework against diverse T2I backdoors. Our approach is based on the new observation that trigger tokens tend to induce significant neuron activation variation in the early stage of the diffusion generation process, a phenomenon we term Early-step Activation Variation. Leveraging this insight, NaviT2I navigates T2I models away from malicious inputs by analyzing Neuron activation variations caused by input tokens. Extensive experiments show that NaviT2I significantly outperforms the baselines in both effectiveness and efficiency across diverse datasets, various T2I backdoors, and different model architectures including UNet and DiT. Furthermore, we show that our method remains effective under potential adaptive attacks.
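To make the core idea concrete, below is a minimal, self-contained sketch of the mechanism the abstract describes: score each prompt token by how much its removal changes neuron activations during a few early denoising steps, and flag the prompt if any token's variation is anomalously large. The toy denoiser, the token-masking strategy, the variation score, and the threshold are all illustrative assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch of early-step neuron activation variation scoring.
# All components here (ToyDenoiser, mean-pooled prompt encoding, the 0.1
# update rule, and the 0.5 threshold) are assumptions, not NaviT2I itself.
import torch
import torch.nn as nn


class ToyDenoiser(nn.Module):
    """Stand-in for a T2I denoising network (e.g., a UNet or DiT block)."""

    def __init__(self, latent_dim=16, text_dim=8, hidden=32):
        super().__init__()
        self.fc1 = nn.Linear(latent_dim + text_dim, hidden)
        self.fc2 = nn.Linear(hidden, latent_dim)

    def forward(self, z, text_emb):
        h = torch.relu(self.fc1(torch.cat([z, text_emb], dim=-1)))
        return self.fc2(h)


def collect_early_activations(model, z, text_emb, n_early_steps=2):
    """Run a few early denoising steps and record hidden-layer activations."""
    acts = []
    hook = model.fc1.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    with torch.no_grad():
        for _ in range(n_early_steps):          # only early steps are inspected
            z = z - 0.1 * model(z, text_emb)    # crude stand-in for a denoising update
    hook.remove()
    return torch.cat(acts, dim=0)


def token_variation_scores(model, z0, token_embs):
    """Score each token by how much masking it changes early-step activations."""
    full_emb = token_embs.mean(dim=0, keepdim=True)   # toy prompt encoding
    base = collect_early_activations(model, z0.clone(), full_emb)
    scores = []
    for i in range(token_embs.shape[0]):
        masked = torch.cat([token_embs[:i], token_embs[i + 1:]], dim=0)
        emb = masked.mean(dim=0, keepdim=True) if len(masked) else torch.zeros_like(full_emb)
        acts = collect_early_activations(model, z0.clone(), emb)
        scores.append((acts - base).abs().mean().item())  # activation variation
    return scores


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyDenoiser()
    z0 = torch.randn(1, 16)        # initial latent noise
    tokens = torch.randn(5, 8)     # toy embeddings for a 5-token prompt
    scores = token_variation_scores(model, z0, tokens)
    threshold = 0.5                # assumed calibration threshold
    flagged = [i for i, s in enumerate(scores) if s > threshold]
    print("per-token variation scores:", [round(s, 3) for s in scores])
    print("flagged token indices (possible triggers):", flagged)
```

In a real T2I pipeline the hook would be attached to cross-attention or MLP layers of the UNet/DiT, the prompt encoding would come from the model's text encoder, and the threshold would be calibrated on clean prompts; this sketch only shows the structure of the per-token early-step comparison.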

Subject: ICCV.2025 - Highlight