#1 Jailbreaking the Non-Transferable Barrier via Test-Time Data Disguising

Authors: Yongli Xiang, Ziming Hong, Lina Yao, Dadong Wang, Tongliang Liu

Non-transferable learning (NTL) has been proposed to protect model intellectual property (IP) by creating a "non-transferable barrier" that restricts generalization from authorized to unauthorized domains. Recently, a well-designed attack, which restores unauthorized-domain performance by fine-tuning NTL models on a few authorized samples, highlighted the security risks of NTL-based applications. However, such an attack requires modifying model weights and is therefore invalid in the black-box scenario. This raises a critical question: can we trust the security of NTL models deployed as black-box systems? In this work, we reveal the first loophole of black-box NTL models by proposing a novel attack method (dubbed JailNTL) that jailbreaks the non-transferable barrier through test-time data disguising. The main idea of JailNTL is to disguise unauthorized data so that the NTL model identifies it as authorized, thereby bypassing the non-transferable barrier without modifying the NTL model's weights. Specifically, JailNTL encourages unauthorized-domain disguising at two levels: (i) *data-intrinsic disguising (DID)*, which eliminates domain discrepancy while preserving class-related content at the input level, and (ii) *model-guided disguising (MGD)*, which mitigates output-level statistical differences of the NTL model. Empirically, when attacking state-of-the-art (SOTA) NTL models in the black-box scenario, JailNTL achieves an accuracy increase of up to 54.3% in the unauthorized domain using only 1% of authorized samples, largely exceeding existing SOTA white-box attacks.
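To make the two-level disguising concrete, the following is a minimal, hypothetical PyTorch sketch of test-time data disguising. All names (`Disguiser`, `did_loss`, `mgd_loss`) are illustrative assumptions, not the authors' implementation; the content-preservation and moment-matching losses are simplified stand-ins for DID and MGD. For brevity, the sketch backpropagates through the frozen NTL model, whereas a strictly black-box attacker would estimate the MGD objective gradient-free from queried outputs only.

```python
# Hypothetical sketch of test-time data disguising (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Disguiser(nn.Module):
    """Small conv net that maps unauthorized images to disguised images."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Residual disguise: shift domain statistics, keep class content.
        return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)

def did_loss(x_disguised, x_unauth):
    """Input-level content preservation (simplified stand-in for DID)."""
    return F.l1_loss(x_disguised, x_unauth)

def mgd_loss(logits, auth_mean, auth_std):
    """Match first/second moments of the model's outputs on disguised data
    to statistics of authorized outputs (simplified stand-in for MGD).
    Requires batch size > 1 for a meaningful std."""
    return (F.mse_loss(logits.mean(dim=0), auth_mean)
            + F.mse_loss(logits.std(dim=0), auth_std))

@torch.no_grad()
def estimate_auth_stats(model, x_auth):
    """Output statistics from the small set (~1%) of authorized samples."""
    logits = model(x_auth)
    return logits.mean(dim=0), logits.std(dim=0)

def disguise_step(disguiser, optimizer, model, x_unauth, auth_stats, lam=1.0):
    """One test-time optimization step; the NTL model itself stays frozen."""
    auth_mean, auth_std = auth_stats
    x_d = disguiser(x_unauth)
    logits = model(x_d)  # queries to the deployed model: forward passes only
    loss = did_loss(x_d, x_unauth) + lam * mgd_loss(logits, auth_mean, auth_std)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage (hypothetical):
#   model.requires_grad_(False); model.eval()
#   disguiser = Disguiser()
#   opt = torch.optim.Adam(disguiser.parameters(), lr=1e-3)
#   stats = estimate_auth_stats(model, x_auth)  # the ~1% authorized samples
#   for _ in range(100):
#       disguise_step(disguiser, opt, model, x_unauth, stats)
```

The residual formulation in `Disguiser.forward` is one way to bias the disguise toward small, content-preserving perturbations, so the DID term mainly regularizes magnitude while the MGD term pulls the model's output statistics toward the authorized domain.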

Subject: CVPR.2025 - Poster