IJCAI 2024

#1 Protecting Object Detection Models from Model Extraction Attack via Feature Space Coverage

Authors: Zeyu Li; Yuwen Pu; Xuhong Zhang; Yu Li; Jinbao Li; Shouling Ji

Model extraction attacks aim to steal the functionality or private information of well-trained machine learning models. As AI technologies become increasingly integrated into daily life, ever more well-trained models are deployed in production; these models are valuable assets and attractive targets for model extraction attackers. To date, the academic community has focused primarily on defending against model extraction attacks in classification settings, paying little attention to object detection, an equally common task scenario. In this paper, we therefore propose a detection framework targeting model extraction attacks against object detection models. The framework first flags suspicious users based on the feature-space coverage of their query traffic, and then uses an active verification module to confirm whether the flagged users are indeed attackers. Experiments across multiple task scenarios validate the effectiveness and detection efficiency of the proposed method.
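The feature-coverage idea in the abstract can be illustrated with a minimal sketch; this is not the authors' implementation, which the abstract does not detail. The sketch embeds each user's queries with some fixed feature extractor, discretizes each feature dimension into bins, and scores a user by how many bins their queries touch. All names and parameters here (coverage_score, n_bins, the assumption that embeddings are normalized to [0, 1] per dimension, and the toy data) are hypothetical.

```python
import numpy as np

def coverage_score(features: np.ndarray, n_bins: int = 10) -> float:
    """Average per-dimension bin coverage of a user's query embeddings.

    `features` is an (n_queries, d) array, assumed here to be normalized
    to [0, 1] per dimension by the (unspecified) feature extractor.
    """
    # Discretize each dimension into n_bins equal-width bins.
    idx = np.clip((features * n_bins).astype(int), 0, n_bins - 1)
    # Fraction of bins touched in each dimension, averaged over dimensions.
    # Intuition: benign users query a narrow region of feature space,
    # while extraction attackers spread queries to cover it broadly.
    per_dim = [np.unique(idx[:, j]).size for j in range(idx.shape[1])]
    return float(np.mean(per_dim)) / n_bins

# Toy illustration: a benign user clusters around one region, while an
# attacker samples the whole space; the attacker's coverage is far higher.
rng = np.random.default_rng(0)
benign = np.clip(rng.normal(0.5, 0.05, size=(500, 8)), 0.0, 1.0)
attacker = rng.uniform(0.0, 1.0, size=(500, 8))
print(coverage_score(benign))    # low coverage -> likely benign
print(coverage_score(attacker))  # high coverage -> flag as suspicious
```

In the framework described by the abstract, a user flagged this way would then be passed to the active verification module for confirmation; the abstract does not specify how that verification works.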