Over-the-Air Adversarial Flickering Attacks Against Video Recognition Networks (CVPR 2021)

Authors: Roi Pony, Itay Naeh, Shie Mannor

Deep neural networks for video classification, like their image-classification counterparts, can be subjected to adversarial manipulation. The main difference between image and video classifiers is that the latter typically exploit the temporal information contained in the video. In this work we present a manipulation scheme for fooling video classifiers by introducing a flickering temporal perturbation that in some cases may go unnoticed by human observers and is implementable in the real world. After demonstrating the manipulation of action classification on single videos, we generalize the procedure to produce a universal adversarial perturbation that achieves a high fooling ratio. We further generalize the universal perturbation into a temporally invariant one, which can be applied to a video without synchronizing the perturbation to the input. The attack was implemented against several target models, and the transferability of the attack between them was demonstrated. These properties allow us to bridge the gap between a simulated environment and real-world applications, as demonstrated in this paper for the first time for an over-the-air flickering attack.
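As a rough illustration of the attack idea described in the abstract (not the authors' implementation), the sketch below assumes the flickering perturbation is parameterized as one spatially uniform RGB offset per frame, optimized to induce misclassification while staying small and temporally smooth. The function name `flickering_attack`, the weight `beta`, the two regularizers, and the assumed model input layout are all hypothetical stand-ins for whatever the paper actually optimizes.

```python
import torch
import torch.nn.functional as F

def flickering_attack(model, video, label, steps=200, lr=1e-2, beta=0.1):
    """Hypothetical sketch of a flickering adversarial attack.

    video: tensor of shape (T, C, H, W) with values in [0, 1].
    label: scalar long tensor holding the true class index.
    The perturbation delta has shape (T, C, 1, 1): one RGB offset per
    frame, constant across pixels, so it appears as a temporal flicker.
    Assumed model input layout: a batch of clips shaped (N, T, C, H, W).
    """
    T, C, H, W = video.shape
    delta = torch.zeros(T, C, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        adv = (video + delta).clamp(0.0, 1.0)
        logits = model(adv.unsqueeze(0))              # (1, num_classes)
        # Untargeted attack: maximize the loss of the true class.
        adv_loss = -F.cross_entropy(logits, label.unsqueeze(0))
        # Keep the flicker small and temporally smooth; these terms are
        # stand-ins for the paper's perceptibility regularizers.
        thickness = delta.pow(2).mean()
        roughness = (delta - delta.roll(1, dims=0)).pow(2).mean()
        loss = adv_loss + beta * (thickness + roughness)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return delta.detach()
```

The temporally invariant variant mentioned in the abstract could plausibly be approximated by applying a random cyclic shift (`delta.roll(k, dims=0)`) to the perturbation at each optimization step, so the learned flicker remains effective regardless of its alignment with the input video.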