MLSYS 2019


#1 To Compress Or Not To Compress: Understanding The Interactions Between Adversarial Attacks And Neural Network Compression

Authors: Ilia Shumailov; Yiren Zhao; Robert Mullins; Ross Anderson

As deep neural networks (DNNs) become widely used, pruned and quantised models are becoming ubiquitous on edge devices; such compression lowers their computational requirements. Meanwhile, a number of recent studies have shown how to construct adversarial samples that cause DNNs to misclassify. We therefore investigate the extent to which adversarial samples transfer between uncompressed and compressed DNNs. We find that such samples remain transferable for both pruned and quantised models. For pruning, adversarial samples are marginally less transferable at high sparsities. For quantisation, we find that the transferability of adversarial samples is highly sensitive to integer precision.
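To make the experimental setup in the abstract concrete, the sketch below shows one way such a transferability test could be run: adversarial samples are crafted against an uncompressed model with FGSM and then evaluated on an L1-pruned copy. This is not the authors' code; the toy MLP, random data, the 0.1 perturbation budget, and the 90% sparsity level are illustrative assumptions only.

```python
# Minimal transferability sketch (assumptions: toy MLP, random data, FGSM, 90% L1 pruning).
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

def make_model():
    # Small stand-in classifier; the paper's models would be real image DNNs.
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: one signed-gradient step on the input."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

# Source (uncompressed) model and toy data standing in for a real task.
source = make_model()
x = torch.randn(256, 32)
y = torch.randint(0, 10, (256,))

# Compressed model: a copy of the source with 90% of weights pruned by L1 magnitude.
pruned = copy.deepcopy(source)
for module in pruned.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# Craft adversarial samples against the uncompressed model only ...
x_adv = fgsm(source, x, y, eps=0.1)

# ... then check whether they still fool the pruned copy (i.e. whether they transfer).
print(f"source  clean acc: {accuracy(source, x, y):.2f}  adv acc: {accuracy(source, x_adv, y):.2f}")
print(f"pruned  clean acc: {accuracy(pruned, x, y):.2f}  adv acc: {accuracy(pruned, x_adv, y):.2f}")
```

A quantisation variant of the same test would replace the pruning step with, for example, post-training integer quantisation at several bit widths, and compare adversarial accuracy across precisions.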