Universalization of Any Adversarial Attack using Very Few Test Examples

Kamath, Sandesh and Deshpande, Amit and Subrahmanyam, K V and Balasubramanian, Vineeth N (2022) Universalization of Any Adversarial Attack using Very Few Test Examples. In: 5th ACM India Joint 9th ACM IKDD Conference on Data Science and 27th International Conference on Management of Data, CODS-COMAD 2022, 7 January 2022 through 10 January 2022, Virtual, Online.

ACM_International_Conference_Proceeding_Series1.pdf - Published Version
Available under License Creative Commons Attribution.


Deep learning models are known to be vulnerable not only to input-dependent adversarial attacks but also to input-agnostic or universal adversarial attacks. Moosavi-Dezfooli et al. [8, 9] construct a universal adversarial attack on a given model by looking at a large number of training data points and the geometry of the decision boundary near them. Subsequent work [5] constructs a universal attack by looking only at test examples and intermediate layers of the given model. In this paper, we propose a simple universalization technique that takes any input-dependent adversarial attack and constructs a universal attack by looking at only a few adversarial test examples. We do not require details of the given model, and our universalization incurs negligible computational overhead. We justify our universalization technique theoretically via a spectral property common to many input-dependent adversarial perturbations, e.g., gradients, the Fast Gradient Sign Method (FGSM), and DeepFool. Using matrix concentration inequalities and spectral perturbation bounds, we show that the top singular vector of input-dependent adversarial directions on a small test sample yields an effective and simple universal adversarial attack. For standard models on CIFAR10 and ImageNet, our simple universalization of Gradient, FGSM, and DeepFool perturbations using a test sample of 64 images achieves fooling rates comparable to state-of-the-art universal attacks [5, 9] for reasonable perturbation norms. © 2022 ACM.
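The core construction described in the abstract — taking the top singular vector of a small matrix of per-example adversarial directions as the universal perturbation — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' released code: the function and variable names are illustrative, and the FGSM-like sign vectors stand in for real attack directions computed from a model.

```python
import numpy as np

def universal_from_perturbations(perturbs, eps):
    """Illustrative sketch of spectral universalization: stack per-example
    adversarial directions as rows of a matrix, take its top right singular
    vector, and scale it to the attack budget eps. Names are hypothetical,
    not from the paper's implementation."""
    X = np.stack([p.ravel() for p in perturbs])      # shape (n_samples, dim)
    # Top right singular vector of the perturbation matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    v = Vt[0]
    # A singular vector's sign is arbitrary; align it with the mean direction.
    if X.mean(axis=0) @ v < 0:
        v = -v
    return eps * v / np.linalg.norm(v)

# Usage: 64 stand-in FGSM-style sign directions in a 3*32*32-dim input space
# (CIFAR10-sized), mimicking the paper's test sample of 64 images.
rng = np.random.default_rng(0)
dirs = np.sign(rng.standard_normal((64, 3 * 32 * 32)))
u = universal_from_perturbations(dirs, eps=10.0)
print(u.shape, np.linalg.norm(u))
```

In practice the rows of the matrix would be actual attack directions (gradients, FGSM, or DeepFool perturbations) on a handful of test images; the resulting single vector `u` is then added to every input, which is what makes the attack input-agnostic.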

IITH Creators: Balasubramanian, Vineeth N (ORCiD: https://orcid.org/0000-0003-2656-0375)
Item Type: Conference or Workshop Item (Paper)
Additional Information: Sandesh Kamath would like to thank Microsoft Research India for funding a part of this work through his postdoctoral research fellowship at IIT Hyderabad.
Uncontrolled Keywords: Adversarial; Neural networks; Universal
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: LibTrainee 2021
Date Deposited: 15 Jul 2022 11:00
Last Modified: 15 Jul 2022 11:00
URI: http://raiith.iith.ac.in/id/eprint/9735
Publisher URL: http://doi.org/10.1145/3493700.3493718
Related URLs:
