Paper
12 January 2023 Analysis of adversarial attack based on FGSM
Proceedings Volume 12509, Third International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI 2022); 125090N (2023) https://doi.org/10.1117/12.2656064
Event: Third International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI 2022), 2022, Guangzhou, China
Abstract
With the growing number of deep learning applications, the lack of interpretability of neural networks makes them vulnerable to external attacks. This paper focuses on adversarial attacks, in which slight perturbations, imperceptible to human beings, are added to the input data so as to maximize the model's prediction error and produce wrong outputs, causing a drastic decline in model performance, including prediction accuracy. So far, however, there is still no sufficient theoretical explanation for why adversarial attacks that cannot be detected lead to such serious performance degradation of neural network models; some attempts to explain this phenomenon point to over-fitting, or to the linear or nonlinear nature of the neural network. In this paper, several experiments based on the Fast Gradient Sign Method (FGSM) are designed and implemented against neural network models on image recognition and classification tasks, and some consistent experimental results are obtained. The experimental results provide some evidence for the argument that adversarial examples and adversarial attacks work because of over-fitting, though it is possible that standard evaluation methods cannot measure this special kind of over-fitting.
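The FGSM attack described in the abstract can be sketched in a few lines: compute the gradient of the loss with respect to the input, then step each input coordinate by a small budget eps in the sign of that gradient, which maximally increases the loss under an L-infinity constraint. The following minimal sketch (not the paper's actual experimental code) uses a hypothetical logistic-regression "model" with weights `w` and bias `b` so the gradient can be written in closed form:

```python
import math

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    x   : input features (list of floats)
    w,b : model weights and bias
    y   : true label (0.0 or 1.0)
    eps : L-infinity perturbation budget
    """
    # Forward pass: p = sigmoid(w . x + b)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w
    grad_x = [(p - y) * wi for wi in w]
    # FGSM step: move each coordinate by eps in the sign of its gradient
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

# Toy demo: a clean input the model classifies correctly as class 1 ...
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0          # logit = 1.5 > 0  -> class 1
x_adv = fgsm_perturb(x, w, b, y, eps=1.0)
# ... whose perturbed version flips the prediction:
z_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b   # -1.5 < 0 -> class 0
```

In deep networks the same sign-of-gradient step is applied, with the input gradient obtained by backpropagation rather than in closed form; the small eps keeps the perturbation imperceptible while the sign maximizes loss growth per unit of L-infinity budget.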
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Zihua Fang "Analysis of adversarial attack based on FGSM", Proc. SPIE 12509, Third International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI 2022), 125090N (12 January 2023); https://doi.org/10.1117/12.2656064
KEYWORDS
Neural networks
Performance modeling
Data modeling
Image classification
Convolution
Image processing
Evolutionary algorithms