One Pixel Attack for Fooling Deep Neural Networks

Cited by: 1342
Authors
Su, Jiawei [1 ]
Vargas, Danilo Vasconcellos [1 ]
Sakurai, Kouichi [1 ,2 ]
Affiliations
[1] Kyushu Univ, Fac Informat Sci & Elect Engn, Grad Sch, Fukuoka, Fukuoka 8190395, Japan
[2] Adv Telecommun Res Inst Int, Kyoto, Japan
Funding
Japan Science and Technology Agency (JST);
Keywords
Perturbation methods; Neural networks; Robustness; Image color analysis; Image recognition; Additives; Convolutional neural network; differential evolution (DE); image recognition; information security; DIFFERENTIAL EVOLUTION; ADAPTATION; STRATEGY;
DOI
10.1109/TEVC.2019.2890858
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent research has revealed that the output of deep neural networks (DNNs) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. To that end, we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE). It requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset and 16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least one target class by modifying just one pixel, with 74.03% and 22.91% confidence on average, respectively. We also show the same vulnerability on the original CIFAR-10 dataset. Thus, the proposed attack explores a different take on adversarial machine learning in an extremely limited scenario, showing that current DNNs are also vulnerable to such low-dimensional attacks. Moreover, we illustrate an important application of DE (or, broadly speaking, evolutionary computation) in the domain of adversarial machine learning: creating tools that can effectively generate low-cost adversarial attacks against neural networks in order to evaluate their robustness.
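The DE-based search described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, population size, iteration count, and the greedy DE/rand/1 loop without crossover are assumptions for the sketch; only the candidate encoding as an (x, y, R, G, B) tuple and the black-box use of output probabilities follow the paper's description.

```python
import numpy as np

def one_pixel_attack(image, predict, target_class, pop_size=40, iters=30, seed=None):
    """Sketch of a DE-driven one-pixel attack (illustrative, not the paper's code).

    Each candidate encodes a single pixel change as (x, y, r, g, b).  The search
    maximizes predict(perturbed)[target_class], querying the model as a black
    box: only its output probabilities are used, never its gradients.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape

    def apply(img, cand):
        # Write the candidate's one pixel into a copy of the image.
        out = img.copy()
        out[int(cand[1]) % h, int(cand[0]) % w] = np.clip(cand[2:], 0, 255)
        return out

    # Initial population: random positions and colours.
    pop = np.empty((pop_size, 5))
    pop[:, 0] = rng.integers(0, w, pop_size)
    pop[:, 1] = rng.integers(0, h, pop_size)
    pop[:, 2:] = rng.integers(0, 256, (pop_size, 3))
    fit = np.array([predict(apply(image, c))[target_class] for c in pop])

    for _ in range(iters):
        for i in range(pop_size):
            # Simplified DE/rand/1 mutation with F = 0.5; crossover is omitted,
            # and the donor indices may coincide with i in this sketch.
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = a + 0.5 * (b - c)
            trial[0] %= w
            trial[1] %= h
            trial[2:] = np.clip(trial[2:], 0, 255)
            f = predict(apply(image, trial))[target_class]
            if f > fit[i]:  # greedy one-to-one selection (maximize target prob)
                pop[i], fit[i] = trial, f

    best = pop[np.argmax(fit)]
    return apply(image, best), float(fit.max())
```

In use, `predict` would wrap the victim network and return a probability vector; by construction the returned image differs from the input in at most one pixel.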
Pages: 828-841
Page count: 14