
Channel-augmented joint transformation for transferable adversarial attacks

Published in Applied Intelligence

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples that fool models with tiny perturbations. Although adversarial attacks achieve very high success rates in the white-box setting, most existing adversarial examples exhibit weak transferability in the black-box setting, especially against models with defense mechanisms. In this work, we reveal the cross-model channel redundancy and channel invariance of DNNs and propose two channel-augmented methods to improve the transferability of adversarial examples: the channel transformation (CT) method and the channel-invariant patch (CIP) method. Specifically, channel transformation shuffles and rewrites channels to enhance cross-model feature redundancy in convolution, and channel-invariant patches distinctly weaken different channels to achieve a loss-preserving transformation. We compute the aggregated gradients of the transformed dataset to craft adversarial examples with higher transferability. In addition, the two proposed methods can be naturally combined with each other and with almost all other gradient-based methods to further improve performance. Empirical results on the ImageNet dataset demonstrate that our attacks exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks. Specifically, our attack improves the average attack success rate from 86.9% to 91.0% on normally trained models and from 44.6% to 68.3% on adversarially trained models.
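
To make the two transformations concrete, here is a minimal PyTorch-style sketch based only on the abstract's description. The names ct_transform, cip_transform, and aggregated_gradient are hypothetical, and the specific rewrite rule (copying one shuffled channel over another), the rewrite probability, and the per-channel weakening range are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def ct_transform(x, rewrite_prob=0.2):
    """Channel transformation (CT): shuffle the input channels and, with
    some probability, rewrite one channel with another. The concrete
    rewrite rule here is an illustrative assumption."""
    c = x.size(1)
    perm = torch.randperm(c, device=x.device)
    x_t = x[:, perm]                          # advanced indexing copies, so x is untouched
    if torch.rand(1).item() < rewrite_prob:
        src, dst = torch.randint(0, c, (2,)).tolist()
        x_t[:, dst] = x_t[:, src]             # rewrite channel dst with channel src
    return x_t

def cip_transform(x, low=0.5, high=1.0):
    """Channel-invariant patch (CIP): weaken each channel by a distinct
    random factor; the [low, high] range is an assumption."""
    c = x.size(1)
    scale = torch.empty(1, c, 1, 1, device=x.device).uniform_(low, high)
    return x * scale

def aggregated_gradient(model, loss_fn, x, y, n_copies=5):
    """Average the input gradient over several CT/CIP-transformed copies
    of x, mirroring the abstract's 'aggregated gradients of the
    transformed dataset'."""
    grad = torch.zeros_like(x)
    for _ in range(n_copies):
        x_t = cip_transform(ct_transform(x)).detach().requires_grad_(True)
        loss = loss_fn(model(x_t), y)
        grad += torch.autograd.grad(loss, x_t)[0]
    return grad / n_copies
```

Because the result has the same shape as the plain input gradient, it can be dropped into any gradient-based attack (e.g., replacing the single-input gradient in an MI-FGSM update), which is how the abstract describes combining these methods with other attacks.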


Availability of data and materials

All datasets used in this research are publicly available.

Code Availability

The code implementation is available at https://github.com/KWPCCC/MFAA.


Acknowledgements

This work was supported in part by the Open Fund of the Advanced Cryptography and System Security Key Laboratory of Sichuan Province [No. SKLACSS-202215], in part by the National Key R&D Program of China [No. J2019-V-0001-0092], in part by a Major Science and Technology Project of Sichuan Province [No. 2022YFG0174], and in part by the Innovative Research Foundation of Ship General Performance [No. 26422206].

Author information


Corresponding author

Correspondence to Wuping Ke.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zheng, D., Ke, W., Li, X. et al. Channel-augmented joint transformation for transferable adversarial attacks. Appl Intell 54, 428–442 (2024). https://doi.org/10.1007/s10489-023-05171-6

