
Frequency Domain Model Augmentation for Adversarial Attack

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13664)


Abstract

For black-box attacks, the gap between the substitute model and the victim model is usually large, which manifests as weak attack performance. Motivated by the observation that the transferability of adversarial examples can be improved by attacking diverse models simultaneously, model augmentation methods that simulate different models by using transformed images have been proposed. However, existing spatial-domain transformations do not translate into significantly diverse augmented models. To tackle this issue, we propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models. Specifically, we apply a spectrum transformation to the input and thus perform the model augmentation in the frequency domain. We theoretically prove that the transformation derived from the frequency domain leads to a diverse spectrum saliency map, an indicator we propose to reflect the diversity of substitute models. Notably, our method can generally be combined with existing attacks. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our method, e.g., attacking nine state-of-the-art defense models with an average success rate of 95.4%. Our code is available at https://github.com/yuyang-long/SSA.
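The core operation the abstract describes is a spectrum transformation of the input: move the image into the frequency domain with a DCT, randomly perturb the spectrum, and map it back before querying the substitute model. Below is a minimal sketch of that idea in NumPy/SciPy; the parameter names `rho` and `sigma` and their values are illustrative assumptions, not the paper's settings (see the linked repository for the authors' implementation).

```python
import numpy as np
from scipy.fft import dctn, idctn


def spectrum_transform(x, rho=0.5, sigma=16 / 255, rng=None):
    """Return a frequency-domain augmented copy of image x (H x W x C, float in [0, 1])."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = x + rng.normal(0.0, sigma, size=x.shape)           # small Gaussian noise in pixel space
    spectrum = dctn(noisy, axes=(0, 1), norm="ortho")          # 2-D DCT over the spatial axes
    mask = rng.uniform(1.0 - rho, 1.0 + rho, size=x.shape)     # random per-frequency rescaling
    return idctn(spectrum * mask, axes=(0, 1), norm="ortho")   # back to the spatial domain


# Averaging the substitute model's gradients over several such transformed copies of the
# input per iteration is what realizes the "model augmentation in the frequency domain".
```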


Notes

  1. In the implementation, DCT is applied to each color channel independently (see the sketch after this list).

  2. https://github.com/cleverhans-lab/cleverhans/tree/master/cleverhans_v3.1.0/examples/nips17_adversarial_competition/dataset.

  3. Note that Admix is equipped with SI-FGSM by default.
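As a small check of footnote 1, applying the DCT to each color channel independently can be expressed by restricting the transform to the spatial axes. The snippet below assumes SciPy's dctn/idctn and a random toy image; it only illustrates the footnote, not the full attack pipeline.

```python
import numpy as np
from scipy.fft import dctn, idctn

x = np.random.rand(64, 64, 3)                     # toy H x W x C image
per_channel = dctn(x, axes=(0, 1), norm="ortho")  # DCT over H and W only; channels stay independent

# Equivalent to transforming each color channel on its own:
assert np.allclose(per_channel[..., 0], dctn(x[..., 0], norm="ortho"))

# The orthonormal DCT is invertible, so idctn recovers the original image:
assert np.allclose(idctn(per_channel, axes=(0, 1), norm="ortho"), x)
```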


Acknowledgment

This work is supported by the National Natural Science Foundation of China (Grant No. 62122018, No. 61772116, No. 61872064, No. U20B2063).

Author information


Corresponding author

Correspondence to Jingkuan Song.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 1760 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Long, Y. et al. (2022). Frequency Domain Model Augmentation for Adversarial Attack. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13664. Springer, Cham. https://doi.org/10.1007/978-3-031-19772-7_32


  • DOI: https://doi.org/10.1007/978-3-031-19772-7_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19771-0

  • Online ISBN: 978-3-031-19772-7

  • eBook Packages: Computer Science, Computer Science (R0)
