NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83616
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: 陳銘憲 (zh_TW)
dc.contributor.advisor: Ming-Syan Chen (en)
dc.contributor.author: 廖耕新 (zh_TW)
dc.contributor.author: Keng-Hsin Liao (en)
dc.date.accessioned: 2023-03-19T21:12:00Z
dc.date.available: 2023-11-10
dc.date.copyright: 2022-08-30
dc.date.issued: 2022
dc.date.submitted: 2002-01-01
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83616
dc.description.abstract: 卷積神經網絡 (CNN) 常用於大多數電腦視覺任務。然而,CNN 模型對於對抗性攻擊的脆弱性引起了人們對於將這些模型部署到安全性系統的擔憂。相比之下,人類視覺系統 (HVS) 利用空間頻率處理視覺信號,且具備不受對抗性攻擊影響的性質。因此,本文提出了一系列實證研究,探索 CNN 模型在空間頻域中的脆弱性。具體來說,我們利用離散餘弦轉換來構建 Spatial-Frequency (SF) 層以生成輸入圖像的塊狀頻譜,接著更進一步利用 SF 層替換原始 CNN 模型的初始特徵提取層,進而生成 Spatial Frequency CNNs (SF-CNNs)。透過廣泛的實驗,我們觀察到 SF-CNN 模型在白盒和黑盒攻擊下都比原始的 CNN 模型更具穩健性。為了進一步解釋 SF-CNN 的穩健性,我們使用兩種混合策略將 SF 層與具有相同內核大小的可訓練卷積層進行比較,結果顯示低頻訊號對 SF-CNN 的穩健性貢獻最大。我們相信透過這些實驗觀察可以指引未來朝向更穩健的 CNN 模型設計。 (zh_TW)
dc.description.abstract: Convolutional Neural Networks (CNNs) have dominated the majority of computer vision tasks. However, CNNs' vulnerability to adversarial attacks has raised concerns about deploying these models in safety-critical applications. In contrast, the Human Visual System (HVS), which utilizes spatial frequency channels to process visual signals, is immune to adversarial attacks. As such, this paper presents an empirical study exploring the vulnerability of CNN models in the frequency domain. Specifically, we utilize the discrete cosine transform (DCT) to construct the Spatial-Frequency (SF) layer to produce a block-wise frequency spectrum of an input image and formulate Spatial Frequency CNNs (SF-CNNs) by replacing the initial feature extraction layers of widely-used CNN backbones with the SF layer. Through extensive experiments, we observe that SF-CNN models are more robust than their CNN counterparts under both white-box and black-box attacks. To further explain the robustness of SF-CNNs, we compare the SF layer with a trainable convolutional layer with identical kernel sizes using two mixing strategies to show that the lower frequency components contribute the most to the adversarial robustness of SF-CNNs. We believe our observations can guide the future design of robust CNN models. (en)
dc.description.provenance: Made available in DSpace on 2023-03-19T21:12:00Z (GMT). No. of bitstreams: 1; U0001-2207202211213600.pdf: 10068229 bytes, checksum: d13f59ca8aa3b2820b33817e3035d323 (MD5). Previous issue date: 2022 (en)
dc.description.tableofcontents:
誌謝 (Acknowledgements)
摘要 (Chinese Abstract)
Abstract
Contents
List of Figures
List of Tables
1 Introduction
2 Related work
2.1 Adversarial attack and defense
2.2 Learning in the frequency domain
3 Methodology
3.1 Spatial frequency layer
3.2 Spatial frequency CNN
4 Adversarial robustness of SF-CNN
4.1 Robustness against white-box attacks
4.1.1 Perturbation in the pixel domain
4.1.2 Perturbation in the frequency domain
4.2 Robustness against transfer attacks
4.2.1 Transfer attacks from VGG11
4.2.2 Transfer attacks from SF-VGG11
5 Further analysis of the SF layer
5.1 The impact of image frequency
5.2 Mixture models of SF and C88 layers
5.2.1 Interpolation model
5.2.2 Substitution model
6 Conclusions
Bibliography
Appendix A — Additional Experiments
A.1 Detection of adversarial examples
A.2 Evaluation with adversarial training
A.2.1 More Grad-CAM visualization results
dc.language.iso: en
dc.subject: 頻譜學習 (zh_TW)
dc.subject: 對抗式攻擊防禦 (zh_TW)
dc.subject: 對抗式攻擊穩健性 (zh_TW)
dc.subject: 空間頻率 (zh_TW)
dc.subject: Frequency Learning (en)
dc.subject: Adversarial Defense (en)
dc.subject: Adversarial Robustness (en)
dc.subject: Spatial Frequency (en)
dc.title: 基於空間頻率域下模型之對抗式攻擊的穩健性 (zh_TW)
dc.title: Evaluating Adversarial Robustness in the Spatial Frequency Domain (en)
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 王鈺強;沈之涯;戴志華 (zh_TW)
dc.contributor.oralexamcommittee: Yu-Chiang Wang;Chih-Ya Shen;Chih-Hua Tai (en)
dc.subject.keyword: 對抗式攻擊防禦,對抗式攻擊穩健性,空間頻率,頻譜學習 (zh_TW)
dc.subject.keyword: Adversarial Defense,Adversarial Robustness,Spatial Frequency,Frequency Learning (en)
dc.relation.page: 42
dc.identifier.doi: 10.6342/NTU202201630
dc.rights.note: 未授權 (Not authorized)
dc.date.accepted: 2022-08-24
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 電機工程學系 (Department of Electrical Engineering)
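
The abstract above describes the SF layer as a block-wise discrete cosine transform that replaces a CNN backbone's initial feature-extraction layers. Below is a minimal PyTorch sketch of such a layer, assuming JPEG-style non-overlapping 8x8 blocks and an orthonormal DCT-II basis; the block size, the packing of coefficients into channels, and the class name SFLayer are illustrative assumptions, not details taken from this record.

```python
# A minimal sketch of a spatial-frequency (SF) layer, assuming JPEG-style
# non-overlapping 8x8 blocks and an orthonormal DCT-II basis. Block size,
# coefficient ordering, and output scaling are illustrative assumptions;
# the thesis's actual SF layer may differ.
import math
import torch
import torch.nn as nn


def dct_matrix(n: int = 8) -> torch.Tensor:
    """Orthonormal DCT-II matrix D, so that D @ x is the 1-D DCT of x."""
    j = torch.arange(n).float()  # spatial index (columns)
    k = torch.arange(n).float()  # frequency index (rows)
    d = torch.cos(math.pi * (2 * j[None, :] + 1) * k[:, None] / (2 * n))
    d *= math.sqrt(2.0 / n)
    d[0] /= math.sqrt(2.0)       # rescale DC row for orthonormality
    return d


class SFLayer(nn.Module):
    """Fixed (non-trainable) stem: computes the 2-D DCT of each n x n
    image block and packs the n*n frequencies into the channel axis."""

    def __init__(self, block: int = 8):
        super().__init__()
        self.block = block
        self.register_buffer("D", dct_matrix(block))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = self.block
        assert h % n == 0 and w % n == 0, "H and W must be multiples of block"
        # Split into non-overlapping n x n blocks: (B, C, H/n, W/n, n, n)
        blocks = x.unfold(2, n, n).unfold(3, n, n)
        # 2-D DCT of every block: D @ X @ D^T
        coeff = torch.einsum("ij,bchwjk,lk->bchwil", self.D, blocks, self.D)
        # Move the n*n frequency coefficients into channels:
        # output shape (B, C*n*n, H/n, W/n)
        coeff = coeff.reshape(b, c, h // n, w // n, n * n)
        return coeff.permute(0, 1, 4, 2, 3).reshape(b, c * n * n, h // n, w // n)
```

Under this sketch, forming an SF-CNN along the lines the abstract describes would mean removing a backbone's stem (e.g., VGG11's first convolution) and feeding the resulting C*n*n coefficient channels to the first trainable layer, widened to accept them; how the thesis actually wires this replacement is not specified in this record.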
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)

Files in This Item:
File | Size | Format
ntu-110-2.pdf (restricted; not authorized for public access) | 9.83 MB | Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless a specific copyright statement indicates otherwise.
