NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21413
Full metadata record
DC field | Value | Language
dc.contributor.advisor: 雷欽隆
dc.contributor.author: Yi-Chen Lin (en)
dc.contributor.author: 林羿辰 (zh_TW)
dc.date.accessioned: 2021-06-08T03:33:23Z
dc.date.copyright: 2019-08-18
dc.date.issued: 2019
dc.date.submitted: 2019-08-06
dc.identifier.citation:
[1] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.
[2] J. R. Uijlings, K. E. Van De Sande, T. Gevers, and A. W. Smeulders, "Selective search for object recognition," International Journal of Computer Vision, vol. 104, no. 2, pp. 154-171, 2013.
[3] R. Girshick, "Fast R-CNN," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1440-1448.
[4] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, 2015, pp. 91-99.
[5] K. Eykholt et al., "Robust physical-world attacks on deep learning visual classification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1625-1634.
[6] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779-788.
[7] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263-7271.
[8] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[9] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2117-2125.
[10] C. Szegedy et al., "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[11] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[12] A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial machine learning at scale," arXiv preprint arXiv:1611.01236, 2016.
[13] N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks," in 2017 IEEE Symposium on Security and Privacy (SP), IEEE, 2017, pp. 39-57.
[14] J. Su, D. V. Vargas, and K. Sakurai, "One pixel attack for fooling deep neural networks," IEEE Transactions on Evolutionary Computation, 2019.
[15] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997.
[16] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer, "Adversarial patch," arXiv preprint arXiv:1712.09665, 2017.
[17] X. Liu, H. Yang, Z. Liu, L. Song, H. Li, and Y. Chen, "DPatch: An adversarial patch attack on object detectors," arXiv preprint arXiv:1806.02299, 2018.
[18] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition," in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016, pp. 1528-1540.
[19] J. Lu, H. Sibai, E. Fabry, and D. Forsyth, "No need to worry about adversarial examples in object detection in autonomous vehicles," arXiv preprint arXiv:1707.03501, 2017.
[20] S.-T. Chen, C. Cornelius, J. Martin, and D. H. Chau, "Robust physical adversarial attack on Faster R-CNN object detector," arXiv preprint arXiv:1804.05810, vol. 2, no. 3, p. 4, 2018.
[21] D. Pascale, "RGB coordinates of the Macbeth ColorChecker," The BabelColor Company, vol. 6, 2006.
[22] TeraSoft Inc., "MATLAB Deep Learning Competition - Accelerate the Power of AI," 2018. Available: www.terasoft.com.tw/matlabdeeplearning/2018/index.asp
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21413
dc.description.abstract: In recent years, with the successful development of deep learning, many applications using deep learning techniques have appeared in daily life. In the retail industry, deep learning models are now used for self-checkout, but such models are easily affected by adversarial attacks, so applications of this kind raise security concerns.
This thesis proposes a method that can be used to attack such self-checkout systems in the real world. By attaching a sticker with a specially generated pattern to a product, the recognition system is made to misread it. The sticker's texture is produced by an adversarial-example generation algorithm, and a differential evolution algorithm searches for the most suitable placement.
With this method, two attacks with different goals are proposed: one aims to reduce the model's accuracy, and the other converts an object into a specific class. Experiments on the YOLOv3 and Faster R-CNN models achieve effective attacks and show that the attacks are transferable. According to our experimental results, a self-checkout system that relies purely on deep learning is not reliable; facing a malicious user, it may misidentify products and cause losses to the merchant.
zh_TW
dc.description.abstract: In recent years, with the successful development of deep learning, many applications adopting deep learning techniques have entered our daily lives. In the retail industry, deep learning models have been used for self-checkout, but such models are vulnerable to adversarial attacks, so these applications raise security concerns.
This thesis presents a method that can be used to attack such self-checkout systems in practice. The object detection model can be misled by attaching a sticker with a specific pattern to the product. The sticker is generated by an adversarial attack algorithm and placed at a location found by a differential evolution algorithm.
Two attacks with different goals are proposed with this method: one reduces the precision of the model, and the other converts objects into a specific category. Experiments on the YOLOv3 and Faster R-CNN models show that the attacks are effective and transferable. According to our experimental results, a self-checkout system using only a deep learning object detection model is not reliable: a malicious user may cause identification errors and losses to the store.
en
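The abstract describes two components: an iterative adversarial-example algorithm that generates the sticker pattern, and a search for where to place it. The first idea can be sketched as a Basic Iterative Method (BIM) step confined to a sticker-shaped mask. The toy "detector" below (a mean-brightness score, with all names hypothetical) only illustrates the mechanics, not the thesis's actual models:

```python
import numpy as np

def bim_patch(image, mask, grad_fn, eps=0.3, alpha=0.02, steps=20):
    """Basic Iterative Method (BIM) confined to a sticker-shaped region.

    image   -- H x W array with values in [0, 1]
    mask    -- H x W boolean array marking the sticker area
    grad_fn -- gradient of the true-class score w.r.t. the image;
               we descend it to suppress the correct detection
    """
    adv = image.copy()
    for _ in range(steps):
        g = grad_fn(adv)
        adv = adv - alpha * np.sign(g) * mask         # step only inside the patch
        adv = np.clip(adv, image - eps, image + eps)  # bound the perturbation
        adv = np.clip(adv, 0.0, 1.0)                  # keep pixels valid
    return adv

# Toy stand-in for a detector: the "score" is mean brightness,
# so its gradient is uniform and the attack darkens the patch.
img = np.full((8, 8), 0.5)
sticker = np.zeros((8, 8), dtype=bool)
sticker[2:5, 2:5] = True
adv = bim_patch(img, sticker, lambda x: np.ones_like(x) / x.size)
```

With a real model, `grad_fn` would be the backpropagated gradient of the detector's class score; the mask is what turns a full-image perturbation into a printable sticker.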
dc.description.provenance: Made available in DSpace on 2021-06-08T03:33:23Z (GMT). No. of bitstreams: 1
ntu-108-R06921026-1.pdf: 1899196 bytes, checksum: 1aec77009386d95f8d062eb9b50b9805 (MD5)
Previous issue date: 2019
en
dc.description.tableofcontents:
Acknowledgements (誌謝) i
Chinese Abstract (中文摘要) ii
ABSTRACT iii
CONTENTS iv
LIST OF FIGURES vi
LIST OF TABLES vii
Chapter 1 Introduction 1
Chapter 2 Related work 3
Chapter 3 Methodology 6
3.1 Targeted attack and untargeted attack 6
3.1.1 Untargeted attack 6
3.1.2 Targeted attack 7
3.2 Models 7
3.2.1 Faster R-CNN 7
3.2.2 YOLOv3 7
3.3 Patch location 10
3.4 Adversarial attack method 11
3.4.1 Basic iterative method (BIM) 11
3.4.2 Iterative least-likely class method (ILCM) 12
3.4.3 CW attack 12
3.4.4 Compare different attack method 13
3.5 Color adjustment 15
Chapter 4 Evaluation 16
4.1 Implementation 16
4.1.1 Dataset 16
4.1.2 Model preparation 17
4.2 Experiment result 19
4.2.1 Untargeted attack ability 19
4.2.2 Different location 22
4.2.3 Transferability of attack 25
4.2.4 Train model with patch 26
4.2.5 Convert to a specific category 27
4.2.6 Practical experiment 30
4.3 Discussion 33
Chapter 5 Conclusion 35
Bibliography 36
Appendix I - Dataset Label 38
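Section 3.3 of the contents pairs with the differential evolution reference [15]: the patch location is treated as a low-dimensional search over the detector's confidence. A minimal DE/rand/1/bin loop in the spirit of Storn and Price, with a hypothetical bowl-shaped "confidence" objective standing in for the real detector score, might look like:

```python
import numpy as np

def differential_evolution(objective, bounds, pop=20, gens=40,
                           F=0.8, CR=0.9, rng=None):
    """Minimal DE/rand/1/bin minimizer (Storn & Price style)."""
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = rng.uniform(lo, hi, size=(pop, dim))        # initial population
    fit = np.array([objective(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutation: base vector plus scaled difference of two others
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one mutated coordinate
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, X[i])
            f = objective(trial)
            if f <= fit[i]:                         # greedy selection
                X[i], fit[i] = trial, f
    best = np.argmin(fit)
    return X[best], fit[best]

# Hypothetical objective: detector confidence as a function of the
# patch's (x, y) placement, minimized at (30, 40) in a 100x100 image.
conf = lambda p: (p[0] - 30) ** 2 + (p[1] - 40) ** 2
loc, score = differential_evolution(conf, [(0, 100), (0, 100)])
```

In the attack setting each candidate would be a sticker position, evaluated by rendering the patch there and querying the detector; DE needs only those scores, no gradients, which is what makes it suitable for searching placement.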
dc.language.iso: en
dc.title: 針對深度學習型自助結帳系統的對抗性攻擊 (zh_TW)
dc.title: Adversarial attack against deep learning based self-checkout systems (en)
dc.type: Thesis
dc.date.schoolyear: 107-2
dc.description.degree: Master
dc.contributor.oralexamcommittee: 顏嗣鈞, 郭斯彥
dc.subject.keyword: 對抗性攻擊, 物件辨識, 微分進化演算法 (zh_TW)
dc.subject.keyword: adversarial attack, object detection, differential evolution (en)
dc.relation.page: 38
dc.identifier.doi: 10.6342/NTU201902693
dc.rights.note: Not authorized
dc.date.accepted: 2019-08-07
dc.contributor.author-college: College of Electrical Engineering and Computer Science (zh_TW)
dc.contributor.author-dept: Graduate Institute of Electrical Engineering (zh_TW)
Appears in Collections: Department of Electrical Engineering

Files in This Item:
File | Size | Format
ntu-108-1.pdf (restricted access) | 1.85 MB | Adobe PDF

All items in the repository, unless otherwise indicated, are protected by copyright, with all rights reserved.
