Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/1353
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳銘憲(Ming-Syan Chen) | |
dc.contributor.author | Shih-Hong Tsai | en |
dc.contributor.author | 蔡仕竑 | zh_TW |
dc.date.accessioned | 2021-05-12T09:37:01Z | - |
dc.date.available | 2018-08-18 | |
dc.date.available | 2021-05-12T09:37:01Z | - |
dc.date.copyright | 2018-08-18 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-08-15 | |
dc.identifier.citation | [1] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 387–402. Springer, 2013.
[2] B. Biggio, G. Fumera, and F. Roli. Security evaluation of pattern classifiers under attack. IEEE Transactions on Knowledge and Data Engineering, 26(4):984–996, 2014.
[3] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389, 2012.
[4] J. Bradshaw, A. G. d. G. Matthews, and Z. Ghahramani. Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks. arXiv preprint arXiv:1707.02476, 2017.
[5] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
[6] C. Donahue, J. McAuley, and M. Puckette. Synthesizing audio with generative adversarial networks. arXiv preprint arXiv:1802.04208, 2018.
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[8] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[9] C. Guo, M. Rana, M. Cissé, and L. van der Maaten. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117, 2017.
[10] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[11] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[12] A. Krizhevsky, I. Sutskever, and G. Hinton. The CIFAR-10 dataset. http://www.cs.toronto.edu/~kriz/cifar.html, 2014.
[13] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
[14] Y. LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
[15] X. Ma, B. Li, Y. Wang, S. M. Erfani, S. Wijewickrema, M. E. Houle, G. Schoenebeck, D. Song, and J. Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613, 2018.
[16] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. CIFAR10 Adversarial Examples Challenge. https://github.com/MadryLab/cifar10_challenge. Accessed: 2018-05-06.
[17] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. MNIST Adversarial Examples Challenge. https://github.com/MadryLab/mnist_challenge. Accessed: 2018-05-06.
[18] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[19] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[20] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[21] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[22] A. Osokin, A. Chessel, R. E. C. Salas, and F. Vaggi. GANs for biological image synthesis. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2252–2261. IEEE, 2017.
[23] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519. ACM, 2017.
[24] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[25] S. Rajeswar, S. Subramanian, F. Dutil, C. Pal, and A. Courville. Adversarial generation of natural language. arXiv preprint arXiv:1705.10929, 2017.
[26] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
[27] C. Sitawarin, A. N. Bhagoji, A. Mosenia, M. Chiang, and P. Mittal. DARTS: Deceiving autonomous cars with toxic signs. arXiv preprint arXiv:1802.06430, 2018.
[28] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[29] C. Xiao, B. Li, J.-Y. Zhu, W. He, M. Liu, and D. Song. Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610, 2018.
[30] W. Xu, D. Evans, and Y. Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017.
[31] Z. Zhao, D. Dua, and S. Singh. Generating natural adversarial examples. arXiv preprint arXiv:1710.11342, 2017.
[32] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/handle/123456789/1353 | - |
dc.description.abstract | 對抗例(adversarial examples) 指的是那些為了使神經網絡錯誤分類而特製的資料。當我們討論創造這些對抗例的方法時,我們通常會聯想到基於擾動的方法─ 在正常的資料上添加不可見的擾動來製造對抗例。對人類來說,由擾動法產生的對抗例將完全保留其原本資料的視覺外觀,從而使人類分不出和正常資料的差異,但DNN 模型會將兩者視為完全不同的外觀,從而產生誤導性的預測。然而,在本文中,我們認為只依賴這個將現有資料轉化成對抗例的架構會限制對抗例的多樣性。我們提出了一個基於非擾動的框架,該框架以基於條件約束生成對抗網絡的生成模型直接生成對抗例。因此,生成的對抗例不會與任何現有的資料有外觀上的相似性,從而擴大了對抗例的多樣性,增加了防禦對抗例的難度。並且,我們將這個框架擴展到預先訓練的條件約束生成對抗網絡模型,其中,我們能將現有的普通生成模型經過些微的訓練後,轉變成一個專門生成對抗例的「對抗例生成模型」。我們針對MNIST 和CIFAR10 資料集進行了實驗,結果令人滿意,表明這種方法可做為先前對抗例製造策略的替代方案。 | zh_TW |
dc.description.abstract | Adversarial examples are malicious data crafted to cause misbehavior in neural networks. Typically, such an example is nearly indistinguishable from a normal image in physical appearance, yet the same DNN model produces a different prediction for it than for the similar normal image. Current methods create such examples mainly by overlaying invisible perturbations onto normal images; the resulting adversarial examples therefore resemble the original images while changing the DNN's output. In this work, however, we argue that crafting adversarial examples only from existing data limits example diversity. We propose a non-perturbation-based framework that generates native adversarial examples directly from class-conditional generative adversarial networks. The generated data do not resemble any existing data, which expands example diversity and raises the difficulty of adversarial defense. We then extend this framework to pre-trained conditional GANs, turning an existing generator into an "adversarial-example generator". We conduct experiments on the MNIST and CIFAR10 datasets with satisfactory results, showing that this approach can be a potential alternative to previous attack strategies. | en |
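The abstract describes a generator trained with two pulls at once: stay realistic for the conditioning class (the GAN term) while misleading a target classifier (the adversarial term). The sketch below is a toy illustration of such a combined objective, not the thesis's actual loss: the function names, the `lam` weighting, and the `-log(1 - p)` form of the adversarial term are assumptions made for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def generator_loss(d_logit_fake, clf_logits, y_cond, lam=1.0):
    """Toy combined generator objective (hypothetical form).

    - GAN term: non-saturating loss -log D(G(z|y)) rewards samples the
      discriminator scores as real images of the conditioning class.
    - Adversarial term: -log(1 - p_f(y_cond | x)) rewards samples the
      target classifier f does NOT assign to y_cond, i.e. samples that
      look like class y_cond but are misclassified by f.
    """
    d_prob = 1.0 / (1.0 + np.exp(-d_logit_fake))    # sigmoid of D's logit
    gan_term = -np.log(d_prob + 1e-12)
    p = softmax(np.asarray(clf_logits, dtype=float))
    adv_term = -np.log(1.0 - p[y_cond] + 1e-12)
    return gan_term + lam * adv_term
```

While the classifier still predicts the conditioning class, the adversarial term dominates and pushes the generator toward fooling it; once fooled, only the realism term remains, so the sample keeps its class-conditional appearance.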
dc.description.provenance | Made available in DSpace on 2021-05-12T09:37:01Z (GMT). No. of bitstreams: 1 ntu-107-R05921030-1.pdf: 953787 bytes, checksum: fe7d01725f58963dcc9bf6e491480adf (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | Thesis Committee Certification i
Acknowledgements iii
Chinese Abstract v
English Abstract vii
Contents ix
List of Figures xi
List of Tables xiii
Chapter 1 Introduction 1
Chapter 2 Background 3
Chapter 3 Adaptive GAN 7
3.1 Problem Setting 7
3.2 Architecture 7
3.3 Objective 9
3.4 Turning Pre-trained Generator into Adversarial-example Generator 11
Chapter 4 Experimental Results 12
4.1 The Effect of Masked Loss Function 12
4.2 Adversarial Attack on MNIST 13
4.3 Adversarial Attack on CIFAR-10 14
4.4 Cross-domain Attacks 16
Chapter 5 Conclusion 18
Bibliography 21 | |
dc.language.iso | en | |
dc.title | 利用具可適性之生成對抗網路客製化對抗例生成器 | zh_TW |
dc.title | AdaptiveGAN: CustomizingGeneratorsforAdversarial Examples | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | 碩士 (Master) | |
dc.contributor.oralexamcommittee | 廖弘源,楊得年,陳怡伶,帥宏翰 | |
dc.subject.keyword | 對抗例,生成對抗網絡,條件約束生成對抗網絡 | zh_TW |
dc.subject.keyword | adversarial examples, GAN (generative adversarial network), class-conditional GAN | en |
dc.relation.page | 23 | |
dc.identifier.doi | 10.6342/NTU201803656 | |
dc.rights.note | 同意授權(全球公開) (authorized for worldwide public access) | |
dc.date.accepted | 2018-08-16 | |
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 (Graduate Institute of Electrical Engineering) | zh_TW |
Appears in Collections: | 電機工程學系 (Department of Electrical Engineering) |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-107-1.pdf | 931.43 kB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.