NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/63282
Full metadata record
DC Field / Value / Language
dc.contributor.advisor: 王勝德 (Sheng-De Wang)
dc.contributor.author: Chin-Yuan Yeh [en]
dc.contributor.author: 葉津源 [zh_TW]
dc.date.accessioned: 2021-06-16T16:32:33Z
dc.date.available: 2020-06-09
dc.date.copyright: 2020-06-09
dc.date.issued: 2020
dc.date.submitted: 2020-05-01
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/63282
dc.description.abstract: In this thesis, I propose a new way to apply adversarial attacks against image-translation generative adversarial networks, including CycleGAN, pix2pix, and pix2pixHD, with good results in all cases. Image-translation GANs are a family of useful models that translate images between different image domains. Used with malicious intent, however, these techniques can also power deepfake algorithms, yielding software that, for example, removes the clothing from a photograph of a person. In view of this potential threat, this thesis proposes applying adversarial attacks to image-translation GANs, so that images slightly perturbed by adversarial methods cannot easily be altered by an image-translation GAN model.
Among the adversarial loss functions that could be adopted in the attack procedure, this thesis finds that naively using the Discriminator network of the GAN model does not achieve good results, whereas using distance functions is quite effective. I hope this offers guidance for future work on protecting images from alteration by malicious image-translation GANs. [zh_TW]
dc.description.abstract: In this thesis, I propose a novel method for applying adversarial attacks to image-translation Generative Adversarial Networks (GANs), including CycleGAN, pix2pix, and pix2pixHD, and achieve satisfying results. Image-translation GANs are powerful models that perform image-to-image translation between different image domains. If used with malicious intent, these techniques can serve as deepfake algorithms that could, for example, remove the clothes from a body in a photograph. Given this potential threat, this thesis proposes applying adversarial attacks against image-translation GANs so that images perturbed by adversarial methods cannot easily be counterfeited by an image-translation GAN model. That is, feeding an image-translation GAN an image slightly perturbed by the proposed method will not produce the designed outcome.
Among the alternative adversarial loss functions that could be used in the attack procedure, this work finds that naively using the Discriminator part of the GAN models does not lead to a successful attack, while using distance functions proves very effective. This work hopes to provide a guideline for those who wish to defend personal images against malicious use of image-translation GANs. [en]
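The attack the abstracts describe amounts to a constrained maximization: find a small perturbation δ with ||δ||_∞ ≤ ε that maximizes a distance such as ||G(x + δ) − G(x)||_2, where G is the image-translation generator. Below is a minimal PGD-style sketch of that idea in PyTorch; it is an illustration under assumptions, not the thesis's exact implementation, and `generator`, `epsilon`, `alpha`, and `steps` are hypothetical placeholders.

```python
# A minimal PGD-style sketch, assuming a pretrained PyTorch image-translation
# generator (e.g., a CycleGAN generator) mapping a [0, 1] image batch to a
# translated batch. epsilon, alpha, and steps are illustrative values, not
# hyperparameters reported by the thesis.
import torch

def distance_attack(generator, x, epsilon=8 / 255, alpha=2 / 255, steps=40):
    """Find delta with ||delta||_inf <= epsilon so that generator(x + delta)
    lies far, in L2 distance, from the clean translation generator(x)."""
    generator.eval()
    with torch.no_grad():
        clean_out = generator(x)  # the "designed outcome" the attack disrupts
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Distance-function adversarial loss (the variant the abstract found
        # effective), maximized by gradient ascent on delta.
        loss = torch.norm(generator(x + delta) - clean_out, p=2)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()        # signed ascent step
            delta.clamp_(-epsilon, epsilon)           # L-infinity budget
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (x + delta).detach()
```

Replacing the distance loss with a loss derived from the GAN's Discriminator would give the naive alternative that, per the abstract, does not yield a successful attack.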
dc.description.provenance: Made available in DSpace on 2021-06-16T16:32:33Z (GMT). No. of bitstreams: 1. ntu-109-R06921105-1.pdf: 7947182 bytes, checksum: 814c0dcf76de902f8d718ec32ef08914 (MD5). Previous issue date: 2020. [en]
dc.description.tableofcontents:
Thesis Committee Approval Certificate ... iii
Acknowledgements (Chinese) ... v
Acknowledgements ... vii
Abstract (Chinese) ... ix
Abstract ... xi
1 Introduction ... 1
2 Background ... 5
2.1 Generative Adversarial Networks ... 5
2.2 Deepfake and DeepNude ... 7
2.3 Adversarial Attacks ... 8
3 Related Works ... 11
3.1 Adversarial Attacks on VAE ... 11
3.2 Adversarial Vulnerability of CycleGAN ... 13
3.3 Other Related Works ... 13
4 Methodology ... 15
4.1 Identifying Adversarial Losses for CycleGAN ... 15
4.2 Basic Implementations ... 16
5 Experiments and Results ... 19
5.1 Quantitative Results ... 19
5.2 Qualitative Analysis ... 22
5.3 Evaluation Scores ... 23
6 Case Study ... 27
6.1 Comparison of Distance Functions ... 27
6.2 Repeated Inference for Nullifying Attack Results ... 27
6.3 Transfer and Ensemble Attack ... 28
7 Conclusions ... 33
Bibliography ... 35
dc.language.iso: en
dc.subject: 圖像轉換 (image translation) [zh_TW]
dc.subject: 對抗式攻擊 (adversarial attack) [zh_TW]
dc.subject: 深度偽造 (deepfake) [zh_TW]
dc.subject: 生成對抗網絡 (generative adversarial network) [zh_TW]
dc.subject: Adversarial Attack [en]
dc.subject: DeepFake [en]
dc.subject: Image Translation [en]
dc.subject: GAN [en]
dc.title: 以對抗式攻擊破壞圖像轉換類型之生成對抗網路 (Breaking Image-Translation GANs with Adversarial Attacks) [zh_TW]
dc.title: Breaking Image-Translation GANs with Adversarial Attacks [en]
dc.type: Thesis
dc.date.schoolyear: 108-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 李宏毅 (Hung-Yi Lee), 王鈺強 (Yu-Chiang Wang)
dc.subject.keyword: 對抗式攻擊, 生成對抗網絡, 圖像轉換, 深度偽造 (adversarial attack, GAN, image translation, deepfake) [zh_TW]
dc.subject.keyword: Adversarial Attack, GAN, Image Translation, DeepFake [en]
dc.relation.page: 41
dc.identifier.doi: 10.6342/NTU202000718
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2020-05-01
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) [zh_TW]
Appears in collections: Department of Electrical Engineering

Files in this item:
File / Size / Format
ntu-109-1.pdf (restricted, not publicly available) / 7.76 MB / Adobe PDF


Items in this system are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
