Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69237

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 歐陽明(Ming Ouhyoung) | |
| dc.contributor.author | Ci-Syuan Yang | en |
| dc.contributor.author | 楊騏瑄 | zh_TW |
| dc.date.accessioned | 2021-06-17T03:11:08Z | - |
| dc.date.available | 2019-07-23 | |
| dc.date.copyright | 2018-07-23 | |
| dc.date.issued | 2018 | |
| dc.date.submitted | 2018-07-17 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69237 | - |
| dc.description.abstract | The goal of this thesis is to let a user simulate reshaping one of their facial features into the corresponding feature of an ideal person, and to fuse the replaced feature naturally with the rest of the user's unmodified face. In previous studies, face features replacement was usually done by first detecting the facial features (feature detection), then replacing them and compositing the result with alpha blending. However, when the head pose in the user's photo differs considerably from that in the ideal person's photo, or when the lighting conditions differ greatly, the synthesized photo is often unsatisfactory even with good blending techniques. Consequently, earlier facial feature replacement simulations had to be restricted to frontal face photos. This thesis adopts a generative adversarial network (GAN) architecture and adds a reconstruction loss and a guiding loss to the loss function to obtain our results. | zh_TW |
| dc.description.abstract | Our goal is to replace one of an individual's facial features with the corresponding feature of another individual and then fuse the replaced feature with the original face. In previous studies, face features replacement was done by facial feature detection followed by simple replacement. However, when the poses of the two faces differ considerably, the synthesized image becomes barely plausible even with good blending techniques, so existing face feature replacement techniques are limited to frontal faces. Our approach leverages a generative adversarial network (GAN) to overcome this limitation. The proposed framework is automatic and requires no markers on the input image. Furthermore, by introducing a reconstruction loss and a guiding loss into the GAN objective, the output image preserves the content of the source image (minimal illustrative sketches of the combined objective and of the landmark-based guiding step follow the metadata record below). | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T03:11:08Z (GMT). No. of bitstreams: 1 ntu-107-R05922101-1.pdf: 5063002 bytes, checksum: 6148d89cc55fd886e4bd79d10c1e6625 (MD5) Previous issue date: 2018 | en |
| dc.description.tableofcontents | Thesis Committee Certification; Acknowledgements; Abstract (Chinese); Abstract (English); 1 Introduction; 2 Related Work: 2.1 Face Feature Detection and Replacement, 2.2 Generative Adversarial Networks, 2.3 Image-to-Image Translation; 3 Overall System: 3.1 Adversarial Loss, 3.2 Modified Reconstruction Loss, 3.3 Guiding Loss, 3.4 Full Objective; 4 Guiding Function: 4.1 Facial Landmark Detection, 4.2 Locate Lips Bounding Rectangle, 4.3 Replacement and Blending; 5 Implementation: 5.1 Generator Network Architecture, 5.2 Discriminator Network Architecture; 6 Experiment: 6.1 Baseline, 6.2 Dataset, 6.3 Training Configuration, 6.4 Experiment Results; 7 Conclusion; Bibliography | |
| dc.language.iso | en | |
| dc.subject | 生成對抗式網路 | zh_TW |
| dc.subject | 影像處理 | zh_TW |
| dc.subject | Generative Adversarial Network (GAN) | en |
| dc.subject | Image Processing | en |
| dc.title | 使用生成對抗式網路模擬五官置換 | zh_TW |
| dc.title | Face Features Replacement Using Generative Adversarial Network | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 106-2 | |
| dc.description.degree | Master's | |
| dc.contributor.oralexamcommittee | 傅楸善(Chiou-Shann Fuh),梁容輝(Rung-Huei Liang) | |
| dc.subject.keyword | 生成對抗式網路,影像處理, | zh_TW |
| dc.subject.keyword | Generative Adversarial Network (GAN),Image Processing, | en |
| dc.relation.page | 31 | |
| dc.identifier.doi | 10.6342/NTU201801563 | |
| dc.rights.note | Paid authorization | |
| dc.date.accepted | 2018-07-18 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
| Appears in Collections: | Department of Computer Science and Information Engineering | |
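
The abstracts above describe combining an adversarial loss with a reconstruction loss and a guiding loss so that the generator output stays realistic while preserving the source face. Below is a minimal, illustrative PyTorch sketch of how such a combined generator objective could be assembled; it is not the thesis code, and the exact loss forms, the guided target `guide`, and the weights `lambda_rec` / `lambda_guide` are assumptions made for illustration.

```python
# Illustrative sketch only: adversarial + reconstruction + guiding terms,
# as described in the abstract. Loss forms and weights are assumptions.
import torch
import torch.nn.functional as F

def generator_objective(G, D, src, guide, lambda_rec=10.0, lambda_guide=10.0):
    """Total generator loss = adversarial + lambda_rec * reconstruction + lambda_guide * guiding.

    G, D  : generator and discriminator (torch.nn.Module)
    src   : source face images, shape (N, 3, H, W)
    guide : target images produced by a classical landmark-based replacement
            (the "guiding function"), treated here as a fixed tensor.
    """
    fake = G(src)
    logits = D(fake)

    # Adversarial term (non-saturating GAN loss assumed; the thesis may instead
    # use a Wasserstein-style objective).
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Reconstruction term: keep the generated face close to the source so the
    # untouched regions are preserved.
    rec = F.l1_loss(fake, src)

    # Guiding term: pull the output toward the classical replacement result.
    gui = F.l1_loss(fake, guide)

    return adv + lambda_rec * rec + lambda_guide * gui
```

In this reading, the guiding term supplies coarse spatial supervision from the classical pipeline while the adversarial and reconstruction terms keep the result realistic and faithful to the source; the thesis's actual "modified reconstruction loss" may differ in detail.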
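Chapter 4 of the table of contents (facial landmark detection, locating the lips bounding rectangle, replacement and blending) outlines the classical landmark-plus-alpha-blending step that the guiding loss builds on. The following is a hypothetical sketch of such a step using dlib's 68-point landmark model and OpenCV; the model file name, the single-detected-face assumption, and the blending weight `alpha` are illustrative assumptions, not details taken from the thesis.

```python
# Hypothetical sketch of a landmark-based lips replacement with alpha blending.
# Inputs are BGR uint8 images (e.g. loaded with cv2.imread); assumes one face per image.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Path to the standard dlib 68-point landmark model (assumed to be available locally).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def lips_bbox(img):
    """Return (x, y, w, h) of the lips region using landmark points 48-67."""
    face = detector(img, 1)[0]  # take the first detected face
    shape = predictor(img, face)
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(48, 68)],
                   dtype=np.int32)
    return cv2.boundingRect(pts)

def replace_lips(user_img, ideal_img, alpha=0.7):
    """Copy the ideal lips region onto the user's face with simple alpha blending."""
    ux, uy, uw, uh = lips_bbox(user_img)
    ix, iy, iw, ih = lips_bbox(ideal_img)
    patch = cv2.resize(ideal_img[iy:iy + ih, ix:ix + iw], (uw, uh)).astype(np.float32)
    out = user_img.copy()
    roi = out[uy:uy + uh, ux:ux + uw].astype(np.float32)
    out[uy:uy + uh, ux:ux + uw] = (alpha * patch + (1.0 - alpha) * roi).astype(np.uint8)
    return out
```

As the abstracts note, a purely geometric replacement like this degrades when the two head poses or lighting conditions differ, which is the limitation the GAN-based approach targets; here it serves only to illustrate where a guided target image could come from.
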
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-107-1.pdf (restricted access) | 4.94 MB | Adobe PDF |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
