Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71335

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 莊永裕(Yung-Yu Chuang) | |
| dc.contributor.author | Zhuohan Chen | en |
| dc.contributor.author | 陳卓晗 | zh_TW |
| dc.date.accessioned | 2021-06-17T05:59:02Z | - |
| dc.date.available | 2019-02-15 | |
| dc.date.copyright | 2019-02-15 | |
| dc.date.issued | 2019 | |
| dc.date.submitted | 2019-02-14 | |
| dc.identifier.citation | [1] M. Chai, L. Luo, K. Sunkavalli, N. Carr, S. Hadap, and K. Zhou. High-quality hair modeling from a single portrait photo. ACM Transactions on Graphics (TOG), 34(6):204, 2015. [2] M. Chai, T. Shao, H. Wu, Y. Weng, and K. Zhou. AutoHair: Fully automatic hair modeling from a single image. ACM Transactions on Graphics, 35(4), 2016. [3] M. Chai, L. Wang, Y. Weng, X. Jin, and K. Zhou. Dynamic hair manipulation in images and videos. ACM Transactions on Graphics (TOG), 32(4):75, 2013. [4] M. Chai, L. Wang, Y. Weng, Y. Yu, B. Guo, and K. Zhou. Single-view hair modeling for portrait manipulation. ACM Transactions on Graphics (TOG), 31(4):116, 2012. [5] H. Chang, J. Lu, F. Yu, and A. Finkelstein. PairedCycleGAN: Asymmetric style transfer for applying and removing makeup. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [6] B.-C. Chen, C.-S. Chen, and W. H. Hsu. Cross-age reference coding for age-invariant face recognition and retrieval. In Proceedings of the European Conference on Computer Vision (ECCV), 2014. [7] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8789–8797, 2018. [8] F. Zhou, J. Brandt, and Z. Lin. Exemplar-based graph matching for robust facial landmark localization. In IEEE International Conference on Computer Vision (ICCV), 2013. [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014. [10] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007. [11] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5967–5976. IEEE, 2017. [12] N. Jetchev and U. Bergmann. The conditional analogy GAN: Swapping fashion articles on people images. ICCVW, 2(6):8, 2017. [13] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017. [14] M. Li, W. Zuo, and D. Zhang. Deep identity-aware transfer of facial attributes. arXiv preprint arXiv:1610.05586, 2016. [15] T. Li, R. Qian, C. Dong, S. Liu, Q. Yan, W. Zhu, and L. Lin. BeautyGAN: Instance-level facial makeup transfer with deep generative adversarial network. In 2018 ACM Multimedia Conference on Multimedia Conference, pages 645–653. ACM, 2018. [16] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015. [17] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. [18] S. Mo, M. Cho, and J. Shin. InstaGAN: Instance-aware image-to-image translation. arXiv preprint arXiv:1812.10889, 2018. [19] A. Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583, 2016. [20] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 2642–2651. JMLR.org, 2017. [21] Y. Shih, S. Paris, C. Barnes, W. T. Freeman, and F. Durand. Style transfer for headshot portraits. ACM Transactions on Graphics (TOG), 33(4):148, 2014. [22] Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016. [23] F. Yu, V. Koltun, and T. Funkhouser. Dilated residual networks. In Computer Vision and Pattern Recognition, volume 1, page 2, 2017. [24] Y. Zhou, L. Hu, J. Xing, W. Chen, H.-W. Kung, X. Tong, and H. Li. HairNet: Single-view hair reconstruction using convolutional neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 235–251, 2018. [25] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint, 2017. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71335 | - |
| dc.description.abstract | 本篇論文提出了一個自動化修改頭髮的方法,其中第一個神經網絡能夠自動將影像中人的頭髮給消去,第二個神經網絡則可以將另一張影像中人的髮型加到一張光頭的人的頭上。在過去的許多研究中,都涉及到了人的肖像的風格轉換,包括年齡、性別以及頭髮顏色的轉換,甚至還有面部表情的轉換,並且大部分的研究都取得了很大的成果。但是,針對髮型轉換的研究則很少有人涉及,已經存在的方法也並不能直接應用于髮型的轉換產生正確的頭髮。我們的方法使用了修改后的循環一致性對抗卷積神經網絡。首先,我們將一個人的頭像和它臉部的模板輸入到一個深度卷積神經網絡當中,將他的頭髮抹去變成光頭。之後,我們再將結果影像和另一張參考的影像,以及原來的人臉部的模板,一同輸入到另一個深度卷積神經網絡中,生成一張帶有原來影像的人臉和參考影像中的人的髮型的結果。實驗結果顯示出我們不僅可以將一個人的頭髮給消除變成光頭,也可以自然將另一個人的頭髮轉換到這個人的頭上。 | zh_TW |
| dc.description.abstract | This thesis proposes an automatic method for editing a portrait photo so that the person has the same hairstyle as the person in a reference photo. Recent works have shown great improvements in many kinds of image-to-image style transfer for human portraits, including the transfer of age, gender, hair color, and even facial expression. However, hairstyle transfer remains largely unexplored, and existing methods cannot change a person's hairstyle correctly. Our approach relies on a modified cycle-consistent generative adversarial network framework. We first feed a portrait photo together with its face mask into a U-Net to remove the person's hair. The resulting photo is then combined with a reference photo and the original face mask and fed into a second U-Net, which generates a new portrait that keeps the original person's face but has the hairstyle from the reference photo. The results show that we can not only remove a person's hair from a portrait photo but also transfer the hairstyle from another reference photo onto the portrait. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T05:59:02Z (GMT). No. of bitstreams: 1 ntu-108-R05944045-1.pdf: 16997331 bytes, checksum: 6b24947096aabe69be9cdf728b301ca6 (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | 論文口試委員審定書 i; 致謝 ii; 摘要 iii; Abstract iv; List of Figures vii; List of Tables ix; Chapter 1 Introduction 1; Chapter 2 Related Works 3; 2.1 Hairstyle generation 3; 2.2 Generative Adversarial Networks 4; 2.3 Style transfer 5; Chapter 3 Formulation 7; 3.1 Network Architecture 8; 3.2 Training Loss 11; Chapter 4 Implementation 14; 4.1 Data collection 14; 4.2 Warping function 16; 4.3 Training details 18; Chapter 5 Experimental Results 19; 5.1 Hairstyle removal 19; 5.2 Hairstyle transfer 20; 5.3 User study 22; 5.4 Failure cases 23; 5.5 Hairstyle dataset 25; Chapter 6 Conclusion 28; Bibliography 29 | |
| dc.language.iso | zh-TW | |
| dc.subject | 髮型轉換 | zh_TW |
| dc.subject | 循環對抗卷積神經網絡 | zh_TW |
| dc.subject | 風格轉換 | zh_TW |
| dc.subject | CycleGAN | en |
| dc.subject | Style Transfer | en |
| dc.subject | Hairstyle Transfer | en |
| dc.title | 基於卷積神經網絡的髮型消除與轉換 | zh_TW |
| dc.title | Hairstyle Removal and Transfer using Convolutional Neural Networks | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 107-1 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 葉正聖(Jeng-Sheng Yeh),吳賦哲 | |
| dc.subject.keyword | 風格轉換,髮型轉換,循環對抗卷積神經網絡, | zh_TW |
| dc.subject.keyword | Style Transfer,Hairstyle Transfer,CycleGAN, | en |
| dc.relation.page | 31 | |
| dc.identifier.doi | 10.6342/NTU201900560 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2019-02-14 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
| Appears in Collections: | 資訊網路與多媒體研究所 | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-108-1.pdf (Restricted Access) | 16.6 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
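The two-stage pipeline described in the English abstract (a hair-removal network followed by a hairstyle-transfer network, each conditioned on a face mask) can be sketched as follows. This is a minimal illustrative sketch only: the thesis's trained U-Net generators are replaced by placeholder functions, and the image size, function names, and mask-based compositing are assumptions made purely to show the inputs each stage receives.

```python
import numpy as np

H, W = 64, 64  # hypothetical image size

def removal_net(portrait, face_mask):
    """Stage 1: portrait + face mask -> bald portrait (placeholder)."""
    # The real network sees the portrait stacked with its mask (H, W, 4).
    x = np.concatenate([portrait, face_mask[..., None]], axis=-1)
    # A trained U-Net would map x to a bald portrait; the placeholder
    # simply keeps only the masked face region.
    return portrait * face_mask[..., None]

def transfer_net(bald, reference, face_mask):
    """Stage 2: bald portrait + reference photo + face mask -> result."""
    # The real network sees all three stacked together (H, W, 7).
    x = np.concatenate([bald, reference, face_mask[..., None]], axis=-1)
    # A trained U-Net would synthesize the reference hairstyle around the
    # preserved face; the placeholder composites reference pixels outside
    # the face region.
    return bald * face_mask[..., None] + reference * (1 - face_mask[..., None])

portrait = np.random.rand(H, W, 3).astype(np.float32)
reference = np.random.rand(H, W, 3).astype(np.float32)
face_mask = np.zeros((H, W), np.float32)
face_mask[16:48, 16:48] = 1.0  # hypothetical face region

bald = removal_net(portrait, face_mask)
result = transfer_net(bald, reference, face_mask)
print(result.shape)  # (64, 64, 3)
```

The point of the sketch is the data flow: the face mask travels through both stages, so the second network can preserve the original face while replacing everything outside it with hair synthesized from the reference photo.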
