
DSpace

The DSpace institutional repository system is dedicated to preserving digital materials of all kinds (e.g., text, images, PDFs) and making them easily accessible.

NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Graduate Institute of Networking and Multimedia
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71335
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 莊永裕 (Yung-Yu Chuang) | -
dc.contributor.author | Zhuohan Chen | en
dc.contributor.author | 陳卓晗 | zh_TW
dc.date.accessioned | 2021-06-17T05:59:02Z | -
dc.date.available | 2019-02-15 | -
dc.date.copyright | 2019-02-15 | -
dc.date.issued | 2019 | -
dc.date.submitted | 2019-02-14 | -
dc.identifier.citation:
[1] M. Chai, L. Luo, K. Sunkavalli, N. Carr, S. Hadap, and K. Zhou. High-quality hair modeling from a single portrait photo. ACM Transactions on Graphics (TOG), 34(6):204, 2015.
[2] M. Chai, T. Shao, H. Wu, Y. Weng, and K. Zhou. AutoHair: fully automatic hair modeling from a single image. ACM Transactions on Graphics (TOG), 35(4), 2016.
[3] M. Chai, L. Wang, Y. Weng, X. Jin, and K. Zhou. Dynamic hair manipulation in images and videos. ACM Transactions on Graphics (TOG), 32(4):75, 2013.
[4] M. Chai, L. Wang, Y. Weng, Y. Yu, B. Guo, and K. Zhou. Single-view hair modeling for portrait manipulation. ACM Transactions on Graphics (TOG), 31(4):116, 2012.
[5] H. Chang, J. Lu, F. Yu, and A. Finkelstein. PairedCycleGAN: Asymmetric style transfer for applying and removing makeup. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[6] B.-C. Chen, C.-S. Chen, and W. H. Hsu. Cross-age reference coding for age-invariant face recognition and retrieval. In Proceedings of the European Conference on Computer Vision (ECCV), 2014.
[7] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8789–8797, 2018.
[8] F. Zhou, J. Brandt, and Z. Lin. Exemplar-based graph matching for robust facial landmark localization. In IEEE International Conference on Computer Vision (ICCV), 2013.
[9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[10] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
[11] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5967–5976. IEEE, 2017.
[12] N. Jetchev and U. Bergmann. The conditional analogy GAN: Swapping fashion articles on people images. In IEEE International Conference on Computer Vision Workshops (ICCVW), 2017.
[13] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017.
[14] M. Li, W. Zuo, and D. Zhang. Deep identity-aware transfer of facial attributes. arXiv preprint arXiv:1610.05586, 2016.
[15] T. Li, R. Qian, C. Dong, S. Liu, Q. Yan, W. Zhu, and L. Lin. BeautyGAN: Instance-level facial makeup transfer with deep generative adversarial network. In Proceedings of the 2018 ACM Multimedia Conference, pages 645–653. ACM, 2018.
[16] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015.
[17] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[18] S. Mo, M. Cho, and J. Shin. InstaGAN: Instance-aware image-to-image translation. arXiv preprint arXiv:1812.10889, 2018.
[19] A. Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583, 2016.
[20] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 2642–2651. JMLR.org, 2017.
[21] Y. Shih, S. Paris, C. Barnes, W. T. Freeman, and F. Durand. Style transfer for headshot portraits. ACM Transactions on Graphics (TOG), 33(4):148, 2014.
[22] Y. Taigman, A. Polyak, and L. Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.
[23] F. Yu, V. Koltun, and T. Funkhouser. Dilated residual networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[24] Y. Zhou, L. Hu, J. Xing, W. Chen, H.-W. Kung, X. Tong, and H. Li. HairNet: Single-view hair reconstruction using convolutional neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 235–251, 2018.
[25] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71335 | -
dc.description.abstract | This thesis proposes an automatic hair-editing method in which a first neural network removes the hair of the person in an image, and a second neural network adds the hairstyle of a person in another image onto the resulting bald head. Much prior work addresses style transfer for human portraits, including transfer of age, gender, and hair color, and even of facial expression, and most of it has achieved strong results. However, hairstyle transfer has rarely been studied, and existing methods cannot be applied directly to produce correct hair. Our method uses a modified cycle-consistent adversarial convolutional neural network. First, we feed a portrait and its face mask into a deep convolutional neural network that erases the hair, leaving a bald head. We then feed the resulting image, together with a reference image and the original face mask, into another deep convolutional neural network to generate a result that has the original face and the hairstyle of the person in the reference image. Experimental results show that we can not only remove a person's hair to produce a bald head, but also naturally transfer another person's hair onto that head. | zh_TW
dc.description.abstract | This thesis proposes an automatic method for editing a portrait photo so that the person appears with the same hairstyle as the person in a reference photo. Recent work has shown great improvements in many kinds of portrait image-to-image style transfer, including transfer of age, gender, hair color, and even facial expression. However, hairstyle transfer remains largely untouched, and existing methods cannot change a person's hairstyle correctly. Our approach relies on a modified cycle-consistent generative adversarial network framework. We first feed a portrait photo and its face mask into a U-Net to remove the person's hair. The result is then combined with a reference photo and a face mask, and fed into another U-Net to generate a new portrait that has the original person's face and the hairstyle from the reference photo. The results show that we can not only remove a person's hair from a portrait photo but also add the hairstyle from a reference photo to it. | en
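The two-stage data flow described in the abstract (hair removal, then hairstyle transfer conditioned on a reference photo and a face mask) can be sketched as follows. This is a minimal illustration only: the stub functions below are assumed placeholders for the thesis's two trained U-Nets, and the mask-based compositing stands in for learned generation.

```python
import numpy as np

# Stub "networks": placeholders (assumptions) for the thesis's two U-Nets,
# which in the real system are trained with a cycle-consistency objective.
def removal_net(portrait, face_mask):
    """Stage 1 stand-in: keep the masked face region, erase everything else."""
    return portrait * face_mask

def transfer_net(bald, reference, face_mask):
    """Stage 2 stand-in: keep the face, fill the rest from the reference."""
    hair_region = 1.0 - face_mask
    return bald * face_mask + reference * hair_region

# Dummy 4x4 grayscale "images" to trace the pipeline.
portrait = np.full((4, 4), 0.8)       # person whose face we keep
reference = np.full((4, 4), 0.3)      # person with the desired hairstyle
face_mask = np.zeros((4, 4))
face_mask[1:3, 1:3] = 1.0             # face region of the portrait

bald = removal_net(portrait, face_mask)            # stage 1: remove hair
result = transfer_net(bald, reference, face_mask)  # stage 2: add reference hair

# Inside the mask the result comes from the portrait; outside, from the reference.
print(result[1, 1], result[0, 0])  # 0.8 0.3
```

The point of the sketch is the interface, not the math: both stages consume the same face mask, which is what lets the second network preserve the original identity while replacing only the hair region.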
dc.description.provenance | Made available in DSpace on 2021-06-17T05:59:02Z (GMT). No. of bitstreams: 1. ntu-108-R05944045-1.pdf: 16997331 bytes, checksum: 6b24947096aabe69be9cdf728b301ca6 (MD5). Previous issue date: 2019 | en
dc.description.tableofcontents:
Thesis Committee Certification i
Acknowledgements ii
Abstract (Chinese) iii
Abstract iv
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Related Works 3
2.1 Hairstyle generation 3
2.2 Generative Adversarial Networks 4
2.3 Style transfer 5
Chapter 3 Formulation 7
3.1 Network Architecture 8
3.2 Training Loss 11
Chapter 4 Implementation 14
4.1 Data collection 14
4.2 Warping function 16
4.3 Training details 18
Chapter 5 Experimental Results 19
5.1 Hairstyle removal 19
5.2 Hairstyle transfer 20
5.3 User study 22
5.4 Failure cases 23
5.5 Hairstyle dataset 25
Chapter 6 Conclusion 28
Bibliography 29
dc.language.iso | zh-TW | -
dc.subject | 髮型轉換 (hairstyle transfer) | zh_TW
dc.subject | 循環對抗卷積神經網絡 (cycle-consistent adversarial network) | zh_TW
dc.subject | 風格轉換 (style transfer) | zh_TW
dc.subject | CycleGAN | en
dc.subject | Style Transfer | en
dc.subject | Hairstyle Transfer | en
dc.title | 基於卷積神經網絡的髮型消除與轉換 (Hairstyle Removal and Transfer Using Convolutional Neural Networks) | zh_TW
dc.title | Hairstyle Removal and Transfer using Convolutional Neural Networks | en
dc.type | Thesis | -
dc.date.schoolyear | 107-1 | -
dc.description.degree | 碩士 (Master) | -
dc.contributor.oralexamcommittee | 葉正聖 (Jeng-Sheng Yeh), 吳賦哲 | -
dc.subject.keyword | 風格轉換, 髮型轉換, 循環對抗卷積神經網絡 (style transfer, hairstyle transfer, cycle-consistent adversarial network) | zh_TW
dc.subject.keyword | Style Transfer, Hairstyle Transfer, CycleGAN | en
dc.relation.page | 31 | -
dc.identifier.doi | 10.6342/NTU201900560 | -
dc.rights.note | 有償授權 (fee-based authorization) | -
dc.date.accepted | 2019-02-14 | -
dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | zh_TW
dc.contributor.author-dept | 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia) | zh_TW
Appears in Collections: Graduate Institute of Networking and Multimedia (資訊網路與多媒體研究所)

Files in This Item:
File | Size | Format
ntu-108-1.pdf (not authorized for public access) | 16.6 MB | Adobe PDF


All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
