NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70776
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: 莊永裕 (Yung-Yu Chuang)
dc.contributor.author: Szu-Ying Chen [en]
dc.contributor.author: 陳思穎 [zh_TW]
dc.date.accessioned: 2021-06-17T04:38:05Z
dc.date.available: 2018-08-09
dc.date.copyright: 2018-08-09
dc.date.issued: 2018
dc.date.submitted: 2018-08-07
dc.identifier.citation:
[1] K. E. Ak, A. A. Kassim, J. H. Lim, and J. Y. Tham. Learning attribute representations with localization for flexible fashion search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[2] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4):509–522, April 2002.
[3] K. Gong, X. Liang, D. Zhang, X. Shen, and L. Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[4] X. Han, Z. Wu, Z. Wu, R. Yu, and L. S. Davis. VITON: An image-based virtual try-on network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[5] N. Jetchev and U. Bergmann. The conditional analogy GAN: Swapping fashion articles on people images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, October 2017.
[6] K.-H. Liu, T.-Y. Chen, and C.-S. Chen. MVC: A dataset for view-invariant clothing retrieval and attribute prediction. In Proceedings of the ACM International Conference on Multimedia Retrieval (ICMR), 2016.
[7] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. DeepFashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[8] M. Mirza and S. Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014.
[9] S. Zhu, S. Fidler, R. Urtasun, D. Lin, and C. C. Loy. Be your own Prada: Fashion synthesis with structural coherence. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
[10] E. Simo-Serra and H. Ishikawa. Fashion style in 128 floats: Joint ranking and classification using weak data for feature extraction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[11] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[12] X. Song, F. Feng, J. Liu, Z. Li, L. Nie, and J. Ma. NeuroStylist: Neural compatibility modeling for clothing matching. In Proceedings of the ACM Multimedia Conference, pages 753–761, October 2017.
[13] W. Wang, Y. Xu, J. Shen, and S.-C. Zhu. Attentive fashion grammar network for fashion landmark detection and clothing category classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[14] W.-L. Hsiao and K. Grauman. Learning the latent "look": Unsupervised discovery of a style-coherent embedding from fashion images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
[15] W.-L. Hsiao and K. Grauman. Creating capsule wardrobes from fashion images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[16] Z. Al-Halah, R. Stiefelhagen, and K. Grauman. Fashion forward: Forecasting visual style in fashion. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70776
dc.description.abstract: The goal of this thesis is to let users try on clothes through photos: the user provides a photo of themselves and a photo of another person, and the clothes worn by that person are transferred onto the user's photo. Other virtual try-on methods handle only front-view photos of the person and the clothes, whereas our method handles the front view as well as views turned slightly to the left or right. Compared with other methods, ours is more general, and the clothing texture details are clearer. In a user study, respondents preferred our results over those of other methods in about 90% of the cases. [zh_TW]
dc.description.abstract: The goal of this thesis is to enable users to try on clothes through photos: a user provides a photo of themselves and a photo of another person, and the clothes of that person are transferred onto the user's photo. Other virtual try-on methods focus on front views of the person and the clothes, whereas our method handles the front view as well as views turned slightly to the left or right. Compared with other methods, our method is more general, and the details of the clothes are clearer. In the user study, respondents preferred our results over those of the other methods in about 90% of the cases. [en]
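The record's keywords below list "perceptual loss" and "convolutional neural network", and the bibliography cites VGG [11], the network such losses are typically built on. As a rough, hypothetical sketch only, not the thesis's actual implementation, the following PyTorch snippet shows how a perceptual loss of this kind is commonly computed; the choice of VGG-16, the feature cut-off layer, and the L1 distance are all assumptions.

```python
# Minimal perceptual-loss sketch (hypothetical illustration, not the thesis code).
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """L1 distance between VGG-16 features of two images."""

    def __init__(self, cut: int = 16):
        super().__init__()
        # Pretrained VGG-16 [11]; keep layers up to an (assumed) mid-level conv block.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.features = nn.Sequential(*list(vgg.features.children())[:cut]).eval()
        for p in self.features.parameters():
            p.requires_grad = False  # frozen feature extractor
        self.criterion = nn.L1Loss()

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Both inputs: (N, 3, H, W) tensors, normalized the way VGG expects.
        return self.criterion(self.features(generated), self.features(target))

# Usage with dummy images (shapes illustrative):
loss_fn = PerceptualLoss()
fake, real = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
loss = loss_fn(fake, real)
```

Matching mid-level CNN features rather than raw pixels penalizes texture differences more the way a viewer would, which is consistent with the abstract's claim that clothing details come out clearer.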
dc.description.provenance: Made available in DSpace on 2021-06-17T04:38:05Z (GMT). No. of bitstreams: 1. ntu-107-R05922069-1.pdf: 5961719 bytes, checksum: 0cae683f0509a3c446a787d295895770 (MD5). Previous issue date: 2018. [en]
dc.description.tableofcontents:
Oral Examination Committee Certification i
Acknowledgements ii
Chinese Abstract iii
Abstract iv
1 Introduction 1
2 Related Work 2
  2.1 VITON 2
  2.2 CAGAN 3
3 Methodology 4
  3.1 Data collection 4
  3.2 The proposed approach 5
    3.2.1 CAGAN 6
    3.2.2 Segmentation 7
    3.2.3 Transform 9
    3.2.4 Combination 10
  3.3 Experiments 13
    3.3.1 Implementation details 13
    3.3.2 Comparison of different losses 13
4 Evaluation 15
  4.1 Qualitative Evaluation 15
    4.1.1 Matrix Visualization 15
    4.1.2 Comparison with VITON and CAGAN 16
  4.2 User study 17
5 Conclusion 19
  5.1 Conclusion 19
  5.2 Discussion 20
Bibliography 21
dc.language.iso: en
dc.subject: 感知損失 (perceptual loss) [zh_TW]
dc.subject: 虛擬試衣 (virtual try-on) [zh_TW]
dc.subject: 卷積神經網路 (convolutional neural network) [zh_TW]
dc.subject: convolutional neural network [en]
dc.subject: perceptual loss [en]
dc.subject: virtual try-on [en]
dc.title: 基於衣服轉換之深度虛擬試衣技術 [zh_TW]
dc.title: Deep Virtual Try-on with Clothes Transform [en]
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 朱宏國 (Hung-Kuo Chu), 朱威達 (Wei-Ta Chu)
dc.subject.keyword: 卷積神經網路, 虛擬試衣, 感知損失 (convolutional neural network, virtual try-on, perceptual loss) [zh_TW]
dc.subject.keyword: convolutional neural network, virtual try-on, perceptual loss [en]
dc.relation.page: 23
dc.identifier.doi: 10.6342/NTU201802721
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2018-08-08
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File: ntu-107-1.pdf — 5.82 MB — Adobe PDF — not authorized for public access


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
