Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70776
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 莊永裕(Yung-Yu Chuang) | |
dc.contributor.author | Szu-Ying Chen | en |
dc.contributor.author | 陳思穎 | zh_TW |
dc.date.accessioned | 2021-06-17T04:38:05Z | - |
dc.date.available | 2018-08-09 | |
dc.date.copyright | 2018-08-09 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-08-07 | |
dc.identifier.citation | [1] K. E. Ak, A. A. Kassim, J. Hwee Lim, and J. Yew Tham. Learning attribute representations with localization for flexible fashion search. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. [2] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4):509–522, Apr 2002. [3] K. Gong, X. Liang, D. Zhang, X. Shen, and L. Lin. Look into person: Self-supervised structure-sensitive learning and a new benchmark for human parsing. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. [4] X. Han, Z. Wu, Z. Wu, R. Yu, and L. S. Davis. VITON: An image-based virtual try-on network. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [5] N. Jetchev and U. Bergmann. The conditional analogy GAN: Swapping fashion articles on people images. In The IEEE International Conference on Computer Vision (ICCV) Workshops, Oct 2017. [6] L. Kuan-Hsien, C. Ting-Yen, and C. Chu-Song. MVC: A dataset for view-invariant clothing retrieval and attribute prediction. In ACM International Conference on Multimedia Retrieval (ICMR), 2016. [7] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. DeepFashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. [8] M. Mirza and S. Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014. [9] Z. Shizhan, F. Sanja, U. Raquel, L. Dahua, and C. L. Chen. Be your own Prada: Fashion synthesis with structural coherence. In International Conference on Computer Vision (ICCV), 2017. [10] E. Simo-Serra and H. Ishikawa. Fashion style in 128 floats: Joint ranking and classification using weak data for feature extraction. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. [11] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. [12] X. Song, F. Feng, J. Liu, Z. Li, L. Nie, and J. Ma. NeuroStylist: Neural compatibility modeling for clothing matching. In ACM on Multimedia Conference, pages 753–761, Oct 2017. [13] W. Wang, Y. Xu, J. Shen, and S.-C. Zhu. Attentive fashion grammar network for fashion landmark detection and clothing category classification. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [14] H. Wei-Lin and G. Kristen. Learning the latent "look": Unsupervised discovery of a style-coherent embedding from fashion images. In International Conference on Computer Vision (ICCV), 2017. [15] H. Wei-Lin and G. Kristen. Creating capsule wardrobes from fashion images. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [16] A.-H. Ziad, S. Rainer, and G. Kristen. Fashion forward: Forecasting visual style in fashion. In International Conference on Computer Vision (ICCV), 2017. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70776 | - |
dc.description.abstract | 本論文的目標是讓使用者能夠用照片的方式來進行試穿衣服,也就是使用者提供一張自己的照片,以及另一張他人的照片,便能夠將他人身上的衣服換穿到自己的照片上。其他虛擬試穿的方法都是針對人和衣服的正面照片來做處理,而我們的方法能夠處理正面、微轉向左側、及微轉向右側的方向。相較於其他方法,我們的方法更具有普遍性,而衣服的細節紋理也更為清晰。在用戶研究中,約90%的例子裡,比起其他方法,使用者比較喜歡我們的結果。 | zh_TW |
dc.description.abstract | The goal of this thesis is to let users try on clothes via photos: a user provides a photo of themselves and a photo of another person, and the clothes worn by the other person are transferred onto the user's photo. Other virtual try-on methods handle only front views of the person and the clothes, whereas our method also handles the front view as well as views turned slightly to the left or right. Compared with other methods, ours is more general and preserves clearer clothing details. In the user study, respondents preferred our results to those of the other methods in about 90% of the cases. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T04:38:05Z (GMT). No. of bitstreams: 1 ntu-107-R05922069-1.pdf: 5961719 bytes, checksum: 0cae683f0509a3c446a787d295895770 (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | 口試委員會審定書 i; 誌謝 ii; 摘要 iii; Abstract iv; 1 Introduction 1; 2 Related Work 2; 2.1 VITON 2; 2.2 CAGAN 3; 3 Methodology 4; 3.1 Data collection 4; 3.2 The proposed approach 5; 3.2.1 CAGAN 6; 3.2.2 Segmentation 7; 3.2.3 Transform 9; 3.2.4 Combination 10; 3.3 Experiments 13; 3.3.1 Implementation details 13; 3.3.2 Comparison of different loss 13; 4 Evaluation 15; 4.1 Qualitative Evaluation 15; 4.1.1 Matrix Visualization 15; 4.1.2 Comparison with VITON and CAGAN 16; 4.2 User study 17; 5 Conclusion 19; 5.1 Conclusion 19; 5.2 Discussion 20; Bibliography 21 | |
dc.language.iso | en | |
dc.title | 基於衣服轉換之深度虛擬試衣技術 | zh_TW |
dc.title | Deep Virtual Try-on with Clothes Transform | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | Master (碩士) | |
dc.contributor.oralexamcommittee | 朱宏國(Hung-Kuo Chu),朱威達(Wei-Ta Chu) | |
dc.subject.keyword | 卷積神經網路,虛擬試衣,感知損失, | zh_TW |
dc.subject.keyword | convolutional neural network,virtual try-on,perceptual loss, | en |
dc.relation.page | 23 | |
dc.identifier.doi | 10.6342/NTU201802721 | |
dc.rights.note | Authorized for a fee (有償授權) | |
dc.date.accepted | 2018-08-08 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
Appears in Collections: | Department of Computer Science and Information Engineering (資訊工程學系) |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-107-1.pdf (currently not authorized for public access) | 5.82 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.