Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97169
Full metadata record (DC field: value [language])

dc.contributor.advisor: 陳炳宇 [zh_TW]
dc.contributor.advisor: Bing-Yu Chen [en]
dc.contributor.author: 鈕愷夏 [zh_TW]
dc.contributor.author: Kai-Hsia Niu [en]
dc.date.accessioned: 2025-02-27T16:30:35Z
dc.date.available: 2025-02-28
dc.date.copyright: 2025-02-27
dc.date.issued: 2024
dc.date.submitted: 2024-11-11
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97169
dc.description.abstract: Human motion style transfer is an important technique for game designers and animators, as it enhances the expressiveness of characters in motion clips. However, current data-driven methods either require paired motion data or, because the model disentangles motion poorly, produce results that lack the desired content and style. This work therefore proposes a novel neutral motion disentanglement (NMD) model that produces high-quality human motion style transfer results, especially between heterogeneous motions. Our framework decomposes the content and style of an input motion through an adversarial training process that learns to reconstruct the neutral motion and the input motion simultaneously. In addition, we propose a novel clustering loss that further enhances the disentanglement of the content and style latent spaces. We evaluate our method through comprehensive experiments, ablation studies, and a user study. The results show that, particularly when transferring styles between heterogeneous motions, our method generates better results than existing motion style transfer methods. [zh_TW]
dc.description.abstract: Motion style transfer is a valuable technique for game designers and animators that enhances the expressiveness of animation clips. However, current data-driven methods either require paired motion data or, due to poor motion disentanglement, produce results that lack the desired content and style. This work presents a novel neutral motion disentanglement (NMD) model that produces high-quality motion style transfer results, especially between heterogeneous motions. Our framework decomposes the content and style of an input motion by learning to reconstruct the neutral motion and the input motion simultaneously through an adversarial training process. Additionally, we propose a novel clustering loss that further enhances the disentanglement of the content and style latent spaces. We evaluate our method through thorough experiments, ablation studies, and user studies. The results suggest that our method generates better transfer results than existing motion style transfer methods, particularly when transferring styles between heterogeneous motions. [en]
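
The abstract above describes the core design at a high level: separate content and style encoders, reconstruction of both the neutral motion and the stylized input, and a clustering loss that tightens the style latent space. As a rough illustration of that idea only, the following PyTorch sketch shows one way such a setup could be wired. The module choices, tensor shapes, the center-based clustering loss, and the availability of paired neutral clips are assumptions made for the sketch, not details taken from the thesis, and the adversarial discriminator mentioned in the abstract is omitted.

# Minimal sketch of a neutral-motion-disentanglement-style setup.
# All layer sizes, losses, and names are illustrative assumptions, not the
# thesis implementation; the adversarial discriminator is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NMDSketch(nn.Module):
    """Toy content/style disentanglement model over pose-vector sequences."""

    def __init__(self, pose_dim=63, content_dim=128, style_dim=32):
        super().__init__()
        # Hypothetical GRU-based content and style encoders.
        self.content_enc = nn.GRU(pose_dim, content_dim, batch_first=True)
        self.style_enc = nn.GRU(pose_dim, style_dim, batch_first=True)
        # Two decoders: one reconstructs the neutral motion from content alone,
        # one reconstructs the stylized input from content plus style.
        self.neutral_dec = nn.GRU(content_dim, pose_dim, batch_first=True)
        self.stylized_dec = nn.GRU(content_dim + style_dim, pose_dim, batch_first=True)

    def forward(self, motion):
        content, _ = self.content_enc(motion)             # (B, T, content_dim)
        _, style_h = self.style_enc(motion)               # (1, B, style_dim)
        style = style_h[-1]                               # (B, style_dim), time-pooled
        neutral_hat, _ = self.neutral_dec(content)
        style_seq = style.unsqueeze(1).expand(-1, motion.size(1), -1)
        stylized_hat, _ = self.stylized_dec(torch.cat([content, style_seq], dim=-1))
        return neutral_hat, stylized_hat, style


def clustering_loss(style_codes, labels):
    """Pull style codes toward the mean code of their own style label.

    A simple center-based stand-in for the clustering loss described in the
    abstract; the actual formulation in the thesis may differ.
    """
    loss = style_codes.new_zeros(())
    for lbl in labels.unique():
        members = style_codes[labels == lbl]
        center = members.mean(dim=0, keepdim=True).detach()
        loss = loss + F.mse_loss(members, center.expand_as(members))
    return loss / labels.unique().numel()


if __name__ == "__main__":
    model = NMDSketch()
    motion = torch.randn(8, 60, 63)           # batch of 60-frame pose sequences
    neutral_gt = torch.randn(8, 60, 63)       # paired neutral clip (assumed available)
    style_labels = torch.randint(0, 4, (8,))  # style class per clip
    neutral_hat, stylized_hat, style = model(motion)
    loss = (F.mse_loss(neutral_hat, neutral_gt)
            + F.mse_loss(stylized_hat, motion)
            + 0.1 * clustering_loss(style, style_labels))
    loss.backward()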
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-27T16:30:35Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2025-02-27T16:30:35Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Abstract (Chinese) i
Abstract iii
Table of Contents v
List of Tables vii
List of Figures ix
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Image Style Transfer 5
2.2 Motion Style Transfer 6
2.2.1 Behavior Disentanglement Models 6
Chapter 3 Method 9
3.1 Overview and Motion Data Representation 9
3.2 NMD: Neutral Motion Disentanglement Model 9
3.3 Training Pipeline and Losses 11
3.4 Neutral Motion Reconstruction Phase 11
3.5 Coupled Motion Reconstruction Phase 12
3.6 Clustering Loss 12
Chapter 4 Experiments and Evaluation 15
4.1 Implementation Details and Datasets 15
4.2 Data Preprocessing 15
4.3 Quantitative Analysis 16
4.4 User Study 17
4.4.1 Realism 18
4.4.2 Content Preservation and Style Transferability 18
4.5 Quantitative Analysis 19
4.6 Ablation Studies 21
4.6.1 Neutral Decoder 21
4.6.2 Clustering Loss 21
4.7 Latent Space Visualization 22
4.7.1 Content Encoding 22
4.7.2 Style Encoding 22
Chapter 5 Limitations and Future Work 23
5.1 Pose-Style Motion Disentanglement 23
5.2 Neutral Motion Disentanglement Quality 23
Chapter 6 Conclusion 25
References 27
Appendix A: Supplementary Figures 31
dc.language.iso: zh_TW
dc.title: 中性運動解纏於異質性人體運動風格轉換 [zh_TW]
dc.title: Heterogeneous Motion Style Transfer via Neutral Motion Disentanglement [en]
dc.type: Thesis
dc.date.schoolyear: 113-1
dc.description.degree: Master's
dc.contributor.oralexamcommittee: 林文杰;張鈞法 [zh_TW]
dc.contributor.oralexamcommittee: Wen-Chieh Lin; Chun-Fa Chang [en]
dc.subject.keyword: 運動風格轉換, 運動合成, 異質性運動, 深度學習, 對比式學習 [zh_TW]
dc.subject.keyword: motion style transfer, motion synthesis, heterogeneous motions, deep learning, contrastive learning [en]
dc.relation.page: 33
dc.identifier.doi: 10.6342/NTU202404058
dc.rights.note: Authorization granted (publicly available worldwide)
dc.date.accepted: 2024-11-12
dc.contributor.author-college: College of Management
dc.contributor.author-dept: Department of Information Management
dc.date.embargo-lift: 2025-02-28
Appears in Collections: Department of Information Management

Files in This Item:
File | Size | Format
ntu-113-1.pdf | 6.83 MB | Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
