Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/44876
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳炳宇(Bing-Yu Chen) | |
dc.contributor.author | Chia-Hung Wei | en |
dc.contributor.author | 魏嘉宏 | zh_TW |
dc.date.accessioned | 2021-06-15T03:57:06Z | - |
dc.date.available | 2010-06-20 | |
dc.date.copyright | 2010-06-20 | |
dc.date.issued | 2010 | |
dc.date.submitted | 2010-06-14 | |
dc.identifier.citation | [1] M. Brand and A. Hertzmann. "Style Machines." In Proc. SIGGRAPH 2000, pages 183-192, 2000.
[2] Y. Cao, W. C. Tien, P. Faloutsos, and F. Pighin. "Expressive Speech-Driven Facial Animation." ACM Transactions on Graphics, 24(4):1283-1302, October 2005.
[3] Y. Li, F. Yu, Y.-Q. Xu, E. Chang, and H.-Y. Shum. "Speech-Driven Cartoon Animation with Emotions." In Proc. ACM Multimedia 2001, pages 365-371, 2001.
[4] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović. "Style-Based Inverse Kinematics." ACM Transactions on Graphics (Proc. SIGGRAPH 2004), 2004.
[5] J. M. Wang, D. J. Fleet, and A. Hertzmann. "Gaussian Process Dynamical Models for Human Motion." IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 283-298, February 2008.
[6] J. M. Wang, D. J. Fleet, and A. Hertzmann. "Gaussian Process Dynamical Models." In Proc. NIPS 2005, pages 1441-1448, Vancouver, Canada, December 2005.
[7] J. Davis, M. Agrawala, E. Chuang, Z. Popović, and D. Salesin. "A Sketching Interface for Articulated Figure Animation." In ACM SIGGRAPH 2007.
[8] C. K. Liu, Z. Popović, and A. Hertzmann. "Learning Physics-Based Motion Style with Inverse Optimization." ACM Transactions on Graphics, 24(3) (Proc. SIGGRAPH 2005), 2005.
[9] E. de Aguiar, C. Theobalt, S. Thrun, and H.-P. Seidel. "Automatic Conversion of Mesh Animations into Skeleton-Based Animations." In Proc. EUROGRAPHICS 2008 (Computer Graphics Forum, 27(2)), 2008.
[10] Z. Deng and U. Neumann. "eFASE: Expressive Facial Animation Synthesis and Editing with Phoneme-Level Controls." In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
[11] C. Mao, S. F. Qin, and D. K. Wright. "A Sketch-Based Gesture Interface for Rough 3D Stick Figure Animation." In Proc. Eurographics Workshop on Sketch-Based Interfaces and Modeling, 2005.
[12] A. Treuille, Y. Lee, and Z. Popović. "Near-Optimal Character Animation with Continuous User Control." ACM Transactions on Graphics, 26(3) (Proc. SIGGRAPH 2007), 2007.
[13] Z. Popović and A. Witkin. "Physically Based Motion Transformation." In Computer Graphics (SIGGRAPH 1999), 1999.
[14] J. A. Russell. "A Circumplex Model of Affect." Journal of Personality and Social Psychology, 39:1161-1178, 1980.
[15] M. da Silva, Y. Abe, and J. Popović. "Simulation of Human Motion Data Using Short-Horizon Model-Predictive Control." Computer Graphics Forum, 27(2):371-380, 2008.
[16] M. Unuma and K. Anjyo. "Fourier Principles for Emotion-Based Human Figure Animation." In Proc. SIGGRAPH 1995, pages 91-96, 1995.
[17] J. Wang, S. Drucker, M. Agrawala, and M. Cohen. "The Cartoon Animation Filter." In Proc. SIGGRAPH 2006, pages 1169-1173, July 2006. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/44876 | - |
dc.description.abstract | In this thesis, we propose building a motion database with emotional components in order to give 3D virtual animated characters more lifelike body movements. Whether in the game industry or in commercial applications, 3D virtual characters are mostly animated by hand, but characters produced this way alone still lack realism: human motion varies with the performer's own emotional state. We therefore use 3D motion capture technology to extract the behavioral patterns in body movement that are influenced by emotional fluctuations, and build a complete database from them to improve the realism of today's animated characters.
Using motion capture, we record a professional actor's body movements, analyze these behaviors by direct observation, and also apply machine-learning computation to precisely define the behavioral parameters of emotional motion, from which we establish the relationships among the emotions. With this relational model we can derive a continuum of finer motion variations, whether across intensity levels within a single emotion or in transitions between different emotions. By identifying these emotional relationships, we can take an animation that is emotionless or carries only a single emotion and reconstruct from it a series of continuous animated transitions. The key difference from the original animation is that, using these relationships, we can transform it arbitrarily across different emotional spaces, quickly and conveniently increasing its realism; in industry, this also speeds up production and saves considerable cost. | zh_TW |
dc.description.abstract | In this thesis, we propose building a database of emotional motion sets so that a 3D virtual character can move more like a real human. Whether in entertainment or in business, 3D virtual characters are generally animated by hand, and characters created this way still look unrealistic, because human motion is shaped by human emotion. We therefore use motion capture technology to extract the principal components that describe the relationship between human motion and human emotion, and build a database from them to improve the realism of virtual-character behavior. Using motion capture, we record a professional actor's body language and, based on these data, define the parameters of an emotional-behavior model through both direct observation and machine-learning methods. We can then reproduce entirely new motion sets with different emotions at different strengths, transforming an emotionless motion set into a series of animations with strong emotion. The key difference from the original animation is that these principal components give the animation more variability in both the emotional domain and the original motion domain, which accelerates production time and reduces cost for the animation industry. | en |
dc.description.provenance | Made available in DSpace on 2021-06-15T03:57:06Z (GMT). No. of bitstreams: 1 ntu-99-R95944035-1.pdf: 2083787 bytes, checksum: cb0d90348aace9bf8e217449c5abe80c (MD5) Previous issue date: 2010 | en |
dc.description.tableofcontents | Acknowledgements I
Chinese Abstract III
English Abstract V
List of Figures IX
List of Tables XII
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 3
1.3 Research Process 6
1.4 Contributions 9
1.5 Thesis Organization 10
Chapter 2 Related Work 12
Chapter 3 Motion Capture 22
3.1 Data Collection 22
3.2 Data Processing 26
Chapter 4 Algorithm Analysis and Motion Synthesis 31
4.1 User Observation 31
4.2 Finding Emotion Factors 35
4.3 Emotional Motion Synthesis 54
Chapter 5 Motion Principle Induction 64
5.1 Intuitive Analysis 64
5.2 Laban Movement Quality Analysis 68
Chapter 6 Emotional Motion Reconstruction 81
Chapter 7 Result Verification 87
Chapter 8 Conclusions and Future Work 90
References 94
Appendix 98 | |
dc.language.iso | zh-TW | |
dc.title | 基於情緒分析的動作合成法 | zh_TW |
dc.title | Towards a Generative Model of Emotional Motion | en |
dc.type | Thesis | |
dc.date.schoolyear | 98-2 | |
dc.description.degree | Master's (碩士) | |
dc.contributor.oralexamcommittee | 梁容輝,陳文進,朱靜美 | |
dc.subject.keyword | 動作捕捉技術,機器學習,情緒動作, | zh_TW |
dc.subject.keyword | Motion Capture, Machine Learning, Emotional Motion | en |
dc.relation.page | 106 | |
dc.rights.note | Paid authorization (有償授權) | |
dc.date.accepted | 2010-06-17 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Networking and Multimedia | zh_TW |
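As an aside, the synthesis idea both abstracts describe, moving a motion between emotional styles by varying an intensity parameter, can be sketched as simple linear blending in a motion-feature space. This is purely illustrative and not taken from the thesis itself; the feature values and emotion labels below are made up:

```python
def blend(neutral, emotional, strength):
    """Interpolate motion features from a neutral style toward an emotional
    style. strength = 0.0 reproduces the neutral motion, 1.0 gives the full
    emotional style, and intermediate values yield graded intensities."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must lie in [0, 1]")
    return [n + strength * (e - n) for n, e in zip(neutral, emotional)]

# Hypothetical per-frame features (e.g. joint-angle offsets) for a
# neutral walk and an exaggerated "happy" walk.
neutral_walk = [0.0, 0.2, -0.1]
happy_walk = [0.4, 0.6, 0.3]

print(blend(neutral_walk, happy_walk, 0.5))  # halfway between the styles
```

Where the thesis's actual contribution would lie is in replacing this straight-line blend with trajectories learned from captured data, e.g. the principal components and emotion relationships the abstracts mention.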
Appears in collections: | Graduate Institute of Networking and Multimedia |
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-99-1.pdf (currently not authorized for public access) | 2.03 MB | Adobe PDF |
Unless their copyright terms are otherwise specified, all items in the system are protected by copyright, with all rights reserved.