Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/43554
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 傅立成 | |
dc.contributor.author | En-Wei Huang | en |
dc.contributor.author | 黃恩暐 | zh_TW |
dc.date.accessioned | 2021-06-15T02:23:19Z | - |
dc.date.available | 2011-08-23 | |
dc.date.copyright | 2011-08-23 | |
dc.date.issued | 2011 | |
dc.date.submitted | 2011-08-17 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/43554 | - |
dc.description.abstract | Interaction between human motion and computer animation is finding more and more applications in video games, performance art, commercial demonstrations, and museum exhibitions. Interaction driven by human motion has the advantages of being intuitive and richly expressive and of promoting a sense of immersion. However, for an interactive application to be highly usable, several challenges in motion recognition and interaction design must be handled properly, including fast response, accurate recognition, and high-quality animation.
We propose an animation system driven by the user's performed motions to realize interaction between the user and computer animation. During online operation, a previously learned control mapping maps human poses to a set of control parameters of the animation system to drive the animation. In this thesis we model the control mapping with locally weighted projection regression (LWPR), which can handle high-dimensional input data and is therefore well suited to human motion. Moreover, an LWPR model can be built incrementally, which lets us integrate its construction with our proposed active learning mechanism, gradually improving the current control mapping by selecting useful training samples. During offline operation, we propose an active learning procedure that selects useful training samples from a set of continuous training sequences to improve the current control mapping. A single continuous training sequence contains several control motions and the transitions between them, so it closely resembles the user's actual motions during online operation and helps build an effective control mapping. We demonstrate the proposed approach with a character animation system driven by the user's motions; experimental results show that it can generate responsive, high-quality animation from human motion. The main contribution of this thesis is an effective and fast way to build fine-grained and versatile control of computer animation through human motion, which we expect to have a new impact on fields such as video games and performance art. | zh_TW |
dc.description.abstract | Interaction between human motion and computer animation is finding more and more applications in video games, performance art, commercial demonstrations, and museum exhibitions. The high degrees of freedom of human motion offer several advantages for manipulating computer animation, including intuitiveness, expressiveness, and an improved sense of immersion. However, such a system must meet several requirements, including robust human motion recognition, quick response, and high-quality animation.
We propose a performance-driven animation system for interacting with computer animation through human motion. During online operation, a learned control mapping maps a human pose to the parameters of an animation model to drive the animation. The control mapping is modeled by locally weighted projection regression (LWPR), which uses a set of local linear models to approximate a nonlinear function. LWPR handles high-dimensional data well and can be learned incrementally. The first property is crucial for a performance-driven system because the representation of a human pose is high-dimensional; the second lets us build an active learning approach on top of it that selects useful training samples to update an existing mapping. During offline operation, we propose an active learning approach that automatically selects samples and subsequences from a set of continuous training sequences to improve an existing mapping. A continuous training sequence, which contains multiple control motions and transition motions, resembles actual online operation and helps create an effective mapping. Experiments on performance-driven character animation show that the proposed approach generates responsive, high-quality animation from human motion. Our work will contribute to applications such as performance art and video games by providing versatile and accurate interaction. | en |
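The two learning components described in the abstract (an incrementally trainable control mapping and active selection of training samples) can be made concrete with a small sketch. The following Python code, assuming only NumPy, is a minimal illustration in the spirit of LWPR, not the thesis's actual implementation: full LWPR additionally projects inputs onto low-dimensional directions via partial least squares and adapts each kernel's shape online. All names and thresholds here (`LocalModel`, `LocallyWeightedMap`, `active_update`, `gen_thresh`, `err_thresh`) are illustrative assumptions.

```python
import numpy as np


class LocalModel:
    """One receptive field: a Gaussian weighting kernel plus a linear map."""

    def __init__(self, center, in_dim, out_dim, width=2.0):
        self.center = center
        self.width = width
        # Regularized sufficient statistics for incremental ridge regression.
        self.A = np.eye(in_dim + 1) * 1e-3
        self.b = np.zeros((in_dim + 1, out_dim))

    def activation(self, x):
        d2 = np.sum((x - self.center) ** 2)
        return np.exp(-0.5 * d2 / self.width ** 2)

    def update(self, x, y, w):
        xh = np.append(x, 1.0)              # homogeneous input for a bias term
        self.A += w * np.outer(xh, xh)
        self.b += w * np.outer(xh, y)

    def predict(self, x):
        xh = np.append(x, 1.0)
        beta = np.linalg.solve(self.A, self.b)
        return xh @ beta


class LocallyWeightedMap:
    """Pose -> control parameters via a growing set of local linear models."""

    def __init__(self, in_dim, out_dim, gen_thresh=0.3):
        self.in_dim, self.out_dim = in_dim, out_dim
        self.gen_thresh = gen_thresh        # below this, no field covers x
        self.models = []

    def partial_fit(self, x, y):
        acts = [m.activation(x) for m in self.models]
        if not acts or max(acts) < self.gen_thresh:
            # Allocate a new receptive field centered on the novel sample.
            self.models.append(LocalModel(x.copy(), self.in_dim, self.out_dim))
            acts.append(1.0)
        for m, w in zip(self.models, acts):
            if w > 1e-3:                    # update only nearby local models
                m.update(x, y, w)

    def predict(self, x):
        if not self.models:
            return np.zeros(self.out_dim)
        acts = np.array([m.activation(x) for m in self.models])
        if acts.sum() < 1e-12:
            return np.zeros(self.out_dim)
        preds = np.array([m.predict(x) for m in self.models])
        return (acts[:, None] * preds).sum(axis=0) / acts.sum()


def active_update(mapping, poses, params, err_thresh=0.1):
    """Scan a continuous training sequence and learn only from the samples
    the current mapping predicts poorly, i.e. the informative ones."""
    picked = 0
    for x, y in zip(poses, params):
        if np.linalg.norm(mapping.predict(x) - y) > err_thresh:
            mapping.partial_fit(x, y)
            picked += 1
    return picked


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mapping = LocallyWeightedMap(in_dim=60, out_dim=4)  # e.g. 60-D pose, 4 params
    poses = rng.normal(size=(200, 60))                  # stand-in training sequence
    params = rng.normal(size=(200, 4))
    print("samples selected:", active_update(mapping, poses, params))
```

The sketch keeps only the structure the abstract relies on: a weighted mixture of local linear models that can be updated sample by sample, and an error-driven selection pass over a continuous sequence so that redundant frames are skipped while novel control and transition motions refine the mapping.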
dc.description.provenance | Made available in DSpace on 2021-06-15T02:23:19Z (GMT). No. of bitstreams: 1 ntu-100-D91922002-1.pdf: 2285066 bytes, checksum: 4d64005f7ec3365e3b56578ac22c3706 (MD5) Previous issue date: 2011 | en |
dc.description.tableofcontents | Abstract i
Contents iv
List of Figures vi
List of Tables vii
1 Introduction 1
1.1 Applications 3
1.2 Challenges 4
1.2.1 Human Motion Variation 4
1.2.2 Trade-Off between Responsiveness and Animation Quality 4
1.3 System Overview 5
1.4 Contribution 9
1.5 Thesis Organization 10
2 Related Work 11
2.1 Vision-Based Human Motion Analysis 11
2.1.1 Pose Estimation 11
2.1.2 Gesture Recognition 13
2.1.3 Motion Tracking 14
2.1.4 Manifold-Based Analysis 15
2.1.5 Unsupervised Learning of Motion Model 15
2.2 Interactive Computer Animation 16
2.2.1 Interactive Control of Computer Animation by Vision 16
2.2.2 Character Animation 17
2.2.3 Interactive Control of Animated Character 19
2.2.4 Physics-Based Animation 21
2.3 Active Learning 22
3 Problem Formulation and Preliminary 23
3.1 Problem Formulation 23
3.2 Locality Preserving Projection 24
3.3 Particle Filter 26
3.4 Locally Weighted Projection Regression 28
4 Unsupervised Modeling of Human Motions 31
4.1 Visual Representation 32
4.2 Unsupervised Learning of Motion Types 33
4.2.1 Modeling the Complex Motion Manifold 35
4.2.2 Learning Motion Subgraphs 37
4.3 Motion Tracking and Recognition 40
4.3.1 The Particle Representation 41
4.3.2 The Dynamical Model and Sample Measurement of the Particle Filter 42
4.3.3 Tracking and Recognition of Motions 43
4.4 Summary 45
5 Active Learning for Performance-Driven Animation 46
5.1 Overview of the Interactive Animation System 49
5.1.1 Input Signal 51
5.1.2 Animation Sequence and Animation Type 51
5.1.3 Control Mapping 51
5.1.4 Active Learning for the Control Mapping 54
5.1.5 Animation Control 54
5.2 Learning the Control Mapping 54
5.2.1 Build the Generic Control Mapping 56
5.2.2 Update the Control Mapping by Active Learning 57
5.3 Controlling Character Animation 61
5.3.1 Character Animation Model 61
5.3.2 Determining Current Animation Type 62
5.3.3 Driving the Animation 64
5.4 Summary 67
6 Experiments 69
6.1 Unsupervised Motion Modeling and Recognition 69
6.1.1 Four-Beat Conducting Motions 69
6.1.2 Boxing Motions 73
6.1.3 Discussion 78
6.2 Active Learning for Performance-Driven Animation 79
6.2.1 Results of Character Animation 79
6.2.2 Discussion 88
7 Conclusion and Future Work 91
Bibliography 93 | |
dc.language.iso | en | |
dc.title | 由使用者主導之人體動作與電腦動畫的互動 | zh_TW |
dc.title | User-Guided Interaction between Human Motion and Computer Animation | en |
dc.type | Thesis | |
dc.date.schoolyear | 99-2 | |
dc.description.degree | Doctor of Philosophy | |
dc.contributor.oralexamcommittee | 賴尚宏,歐陽明,陳祝嵩,李蔡彥,連震杰,范欽雄,林奕成 | |
dc.subject.keyword | 用表現驅動之動畫,動作粹取,控制映射,主動式學習, | zh_TW |
dc.subject.keyword | Performance-Driven Animation, Unsupervised Motion Extraction, Control Mapping, Active Learning | en |
dc.relation.page | 101 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2011-08-17 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
Appears in Collections: | 資訊工程學系
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-100-1.pdf (currently not authorized for public access) | 2.23 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.