Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/32605

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 傅立成(Li-Chen Fu) | |
| dc.contributor.author | Chun-chi Huang | en |
| dc.contributor.author | 黃俊棋 | zh_TW |
| dc.date.accessioned | 2021-06-13T04:12:12Z | - |
| dc.date.available | 2006-07-31 | |
| dc.date.copyright | 2006-07-31 | |
| dc.date.issued | 2006 | |
| dc.date.submitted | 2006-07-24 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/32605 | - |
| dc.description.abstract | This thesis proposes a performance animation system driven by upper-body motion in a restricted workspace. The user's upper-body motion is obtained from a markerless vision-based arm-tracking system. Markerless tracking is typically noisy, so a filter is first applied to remove the noise. After noise reduction, the user's upper-body motion can directly control the avatar's upper body, or be recognized as a command that triggers predefined motions applied to the avatar.
In this system, control of the avatar is divided into an upper-body part and a lower-body part. The upper-body motion can be driven directly by the user's denoised motion, by prerecorded motions triggered through motion recognition, or by a blend of the two; the lower-body motion always consists of prerecorded motions triggered by motion recognition. Because the combined upper- and lower-body motion may violate physical laws, we propose a two-level motion-integration framework: at each level, a Kalman filter smooths the motion, which is then revised according to physical constraints. After this processing, the combined motion is both smooth and physically plausible. | zh_TW |
| dc.description.abstract | This thesis proposes a performance animation system controlled by upper body motion under a restricted workspace environment. The upper body motion is obtained from a markerless vision-based arm tracking system. Since 3D pose information derived from vision is often coarse and inexact, we apply a filter to reduce noise. After noise reduction, the motion signal can be used to directly control the animation of the avatar's upper body, or be recognized as a motion command that assigns a motion to the avatar.
The motion control of our system is divided into two parts: the upper body and the lower body. The upper body motions of the avatar can be driven by the control signal from the filtered vision input, by motion capture data selected by command signals from motion recognition, or by a mixture of the two. The lower body motions are motion capture data automatically selected by motion recognition of the filtered vision input. The combined motion of the upper and lower body may not be physically plausible. To solve this problem, we propose a two-level motion integration framework. At each level, we use Kalman filters to smooth the combined motion and then revise it by imposing constraints. The framework ensures that the combined motions are not only smooth but also physically plausible. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-13T04:12:12Z (GMT). No. of bitstreams: 1 ntu-95-R93922017-1.pdf: 2140481 bytes, checksum: 8992d16c65d6a5b30a94e145fa9d5815 (MD5) Previous issue date: 2006 | en |
| dc.description.tableofcontents | List of Figures iv
List of Tables v
1 Introduction 1
1.1 Motivation 1
1.2 System Overview 2
2 Related Works 4
2.1 Animation 4
2.2 Vision 6
3 Motion Trajectory Generation 10
3.1 Overview 10
3.2 Kalman Filter and Noise Reduction 11
3.2.1 Kalman Filter 11
3.2.2 Noise Reduction 13
3.3 Hidden Markov Model 17
3.3.1 Overview 17
3.3.2 Learning 19
3.4 Motion Recognition 21
3.4.1 Static Motion Recognition 21
3.4.2 Dynamic Motion Recognition 23
4 Motion Integration 25
4.1 Overview 25
4.2 Motion Constraints 25
4.2.1 Balance Constraints 26
4.2.2 Momentum Constraints 27
4.2.3 Joint Constraints 27
4.2.4 Workspace Constraints 27
4.3 Two Level Motion Integration 28
4.3.1 Human ZMP/CM 29
4.3.2 Feature Point Position 33
5 Experiments 37
5.1 Environment Setup 37
5.2 Scenario 38
5.2.1 Noise reduction 38
5.2.2 Static motion recognition 40
5.2.3 Dynamic motion recognition 40
5.2.4 Combination of vision data and motion capture data 40
6 Conclusion 42
Reference 44 | |
| dc.language.iso | en | |
| dc.subject | 角色動畫 (character animation) | zh_TW |
| dc.subject | Performance animation | en |
| dc.title | 以視覺為基礎的角色動畫系統 | zh_TW |
| dc.title | Vision-based Interactive 3D Character Animation System | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 94-2 | |
| dc.description.degree | 碩士 (Master) | |
| dc.contributor.oralexamcommittee | 歐陽明(Ming Ouhyoung),洪一平(Yi-Ping Hung),王傑智(Chieh-Chih Wang) | |
| dc.subject.keyword | 角色動畫 (character animation), | zh_TW |
| dc.subject.keyword | Performance animation, | en |
| dc.relation.page | 49 | |
| dc.rights.note | 有償授權 (paid authorization) | |
| dc.date.accepted | 2006-07-26 | |
| dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) | zh_TW |
| Appears in collections: | 資訊工程學系 (Department of Computer Science and Information Engineering) | |
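The abstract above describes filtering the noisy markerless tracking signal before it drives the avatar, and Section 3.2 of the table of contents names the Kalman filter as the tool. Below is a minimal sketch of such a smoother for one coordinate of a tracked joint, assuming a constant-velocity state model; the frame rate and noise covariances are illustrative placeholders, not the thesis's actual parameters.

```python
# Minimal constant-velocity Kalman filter for smoothing one noisy joint
# coordinate. State model and noise parameters are assumptions for
# illustration; the thesis's actual values are not given in this record.
import numpy as np

class KalmanSmoother1D:
    def __init__(self, dt=1.0 / 30.0, process_var=1e-3, meas_var=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition: [pos, vel]
        self.H = np.array([[1.0, 0.0]])             # we observe position only
        self.Q = process_var * np.eye(2)            # process noise covariance
        self.R = np.array([[meas_var]])             # measurement noise covariance
        self.x = np.zeros(2)                        # state estimate [pos, vel]
        self.P = np.eye(2)                          # estimate covariance

    def step(self, z):
        """Feed one noisy position measurement z; return the smoothed position."""
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update.
        y = z - (self.H @ self.x)[0]                    # innovation (scalar)
        S = (self.H @ self.P @ self.H.T + self.R)[0, 0] # innovation variance
        K = (self.P @ self.H.T / S).ravel()             # Kalman gain, shape (2,)
        self.x = self.x + K * y
        self.P = self.P - np.outer(K, self.H @ self.P)  # (I - K H) P
        return self.x[0]

# Usage: run one smoother per coordinate of each tracked joint.
smoother = KalmanSmoother1D()
noisy = np.sin(np.linspace(0.0, 2.0 * np.pi, 60)) + 0.1 * np.random.randn(60)
smoothed = [smoother.step(z) for z in noisy]
```

In practice, one such filter would run per coordinate of each tracked joint (or a single higher-dimensional filter per joint), with the covariances tuned to the tracker's measured noise.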
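Sections 3.3 and 3.4 of the table of contents indicate that dynamic motions are recognized with hidden Markov models. The sketch below shows a common arrangement under assumed discrete observations (e.g., vector-quantized arm poses): one HMM per motion command, scored with the scaled forward algorithm, and the highest-likelihood model wins. The thesis's actual features, model topology, and training procedure are not given in this record.

```python
# HMM-based motion recognition sketch: score an observation sequence against
# one model per motion class and pick the best. Discrete symbols are assumed.
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | HMM).
    pi: (N,) initial state probabilities; A: (N, N) transition matrix;
    B: (N, M) per-state emission probabilities; obs: list of symbol indices."""
    alpha = pi * B[:, obs[0]]              # alpha_1(i) = pi_i * b_i(o_1)
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()                   # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # alpha_t(j) = sum_i alpha_(t-1)(i) a_ij b_j(o_t)
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

def recognize(models, obs):
    """models: dict of motion name -> (pi, A, B). Returns the best-scoring name."""
    return max(models, key=lambda name: forward_log_likelihood(*models[name], obs))

# Toy usage with two randomly initialized 2-state models over a 3-symbol
# pose alphabet (placeholders; real models would be trained on examples).
rng = np.random.default_rng(0)
def random_model(n_states=2, n_symbols=3):
    A = rng.random((n_states, n_states)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(axis=1, keepdims=True)
    return np.full(n_states, 1.0 / n_states), A, B

models = {"wave": random_model(), "point": random_model()}
print(recognize(models, [0, 2, 2, 1, 0]))
```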
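Sections 4.2.1 and 4.3.1 of the table of contents refer to balance constraints expressed through the human zero-moment point (ZMP) and center of mass (CM). As a rough illustration of how such a balance check might look, the sketch below computes a simplified multi-segment ZMP (segment angular-momentum terms omitted) and tests whether it stays inside a rectangular support polygon; both the simplification and the support bounds are assumptions, not the thesis's method.

```python
# Simplified ZMP balance check from per-segment centre-of-mass trajectories.
# Angular-momentum terms are omitted; support bounds are made-up placeholders.
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2), z axis pointing up

def zmp_xy(masses, com, com_acc):
    """masses: (S,) segment masses; com: (S, 3) segment CM positions;
    com_acc: (S, 3) segment CM accelerations. Returns the ground-plane ZMP."""
    denom = np.sum(masses * (com_acc[:, 2] + G))
    x = np.sum(masses * ((com_acc[:, 2] + G) * com[:, 0] - com_acc[:, 0] * com[:, 2]))
    y = np.sum(masses * ((com_acc[:, 2] + G) * com[:, 1] - com_acc[:, 1] * com[:, 2]))
    return np.array([x, y]) / denom

def is_balanced(zmp, lo=(-0.10, -0.05), hi=(0.25, 0.05)):
    """True if the ZMP lies inside an axis-aligned support rectangle (metres)."""
    return bool(np.all(zmp >= lo) and np.all(zmp <= hi))
```

A frame whose ZMP leaves the support region would then be revised toward the constraint, in the spirit of the two-level integration framework described in the abstract.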
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-95-1.pdf (not authorized for public access) | 2.09 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise specified in their copyright terms.
