NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/32605

Full metadata record

DC Field: Value [Language]
dc.contributor.advisor: 傅立成 (Li-Chen Fu)
dc.contributor.author: Chun-chi Huang [en]
dc.contributor.author: 黃俊棋 [zh_TW]
dc.date.accessioned: 2021-06-13T04:12:12Z
dc.date.available: 2006-07-31
dc.date.copyright: 2006-07-31
dc.date.issued: 2006
dc.date.submitted: 2006-07-24
dc.identifier.citation[1] Norman I. Badler, Michael J. Hollick, and John P. Granieri. Real-time control
of a virtual human using minimal sensors. Presence, 2(1):82-86, 1993.
[2] P.J. Besl and N.D. McKay. A method for registration of 3-d shapes. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.
[3] Matthew Brand. Shadow puppetry. In ICCV '99: Proceedings of the Interna-
tional Conference on Computer Vision-Volume 2, page 1237, Washington, DC,
USA, 1999. IEEE Computer Society.
[4] German K. M. Cheung, Simon Baker, and Takeo Kanade. Shape-from-silhouette
of articulated objects and its use for human body kinematics estimation and
motion capture. In CVPR (1), pages 77-84, 2003.
[5] T. Darrell, G. Gordon, M. Harville, and J. Woodfill. Integrated person tracking
using stereo, color, and pattern detection. cvpr, 00:601, 1998.
[6] Quentin Delamarre and Olivier Faugeras. 3d articulated models and multi-view
tracking with silhouettes. In ICCV '99: Proceedings of the International Con-
ference on Computer Vision-Volume 2, page 716, Washington, DC, USA, 1999.
IEEE Computer Society.
[7] Quentin Delamarre and Olivier Faugeras. 3d articulated models and multiview
tracking with physical forces. Comput. Vis. Image Underst., 81(3):328-357, 2001.
[8] D. Demirdjian. Enforcing constraints for human body tracking. cvprw, 09:102,
2003.
[9] D. Demirdjian, T. Ko, and T. Darrell. Constraining human body tracking. In
ICCV '03: Proceedings of the Ninth IEEE International Conference on Computer
Vision, page 1071, Washington, DC, USA, 2003. IEEE Computer Society.
[10] David Demirdjian. Combining geometric- and view-based approaches for artic-
ulated pose estimation. In ECCV (3), pages 183-194, 2004.
[11] David Demirdjian, Leonid Taycher, Gregory Shakhnarovich, Kristen Grauman,
and Trevor Darrell. Avoiding the 'streetlight efiect': Tracking by exploring
likelihood modes. iccv, 1:357-364, 2005.
[12] Mira Dontcheva, Gary Yngve, and Zoran Popovic. Layered acting for character
animation. ACM Trans. Graph., 22(3):409-416, 2003.
[13] William T. Freeman, David B. Anderson, Paul A. Beardsley, Chris N. Dodge,
Michal Roth, Craig D. Weissman, William S. Yerazunis, Hiroshi Kage, Kazuo
Kyuma, Yasunari Miyake, and Ken ichi Tanaka. Computer vision for interactive
computer graphics. IEEE Comput. Graph. Appl., 18(3):42-53, 1998.
[14] Keith Grochow, Steven L. Martin, Aaron Hertzmann, and Zoran Popovic. Style-
based inverse kinematics. ACM Trans. Graph., 23(3):522-531, 2004.
[15] W. van der Mark H. Sunyoto and D. M. Gavrila. A comparative study of fast
dense stereo vision algorithms. In Intelligent Vehicles Symposium, pages 319-324.
IEEE Computer Society, 2004.
[16] Nicholas R. Howe, Michael E. Leventon, and William T. Freeman. Bayesian
reconstruction of 3d human motion from single-camera video. In NIPS, pages
820-826, 1999.
[17] En-Wei Huang and Li-Chen Fu. Real-time arm tracking system using example-
based matching and local optimization. In IEEE International Conference on
Systems, Man, and Cybernetics, 2006. (to appear).
[18] Nebojsa Jojic, Thomas Huang, Barry Brumitt, Brian Meyers, and Steve Harris.
Detection and estimation of pointing gestures in dense disparity maps. fg, 00:468,
2000.
[19] Nebojsa Jojic, Thomas S. Huang, and Matthew Turk. Tracking self-occluding
articulated objects in dense disparity maps. iccv, 01:123, 1999.
[20] Rick Kjeldsen and Jacob Hartman. Design issues for vision-based computer
interaction systems. In PUI '01: Proceedings of the 2001 workshop on Perceptive
user interfaces, pages 1-8, New York, NY, USA, 2001. ACM Press.
[21] Neil D. Lawrence. Gaussian process latent variable models for visualisation
of high dimensional data. In Sebastian Thrun, Lawrence Saul, and Bernhard
Schfiolkopf, editors, Advances in Neural Information Processing Systems 16. MIT
Press, Cambridge, MA, 2004.
[22] Jehee Lee, Jinxiang Chai, Paul S. A. Reitsma, Jessica K. Hodgins, and Nancy S.
Pollard. Interactive control of avatars animated with human motion data. In
SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graph-
ics and interactive techniques, pages 491-500, New York, NY, USA, 2002. ACM
Press.
[23] Michael H. Lin. Tracking articulated objects in real-time range image sequences.
iccv, 01:648, 1999.
[24] Thomas B. Moeslund and Erik Granum. Modelling and estimating the pose of
a human arm. Mach. Vision Appl., 14(4):237-247, 2003.
[25] Sageev Oore, Demetri Terzopoulos, and Geofirey E. Hinton. A desktop input
device and interface for interactive 3d character animation. In Graphics Interface,
pages 133-140, 2002.
[26] Ralf Plaenkers and Pascal Fua. Model-based silhouette extraction for accurate
people tracking. In ECCV '02: Proceedings of the 7th European Conference on
Computer Vision-Part II, pages 325-339, London, UK, 2002. Springer-Verlag.
[27] Marco Porta. Vision-based user interfaces: methods and applications. Int. J.
Hum.-Comput. Stud., 57(1):27-73, 2002.
[28] Lawrence R. Rabiner. A tutorial on hidden markov models and selected appli-
cations in speech recognition. pages 267-296, 1990.
[29] Liu Ren, Gregory Shakhnarovich, Jessica K. Hodgins, Hanspeter Pfister, and
Paul Viola. Learning silhouette features for control of human motion. ACM
Trans. Graph., 24(4):1303-1331, 2005.
[30] S. Rusinkiewicz and M. Levoy. Eficient variants of the icp algorithm. 3dim,
00:145, 2001.
[31] Sudhanshu Kumar Semwal, Ron R. Hightower, and Sharon A. Stansfield. Map-
ping algorithms for real-time control of an avatar using eight sensors. Presence,
7(1):1-21, 1998.
[32] Hyun Joon Shin, Lucas Kovar, and Michael Gleicher. Physical touch-up of hu-
man motions. In PG '03: Proceedings of the 11th Pacific Conference on Com-
puter Graphics and Applications, page 194, Washington, DC, USA, 2003. IEEE
Computer Society.
[33] Hyun Joon Shin, Jehee Lee, Sung Yong Shin, and Michael Gleicher. Computer
puppetry: An importance-based approach. ACM Trans. Graph., 20(2):67-94,
2001.
[34] Hedvig Sidenbladh, Michael J. Black, and Leonid Sigal. Implicit probabilistic
models of human motion for synthesis and tracking. In ECCV '02: Proceed-
ings of the 7th European Conference on Computer Vision-Part I, pages 784-800,
London, UK, 2002. Springer-Verlag.
[35] Seyoon Tak and Hyeong-Seok Ko. A physically-based motion retargeting filter.
ACM Trans. Graph., 24(1):98-117, 2005.
[36] M. Vukobratovic and B. Borovac. Zero-moment point: Thirty five years of its
life. International Journal of Humanoid Robotics, 1(1):157-173, 2004.
[37] Greg Welch and Gary Bishop. An introduction to the kalman filter. Technical
report, Chapel Hill, NC, USA, 1995.
[38] Andrew Wilson and Nuria Oliver. Gwindows: robust stereo vision for gesture-
based control of windows. In ICMI '03: Proceedings of the 5th international
conference on Multimodal interfaces, pages 211-218, New York, NY, USA, 2003.
ACM Press.
[39] D. A. Winter. Biomechanics and Motor Control of Human Movement, Second
edition. John Wiley and Sons, 1990.
[40] KangKang Yin and Dinesh K. Pai. Footsee: an interactive animation system. In
SCA '03: Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium
on Computer animation, pages 329-338, Aire-la-Ville, Switzerland, Switzerland,
2003. Eurographics Association.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/32605
dc.description.abstract: This thesis proposes a performance animation system driven by upper-body motion in a restricted workspace. The user's upper-body motion is obtained from a markerless vision-based arm tracking system. Because markerless tracking is typically noisy, a filter is first applied to remove the noise. After noise reduction, the user's upper-body motion can either directly control the avatar's upper body or, through motion recognition, trigger predefined motions that are applied to the avatar.
In this system, control of the avatar is divided into an upper-body part and a lower-body part. The upper-body motion can be driven directly by the user's denoised motion, by pre-recorded motions triggered through motion recognition, or by a blend of the two. The lower-body motion always consists of pre-recorded motions triggered by motion recognition. Because the combined upper- and lower-body motion may violate physical laws, we propose a two-level motion integration framework to resolve this problem: at each level, a Kalman filter smooths the motion, which is then revised according to physical constraints. After processing by this framework, the combined motion is both smooth and physically plausible. [zh_TW]
dc.description.abstract: This thesis proposes a performance animation system controlled by upper-body motion in a restricted workspace environment. The upper-body motion is obtained from a markerless vision-based arm tracking system. Since 3D pose information derived from vision is often coarse and inexact, we apply a filter to reduce noise. After noise reduction, the motion signal can be used to directly control the animation of the avatar's upper body, or it can be recognized as a motion command that assigns a motion to the avatar.
The motion control of our system is divided into two parts: the upper body and the lower body. The upper-body motions of the avatar can be controlled by the control signal from the filtered vision input, by motion capture data selected by command signals from motion recognition, or by a mixture of the two. The lower-body motions are motion capture data automatically selected by motion recognition applied to the filtered vision input. Because the combined motion of the upper and lower body may not be physically plausible, we propose a two-level motion integration framework: in each level, we use Kalman filters to smooth the combined motion and then revise it by imposing constraints. The combined motions are processed by the framework to ensure that they are not only smooth but also physically plausible. [en]
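
The noise-reduction step described in the abstract (Section 3.2 of the thesis) can be illustrated with a minimal sketch: a constant-velocity Kalman filter applied independently to each tracked joint angle. The state layout, frame rate, and noise covariances below are assumptions chosen for illustration, not the parameters used in the thesis.

    import numpy as np

    # Constant-velocity Kalman filter over one joint-angle trajectory.
    # State is [angle, angular_velocity]; the markerless tracker supplies
    # noisy angle measurements. All parameter values are assumptions.
    DT = 1.0 / 30.0                      # assumed camera frame rate (30 fps)
    F = np.array([[1.0, DT],             # transition: angle += velocity * dt
                  [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])           # we observe the angle only
    Q = 1e-4 * np.eye(2)                 # process noise covariance (assumed)
    R = np.array([[1e-2]])               # measurement noise covariance (assumed)

    def kalman_smooth(measurements):
        """Filter a sequence of noisy joint angles; return smoothed angles."""
        x = np.array([[measurements[0]], [0.0]])  # initial state
        P = np.eye(2)                             # initial covariance
        smoothed = []
        for z in measurements:
            x = F @ x                             # predict
            P = F @ P @ F.T + Q
            y = np.array([[z]]) - H @ x           # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
            x = x + K @ y                         # update
            P = (np.eye(2) - K @ H) @ P
            smoothed.append(float(x[0, 0]))
        return smoothed

    if __name__ == "__main__":
        t = np.linspace(0, 2 * np.pi, 60)
        noisy = np.sin(t) + 0.1 * np.random.randn(60)  # synthetic noisy angle
        print(kalman_smooth(noisy.tolist())[:5])

In practice one such filter per degree of freedom (or a joint filter over the full pose vector) would run online at frame rate, which matches the real-time behavior the abstract implies.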
dc.description.provenance: Made available in DSpace on 2021-06-13T04:12:12Z (GMT). No. of bitstreams: 1. ntu-95-R93922017-1.pdf: 2140481 bytes, checksum: 8992d16c65d6a5b30a94e145fa9d5815 (MD5). Previous issue date: 2006. [en]
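
The motion-recognition step mentioned in both abstracts (Sections 3.3-3.4 of the thesis use hidden Markov models) can be sketched as scoring an observation sequence against several pre-trained HMMs, one per motion command, and picking the most likely model. The two toy models, their parameters, and the quantized observation symbols below are invented for illustration; the thesis trains its HMMs on real arm trajectories.

    import numpy as np

    def forward_log_likelihood(pi, A, B, obs):
        """log P(obs | HMM) via the scaled forward algorithm."""
        alpha = pi * B[:, obs[0]]          # initialize with first observation
        scale = alpha.sum()
        log_lik = np.log(scale)
        alpha /= scale
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]  # propagate, absorb next observation
            scale = alpha.sum()
            log_lik += np.log(scale)
            alpha /= scale
        return log_lik

    # Two hypothetical 2-state models over 3 quantized arm-motion symbols.
    MODELS = {
        "wave":  (np.array([0.7, 0.3]),
                  np.array([[0.8, 0.2], [0.3, 0.7]]),
                  np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])),
        "punch": (np.array([0.5, 0.5]),
                  np.array([[0.6, 0.4], [0.4, 0.6]]),
                  np.array([[0.1, 0.2, 0.7], [0.7, 0.2, 0.1]])),
    }

    def recognize(obs):
        """Return the name of the motion model that best explains obs."""
        scores = {name: forward_log_likelihood(pi, A, B, obs)
                  for name, (pi, A, B) in MODELS.items()}
        return max(scores, key=scores.get)

    print(recognize([0, 0, 1, 2, 2]))  # classifies the quantized sequence

A recognized command would then trigger the corresponding pre-recorded motion clip for the avatar, as the abstract describes.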
dc.description.tableofcontents:
List of Figures
List of Tables
1 Introduction
1.1 Motivation
1.2 System Overview
2 Related Works
2.1 Animation
2.2 Vision
3 Motion Trajectory Generation
3.1 Overview
3.2 Kalman Filter and Noise Reduction
3.2.1 Kalman Filter
3.2.2 Noise Reduction
3.3 Hidden Markov Model
3.3.1 Overview
3.3.2 Learning
3.4 Motion Recognition
3.4.1 Static Motion Recognition
3.4.2 Dynamic Motion Recognition
4 Motion Integration
4.1 Overview
4.2 Motion Constraints
4.2.1 Balance Constraints
4.2.2 Momentum Constraints
4.2.3 Joint Constraints
4.2.4 Workspace Constraints
4.3 Two Level Motion Integration
4.3.1 Human ZMP/CM
4.3.2 Feature Point Position
5 Experiments
5.1 Environment Setup
5.2 Scenario
5.2.1 Noise reduction
5.2.2 Static motion recognition
5.2.3 Dynamic motion recognition
5.2.4 Combination of vision data and motion capture data
6 Conclusion
Reference
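
The "smooth, then revise with constraints" loop that Chapter 4 applies at each level of the motion integration framework can be sketched per frame as below. This is an assumption-laden illustration: the joint names and limits are hypothetical, only the joint-limit constraint (Section 4.2.3) is modeled while the balance, momentum, and workspace constraints are omitted, and a scalar exponential smoother stands in for the thesis's Kalman filters.

    import numpy as np

    # Hypothetical joint-angle limits in radians (Section 4.2.3-style joint
    # constraints); real limits would come from biomechanics data.
    JOINT_LIMITS = {
        "elbow":    (0.0, 2.6),
        "shoulder": (-1.5, 3.0),
    }

    def integrate_frame(upper_pose, lower_pose, prev_smoothed, alpha=0.6):
        """Combine the vision-driven upper body with the motion-capture
        lower body, smooth the result, and enforce joint limits."""
        combined = {**lower_pose, **upper_pose}   # upper body wins on overlap
        revised = {}
        for joint, angle in combined.items():
            # Smoothing level: exponential smoothing stands in for the
            # Kalman filter used in the thesis.
            prev = prev_smoothed.get(joint, angle)
            smoothed = alpha * angle + (1.0 - alpha) * prev
            # Constraint level: project the smoothed value back into the
            # joint's feasible range.
            lo, hi = JOINT_LIMITS.get(joint, (-np.pi, np.pi))
            revised[joint] = float(np.clip(smoothed, lo, hi))
        return revised

    # Example frame: the raw elbow angle 2.9 rad exceeds its assumed limit
    # and is clipped after smoothing.
    print(integrate_frame({"elbow": 2.9, "shoulder": 0.4}, {"knee": 1.0}, {}))

Running this per frame, feeding each output back in as prev_smoothed, yields a trajectory that is both smooth and kept inside the constraint set, which is the property the two-level framework is designed to guarantee.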
dc.language.iso: en
dc.subject: 角色動畫 (character animation) [zh_TW]
dc.subject: Performance animation [en]
dc.title: 以視覺為基礎的角色動畫系統 [zh_TW]
dc.title: Vision-based Interactive 3D Character Animation System [en]
dc.type: Thesis
dc.date.schoolyear: 94-2
dc.description.degree: Master (碩士)
dc.contributor.oralexamcommittee: 歐陽明 (Ming Ouhyoung), 洪一平 (Yi-Ping Hung), 王傑智 (Chieh-Chih Wang)
dc.subject.keyword: 角色動畫 (character animation) [zh_TW]
dc.subject.keyword: Performance animation [en]
dc.relation.page: 49
dc.rights.note: Paid authorization (有償授權)
dc.date.accepted: 2006-07-26
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-95-1.pdf (restricted; not authorized for public access)
Size: 2.09 MB
Format: Adobe PDF