Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/65281

Full metadata record (each entry below is given as DC field: value [language])
dc.contributor.advisor: 歐陽明 (Ming Ouhyoung)
dc.contributor.author: Yu Tuen
dc.contributor.author: 屠愚 [zh_TW]
dc.date.accessioned: 2021-06-16T23:34:23Z
dc.date.available: 2014-08-01
dc.date.copyright: 2012-08-01
dc.date.issued: 2012
dc.date.submitted: 2012-07-27
dc.identifier.citation:
[1] V.N. Balasubramanian, Jieping Ye, and S. Panchanathan. Biased manifold embedding: A framework for person-independent head pose estimation. In Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, pages 1–7, June 2007.
[2] M.D. Breitenstein, D. Kuettel, T. Weise, L. van Gool, and H. Pfister. Real-time face pose estimation from single range images. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8, June 2008.
[3] L. Chen, L. Zhang, Y. Hu, M. Li, and H. Zhang. Head pose estimation using Fisher manifold learning. In Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on, pages 203–207, October 2003.
[4] G. Fanelli, J. Gall, and L. Van Gool. Real time head pose estimation with random regression forests. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 617–624, June 2011.
[5] G. Fanelli, T. Weise, J. Gall, and L. Van Gool. Real time head pose estimation from consumer depth cameras. In 33rd Annual Symposium of the German Association for Pattern Recognition (DAGM'11), September 2011.
[6] Sotiris Malassiotis and Michael G. Strintzis. Robust real-time 3D head pose estimation from range data. Pattern Recogn., 38:1153–1165, August 2005.
[7] Y. Matsumoto and A. Zelinsky. An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement. In Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on, pages 499–504, 2000.
[8] L.-P. Morency, P. Sundberg, and T. Darrell. Pose estimation using 3D view-based eigenspaces. In Analysis and Modeling of Faces and Gestures, 2003. AMFG 2003. IEEE International Workshop on, pages 45–52, October 2003.
[9] E. Murphy-Chutorian and M.M. Trivedi. Head pose estimation in computer vision: A survey. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(4):607–626, April 2009.
[10] Margarita Osadchy, Yann Le Cun, and Matthew L. Miller. Synergistic face detection and pose estimation with energy-based models. J. Mach. Learn. Res., 8:1197–1215, May 2007.
[11] E. Seemann, K. Nickel, and R. Stiefelhagen. Head pose estimation using stereo vision for human-robot interaction. In Automatic Face and Gesture Recognition, 2004. Proceedings. Sixth IEEE International Conference on, pages 626–631, May 2004.
[12] M. Storer, M. Urschler, and H. Bischof. 3D-MAM: 3D morphable appearance model for efficient fine head pose estimation from still images. In Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on, pages 192–199, September 27–October 4, 2009.
[13] Tzong-Jer Yang, Fu-Che Wu, and Ming Ouhyoung. Method of image processing using three facial feature points in three-dimensional head motion tracking, June 2003.
[14] T. Vatahska, M. Bennewitz, and S. Behnke. Feature-based head pose estimation from images. In Humanoid Robots, 2007 7th IEEE-RAS International Conference on, pages 330–335, November 29–December 1, 2007.
[15] T. Weise, B. Leibe, and L. Van Gool. Fast 3D scanning with automatic motion compensation. In Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, pages 1–8, June 2007.
[16] Thibaut Weise, Sofien Bouaziz, Hao Li, and Mark Pauly. Realtime performance-based facial animation. ACM Trans. Graph., 30(4):77:1–77:10, August 2011.
[17] J. Whitehill and J.R. Movellan. A discriminative approach to frame-by-frame head pose tracking. In Automatic Face and Gesture Recognition, 2008. FG '08. 8th IEEE International Conference on, pages 1–7, September 2008.
[18] Ruigang Yang and Zhengyou Zhang. Model-based head pose tracking with stereovision. In Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, pages 255–260, May 2002.
[19] Jian Yao and Wai-Kuen Cham. Efficient model-based linear head motion recovery from movies. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, volume 2, pages II-414–II-421, June–July 2004.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/65281
dc.description.abstract: In this thesis, we propose two methods for real-time head motion tracking based on depth images. In the first method, we sample the user's frontal face in the depth map to generate a point cloud, fit the best approximating plane to these points by the least-squares method, and fit the best approximating ellipse to the facial contour, also by least squares; the head rotation angles are then computed from the normal vector of the plane and the tilt angle of the ellipse. In the second method, we use gradient descent to iteratively optimize a distance function, which yields more accurate rotation angles and translations. Both proposed methods run at 30 fps on a single CPU. The depth cameras used in this system, the Microsoft Kinect and the Asus Xtion Pro, are common, inexpensive, easy to obtain, and easy to use, but these advantages come with comparatively high sensor noise. To keep the system unaffected by changes in ambient lighting, we use only the captured depth images as the computational input. In this thesis, we demonstrate that real-time head motion tracking with six degrees of freedom in 3D space can be achieved by analyzing noisy depth images. [zh_TW]
dc.description.abstract: In this thesis, we propose a system that estimates head pose in real time using only depth information. Two methods are developed. First, assuming that the head can be approximated by a bounding box, we find the best-fitting plane for the frontal face by the least-squares method; the normal vector of this plane gives the head orientation. Second, an optimization method based on 3D model fitting is developed: we iteratively minimize the distance between the source and target point clouds of the user's head. This method is more robust and its results are more precise. Both proposed methods run fully in real time (30 fps) without GPU acceleration. We adopt commodity depth sensors, the Microsoft Kinect and the Asus Xtion, and use the depth image as the only input, so the system is not affected by illumination variations. The simplicity of these acquisition devices comes at the cost of considerable noise in the acquired data. We demonstrate that six-degree-of-freedom real-time head motion tracking in 3D space can be achieved with noisy depth data. (See the illustrative plane-fitting sketch after this metadata record.) [en]
dc.description.provenance: Made available in DSpace on 2021-06-16T23:34:23Z (GMT). No. of bitstreams: 1. ntu-101-R99922133-1.pdf: 25849316 bytes, checksum: 3ede17f2fe451033db21cf1a9554f6bb (MD5). Previous issue date: 2012. [en]
dc.description.tableofcontents:
Acknowledgements (致謝) i
Chinese Abstract (摘要) ii
Abstract iii
1 Introduction 1
1.1 Issues in Head Motion Tracking 4
1.2 Organization 5
2 Related Works 7
2.1 Feature-Based Methods 7
2.2 Appearance-Based Methods 8
2.3 Depth-Based Methods 9
2.4 Comparison with the Proposed Method 10
3 Overview 12
3.1 Problem Formulation 13
3.2 Hardware 14
3.3 System Architecture 17
4 The Proposed 3D Head Motion Tracking 22
4.1 Rigid Body Motion Module 22
4.2 Point Clouds Generation 24
4.3 Nose Detection 28
4.4 Head Pose Estimation: Least Square Solution 32
4.5 Head Pose Estimation: Optimization Solution 37
4.6 Smooth Filter 42
5 Results 45
6 Conclusion 52
Bibliography 54
dc.language.iso: zh-TW
dc.subject: 迭代最佳化 (Iterative Optimization) [zh_TW]
dc.subject: 深度影像 (Depth Image) [zh_TW]
dc.subject: 即時頭部運動軌跡追蹤 (Real-Time Head Motion Tracking) [zh_TW]
dc.subject: 最小平方法 (Least Square Method) [zh_TW]
dc.subject: 三維樣板比對 (3D Template Matching) [zh_TW]
dc.subject: Least Square Method [en]
dc.subject: Real-Time Head Motion Tracking [en]
dc.subject: Depth Image [en]
dc.subject: Kinect [en]
dc.subject: 3D Template Matching [en]
dc.subject: Iterative Optimization [en]
dc.title: 以深度影像及三維樣板比對為基礎之即時頭部運動軌跡追蹤 (Depth-Based Real Time Head Motion Tracking Using 3D Template Matching) [zh_TW]
dc.title: Depth-Based Real Time Head Motion Tracking Using 3D Template Matching [en]
dc.type: Thesis
dc.date.schoolyear: 100-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 王傑智 (Chieh-Chih Wang), 傅楸善 (Chiou-Shann Fuh), 葉正聖 (Jeng-Sheng Yeh)
dc.subject.keyword: 即時頭部運動軌跡追蹤, 深度影像, 三維樣板比對, 迭代最佳化, 最小平方法 [zh_TW]
dc.subject.keyword: Real-Time Head Motion Tracking, Depth Image, Kinect, 3D Template Matching, Iterative Optimization, Least Square Method [en]
dc.relation.page: 59
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2012-07-27
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
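The abstract's first method estimates head orientation by fitting a plane to the frontal-face point cloud in the least-squares sense and reading the pose from the plane's normal. The snippet below is a minimal illustrative sketch of that idea, not the thesis's actual implementation: the function names (fit_plane, normal_to_yaw_pitch), the Kinect-style camera frame with +z pointing into the scene, the yaw/pitch conventions, and the synthetic test cloud are all assumptions made for this example.

# Illustrative sketch only: least-squares plane fit of a frontal-face point
# cloud and head orientation from the plane normal. Assumed frame: +z into
# the scene (depth), +y up, face looking roughly back toward the camera.
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) point cloud.

    Returns (centroid, unit normal). The normal is the right singular vector
    of the centered cloud with the smallest singular value, i.e. the
    direction of least variance.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    # Resolve the sign ambiguity: a frontal face's outward normal points back
    # toward the camera, i.e. has a negative z component in this frame.
    if normal[2] > 0:
        normal = -normal
    return centroid, normal

def normal_to_yaw_pitch(normal):
    """Convert the face-plane normal to yaw and pitch angles in radians."""
    nx, ny, nz = normal
    yaw = np.arctan2(nx, -nz)    # rotation about the vertical (y) axis
    pitch = np.arctan2(ny, -nz)  # rotation about the horizontal (x) axis
    return yaw, pitch

if __name__ == "__main__":
    # Synthetic stand-in for a depth-sampled frontal-face patch, rotated by
    # about 10 degrees of yaw, with noise mimicking a consumer depth sensor.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-0.1, 0.1, size=(500, 2))            # ~20 cm face patch
    z = 1.0 + np.tan(np.radians(10.0)) * xy[:, 0] + rng.normal(0.0, 0.002, 500)
    cloud = np.column_stack([xy, z])
    _, n = fit_plane(cloud)
    yaw, pitch = normal_to_yaw_pitch(n)
    print(f"yaw = {np.degrees(yaw):.1f} deg, pitch = {np.degrees(pitch):.1f} deg")

In the actual system the points would be sampled from the Kinect or Xtion depth image rather than generated synthetically, and the second method described in the abstract would refine such an estimate by iteratively minimizing a distance between source and target head point clouds.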
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-101-1.pdf (restricted; not authorized for public access)
Size: 25.24 MB
Format: Adobe PDF