Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/65567
Full metadata record
dc.contributor.advisor: 傅立成 (lichen@ntu.edu.tw)
dc.contributor.author: Bor-Jeng Chen [en]
dc.contributor.author: 陳柏錚 [zh_TW]
dc.date.accessioned: 2021-06-16T23:50:56Z
dc.date.available: 2014-07-27
dc.date.copyright: 2012-07-27
dc.date.issued: 2012
dc.date.submitted: 2012-07-20
dc.identifier.citation:
[1] P. Buehler, M. Everingham, D. P. Huttenlocher, and A. Zisserman, 'Long Term Arm and Hand Tracking for Continuous Sign Language TV Broadcasts,' in Proceedings of the British Machine Vision Conference, 2008.
[2] M. Eichner, M. Marin-Jimenez, A. Zisserman, and V. Ferrari, '2D Articulated Human Pose Estimation and Retrieval in (Almost) Unconstrained Still Images,' International Journal of Computer Vision, vol. 99, pp. 190-214, 2012.
[3] L. Karlinsky, M. Dinerstein, D. Harari, and S. Ullman, 'The chains model for detecting parts by their context,' in IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 25-32.
[4] A. Mittal, A. Zisserman, and P. Torr, 'Hand detection using multiple proposals,' in Proceedings of the British Machine Vision Conference, 2011, pp. 75.1-75.11.
[5] A. Erol, G. Bebis, M. Nicolescu, R. Boyle, and X. Twombly, 'Vision-based hand pose estimation: A review,' Computer Vision and Image Understanding, vol. 108, pp. 52-73, 2007.
[6] M. Donoser and H. Bischof, 'Real time appearance based hand tracking,' in 19th International Conference on Pattern Recognition, 2008, pp. 1-4.
[7] M. Kolsch and M. Turk, 'Fast 2D Hand Tracking with Flocks of Features and Multi-Cue Integration,' in IEEE Conference on Computer Vision and Pattern Recognition Workshop, 2004, pp. 158-158.
[8] N. Jojic, M. Turk, and T. S. Huang, 'Tracking self-occluding articulated objects in dense disparity maps,' in The Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, pp. 123-130 vol.1.
[9] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, 'Real-time human pose recognition in parts from single depth images,' in IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 1297-1304.
[10] J. Shi and C. Tomasi, 'Good features to track,' in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994, pp. 593-600.
[11] P. Zhigeng, L. Yang, Z. Mingmin, S. Chao, G. Kangde, T. Xing, and S. Z. Zhou, 'A real-time multi-cue hand tracking algorithm based on computer vision,' in IEEE Virtual Reality Conference, 2010, pp. 219-222.
[12] M. Kristan, J. Perš, S. Kovačič, and A. Leonardis, 'A local-motion-based probabilistic model for visual tracking,' Pattern Recognition, vol. 42, pp. 2160-2168, 2009.
[13] S. Hinterstoisser, C. Cagniart, S. Ilic, P. Sturm, N. Navab, P. Fua, and V. Lepetit, 'Gradient Response Maps for Real-Time Detection of Texture-Less Objects,' IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, pp. 1-1, 2011.
[14] G. R. Bradski and J. Davis, 'Motion segmentation and pose recognition with motion history gradients,' in Fifth IEEE Workshop on Applications of Computer Vision, 2000, pp. 238-244.
[15] A. Argyros and M. Lourakis, 'Real-Time Tracking of Multiple Skin-Colored Objects with a Possibly Moving Camera,' in Proceedings of European Conference on Computer Vision, 2004, pp. 368-379.
[16] P. Cui, L. F. Sun, F. Wang, and S. Q. Yang, 'Contextual Mixture Tracking,' IEEE Transactions on Multimedia, vol. 11, pp. 333-341, 2009.
[17] M. Yang, T. Yu, and Y. Wu, 'Game-Theoretic Multiple Target Tracking,' in IEEE 11th International Conference on Computer Vision, 2007, pp. 1-8.
[18] X. Zhang, W. Hu, W. Qu, and S. Maybank, 'Multiple Object Tracking Via Species-Based Particle Swarm Optimization,' IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, pp. 1590-1602, 2010.
[19] R. T. Collins and Y. Liu, 'On-line selection of discriminative tracking features,' in The Proceedings of the Ninth IEEE International Conference on Computer Vision, 2003, pp. 346-352 vol.1.
[20] C. M. Huang and L. C. Fu, 'Multitarget Visual Tracking Based Effective Surveillance With Cooperation of Multiple Active Cameras,' IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, pp. 234-247, 2011.
[21] M. Isard and A. Blake, 'CONDENSATION—Conditional Density Propagation for Visual Tracking,' International Journal of Computer Vision, vol. 29, pp. 5-28, 1998.
[22] N. J. Gordon, D. J. Salmond, and A. F. M. Smith, 'Novel approach to nonlinear/non-Gaussian Bayesian state estimation,' Radar and Signal Processing, IEE Proceedings F, vol. 140, pp. 107-113, 1993.
[23] S. Maskell and N. Gordon, 'A tutorial on particle filters for on-line nonlinear/non-Gaussian Bayesian tracking,' in Target Tracking: Algorithms and Applications (Ref. No. 2001/174), IEE, 2001, pp. 2/1-2/15 vol.2.
[24] A. Doucet, S. Godsill, and C. Andrieu, 'On sequential Monte Carlo sampling methods for Bayesian filtering,' Statistics and Computing, vol. 10, pp. 197-208, 2000.
[25] J. Liu and R. Chen, 'Sequential Monte Carlo Methods for Dynamic Systems,' Journal of the American Statistical Association, vol. 93, pp. 1032-1044, 1998.
[26] N. Gordon, 'Bayesian methods for tracking,' Ph.D. thesis, Imperial College, University of London, 1994.
[27] J. W. Davis and A. F. Bobick, 'The representation and recognition of human movement using temporal templates,' in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1997, pp. 928-934.
[28] J. Davis, 'Recognizing Movement using Motion Histograms,' MIT Media lab Technical Report, 1999.
[29] H. T. Chen, T. L. Liu, and C. S. Fuh, 'Probabilistic tracking with adaptive feature selection,' in Proceedings of the 17th International Conference on Pattern Recognition, 2004, pp. 736-739 Vol.2.
[30] J. Wang and Y. Yagi, 'Adaptive Mean-Shift Tracking With Auxiliary Particles,' IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, pp. 1578-1589, 2009.
[31] N. Dalal and B. Triggs, 'Histograms of oriented gradients for human detection,' in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, pp. 886-893 vol. 1.
[32] L. Matthews, T. Ishikawa, and S. Baker, 'The template update problem,' IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 810-815, 2004.
[33] E. Veach and L. J. Guibas, 'Optimally combining sampling techniques for Monte Carlo rendering,' presented at the Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, 1995.
[34] V. Sharma, 'A blob representation for tracking robust to merging and fragmentation,' in IEEE Workshop on Applications of Computer Vision (WACV), 2012, pp. 161-168.
[35] http://www.primesense.com/.
[36] http://www.openni.org/.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/65567
dc.description.abstract: 本論文描述了一個運用單攝影機並可應用於人機互動的雙手追蹤系統。為了辨別使用者的頭以及手,本方法同時追蹤了使用者的頭。當目標距離彼此大於一段距離時,它們會被視為獨立追蹤。然而當它們有可能被互相干擾時,它們的狀態向量會一起被考慮依據相依的量測。追蹤器會運用遮罩將其它追蹤器最近的結果所在的區域忽略,以避免不同追蹤器之間的干擾。當下具鑑別力的顏色權重影像以及參考模型的反向投影的合成、運動模板影像和梯度方向特徵被用來驗證粒子濾波器所產生的假設。在另一方面,當目標物距離很近,甚至是重疊的時候,我們運用基於膚色推論之重要性取樣的粒子濾波器,產生融合目標物的假設,並加入深度順序的估測。我們依據視覺上的資訊包括:被遮蔽的臉部模板、手的形狀之梯度方向、運動的連續性以及前臂的線性方程式,來驗證這些融合的目標物可能的假設。實驗結果中展示了系統的即時效率以及強健性,我們也提供了系統跟依據 Kinect 深度影像的 OpenNI 追蹤器追蹤結果在準確度上的比較,以及與一個目前最新的人體姿態估測方法在正確率的比較。 [zh_TW]
dc.description.abstract: This thesis presents a two-hand tracking method that uses a monocular camera for human-machine interaction (HMI). To distinguish the user's face from his/her hands, the face is tracked as well. The targets are tracked independently when they are far from each other; when they are likely to interfere with each other, however, their state vectors are merged and evaluated with dependent likelihood measurements in a higher-dimensional space. While a target is tracked independently, the regions containing the other trackers' most recent results are masked out to reduce skin-color disturbances on the tracked one. Multiple cues, including the combination of a locally discriminative color-weighted image and the back-projection image of the reference color model, the motion history image, and the gradient orientation feature, are employed to verify the hypotheses generated by the particle filter. When the targets approach or even overlap each other, on the other hand, a multiple importance sampling (MIS) particle filter generates joint hypotheses for the merged targets by skin-blob reasoning and depth-order estimation. These joint hypotheses are then evaluated with the visual cues of the occluded face template, the hand-shape gradient orientation, motion continuity, and the forearm line equation. The experimental results demonstrate the real-time efficiency and robustness of the system, and include accuracy comparisons with the OpenNI tracker recently released for the Kinect depth sensor and with a state-of-the-art human pose estimation method. [en]
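The sampling-importance-resampling (SIR) particle filter at the core of the tracking method (Chapters 2 and 4 of the table of contents below) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the single Gaussian likelihood is a hypothetical stand-in for the multi-cue likelihood the abstract describes (color similarity, motion history, gradient orientation), and all names and parameters are illustrative.

```python
import numpy as np

def sir_step(particles, weights, measurement,
             motion_std=2.0, meas_std=5.0, rng=None):
    """One predict/update/resample cycle of a SIR particle filter
    tracking a 2-D position (e.g. a hand center in the image plane)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)

    # Predict: propagate each position hypothesis with random-walk dynamics.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)

    # Update: weight each hypothesis by its likelihood given the measurement.
    # (A stand-in for the multi-cue likelihood fused in the thesis.)
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights = weights / weights.sum()

    # Resample when the effective sample size degenerates (Sec. 2.2.2).
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)

    # Report the weighted mean as the current state estimate.
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```

A typical use is to call `sir_step` once per frame with that frame's measurement; resampling only below an effective-sample-size threshold (rather than every frame) limits the sample impoverishment discussed in Sec. 2.2.2.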
dc.description.provenance: Made available in DSpace on 2021-06-16T23:50:56Z (GMT). No. of bitstreams: 1. ntu-101-R99921085-1.pdf: 2646299 bytes, checksum: 34987c7f160d5ee3f124f4eaa6525dee (MD5). Previous issue date: 2012 [en]
dc.description.tableofcontents:
摘要 I
ABSTRACT IV
CONTENTS VI
LIST OF FIGURES VIII
LIST OF TABLES XI
CHAPTER 1 INTRODUCTION 1
1.1 Motivation 1
1.2 Related Works 2
1.3 Contribution 4
1.4 Thesis Organization 6
CHAPTER 2 PRELIMINARIES 7
2.1 Bayesian Filter 7
2.2 Particle Filter 12
2.2.1 Sequential Importance Sampling (SIS) Particle Filter 13
2.2.2 Resampling and Degeneracy Problem 15
2.2.3 Sampling Importance Resampling (SIR) Particle Filter 16
2.3 Motion History Image 17
CHAPTER 3 HAND LIKELIHOOD EVALUATION 20
3.1 Color Similarity with Feature Selection 21
3.2 Motion Continuity 24
3.3 Orientation Template Matching 28
3.4 Joint Likelihood Functions 31
3.4.1 Joint Template Matching 32
3.4.2 Joint Color Similarity 34
3.4.3 Joint Motion and Shape Likelihood with Depth Order Reasoning 36
3.4.4 Likelihood from Arm Motion 40
3.5 Overall Likelihood 43
CHAPTER 4 TRACKING METHODOLOGY 45
4.1 Initialization 45
4.2 Particle Filter for Independent Tracker 47
4.3 Multiple Importance Sampling (MIS) Particle Filter for Joint Tracker 50
4.4 Hands Tracking System 56
CHAPTER 5 EXPERIMENTAL RESULT 58
5.1 Environmental Description 58
5.2 Results of Hands Tracking 59
5.3 Results Compared with OpenNI Tracker 62
5.4 Results Compared with Human Pose Estimation 73
CHAPTER 6 CONCLUSION AND FUTURE WORK 77
6.1 Conclusion 77
6.2 Future Work 78
REFERENCE 79
dc.language.iso: en
dc.subject: 粒子濾波器 [zh_TW]
dc.subject: 影像追蹤 [zh_TW]
dc.subject: 手部追蹤 [zh_TW]
dc.subject: Particle Filter [en]
dc.subject: Visual Tracking [en]
dc.subject: Hand Tracking [en]
dc.title: 在複雜背景下具自遮蔽處理之雙手追蹤系統 [zh_TW]
dc.title: Hands Tracking with Self-occlusion Handling in Cluttered Environment [en]
dc.type: Thesis
dc.date.schoolyear: 100-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 羅仁權, 張文中, 范欽雄, 陳永耀
dc.subject.keyword: 粒子濾波器, 影像追蹤, 手部追蹤 [zh_TW]
dc.subject.keyword: Particle Filter, Visual Tracking, Hand Tracking [en]
dc.relation.page: 81
dc.rights.note: 有償授權
dc.date.accepted: 2012-07-20
dc.contributor.author-college: 電機資訊學院 [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 [zh_TW]
Appears in Collections: 電機工程學系

Files in This Item:
File: ntu-101-1.pdf (Restricted Access), 2.58 MB, Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
