NTU Theses and Dissertations Repository > 電機資訊學院 (College of Electrical Engineering and Computer Science) > 電機工程學系 (Department of Electrical Engineering)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/37140

Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 陳永耀
dc.contributor.author: Yi-Hung Huang [en]
dc.contributor.author: 黃議弘 [zh_TW]
dc.date.accessioned: 2021-06-13T15:19:56Z
dc.date.available: 2012-08-18
dc.date.copyright: 2011-08-18
dc.date.issued: 2011
dc.date.submitted: 2011-08-11
dc.identifier.citation:
[1] T. B. Moeslund, A. Hilton, and V. Kruger, “A Survey of Advances in Vision-Based Human Motion Capture and Analysis,” Computer Vision and Image Understanding, vol. 104, no. 2-3, pp. 90-126, 2006.
[2] R. Poppe, “A Survey on Vision-Based Human Action Recognition,” Image and Vision Computing, vol. 28, no. 6, pp. 976-990, 2010.
[3] J. K. Aggarwal and Q. Cai, “Human Motion Analysis: A Review,” Proceedings of the IEEE Nonrigid and Articulated Motion Workshop, pp. 90-102, Jun. 1997.
[4] Z. Chen and H. J. Lee, “Knowledge-Guided Visual Perception of 3-D Human Gait from a Single Image Sequence,” IEEE Transactions on Systems, Man and Cybernetics, vol. 22, no. 2, pp. 336-342, Mar./Apr. 1992.
[5] A. G. Bharatkumar, K. E. Daigle, M. G. Pandy, C. Qin, and J. K. Aggarwal, “Lower Limb Kinematics of Human Walking with the Medial Axis Transformation,” Proceedings of the 1994 IEEE Workshop on Motion of Non-Rigid and Articulated Objects, pp. 70-76, Nov. 1994.
[6] E. Huber, “3-D Real-Time Gesture Recognition Using Proximity Spaces,” Proceedings of the 3rd IEEE Workshop on Applications of Computer Vision, pp. 136-141, Dec. 1996.
[7] D. Hogg, “Model-Based Vision: A Program to See a Walking Person,” Image and Vision Computing, vol. 1, no. 1, 1983.
[8] K. Rohr, “Towards Model-Based Recognition of Human Movements in Image Sequences,” Computer Vision, Graphics, and Image Processing, pp. 94-115, 1994.
[9] D. Marr and H. K. Nishihara, “Representation and Recognition of the Spatial Organization of Three-Dimensional Shapes,” Proceedings of the Royal Society of London, Series B, pp. 269-294, 1978.
[10] R. F. Rashid, “Towards a System for the Interpretation of Moving Light Displays,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, no. 6, 1980.
[11] A. Shio and J. Sklansky, “Segmentation of People in Motion,” Proceedings of the IEEE Workshop on Visual Motion, pp. 325-332, 1991.
[12] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, “Pfinder: Real-Time Tracking of the Human Body,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780-785, Jul. 1997.
[13] A. F. Bobick and J. W. Davis, “The Recognition of Human Movement Using Temporal Templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257-267, Mar. 2001.
[14] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, “Actions as Space-Time Shapes,” Proceedings of the Tenth IEEE International Conference on Computer Vision, vol. 2, pp. 1395-1402, Oct. 2005.
[15] P. Turaga, R. Chellappa, V. S. Subrahmanian, and O. Udrea, “Machine Recognition of Human Activities: A Survey,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 11, pp. 1473-1488, Nov. 2008.
[16] A. Mokber, C. Achard, and M. Milgram, “Recognition of Human Behavior by Space-Time Silhouette Characterization,” Pattern Recognition Letters, vol. 29, no. 1, pp. 81-89, Jan. 2008.
[17] R. Poppe, “Vision-Based Human Motion Analysis: An Overview,” Computer Vision and Image Understanding, vol. 108, no. 1-2, pp. 4-18, 2007.
[18] O. Chomat and J. L. Crowley, “Probabilistic Recognition of Activity Using Local Appearance,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 104-109, 1999.
[19] L. Zelnik-Manor and M. Irani, “Event-Based Analysis of Video,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 123-130, 2001.
[20] H. Zhong, J. Shi, and M. Visontai, “Detecting Unusual Activity in Video,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 819-826, 2004.
[21] I. Laptev, “On Space-Time Interest Points,” International Journal of Computer Vision, vol. 64, no. 2-3, pp. 107-123, 2005.
[22] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie, “Behavior Recognition via Sparse Spatio-Temporal Features,” Proceedings of the IEEE Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 65-72, 2005.
[23] J. C. Niebles, H. Wang, and L. Fei-Fei, “Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words,” Proceedings of the British Machine Vision Conference, pp. 1249-1258, 2006.
[24] C. Schuldt, I. Laptev, and B. Caputo, “Recognizing Human Actions: A Local SVM Approach,” Proceedings of the International Conference on Pattern Recognition, pp. 32-36, 2004.
[25] S. T. Su, “Moving Object Detection Based on Two-Staged Background Subtraction Approach,” Master’s thesis, Department of Electrical Engineering, National Taiwan University, Taipei, 2009.
[26] H. T. Feng, “Human Postures Recognition by Self-Learning Adaptive Fuzzy-Rule Based Classifier System,” Master’s thesis, Department of Electrical Engineering, National Taiwan University, Taipei, 2007.
[27] J. K. Aggarwal and S. Park, “Human Motion: Modeling and Recognition of Actions and Interactions,” Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission, pp. 640-647, Sept. 2004.
[28] L. R. Rabiner, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, Feb. 1989.
[29] J. Yamato, J. Ohya, and K. Ishii, “Recognizing Human Action in Time-Sequential Images Using Hidden Markov Model,” Proceedings of the 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’92), pp. 379-385, Jun. 1992.
[30] L. E. Baum and J. A. Eagon, “An Inequality with Applications to Statistical Estimation for Probabilistic Functions of a Markov Process and to a Model for Ecology,” Bulletin of the American Mathematical Society, vol. 73, pp. 360-363, 1967.
[31] L. E. Baum and G. R. Sell, “Growth Functions for Transformations on Manifolds,” Pacific Journal of Mathematics, vol. 27, no. 2, pp. 211-227, 1968.
[32] A. J. Viterbi, “Error Bounds for Convolutional Codes and an Asymptotically Optimal Decoding Algorithm,” IEEE Transactions on Information Theory, vol. IT-13, pp. 260-269, Apr. 1967.
[33] G. D. Forney, “The Viterbi Algorithm,” Proceedings of the IEEE, vol. 61, pp. 268-278, Mar. 1973.
[34] L. E. Baum and T. Petrie, “Statistical Inference for Probabilistic Functions of Finite State Markov Chains,” The Annals of Mathematical Statistics, vol. 37, pp. 1554-1563, 1966.
[35] L. E. Baum, T. Petrie, G. Soules, and N. Weiss, “A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains,” The Annals of Mathematical Statistics, vol. 41, no. 1, pp. 164-171, 1970.
[36] L. E. Baum, “An Inequality and Associated Maximization Technique in Statistical Estimation for Probabilistic Functions of Markov Processes,” Inequalities, vol. 3, pp. 1-8, 1972.
[37] F. Buccolieri, C. Distante, and A. Leone, “Human Posture Recognition Using Active Contours and Radial Basis Function Neural Network,” IEEE Conference on Advanced Video and Signal Based Surveillance, pp. 213-218, 2005.
[38] C. F. Juang and C. M. Chang, “Human Body Posture Classification by a Neural Fuzzy Network and Home Care System Application,” IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 37, no. 6, pp. 984-994, Nov. 2007.
[39] M. Ekinci and E. Gedikli, “Silhouette Based Human Motion Detection and Analysis for Real-Time Automated Video Surveillance,” Turkish Journal of Electrical Engineering and Computer Sciences, vol. 13, no. 2, 2005.
[40] H. Fujiyoshi and A. J. Lipton, “Real-Time Human Motion Analysis by Image Skeletonization,” Proceedings of the Fourth IEEE Workshop on Applications of Computer Vision (WACV ’98), pp. 15-21, Oct. 1998.
[41] M. Ekinci, “A New Attempt to Silhouette-Based Gait Recognition for Human Identification,” Lecture Notes in Computer Science, vol. 4013, no. 10, pp. 443-454, 2006.
[42] J. Ben-Arie, Z. Wang, P. Pandit, and S. Rajaram, “Human Activity Recognition Using Multidimensional Indexing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 8, pp. 1091-1104, Aug. 2002.
[43] X. Feng and P. Perona, “Human Action Recognition by Sequence of Movelet Codewords,” Proceedings of 3D Data Processing Visualization and Transmission, pp. 717-723, 2002.
[44] A. A. Efros, A. C. Berg, G. Mori, and J. Malik, “Recognizing Action at a Distance,” Proceedings of the Ninth IEEE International Conference on Computer Vision, pp. 726-733, Oct. 2003.
[45] C. Schuldt, I. Laptev, and B. Caputo, “Recognizing Human Actions: A Local SVM Approach,” Proceedings of the 17th International Conference on Pattern Recognition, vol. 3, pp. 32-36, Aug. 2004.
[46] V. Kellokumpu, M. Pietikainen, and J. Heikkila, “Human Activity Recognition Using Sequences of Postures,” IAPR Conference on Machine Vision Applications, 2005.
[47] C. Sminchisescu, A. Kanaujia, Z. Li, and D. Metaxas, “Conditional Models for Contextual Human Motion Recognition,” IEEE International Conference on Computer Vision, vol. 2, pp. 1808-1815, Oct. 2005.
[48] D. Weinland, R. Ronfard, and E. Boyer, “Motion History Volumes for Free Viewpoint Action Recognition,” IEEE Workshop on Modeling People and Human Interaction, 2005.
[49] A. Yilmaz and M. Shah, “Action Sketch: A Novel Action Representation,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 984-989, Jun. 2005.
[50] L. Wang and D. Suter, “Learning and Matching of Dynamic Shape Manifolds for Human Action Recognition,” IEEE Transactions on Image Processing, vol. 16, no. 6, pp. 1646-1661, Jun. 2007.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/37140
dc.description.abstract: This thesis proposes an indoor human action recognition system. In general, an action is composed of an ordered set of key postures. Our approach differs in that we do not consider the order among key postures, only their composition; the ordering information is instead replaced by an analysis of the moving direction of the target's center point.
First, through pattern matching, the input human silhouette is compared with the key postures of the various actions pre-stored in the database, and the most similar key postures are found. The basic idea of this method is that when a person sees a posture image, the brain automatically associates it with the actions the person in the image may be performing. In other words, every human posture image is related to one or more actions. Based on this concept, each key posture is assigned a weight score for every action, forming a "weighting vector." Summing the weighting vectors of the key postures matched at different time points yields the weight scores for each action. Since these weight scores carry no ordering information, we additionally analyze the displacement of the center point and its vertical direction to distinguish actions that are composed of the same key postures in different orders.
The characteristic of this method is that we only consider which key postures an action is composed of (that is, the order among key postures is ignored), combined with an analysis of the center point's trajectory, to perform human action recognition. The benefits of this approach are better robustness against erroneous pattern matching results and a more complete treatment of static posture descriptions. In addition, in the pattern matching procedure, we make a new attempt at a feature that has been used in gait analysis [41] but has not yet been applied to pattern recognition; this feature preserves the contour characteristics of the posture's appearance. Taking 17 common indoor actions as recognition targets, we tested five subjects from four viewpoints, and the experimental results confirm that the recognition rate of this method reaches 89.23%. Moreover, the method is extensible: actions to be recognized can be added freely.
zh_TW
dc.description.abstract: In this thesis, an indoor human action recognition system is proposed. Generally, actions are composed of sequences of key postures. In our approach, the order of the key postures is not considered; only the composition of each action is considered. The center point trajectory of the target is analyzed as a substitute for the order of key postures.
Through a pattern matching process, the input human silhouette is compared with the key postures pre-stored in the database, and several key postures are matched in each frame. The idea behind our approach is that people associate possible actions when seeing a key posture; in other words, each key posture is related to one or more actions. Following this idea, every key posture in the database carries a weight for each action, collectively called its weighting vector. The weighting vectors of the key postures matched in recent frames are summed to give a score for every action. Because this scoring carries no ordering information, the center point trajectory is analyzed to distinguish actions that share the same key postures but occur in different orders.
The distinguishing feature of our approach is that the order of key postures is not used: the composition of key postures and the center point trajectory together are used to recognize human actions. This makes the method robust against erroneous pattern matching results, and stationary temporal situations are also considered. In addition, in the pattern matching process, a feature called the "distance vector" [41], previously applied in gait recognition, is modified and used as the pattern feature; it preserves the characteristics of the human outer contour well. Seventeen common human actions and static postures are recognized, with five subjects tested from four viewpoints. The recognition rate of our approach is 89.23%, and the experimental results show that our approach has the potential to recognize more actions.
en
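The pipeline described in the abstract (match each silhouette to its nearest key postures, sum their per-action weighting vectors, then use the centroid trajectory to separate actions that share the same postures) can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis's implementation: the function names, the k-nearest matching, the fixed-length resampling, and the `disambiguate` hook are all assumptions.

```python
import numpy as np

N_ACTIONS = 17  # the thesis recognizes 17 indoor actions


def distance_vector(contour, n_samples=64):
    """Silhouette feature: distances from the centroid to contour points,
    resampled to a fixed length and scale-normalized (cf. [41])."""
    contour = np.asarray(contour, dtype=float)
    center = contour.mean(axis=0)
    d = np.linalg.norm(contour - center, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    d = d[idx]
    return d / (d.max() + 1e-9)


def action_scores(frames, key_postures, weight_vectors, k=3):
    """Sum the weighting vectors of the k best-matching key postures over
    recent frames; no temporal ordering of postures is used."""
    scores = np.zeros(N_ACTIONS)
    for feat in frames:  # one distance-vector feature per frame
        dists = [np.linalg.norm(feat - kp) for kp in key_postures]
        for j in np.argsort(dists)[:k]:  # k closest key postures
            scores += weight_vectors[j]
    return scores


def recognize(frames, centroids, key_postures, weight_vectors, disambiguate):
    """Pick the highest-scoring action, then let the centroid trajectory
    (net displacement, including its vertical component) break ties
    between actions composed of the same key postures."""
    scores = action_scores(frames, key_postures, weight_vectors)
    motion = np.asarray(centroids[-1]) - np.asarray(centroids[0])
    return disambiguate(int(np.argmax(scores)), motion)
```

A lookup-table `disambiguate`, for instance, could map the top-scoring action plus the sign of the vertical displacement to "sit down" versus "stand up" when both are built from the same key postures.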
dc.description.provenance: Made available in DSpace on 2021-06-13T15:19:56Z (GMT). No. of bitstreams: 1
ntu-100-R98921063-1.pdf: 7688693 bytes, checksum: 41a3bc9c12585a9d203e5772a0b5cca1 (MD5)
Previous issue date: 2011
en
dc.description.tableofcontents: Acknowledgements i
Chinese Abstract ii
Abstract iv
Contents vi
List of Figures viii
List of Tables xii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Definition 1
1.3 Proposed Approach 3
1.4 Thesis Overview 6
Chapter 2 State of the Art 8
2.1 Human Body Models 8
2.1.1 Modeling Human Body with Priori Human Model 9
2.1.2 Modeling Human Body without Priori Human Model 13
2.2 Human Representation in Approaches without Human Model 15
2.3 Actions Classification 19
2.3.1 Template Matching Approaches 19
2.3.2 Spatial-Temporal Approaches 21
2.3.3 State Space Approaches 25
2.4 Summary 29
Chapter 3 Moving Object Detection and Key Postures Matching 30
3.1 Moving Object Detection 31
3.2 Feature Extraction 37
3.2.1 Size Normalization 39
3.2.2 Distance Vector 41
3.3 Classification of Key Postures in Database 44
3.4 Key Postures Matching Criteria 44
Chapter 4 Actions Recognition 51
4.1 Weighting Vectors of Key Postures 52
4.2 Moving Direction of the Center Point 57
4.3 Actions Recognition Criteria 63
Chapter 5 Experiment Result 65
Chapter 6 Conclusion and Future Work 82
Appendix 84
References 94
dc.language.iso: en
dc.subject: 動作辨識 (action recognition) [zh_TW]
dc.subject: 重心軌跡 (center point trajectory) [zh_TW]
dc.subject: 權重向量 (weighting vector) [zh_TW]
dc.subject: 動作分數 (action scores) [zh_TW]
dc.subject: 背景相減 (background subtraction) [zh_TW]
dc.subject: 關鍵姿態 (key postures) [zh_TW]
dc.subject: actions recognition [en]
dc.subject: weighting vector [en]
dc.subject: action scores [en]
dc.subject: center point trajectory [en]
dc.subject: key postures [en]
dc.subject: background segmentation [en]
dc.title: 以關鍵姿態及運動軌跡的權重向量進行人類動作辨識 [zh_TW]
dc.title: Human Actions Recognition Based on a New Approach: Weighting Vectors of Key Postures and Motion Trajectory [en]
dc.type: Thesis
dc.date.schoolyear: 99-2
dc.description.degree: Master (碩士)
dc.contributor.oralexamcommittee: 傅立成, 顏家鈺
dc.subject.keyword: 動作辨識, 背景相減, 關鍵姿態, 重心軌跡, 動作分數, 權重向量 [zh_TW]
dc.subject.keyword: actions recognition, background segmentation, key postures, center point trajectory, action scores, weighting vector [en]
dc.relation.page: 99
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2011-08-11
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) [zh_TW]
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)

Files in this item:
File | Size | Format
ntu-100-1.pdf (restricted; not publicly available) | 7.51 MB | Adobe PDF

