Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/25635
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳永耀(Yung-Yaw Chen) | |
dc.contributor.author | Chien-Cheng Li | en |
dc.contributor.author | 李乾丞 | zh_TW |
dc.date.accessioned | 2021-06-08T06:22:20Z | - |
dc.date.copyright | 2006-08-04 | |
dc.date.issued | 2006 | |
dc.date.submitted | 2006-07-30 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/25635 | - |
dc.description.abstract | 辨識人體運動行為是一項很有趣的研究主題,通常是使用電腦視覺來做為研究的方法,應用的範圍很廣泛,包含虛擬實境、人機介面、監控系統與人體動作分析。含有人體運動的連續畫面當中,人體運動行為之辨識可分為兩大部分:如何取得人體特徵的資訊以及人體動作行為之解析與表達。人體特徵取得的困難點在於要如何從拍攝取得的複雜影像中找出所需的人體部分,而人體動作行為解析與表達的挑戰在於當取得人體特徵之後,如何利用人體特徵進而辨識出多變且複雜的人體動作。
本篇論文即是針對人體動作行為的解析與表達這方面,提出一套適合居家看護的人體姿態辨識系統。其做法是先依據由CCD取得的人體輪廓,進而收集與整理人類姿態的共通點與特徵,再利用人體姿態的特徵來設計一些簡單的參數與規則,希望透過這些參數與簡易的規則可以反推出人體姿態,特別是針對一些日常生活中常見的姿態來做辨識,目的不僅是希望可正確的判斷出人體姿態,更希望可以提升辨識人體姿態的速度,達成真正即時系統,以便更符合居家看護系統的要求。本系統假設有一完整且清晰的人體輪廓做為輸入,因此本篇論文使用一個簡易的方法取得人體輪廓並且控制拍攝背景與穿著,再透過一些辨識規則辨認出人體姿態,在最後結論中將會顯示判斷正確率,以證明此方法可成功解析人體姿態。 | zh_TW |
dc.description.abstract | Recognition of human activities has become a popular research topic in recent years, especially in the field of computer vision. It has many applications, such as virtual reality, human-computer interfaces, surveillance systems, and human motion analysis. Recognizing human activities involves two aspects: acquiring information about the human body and representing human motion. The difficulty of the first is extracting human body features from complex images; the challenge of the second is using the extracted features to recognize complicated and varied human motions.
This thesis focuses on the representation of human postures and proposes a method of human posture recognition for a home-care system. It assumes that a complete, clean human silhouette is available as input, obtained here by a simple background-subtraction scheme under a controlled background and clothing. The method establishes a relation between the human silhouette and the recognition result in the form of parameters and simple rules that encode the characteristics of human postures. Raising the correct recognition rate and achieving a real-time system are the two goals of this research, and results in the later chapters demonstrate that both goals are met. | en |
dc.description.provenance | Made available in DSpace on 2021-06-08T06:22:20Z (GMT). No. of bitstreams: 1 ntu-95-R93921075-1.pdf: 1438074 bytes, checksum: cb62a36b2eb899e011e29bfe9378bc90 (MD5) Previous issue date: 2006 | en |
dc.description.tableofcontents | 摘要 I
Abstract II
Contents III
List of Figures V
List of Tables VII
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Definition 2
1.3 Previous Methods 3
1.4 Our Approach 5
1.5 Thesis Overview 7
Chapter 2 Human Activities Identification 9
2.1 Introduction 9
2.2 Human Motion Analysis Approaches 9
2.2.1 Model Based Approaches 10
2.2.2 Non-Model Based Approaches 15
2.3 Human Activities Recognition Approaches 17
2.3.1 State-Space Approaches 18
2.3.2 Template Matching Approaches 21
2.3.3 Behavior Recognition Schemes with Domain Knowledge 23
2.4 Summary 27
Chapter 3 Design of Human Posture Recognition 29
3.1 Introduction 29
3.2 System Architecture 29
3.3 Definition of Postures and Parameters 31
3.3.1 Definition of Postures 31
3.3.2 Definition of Parameters 35
3.4 Procedure of System 46
3.5 Summary 50
Chapter 4 Rules Statement 51
4.1 Introduction 51
4.2 Rules of Posture Recognition 51
4.3 Summary 70
Chapter 5 71
5.1 Introduction 71
5.2 Research Environment 71
5.3 Postures Recognition Test 73
5.4 Summary 77
Chapter 6 Conclusion 79
Reference 81 | |
dc.language.iso | en | |
dc.title | 利用簡易規則達成快速人體姿態鑑別 | zh_TW |
dc.title | Fast Human Posture Recognition by Heuristic Rules | en |
dc.type | Thesis | |
dc.date.schoolyear | 94-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 顏家鈺(Jia-Yush Yen),傅立成(Li-Chen Fu),林進燈(Chin-Teng Lin) | |
dc.subject.keyword | 人體姿態辨識, 電腦視覺, 居家看護系統 | zh_TW |
dc.subject.keyword | human posture recognition, computer vision, home-care system | en |
dc.relation.page | 87 | |
dc.rights.note | Not authorized | |
dc.date.accepted | 2006-07-31 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
Appears in Collections: | Department of Electrical Engineering
Files in This Item:
File | Size | Format |
---|---|---|
ntu-95-1.pdf (currently not authorized for public access) | 1.4 MB | Adobe PDF |
Items in this system are protected by copyright, with all rights reserved, unless otherwise indicated.
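The abstract describes a pipeline of background subtraction followed by parameter-and-rule posture classification. A minimal sketch of that idea is shown below; note that the feature (bounding-box aspect ratio) and the threshold values are hypothetical illustrations, not the actual parameters or rules defined in the thesis.

```python
import numpy as np

def extract_silhouette(frame, background, thresh=30):
    # Simple background subtraction: pixels differing from the
    # reference background by more than `thresh` are foreground.
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

def classify_posture(silhouette):
    # Bounding-box aspect ratio (height / width) of the silhouette,
    # one example of the kind of simple shape parameter a rule set
    # can use to infer posture.
    ys, xs = np.nonzero(silhouette)
    if ys.size == 0:
        return "no person"
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    ratio = height / width
    # Illustrative rule thresholds (hypothetical values):
    if ratio > 1.5:
        return "standing"
    if ratio > 0.8:
        return "sitting"
    return "lying"

# Toy example: a tall 60x20 bright region inside a 100x100 frame.
background = np.zeros((100, 100), dtype=np.uint8)
frame = background.copy()
frame[20:80, 40:60] = 255          # person-shaped foreground region
sil = extract_silhouette(frame, background)
print(classify_posture(sil))        # prints "standing" (ratio = 3.0)
```

Because each rule is a handful of comparisons rather than a model fit or template search, this style of classifier runs in constant time per frame, which matches the real-time goal stated in the abstract.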