Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/30455
Full metadata record
DC Field / Value / Language
dc.contributor.advisor: 貝蘇章 (Soo-Chang Pei)
dc.contributor.author: Chin-Yuan Chen [en]
dc.contributor.author: 陳勁元 [zh_TW]
dc.date.accessioned: 2021-06-13T02:04:14Z
dc.date.available: 2007-07-16
dc.date.copyright: 2007-07-16
dc.date.issued: 2007
dc.date.submitted: 2007-07-03
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/30455
dc.description.abstract: Video is readily available everywhere today. One key to understanding a video automatically lies in analyzing the motion within it. Over the past two decades, there has been a great deal of research on this topic.
First, the general procedure of motion analysis in previous work is reviewed, including motion detection, motion segmentation, and object classification. Next, some applications related to motion analysis are mentioned, such as video surveillance and personal identification. The processing techniques associated with these applications, such as object tracking and behavior understanding, are organized and presented.
Then, a method that can directly understand the behavior contained in a video is introduced. We focus on using optical flow to achieve action recognition in video. The method uses the volume formed by the frames over time to find the correlation between the videos under test. All processing starts from the smallest basic unit, called a space-time patch. By clarifying the various properties of these space-time patches, the correlation between large test videos composed of them can be found. Experimental results show that the method is effective. Its basic concept is direct and simple, and it is unaffected by the appearance of the moving objects in the videos under test, but the method's high computational complexity must be carefully considered when conducting experiments. [zh_TW]
dc.description.abstract: Videos are all around us. One of the keys to understanding a video automatically is to analyze the motions within it. In the past two decades, much research has been conducted on this topic.
First, the general procedure of motion analysis is reviewed in the related work, including motion detection, motion segmentation, and object classification. Then, some applications of motion analysis are mentioned, such as video surveillance and personal identification. Techniques related to these applications, such as object tracking and behavior understanding, are summarized.
Then, a direct way to understand the behavior in a video is introduced. We focus on an optical-flow-based approach to recognizing actions in video sequences. This method uses space-time volumes to perform correlation between templates. All processing in this approach starts with the basic unit, the space-time patch. With the well-developed properties of these space-time patches, the correlation between two large templates can be computed. Experimental results demonstrate the validity of this method. The concept of this method is direct and simple, and it is insensitive to the appearance of the moving object, but experiments must be designed with its high computational complexity in mind. [en]
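The correlation machinery the abstract summarizes comes from Shechtman and Irani's space-time behavior-based correlation, on which Chapter 3 builds (see Sections 3.1-3.4 in the table of contents below). As a rough illustration of the patch-level step, the Python sketch below forms the Gram matrix of space-time intensity gradients for a single ST-patch and evaluates a continuous rank-increase measure Δr; the helper names and the determinant-ratio form of Δr are assumptions for illustration, not code from the thesis.

```python
# A minimal sketch, assuming the formulation of Shechtman and Irani's
# space-time behavior-based correlation (CVPR 2005). Function names and
# the determinant-ratio form of dr are illustrative assumptions.
import numpy as np

def st_gram_matrix(patch):
    """3x3 Gram matrix M of space-time gradients for one ST-patch.

    patch: 3-D array of grey-level intensities, indexed (t, y, x).
    M = sum over all pixels of g g^T, where g = (Px, Py, Pt).
    """
    pt, py, px = np.gradient(patch.astype(float))       # gradients along t, y, x
    g = np.stack([px.ravel(), py.ravel(), pt.ravel()])  # shape (3, N)
    return g @ g.T

def rank_increase(M, eps=1e-8):
    """Continuous rank-increase measure dr = det(M) / det(M_spatial).

    M_spatial is the upper-left 2x2 minor of M (spatial gradients only).
    dr stays near 0 when one coherent motion explains the whole patch,
    and grows when no single flow vector can account for it.
    """
    return np.linalg.det(M) / (np.linalg.det(M[:2, :2]) + eps)
```

Intuitively, a rank increase from the spatial minor to the full matrix signals motion that no single local flow vector can explain, which is what the consistency test between two ST-patches exploits.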
dc.description.provenance: Made available in DSpace on 2021-06-13T02:04:14Z (GMT). No. of bitstreams: 1. ntu-96-R94942039-1.pdf: 1545587 bytes, checksum: 6293ac8897200c1b737afbebec3bdcc1 (MD5). Previous issue date: 2007. [en]
dc.description.tableofcontents:
Chapter 1 Introduction 1
  1.1 In the age of media explosion 1
  1.2 Aimed issue 2
Chapter 2 Related works 3
  2.1 Motion detection 3
    2.1.1 Background update 3
  2.2 Motion segmentation 4
    2.2.1 Background subtraction 4
    2.2.2 Temporal difference 5
    2.2.3 Optical flow 5
    2.2.4 Hybrid method 5
  2.3 Object classification 5
    2.3.1 Shape-based classification 6
    2.3.2 Motion-based classification 6
    2.3.3 Other way 7
  2.4 Recent popular application 7
    2.4.1 Object tracking 8
      2.4.1.1 Region-based tracking 9
      2.4.1.2 Active contour-based tracking 10
      2.4.1.3 Feature-based tracking 11
      2.4.1.4 Model-based tracking 13
    2.4.2 Understanding and description of behaviors 21
    2.4.3 Personal identification for visual surveillance 26
Chapter 3 Motion Correlation Based on Space-Time Optical-flow 34
  3.1 Properties of a space-time intensity patch 35
  3.2 Consistency between two ST-patches 37
  3.3 Handling spatial-temporal ambiguities 39
  3.4 Continuous rank-increase measure Δr 45
  3.5 Correlating space-time video templates 47
  3.6 Considerations for experiment 48
  3.7 Experiment results 52
  3.8 Conclusion 58
Chapter 4 Alternative Method for Motion Correlation Based on Space-Time Optical-flow 60
  4.1 Properties of a space-time intensity patch 60
  4.2 Consistency between two ST-patches 61
  4.3 Correlating space-time video templates 62
  4.4 Comparisons with the rank-increase method 63
  4.5 Experimental results 64
  4.6 Conclusion 70
Chapter 5 Future work 72
Reference 74
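To make the template-correlation step named in Sections 3.5 and 4.3 concrete, here is a hedged sketch of the brute-force loop: the action template slides over the test video, and at each space-time location the patch-wise motion-consistency scores are averaged into a correlation volume. It reuses the illustrative st_gram_matrix / rank_increase helpers sketched after the abstract above; the joint-matrix consistency measure, patch size, and step are assumptions, not the thesis's exact settings.

```python
# A hedged sketch of correlating a space-time template against a video,
# under the assumed Shechtman-Irani-style patch consistency. Patch size
# and step are illustrative; they are not the thesis's settings.
import numpy as np

def patch_consistency(p1, p2, eps=1e-8):
    """Motion consistency of two ST-patches via the rank increase of
    their joint gradient matrix M12 = M1 + M2."""
    M1, M2 = st_gram_matrix(p1), st_gram_matrix(p2)
    m12 = rank_increase(M1 + M2) / (min(rank_increase(M1),
                                        rank_increase(M2)) + eps)
    return 1.0 / (m12 + eps)  # large when both patches share one motion

def correlate_template(video, template, patch=7, step=4):
    """Brute-force correlation volume C(t, y, x) of template vs. video.

    video, template: 3-D intensity arrays (t, y, x); every template
    dimension is assumed to be at least `patch`.
    """
    T, Y, X = template.shape
    C = np.zeros((video.shape[0] - T + 1,
                  video.shape[1] - Y + 1,
                  video.shape[2] - X + 1))
    for t in range(C.shape[0]):
        for y in range(C.shape[1]):
            for x in range(C.shape[2]):
                win = video[t:t + T, y:y + Y, x:x + X]
                scores = [patch_consistency(
                              template[i:i + patch, j:j + patch, k:k + patch],
                              win[i:i + patch, j:j + patch, k:k + patch])
                          for i in range(0, T - patch + 1, step)
                          for j in range(0, Y - patch + 1, step)
                          for k in range(0, X - patch + 1, step)]
                C[t, y, x] = np.mean(scores)
    return C  # peaks mark locations whose motion matches the template
```

The triple loop makes explicit the cost both abstracts warn about: the work scales with the volume of the video times the number of patches per template, which is why experiments must be designed around the method's computational complexity.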
dc.language.iso: en
dc.subject: 利用時空光流比對來做動作辨識 [zh_TW]
dc.subject: Action Recognition [en]
dc.title: 利用時空光流比對來做動作辨識 [zh_TW]
dc.title: Action Recognition Using Space-Time Optical-flow Matching [en]
dc.type: Thesis
dc.date.schoolyear: 95-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 鐘國亮, 陳永昌, 鄭伯順
dc.subject.keyword: 利用時空光流比對來做動作辨識 [zh_TW]
dc.subject.keyword: Action Recognition [en]
dc.relation.page: 77
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2007-07-04
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering) [zh_TW]
Appears in Collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in This Item:
File: ntu-96-1.pdf (restricted; not authorized for public access)
Size: 1.51 MB
Format: Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
