Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/41095
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 貝蘇章(Soo-Chang Pei) | |
dc.contributor.author | Tzu-Ling Kuo | en |
dc.contributor.author | 郭姿玲 | zh_TW |
dc.date.accessioned | 2021-06-14T17:16:36Z | - |
dc.date.available | 2011-08-05 | |
dc.date.copyright | 2008-08-05 | |
dc.date.issued | 2008 | |
dc.date.submitted | 2008-07-25 | |
dc.identifier.citation | REFERENCE
[1] L. Itti and C. Koch, "Computational Modeling of Visual Attention," Nature Reviews Neuroscience, Vol. 2, 2001, pp. 194-203.
[2] W. X. Schneider, "An Introduction to 'Mechanisms of Visual Attention: A Cognitive Neuroscience Perspective'," Visual Cognition, 1998, pp. 1-8.
[3] L. Itti, "Visual Attention," in: The Handbook of Brain Theory and Neural Networks (M. A. Arbib, Ed.), MIT Press, Jan. 2003, pp. 1196-1201.
[4] C. Koch and S. Ullman, "Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry," Human Neurobiology, Vol. 4, 1985, pp. 219-227.
[5] A. M. Treisman and G. Gelade, "A Feature-Integration Theory of Attention," Cognitive Psychology, 12(1), 1980, pp. 97-136.
[6] J. Wolfe, "Visual Search: A Review," in: H. Pashler (Ed.), Attention, UK: University College London Press.
[7] B. C. Ko and J.-Y. Nam, "Object-of-Interest Image Segmentation Based on Human Attention and Semantic Region Clustering," Journal of the Optical Society of America A, OSA, 2006, pp. 2462-2470.
[8] L. Itti, C. Koch, and E. Niebur, "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis," IEEE Trans. Pattern Anal. Mach. Intell., Vol. 20, 1998, pp. 1254-1259.
[9] N. V. Patel and I. K. Sethi, "Video Shot Detection and Characterization for Video Databases," Pattern Recognit., 30(4), 1997, pp. 583-592.
[10] P. J. Burt and E. H. Adelson, "The Laplacian Pyramid as a Compact Image Code," IEEE Trans. Commun., Vol. 31, 1983, pp. 532-540.
[11] K. Rapantzikos, N. Tsapatsoulis, Y. Avrithis, and S. Kollias, "Bottom-up Spatiotemporal Visual Attention Model for Video Analysis," IET Image Process., 1(2), 2007, pp. 237-248.
[12] H. L. Kennedy, "Detecting and Tracking Moving Objects in Sequences of Color Images," Acoustics, Speech and Signal Processing, Vol. 1, 2007, pp. 1197-1200.
[13] L. J. Bain and M. Engelhardt, Introduction to Probability and Mathematical Statistics, 2nd Ed., California: Duxbury Press, 1992.
[14] Y. Jiang and D. Xu, "A Visual Attention Model Based on DCT Domain," IEEE TENCON 2005, 2005, pp. 1-5.
[15] L. Itti, "Automatic Foveation for Video Compression Using a Neurobiological Model of Visual Attention," IEEE Transactions on Image Processing, Vol. 13, No. 10, pp. 1304-1318, Oct. 2004.
[16] L. Itti and C. Koch, "Computational Modeling of Visual Attention," Nature Reviews Neuroscience, 2(3), 2001, pp. 194-203.
[17] U. Rutishauser, D. Walther, C. Koch, and P. Perona, "Is Bottom-up Attention Useful for Object Recognition?," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'04), Vol. 2, 2004.
[18] D. C. Schleher (Ed.), Automatic Detection and Radar Data Processing, Artech House, Dedham, MA, 1980.
[19] L. J. Bain and M. Engelhardt, Introduction to Probability and Mathematical Statistics, 2nd Ed., California: Duxbury Press, 1992.
[20] T. J. Patterson, D. M. Chabries, and R. W. Christiansen, "Detection Algorithms for Image Sequence Analysis," IEEE Trans. Acoustics, Speech, and Signal Process., Vol. 37, No. 9, pp. 1454-1458, Sep. 1989.
[21] R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, "Image Change Detection Algorithms: A Systematic Survey," IEEE Trans. Image Process., Vol. 14, No. 3, pp. 294-307, Mar. 2005.
[22] S. Kim, S. Park, and M. Kim, "Central Object Extraction for Object-Based Image Retrieval," in Proceedings of the International Conference on Image and Video Retrieval (Association for Computing Machinery, 2003), pp. 39-49.
[23] W. Wang, Y. Song, and A. Zhang, "Semantics Retrieval by Region Saliency," in Proceedings of the International Conference on Image and Video Retrieval (Association for Computing Machinery, 2002), pp. 29-37.
[24] B. C. Ko and H. Byun, "FRIP: A Region-Based Image Retrieval Tool Using Automatic Image Segmentation and Stepwise Boolean AND Matching," IEEE Trans. Multimedia, Vol. 7, 2005, pp. 105-113.
[25] E. Loupias and N. Sebe, "Wavelet-Based Salient Points for Image Retrieval," Research Report RR 99.11, RFV-INSA Lyon, 1999.
[26] http://en.wikipedia.org/wiki/RGB | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/41095 | - |
dc.description.abstract | The human visual attention system has been a popular topic in recent years. It uses mathematical algorithms to compute the specific information embedded in images or videos; such information broadly refers to what the neuronal structures and behaviors of the early primate visual system receive and respond to. The theory can be widely applied to the design of robot behavior and artificial intelligence. Many theories have been proposed, and the literature contains many applications that build algorithms on visual attention models, such as object segmentation in images, object detection in videos, and object recognition.
This thesis aims to simulate human vision with computational models to achieve object detection. A visual attention model extracts intended features from images or videos and locates salient points or salient regions, which broadly refer to the places that intuitively draw a viewer's attention when looking at an image or a video. Many algorithms already exist for computing such salient points or regions. Based on the concept of saliency, we implement two visual attention models, represented as saliency maps or saliency volumes. We then combine the visual attention model with statistical concepts to design a method that detects moving objects in DCT-transformed video data and represents them with saliency maps. | zh_TW |
dc.description.abstract | The human visual attention system has been a popular topic in recent years. It addresses the computational implementation of intentional attention in human vision and is widely applied in the design of robots and artificial intelligence. Implementations of object segmentation, object recognition, and object detection built on it have been proposed with increasing frequency.
In this thesis, we present two methods and implementations that simulate the human visual attention model. The output is denoted as saliency, meaning the place that human eyes emphasize most when first looking at an image. We present the algorithms widely used as the basis for building attention models for images, and we further introduce a new concept of salient model representation for videos. Detecting moving objects in videos has been discussed frequently in recent years, and real-time algorithms for it remain a developing and popular topic. The thesis therefore also presents a concept for real-time moving object detection in the time domain, and a similar concept applied to DCT-domain data in videos. | en |
dc.description.provenance | Made available in DSpace on 2021-06-14T17:16:36Z (GMT). No. of bitstreams: 1 ntu-97-R95942088-1.pdf: 3386007 bytes, checksum: a679b66605c9b3e2e128e0be8d02ca91 (MD5) Previous issue date: 2008 | en |
dc.description.tableofcontents | CONTENTS
Certification by the Oral Defense Committee
Acknowledgements i
Chinese Abstract iii
ABSTRACT v
CONTENTS vii
LIST OF FIGURES xi
LIST OF TABLES xiii
Chapter 1 Introduction 1
Chapter 2 Visual Attention Model 3
2.1 Introduction 3
2.2 Bottom-up Attention Model 4
2.3 Top-down Attention Model 6
Chapter 3 Bottom-up Visual Attention Model for Object-of-Interest Image 7
3.1 Introduction 7
3.2 Saliency Map Generation 7
3.2.1 Color Model Transformation and Down-sample 9
3.2.2 Feature Map Generation 11
3.2.3 Saliency Map Generation 12
3.3 Experiment 14
3.4 Conclusion 19
Chapter 4 Bottom-up Spatiotemporal Visual Attention Model for Video Analysis 21
4.1 Introduction 21
4.2 Video Pre-processing 22
4.2.1 Shot Detection 22
4.2.2 Video Volume Generation 23
4.2.3 Simplification/Filtering 24
4.3 Feature Volume Generation 26
4.3.1 Gaussian Pyramid 26
4.3.2 Intensity and Color Volume Generation 27
4.3.3 2D and 3D Orientation Volume 28
4.4 Saliency Volume Generation 31
4.4.1 Center-surround Difference 31
4.4.2 Normalization 32
4.4.3 Conspicuity Volume Generation 33
4.4.4 Saliency Volume Generation 34
4.5 Experiment Result 34
4.6 Conclusion 36
Chapter 5 Moving Object Detection 37
5.1 Introduction 37
5.2 Static Model in Time Domain 38
5.2.1 Color Coordinate Transformation 41
5.2.2 Memory Confirmation 41
5.2.3 Parameter Calculation 41
5.2.4 Detecting Filter 42
5.2.5 Another Method for the Detection 43
5.3 Static Model in DCT Domain 45
5.3.1 The DCT Transform 46
5.3.2 The Revised Algorithm 49
5.4 Experiment Result 52
5.4.1 In the Temporal Domain 53
5.4.2 In DCT Domain 57
5.5 Conclusion 59
Chapter 6 Conclusion and Future Work 61
6.1 Conclusion 61
6.2 Future Work 61
REFERENCE 63 | |
dc.language.iso | en | |
dc.title | 基於視覺注意力模型之物體偵測演算法 | zh_TW |
dc.title | Object Detection Methods Based on the Visual Attention Model | en |
dc.type | Thesis | |
dc.date.schoolyear | 96-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 鍾國亮(Kuo-Liang Chung),鄭伯順(Po-Shun Cheng),馮世邁(See-May Phoong) | |
dc.subject.keyword | visual attention, saliency point, saliency map, object detection, | zh_TW |
dc.subject.keyword | visual attention, saliency point, saliency map, object detection, | en |
dc.relation.page | 65 | |
dc.rights.note | Licensed for a fee | |
dc.date.accepted | 2008-07-28 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Communication Engineering | zh_TW |
Appears in Collections: | Graduate Institute of Communication Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-97-1.pdf (access currently restricted) | 3.31 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.