Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/45436
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 簡韶逸(Shao-Yi Chien) | |
dc.contributor.author | Yu-Hsiang Tseng | en |
dc.contributor.author | 曾鈺翔 | zh_TW |
dc.date.accessioned | 2021-06-15T04:20:04Z | - |
dc.date.available | 2009-12-29 | |
dc.date.copyright | 2009-12-29 | |
dc.date.issued | 2009 | |
dc.date.submitted | 2009-10-26 | |
dc.identifier.citation | [1] D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” IEEE Trans. Pattern Anal. Machine Intell., vol. 26, no. 11, pp. 1531–1536, Nov. 2004.
[2] Z. Yin and R. Collins, “Spatial divide and conquer with motion cues for tracking through clutter,” in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, June 2006, vol. 1, pp. 570–577.
[3] A. Yilmaz and X. Li, “Contour-based object tracking with occlusion handling in video acquired using mobile cameras,” IEEE Trans. Pattern Anal. Machine Intell., vol. 26, no. 11, pp. 1531–1536, Nov. 2004.
[4] D. Chen and J. Yang, “Robust object tracking via online dynamic spatial bias appearance models,” IEEE Trans. Pattern Anal. Machine Intell., vol. 29, no. 12, pp. 2157–2169, Dec. 2007.
[5] Y.-T. Chen, C.-S. Chen, and Y.-P. Hung, “Integration of background modeling and object tracking,” in Proc. IEEE Int. Conf. Multimedia and Expo, July 2006, pp. 757–760.
[6] Z. Wang, X. Yang, Y. Xu, and S. Yu, “Camshift guided particle filter for visual tracking,” in Proc. IEEE Signal Processing Systems Workshop, Oct. 2007, pp. 301–306.
[7] F. Liu, Q. Liu, and H. Lu, “Robust color-based tracking,” in Proc. IEEE Signal Processing Systems Workshop, Oct. 2007, pp. 301–306.
[8] M. Jaward, L. Mihaylova, N. Canagarajah, and D. Bull, “Multiple object tracking using particle filters,” in Proc. IEEE Conf. Aerospace, 2006.
[9] E. Maggio, F. Smeraldi, and A. Cavallaro, “Adaptive multifeature tracking in a particle filtering framework,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 10, pp. 1348–1359, Oct. 2007.
[10] S. K. Zhou and R. Chellappa, “Visual tracking and recognition using appearance-adaptive models in particle filters,” IEEE Trans. Image Processing, vol. 13, no. 11, Nov. 2004.
[11] P. Pan and D. Schonfeld, “Dynamic proposal variance and optimal particle allocation in particle filtering for video tracking,” IEEE Trans. Circuits Syst. Video Technol., vol. 18, pp. 1268–1279, Sept. 2008.
[12] H. Wang, D. Suter, K. Schindler, and C. Shen, “Adaptive object tracking based on an effective appearance filter,” IEEE Trans. Pattern Anal. Machine Intell., vol. 29, no. 9, pp. 1661–1667, Sept. 2007.
[13] H. Jiang, S. Fels, and J. J. Little, “Optimizing multiple object tracking and best view video synthesis,” IEEE Trans. Multimedia, vol. 10, pp. 997–1012, Oct. 2008.
[14] H. Jiang, S. Fels, and J. J. Little, “A linear programming approach for multiple object tracking,” in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, June 2008, pp. 1–8.
[15] T.-H. Tsai, Y.-C. Liu, D.-M. Chen, and C.-Y. Lin, “Fast occluded object tracking technique with distance evaluation,” in Proc. 9th Joint Conf. Information Sciences, Oct. 2006.
[16] W. Hu, X. Zhou, M. Hu, and S. Maybank, “Occlusion reasoning for tracking multiple people,” IEEE Trans. Circuits Syst. Video Technol., vol. 19, no. 1, pp. 114–121, Jan. 2009.
[17] W.-K. Chan and S.-Y. Chien, “Real-time memory-efficient video object segmentation in dynamic background with multi-background registration technique,” in Proc. IEEE Multimedia Signal Processing Workshop, July 2007.
[18] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, July 1999, pp. 246–252.
[19] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Computer Vision, 2004.
[20] T.-W. Chen, W.-K. Chan, and S.-Y. Chien, “Efficient face detection with segmentation and feature-based face scoring in surveillance systems,” in Proc. IEEE Multimedia Signal Processing Workshop, Oct. 2007, pp. 215–218.
[21] A. A. Abbo, R. P. Kleihorst, V. Choudhary, L. Sevat, P. Wielage, S. Mouy, B. Vermeulen, and M. Heijligers, “Xetal-II: A 107 GOPS, 600 mW massively parallel processor for video scene analysis,” IEEE J. Solid-State Circuits, vol. 43, no. 1, pp. 192–201, Jan. 2008.
[22] H. Noda et al., “The design and implementation of the massively parallel processor based on the matrix architecture,” IEEE J. Solid-State Circuits, vol. 42, no. 1, pp. 804–812, Apr. 2007.
[23] M. Nakajima et al., “A 40 GOPS 250 mW massively parallel processor based on matrix architecture,” in IEEE Int. Solid-State Circuits Conf. Digest of Technical Papers, 2006, pp. 410–411.
[24] K. Mizumoto et al., “A multi matrix-processor core architecture for real-time image processing SoC,” in Proc. IEEE Asian Solid-State Circuits Conf., Nov. 2007.
[25] S. Kaneko et al., “A 600-MHz single-chip multiprocessor with 4.8-GB/s internal shared pipelined bus and 512-kB internal memory,” IEEE J. Solid-State Circuits, vol. 39, no. 1, pp. 184–193, Jan. 2004.
[26] S. Kyo, T. Koga, and S. Okazaki, “IMAP-CE: A 51.2 GOPS video rate image processor with 128 VLIW processing elements,” in Proc. IEEE Int. Conf. Image Processing, Sept. 2001, vol. 3, pp. 294–297.
[27] S. Kyo, T. Koga, and S. Okazaki, “A 51.2-GOPS scalable video recognition processor for intelligent cruise control based on a linear array of 128 four-way VLIW processing elements,” IEEE J. Solid-State Circuits, vol. 38, no. 11, pp. 1992–2000, Nov. 2003.
[28] S. Kyo, S. Okazaki, and T. Arai, “An integrated memory array processor for embedded image recognition systems,” IEEE Trans. Comput., vol. 56, no. 5, pp. 622–634, May 2007.
[29] B. K. Khailany, T. Williams, J. Lin, E. P. Long, M. Rygh, D. W. Tovey, and W. J. Dally, “A programmable 512 GOPS stream processor for signal, image, and video processing,” IEEE J. Solid-State Circuits, vol. 43, no. 1, pp. 202–213, Jan. 2008.
[30] B. Serebrin, J. D. Owens, C. H. Chen, S. P. Crago, U. J. Kapasi, B. Khailany, P. Mattson, J. Namkoong, S. Rixner, and W. J. Dally, “A stream processor development platform,” in Proc. IEEE Int. Conf. Computer Design: VLSI in Computers and Processors, Sept. 2005, pp. 303–308.
[31] U. Kapasi, W. J. Dally, S. Rixner, J. D. Owens, and B. Khailany, “The Imagine stream processor,” in Proc. IEEE Int. Conf. Computer Design, Sept. 2002, pp. 282–288.
[32] S. Ciricescu, R. Essick, B. Lucas, P. May, K. Moat, J. Norris, M. Schuette, and A. Saidi, “The reconfigurable streaming vector processor (RSVP™),” in Proc. 36th Int. Symp. Microarchitecture, 2003.
[33] S. Chiricescu, M. Schuette, R. Essick, B. Lucas, P. May, K. Moat, and J. Norris, “RSVP™: An automotive vector processor,” in Proc. Intelligent Vehicles Symp., 2004.
[34] S. M. Chai, S. Chiricescu, R. Essick, B. Lucas, P. May, K. Moat, J. M. Norris, and M. Schuette, “Streaming processors for next-generation mobile imaging applications,” IEEE Communications Magazine, pp. 81–89, Dec. 2005.
[35] S. Chiricescu, S. M. Chai, K. Moat, B. Lucas, P. May, J. Norris, R. Essick, and M. Schuette, “RSVP II: A next generation automotive vector processor,” in Proc. Intelligent Vehicles Symp., 2005, pp. 563–568.
[36] M. B. Taylor et al., “Evaluation of the Raw microprocessor: An exposed-wire-delay architecture for ILP and streams,” in Proc. 31st Annual Int. Symp. Computer Architecture, June 2004, pp. 2–13.
[37] http://en.wikipedia.org/wiki/Particle_filter
[38] M. Muhlich, “Particle filters: an overview,” in Proc. Filter-Workshop, 2003.
[39] A. Bhattacharyya, “On a measure of divergence between two statistical populations defined by their probability distributions,” Bulletin of the Calcutta Mathematical Society, vol. 35, pp. 99–109, 1943.
[40] http://en.wikipedia.org/wiki/Bhattacharyya_distance
[41] Y. Rubner, L. Guibas, and C. Tomasi, “The earth mover’s distance, multi-dimensional scaling, and color-based image retrieval,” in Proc. ARPA Image Understanding Workshop, May 1997.
[42] Y. Rubner, C. Tomasi, and L. J. Guibas, “The earth mover’s distance as a metric for image retrieval,” Int. J. Computer Vision, pp. 99–121, 2000.
[43] H. Ling and K. Okada, “Diffusion distance for histogram comparison,” in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, 2006.
[44] H. Ling and K. Okada, “An efficient earth mover’s distance algorithm for robust histogram comparison,” IEEE Trans. Pattern Anal. Machine Intell., vol. 29, no. 5, pp. 840–863, May 2007.
[45] http://www.arm.com
[46] R. Andraka, “A survey of CORDIC algorithms for FPGA based computers,” in Proc. ACM/SIGDA Sixth Int. Symp. Field Programmable Gate Arrays (FPGA-98), Feb. 1998, pp. 192–200.
[47] http://www.cadence.com
[48] http://www.springsoft.com
[49] http://www.synopsys.com
[50] http://www.mentor.com
[51] http://www.objectvideo.com
[52] F. Kristensen, H. Hedberg, H. Jiang, P. Nilsson, and V. Öwall, “An embedded real-time surveillance system: Implementation and evaluation,” J. Signal Processing Systems, vol. 52, no. 1, pp. 75–94, July 2008. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/45436 | - |
dc.description.abstract | The use of surveillance systems has grown substantially in recent years, so intelligent functions in surveillance, such as object tracking, foreground segmentation, and face detection, are becoming increasingly important; without them, extracting the desired information from lengthy surveillance footage costs considerable time and effort.
Among tracking algorithms, we observe that tracking tends to fail when an object passes through illumination changes or through regions whose color resembles the object. We therefore improve the existing particle filter framework and design a tracking algorithm that handles both problems. In addition, we design a coprocessor for real-time computation that supports and accelerates algorithms commonly used in intelligent surveillance systems, including our proposed tracking algorithm. We employ sub-word level parallelism, data streaming, and hardware sharing techniques to address the challenges that modern multimedia processors commonly face: high throughput requirements, high bandwidth requirements, programmability, and low cost. Our hardware supports basic image operations such as histogram accumulation and CORDIC, and based on this design it can accelerate algorithms such as object tracking, foreground segmentation, face detection, and behavior recognition. Moreover, the coprocessor is reconfigurable and can serve as a test platform for new algorithms. The prototype chip is fabricated in UMC 90 nm technology with an area of 3.26 × 3.26 mm^2, operates at 125 MHz, and consumes at most 33.3 mW. It can track up to 10 objects simultaneously in 640 × 480, 30 fps, YUV 4:2:0 video input, and with sub-word level parallelism it saves about 60.28% of on-chip memory. | zh_TW |
dc.description.abstract | Surveillance systems are widely used today, and the number of deployed cameras keeps increasing. Intelligent functions in surveillance, among which tracking and segmentation are the most widely used, are therefore increasingly important for helping users inspect video content. In this work, we address the problems that arise when tracking video objects under changing lighting conditions and video objects whose color resembles the background. We propose an enhanced particle filter framework that handles both kinds of problems. Moreover, to achieve real-time operation, we design a hardware coprocessor that supports most of the operations used in intelligent surveillance systems, including those used in our proposed algorithm.
To overcome the typical hardware design challenges in accelerating vision algorithms, such as high throughput requirements, high bandwidth requirements, programmability, and low cost, we employ sub-word level parallelism, streaming-based processing, and hardware sharing techniques. The design supports basic image processing operations, such as histogram accumulation, CORDIC, and window operations, as well as specific operations such as those used in video object segmentation. Based on this hardware design, many applications can be accelerated, including tracking, segmentation, face detection, feature detection and description, and motion analysis. The coprocessor is reconfigurable and can be used to test newly developed algorithms. We also implement the coprocessor as a chip with a standard cell based design flow. The prototype chip is fabricated in UMC 90 nm technology; the chip size is 3.260 x 3.260 mm^2. The external bandwidth is estimated, and the chip supports video object segmentation while simultaneously tracking 10 objects in a 640 x 480, 30 fps, 4:2:0 YUV color sequence at a 125 MHz clock frequency and 33.3 mW power consumption. Moreover, with sub-word level parallelism, the on-chip memory saving is 60.28%. | en |
dc.description.provenance | Made available in DSpace on 2021-06-15T04:20:04Z (GMT). No. of bitstreams: 1 ntu-98-R96943030-1.pdf: 2064729 bytes, checksum: 03803839b8f03097289e31f74c738320 (MD5) Previous issue date: 2009 | en |
dc.description.tableofcontents | Abstract ix
1 Introduction 1
1.1 Introduction to Intelligent Surveillance Systems 1
1.2 Tracking Algorithm and Challenges 2
1.3 Media Processors and Hardware Design Challenges 4
1.4 Thesis Organization 9
2 Proposed Tracking Algorithm 11
2.1 Particle Filter 11
2.2 Tracking Algorithm Overview 15
2.2.1 Design Challenge 15
2.2.2 Tracking Algorithm Overview 15
2.3 Segmentation 16
2.4 Connected Component Labeling 18
2.5 Spread Particles 19
2.6 Histogram Accumulation 20
2.7 Histogram Comparison Algorithm 22
2.7.1 Cross-bin and Bin-by-bin Computation 23
2.7.2 Comparison 26
2.8 Particle Weight Computation 31
2.9 Object State Estimation 32
2.10 Target Histogram Update 32
2.11 New Tracker Initialization 33
3 Proposed Hardware Architecture 37
3.1 Architecture Overview 37
3.1.1 Design Target 37
3.1.2 Design Challenge 37
3.1.3 Proposed Design Techniques 38
3.1.4 Overview 40
3.1.5 High Memory and Bandwidth Requirements 41
3.2 Reconfigurable Memory Array 43
3.3 Histogram Accumulation RSPE 46
3.3.1 SIFT Converter 48
3.4 CORDIC 49
3.5 Distance Compute RSPE 51
3.6 Object Info RSPE 52
3.7 Reconfigurable Window Register RSPE 55
3.8 ALU RSPE 58
3.8.1 Connected Component Labeling 58
3.9 MinMax RSPE 60
3.10 Segmentation RSPE 62
4 Experiment Result 65
4.1 Algorithm Experiment Result and Analysis 65
4.2 Hardware Implementation Result and Analysis 70
4.2.1 Design Flow 70
4.2.2 Chip Layout and Specification 73
4.2.3 Summary and Comparison 75
5 Conclusion 79
References 81 | |
dc.language.iso | en | |
dc.title | Algorithm and Hardware Architecture Design of an Object Tracking Platform in Intelligent Surveillance Systems | zh_TW |
dc.title | Algorithm and Hardware Architecture Design of Video Object Tracking Framework for Smart Surveillance Network | en |
dc.type | Thesis | |
dc.date.schoolyear | 98-1 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 蔡宗漢,賴尚宏,林嘉文,劉志尉 | |
dc.subject.keyword | Object tracking, reconfigurable hardware architecture design, coprocessor, intelligent surveillance system, particle filter | zh_TW |
dc.subject.keyword | Object tracking, reconfigurable hardware architecture design, intelligent surveillance system, coprocessor, particle filter | en |
dc.relation.page | 86 | |
dc.rights.note | Authorized with compensation | |
dc.date.accepted | 2009-10-26 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Electronics Engineering | zh_TW |
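The abstracts above describe a color-histogram particle filter in which each particle's candidate histogram is compared against the target's, for which the Bhattacharyya coefficient (refs. [1], [39]) is a standard choice. The following is a minimal sketch of that weighting step only, assuming a Gaussian likelihood on the Bhattacharyya distance and using hypothetical helper names; it is not the thesis's actual implementation.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Accumulate a normalized color histogram over an image patch.

    `patch` is an (H, W) array of quantized color indices in [0, bins).
    """
    hist = np.bincount(patch.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

def bhattacharyya_coefficient(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return np.sum(np.sqrt(p * q))

def particle_weights(target_hist, candidate_hists, sigma=0.1):
    """Weight each particle by how well its candidate histogram matches
    the target histogram, as in a color-based particle filter."""
    weights = np.array([
        np.exp(-(1.0 - bhattacharyya_coefficient(target_hist, h)) / (2 * sigma**2))
        for h in candidate_hists
    ])
    return weights / weights.sum()  # normalize so the weights sum to 1
```

A candidate region identical to the target receives the maximum (unnormalized) weight of 1.0, since its Bhattacharyya coefficient is exactly 1; the object state estimate is then a weight-averaged combination of the particle states.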
Appears in Collections: | Graduate Institute of Electronics Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-98-1.pdf Currently not authorized for public access | 2.02 MB | Adobe PDF |
All items in this system are protected by copyright, with all rights reserved, unless otherwise indicated.
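Both abstracts list CORDIC among the basic operations the coprocessor supports (cf. ref. [46]). As an illustrative sketch only, not the thesis's fixed-point hardware datapath, vectoring-mode CORDIC recovers a vector's magnitude and angle using only shift-and-add steps plus a small arctangent table:

```python
import math

def cordic_vectoring(x, y, iterations=16):
    """Vectoring-mode CORDIC: rotate (x, y) onto the x-axis step by step.

    Returns the (magnitude, angle) of the input vector. Each step scales by
    2**-i (a shift in hardware) and adds; real hardware would use fixed-point
    arithmetic and a precomputed arctangent table. Assumes x > 0, so the
    input angle lies within CORDIC's convergence range (about +/-99 degrees).
    """
    angle = 0.0
    for i in range(iterations):
        d = -1.0 if y > 0 else 1.0            # rotate toward the x-axis
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        angle -= d * math.atan(2.0**-i)       # accumulate the applied rotation
    # Undo the constant CORDIC gain K = prod(sqrt(1 + 2^-2i)) ~= 1.6468
    gain = math.prod(math.sqrt(1.0 + 2.0**(-2 * i)) for i in range(iterations))
    return x / gain, angle
```

For example, `cordic_vectoring(3.0, 4.0)` converges toward magnitude 5 and angle atan2(4, 3), which is why a single CORDIC unit can serve histogram-related operations that need magnitudes and orientations (e.g., gradient computation for SIFT-style features) without a hardware multiplier for trigonometry.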