Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/40642
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 洪一平(Yi-Ping Hung) | |
dc.contributor.author | Ko-Chun Lin | en |
dc.contributor.author | 林克駿 | zh_TW |
dc.date.accessioned | 2021-06-14T16:54:20Z | - |
dc.date.available | 2011-08-22 | |
dc.date.copyright | 2011-08-22 | |
dc.date.issued | 2011 | |
dc.date.submitted | 2011-08-12 | |
dc.identifier.citation | [1] C. Kim and J. Hwang, “An Integrated Scheme for Object-Based Video Abstraction,” ACM Multimedia, pp. 303-311, 2000.
[2] X. Zhu, X. Wu, J. Fan, A.K. Elmagarmid, and W.G. Aref, “Exploring Video Content Structure for Hierarchical Summarization,” Multimedia Systems, vol. 10, no. 2, pp. 98-115, 2004.
[3] A.M. Smith and T. Kanade, “Video Skimming and Characterization through the Combination of Image and Language Understanding,” Proc. Int’l Workshop Content-Based Access of Image and Video Databases, pp. 61-70, 1998.
[4] Y.-F. Ma, X.-S. Hua, L. Lu, and H. Zhang, “A Generic Framework of User Attention Model and Its Application in Video Summarization,” IEEE Trans. Multimedia, vol. 7, no. 5, pp. 907-919, 2005.
[5] M. Irani, P. Anandan, J. Bergen, R. Kumar, and S. Hsu, “Efficient Representations of Video Sequences and Their Applications,” Signal Processing: Image Comm., vol. 8, no. 4, pp. 327-351, 1996.
[6] A. Pope, R. Kumar, H. Sawhney, and C. Wan, “Video Abstraction: Summarizing Video Content for Retrieval and Visualization,” Signals, Systems, and Computers, pp. 915-919, 1998.
[7] C. Pal and N. Jojic, “Interactive Montages of Sprites for Indexing and Summarizing Security Video,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, p. II: 1192, 2005.
[8] G.C. Chao, “Augmented Keyframe,” National Taiwan University Doctoral Dissertation, 2011.
[9] C.C. Chiang, M.N. Tsai, and H.F. Yang, “A Quick Browsing System for Surveillance Videos,” Proc. 12th IAPR Conference on Machine Vision Applications (MVA), Japan, 2011.
[10] A. Rav-Acha, Y. Pritch, and S. Peleg, “Making a Long Video Short: Dynamic Video Synopsis,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 435-441, June 2006.
[11] H. Kang, Y. Matsushita, X. Tang, and X. Chen, “Space-Time Video Montage,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1331-1338, June 2006.
[12] Y. Pritch, A. Rav-Acha, A. Gutman, and S. Peleg, “Webcam Synopsis: Peeking Around the World,” Proc. Int’l Conf. Computer Vision, Oct. 2007.
[13] A. Agarwala, K.C. Zheng, C. Pal, M. Agrawala, M. Cohen, B. Curless, D. Salesin, and R. Szeliski, “Panoramic Video Textures,” Proc. ACM SIGGRAPH ’05, pp. 821-827, 2005.
[14] Y. Pritch, A. Rav-Acha, and S. Peleg, “Nonchronological Video Synopsis and Indexing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 11, pp. 1971-1984, 2008.
[15] Y.Y. Chen, Y.H. Huang, Y.C. Cheng, and Y.S. Chen, “A 3-D Surveillance System using Multiple Integrated Cameras,” IEEE Int’l Conf. on Information and Automation, 2010.
[16] T. Huang and S. Russell, “Object Identification in a Bayesian Context,” Int. Joint Conf. Artificial Intelligence, pp. 1276-1282, 1997.
[17] H. Pasula, S. Russell, M. Ostland, and Y. Ritov, “Tracking Many Objects with Many Sensors,” Int. Joint Conf. Artificial Intelligence, pp. 1160-1171, 1999.
[18] V. Kettnaker and R. Zabih, “Bayesian Multi-Camera Surveillance,” IEEE Conf. Computer Vision and Pattern Recognition, pp. 252-259, 1999.
[19] K.W. Chen, C.C. Lai, P.J. Lee, C.S. Chen, and Y.P. Hung, “Adaptive Learning for Target Tracking and True Linking Discovering across Multiple Non-Overlapping Cameras,” IEEE Transactions on Multimedia, vol. 13, no. 5, 2011.
[20] C. Stauffer and W.E.L. Grimson, “Adaptive Background Mixture Models for Real-Time Tracking,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 246-252, 1999.
[21] D. Comaniciu, V. Ramesh, and P. Meer, “Real-Time Tracking of Non-Rigid Objects Using Mean Shift,” Proc. Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 142-149, 2000.
[22] V. Kolmogorov and R. Zabih, “What Energy Functions Can Be Minimized via Graph Cuts?” Proc. Seventh European Conf. Computer Vision, pp. 65-81, 2002. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/40642 | - |
dc.description.abstract | In recent years, more and more surveillance cameras have been widely installed in public places to monitor people's behavior for a variety of applications. These cameras typically record for very long periods, so after a particular event occurs we may need to spend a great deal of time watching the surveillance video to locate the target segment. The traditional approach is fast-forward playback to save viewing time; although it shortens the playing time, it also speeds up the targets' motion, which may make them hard to see clearly or easy to miss.
We propose a temporal-spatial quick video retrieval system to solve the problem of finding targets in surveillance video, letting the user browse a video quickly while still having enough time to observe a target clearly. Our main idea is to extract and track all of the most informative moving objects in the video and change their positions on the time axis to produce a shorter condensed video, rather than changing the speed of the objects' motion. In addition, we design a temporal-spatial retrieval interface that helps the user find a target across multiple cameras more quickly and follow its path. When the target leaves one camera's field of view, the system automatically predicts when it will appear in the next camera and plays that camera's original footage to continue tracking it. If the tracking is wrong, the system immediately generates an even more condensed video containing only the candidate targets, so that the user can quickly identify the correct target and let the system continue tracking automatically. The details of this system and the steps for generating the condensed video are described in this thesis. | zh_TW |
dc.description.abstract | In recent years, more and more surveillance cameras have been widely installed in public places to monitor people's behavior for various applications. Since surveillance videos are very long, scanning a video to find a specific target takes a lot of time. A traditional approach is fast-forward playback to save scanning time; while this shortens the playing time, it also accelerates the targets' motion, which may make them hard to see clearly or easy to miss. In this thesis, we propose a temporal-spatial quick browsing system for surveillance video to solve this problem. The user can not only browse surveillance videos quickly but also look at the target clearly while retrieving it. Our basic idea is first to extract all moving objects that carry the most significant information in a surveillance video, and then to rearrange their positions on the time axis of the video to shorten it. In addition, we try to preserve all the essential activities appearing in the original surveillance video. We also design a search interface that helps the user find a specific target and track its trajectory in the surveillance videos of a camera network. The user can select a target in the compact video associated with a camera view. Then, our system automatically estimates its position in the camera network. If the user finds an error, a new browsing process can be triggered to help the user track the target correctly. | en |
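The abstract's core idea, shifting "object tubes" (time intervals of tracked moving objects) along the time axis so the condensed video is short without overcrowding, can be sketched as a toy greedy scheduler. This is a hypothetical simplification for illustration only; the thesis itself formulates the arrangement as an energy-minimization problem (Section 3.2), and `schedule_tubes` and its parameters are invented names, not the system's actual API.

```python
def schedule_tubes(durations, max_concurrent=2):
    """Greedily assign each object tube a new start time on the synopsis
    time axis, allowing at most `max_concurrent` tubes to overlap.

    durations: list of tube lengths (in frames) from the source video.
    Returns (new start time per tube, total synopsis length).
    """
    placed = []   # (start, end) intervals already scheduled
    starts = []
    for d in durations:
        t = 0
        # advance t until fewer than max_concurrent placed tubes overlap [t, t+d)
        while sum(1 for s, e in placed if s < t + d and t < e) >= max_concurrent:
            t += 1
        placed.append((t, t + d))
        starts.append(t)
    length = max(e for _, e in placed) if placed else 0
    return starts, length
```

For three 10-frame tubes, the first two share the time axis and the third starts at frame 10, so 30 frames of activity compress into a 20-frame synopsis. The real system additionally weighs spatial collisions between tubes and chronological-order violations when computing the arrangement cost, which a purely temporal sketch like this ignores.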
dc.description.provenance | Made available in DSpace on 2021-06-14T16:54:20Z (GMT). No. of bitstreams: 1 ntu-100-R98922109-1.pdf: 2150566 bytes, checksum: ec41ad3fc102bfb3376a7ca06c506f59 (MD5) Previous issue date: 2011 | en |
dc.description.tableofcontents | Thesis Certification by Oral Defense Committee #
Acknowledgements i
Chinese Abstract ii
ABSTRACT iii
CONTENTS iv
LIST OF FIGURES vi
LIST OF TABLES vii
Chapter 1 Introduction 1
Chapter 2 Related Works 5
Chapter 3 Methodology 8
3.1 Moving Object Extraction and Tracking in Video 8
3.2 Video Synopsis 10
3.2.1 The Object Tubes 11
3.2.2 Energy between Tubes 11
3.2.3 Energy Minimization 13
3.3 Target Tracking across Non-overlapping Cameras 14
Chapter 4 Constraint Synthesis Video Generation 16
4.1 Object Selection 16
4.2 Object Arrangement 17
4.3 Presentation of Objects 20
Chapter 5 Implementation and Experiments 22
5.1 System Description 23
5.2 Experiment Design 25
5.3 Experiment Result 26
Chapter 6 Conclusion and Future Work 29
REFERENCE 30 | |
dc.language.iso | en | |
dc.title | 基於時空條件下之攝影機網路物件快速搜尋系統 | zh_TW |
dc.title | Fast Object Searching System in Camera Network based on Temporal-Spatial Constraints | en |
dc.type | Thesis | |
dc.date.schoolyear | 99-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 江政杰,陳永昇 | |
dc.subject.keyword | 監控影片,影片概括,追蹤,濃縮影片, | zh_TW |
dc.subject.keyword | surveillance video,video summarization,tracking,video synopsis, | en |
dc.relation.page | 32 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2011-08-12 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-100-1.pdf Currently not authorized for public access | 2.1 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.