NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/48769

Full metadata record (DC field: value (language))
dc.contributor.advisor: 洪一平
dc.contributor.author: Pei-Jyun Lee (en)
dc.contributor.author: 李佩君 (zh_TW)
dc.date.accessioned: 2021-06-15T07:13:03Z
dc.date.available: 2010-08-20
dc.date.copyright: 2010-08-20
dc.date.issued: 2010
dc.date.submitted: 2010-08-19
dc.identifier.citation:
[1] K.W. Chen, C.C. Lai, Y.P. Hung, and C.S. Chen, "An adaptive learning method for target tracking across multiple cameras," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, Alaska, pp. 1-8, 2008.
[2] P.E. Debevec, C.J. Taylor, and J. Malik, "Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach," Proc. 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96), pp. 11-20, Aug. 1996.
[3] R.A. Finke, Principles of Mental Imagery, MIT Press, 1989.
[4] A. Girgensohn, D. Kimber, J. Vaughan, T. Yang, F. Shipman, T. Turner, E. Rieffel, L. Wilcox, F. Chen, and T. Dunnigan, "DOTS: support for effective video surveillance," Proc. 15th ACM International Conference on Multimedia, Augsburg, Germany, Sep. 2007.
[5] C.H. Hsiao, W.C. Huang, K.W. Chen, L.W. Chang, and Y.P. Hung, "Generating pictorial-based representation of mental image for video monitoring," Proc. International Conference on Intelligent User Interfaces (IUI), 2009.
[6] T. Horprasert, D. Harwood, and L.S. Davis, "A statistical approach for real-time robust background subtraction and shadow detection," IEEE ICCV '99 Frame-Rate Workshop, Corfu, Greece, Sep. 1999.
[7] G. de Haan, J. Scheuer, R. de Vries, and F. Post, "Egocentric navigation for video surveillance in 3D virtual environments," IEEE Symposium on 3D User Interfaces, Mar. 2009.
[8] R.I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, 2004.
[9] A. Katkere, S. Moezzi, D.Y. Kuramura, P. Kelly, and R. Jain, "Towards video-based immersive environments," Multimedia Systems, vol. 5, no. 2, pp. 69-85, Mar. 1997.
[10] T. Kanade, P.J. Narayanan, and P.W. Rander, "Virtualized reality: concepts and early results," Proc. IEEE Workshop on Representation of Visual Scenes, p. 69, Jun. 1995.
[11] K. Levenberg, "A method for the solution of certain problems in least squares," Quarterly of Applied Mathematics, vol. 2, pp. 164-168, 1944.
[12] B. Lei and E. Hendriks, "Real-time multi-step view reconstruction for a virtual teleconference system," EURASIP Journal on Applied Signal Processing, vol. 2002, no. 10, pp. 1067-1088, 2002.
[13] U. Neumann, S. You, J. Hu, B. Jiang, and J. Lee, "Augmented virtual environments (AVE): dynamic fusion of imagery and 3D models," Proc. IEEE Virtual Reality 2003, p. 61, Mar. 2003.
[14] S. Palmer, Vision Science: Photons to Phenomenology, MIT Press, 1999.
[15] W.T. Reeves, "Particle systems: a technique for modeling a class of fuzzy objects," ACM Transactions on Graphics, vol. 2, no. 2, pp. 91-108, Apr. 1983.
[16] H. Sawhney, A. Arpa, R. Kumar, S. Samarasekera, M. Aggarwal, H. Hsu, D. Nister, and K. Hanna, "Video flashlights: real time rendering of multiple videos for immersive model visualization," Proc. Eurographics Workshop on Rendering Techniques, pp. 157-168, 2002.
[17] S. Seitz and C. Dyer, "View morphing," Proc. SIGGRAPH '96, 1996.
[18] C. Stauffer and W.E.L. Grimson, "Learning patterns of activity using real-time tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747-757, 2000.
[19] M. Segal, C. Korobkin, R. van Widenfelt, J. Foran, and P. Haeberli, "Fast shadows and lighting effects using texture mapping," Proc. 19th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '92), pp. 249-252, Jul. 1992.
[20] N. Snavely, S.M. Seitz, and R. Szeliski, "Photo tourism: exploring photo collections in 3D," ACM Transactions on Graphics, vol. 25, no. 3, Jul. 2006.
[21] P.W. Thorndyke and B. Hayes-Roth, "Differences in spatial knowledge acquired from maps and navigation," Cognitive Psychology, vol. 14, pp. 560-589, 1982.
[22] G. Welch and G. Bishop, "An introduction to the Kalman filter," Technical Report, University of North Carolina at Chapel Hill, 1995.
[23] Y. Wang, D.M. Krum, E.M. Coelho, and D.A. Bowman, "Contextualized videos: combining videos with environment models to support situational understanding," IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 6, pp. 1568-1575, 2007.
[24] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 1330-1334, 2000.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/48769
dc.description.abstract (zh_TW): Current multi-camera surveillance systems usually contain a large number of cameras installed over a wide monitored area. An important issue in designing such a system is therefore how to help the user understand the environment in which a series of events occurs. For example, when a target moves across multiple cameras, instead of switching the current camera view directly to another camera view as traditional surveillance systems do, we propose a new user-centered method that transitions smoothly between camera views, so that the user can easily understand the relative relationships and geographic positions of the cameras. When switching camera views, our system combines a pre-constructed 3D model of the background with a synthesized foreground and then renders the result from a virtual camera. An important property of our system is that it still works when the cameras' fields of view overlap only slightly or not at all, a situation that previous transition systems have never handled. In addition, current transition systems usually interpolate between the positions of the two real cameras to determine the virtual camera's position during the transition. In our system, we design a rule for the virtual camera's path to place the virtual camera, replacing direct interpolation.
dc.description.abstract (en): Current visual surveillance systems usually include multiple cameras that monitor the activities of targets over a large area. An important issue for a guard or other user of such a system is understanding a series of events occurring in the environment, for example tracking a target walking across multiple cameras. In contrast to traditional systems, which switch directly from one camera view to another, we propose a novel system that eases the mental effort required to understand the geometry of the real cameras and the guidance path by using egocentric view transitions. While switching between cameras, our system synthesizes virtual views by blending the synthesized foreground texture into a pre-constructed background model and then re-projecting the result into the view of a virtual camera. An important property of our system is that it can be applied even when the fields of view of the transition cameras barely overlap or are entirely disjoint; such situations have never been considered in state-of-the-art view transition techniques. In addition, current view transition systems usually interpolate linearly between the positions of the two real cameras to determine the virtual camera position during the transition. Here, we instead design a rule for determining the virtual camera position, which yields a better visual effect than linear interpolation.
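The abstract contrasts plain linear interpolation of the two real camera positions with a dedicated placement rule for the virtual camera. The thesis's actual rule (Sections 4.4 and 5.4) is not reproduced in this record; the sketch below is only a minimal, hypothetical illustration of the general idea, using an eased (smoothstep) path and a re-normalized look-at direction in place of constant-speed linear motion. The function names and camera representation are assumptions.

```python
import math

def smoothstep(t):
    # Ease-in/ease-out weight: the virtual camera accelerates away from
    # the first real camera and decelerates into the second, instead of
    # moving at constant speed as plain linear interpolation would.
    return t * t * (3.0 - 2.0 * t)

def lerp(a, b, t):
    # Component-wise linear interpolation between two 3-vectors.
    return [ai + (bi - ai) * t for ai, bi in zip(a, b)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def virtual_camera_pose(cam_a, cam_b, t):
    """Pose of the virtual camera at transition parameter t in [0, 1].

    cam_a / cam_b are dicts with 'center' and 'look_at' 3-vectors for
    the two real cameras (a hypothetical representation).  The center
    follows an eased path; the viewing direction is interpolated and
    re-normalized so it stays a unit vector.
    """
    s = smoothstep(t)
    center = lerp(cam_a["center"], cam_b["center"], s)
    look = normalize(lerp(cam_a["look_at"], cam_b["look_at"], s))
    return center, look
```

At t = 0 and t = 1 the pose coincides with the real cameras, so the synthesized views blend seamlessly into the real camera feeds at both ends of the transition.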
dc.description.provenance: Made available in DSpace on 2021-06-15T07:13:03Z (GMT). No. of bitstreams: 1; ntu-99-R97944027-1.pdf: 3631115 bytes, checksum: 643ba473c1532dd928357c99bc83bc02 (MD5). Previous issue date: 2010. (en)
dc.description.tableofcontents:
ACKNOWLEDGEMENTS I
CHINESE ABSTRACT II
ABSTRACT III
CONTENTS IV
LIST OF FIGURES VI
LIST OF TABLES VII
CHAPTER 1 INTRODUCTION 1
CHAPTER 2 RELATED WORKS 4
CHAPTER 3 SYSTEM OVERVIEW 8
3.1 PREPROCESSING 9
3.2 MULTI-CAMERA TRACKING 10
CHAPTER 4 VIEW TRANSITION FOR OVERLAPPING CAMERAS 11
4.1 FOREGROUND DETECTION 12
4.2 FOREGROUND BILLBOARD CONSTRUCTION 14
4.3 BILLBOARD POSITION ESTIMATION 14
4.4 VIRTUAL CAMERA PLACEMENT 19
CHAPTER 5 VIEW TRANSITION FOR NON-OVERLAPPING CAMERAS 20
5.1 PARTICLE SYSTEM 21
5.2 FOREGROUND PARTICLES GENERATION 22
5.3 PARTICLES MOVEMENT CONTROL 23
5.4 VIRTUAL CAMERA PLACEMENT 27
5.5 BACKGROUND TEXTURE ADAPTATION 29
CHAPTER 6 EXPERIMENTAL RESULTS 31
CHAPTER 7 CONCLUSIONS AND FUTURE WORK 41
7.1 CONCLUSIONS 41
7.2 FUTURE WORK 42
REFERENCES 43
dc.language.iso: en
dc.subject: 轉場 (zh_TW)
dc.subject: 影像生成 (zh_TW)
dc.subject: 影像內插 (zh_TW)
dc.subject: view interpolation (en)
dc.subject: view synthesis (en)
dc.subject: view transition (en)
dc.title: 以監控者為中心之多攝影機平順轉場技術 (zh_TW)
dc.title: Egocentric View Transition for Video Monitoring in a Distributed Camera Network (en)
dc.type: Thesis
dc.date.schoolyear: 99-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 陳永昇, 劉庭祿, 陳祝嵩, 廖偉權
dc.subject.keyword: 影像內插, 轉場, 影像生成 (zh_TW)
dc.subject.keyword: view interpolation, view transition, view synthesis (en)
dc.relation.page: 45
dc.rights.note: 有償授權 (authorization with compensation)
dc.date.accepted: 2010-08-19
dc.contributor.author-college: 電機資訊學院 (zh_TW)
dc.contributor.author-dept: 資訊網路與多媒體研究所 (zh_TW)
Appears in collections: 資訊網路與多媒體研究所

Files in this item:
ntu-99-1.pdf (restricted access), 3.55 MB, Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
