NTU Theses and Dissertations Repository › 電機資訊學院 › 電機工程學系
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64185
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: 陳永耀
dc.contributor.author: Yu-Shiang Lin (en)
dc.contributor.author: 林于翔 (zh_TW)
dc.date.accessioned: 2021-06-16T17:33:52Z
dc.date.available: 2017-08-17
dc.date.copyright: 2012-08-17
dc.date.issued: 2012
dc.date.submitted: 2012-08-15
dc.identifier.citation: [1] F. Lv, T. Zhao, and R. Nevatia, “Camera Calibration from Video of a Walking Human,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 28, no. 9, 2006.
[2] E.K. Bas and J.D. Crisman, “An Easy to Install Camera Calibration For Traffic Monitoring,” IEEE Conference on Intelligent Transportation System, pp. 362-366, 1997.
[3] L.L. Wang and W.H. Tsai, “Camera Calibration by Vanishing Lines for 3-D Computer Vision,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, no. 4, 1991.
[4] T. Chen, A. Bimbo, F. Pernici, and G. Serra, “Accurate self-calibration of two cameras by observations of a moving person on a ground plane,” IEEE Conference on Adv. Video Signal-Based Surveillance, 2007.
[5] B. Bose and E. Grimson, “Ground Plane Rectification by Tracking Moving Objects,” Proc. IEEE Workshop Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 94-101, 2003.
[6] B. Caprile and V. Torre, “Using Vanishing Points for Camera Calibration,” Int’l J. Computer Vision, vol. 4, pp. 127-140, 1990.
[7] R. Cipolla, T. Drummond, and D.P. Robertson, “Camera Calibration from Vanishing Points in Images of Architectural Scenes,” Proc. British Machine Vision Conf., vol. 2, pp. 382-391, 1999.
[8] A. Criminisi, I. Reid, and A. Zisserman, “Single View Metrology,” Int’l J. Computer Vision, vol. 40, no. 2, pp. 123-148, 2000.
[9] J. Deutscher, M. Isard, and J. MacCormick, “Automatic Camera Calibration from a Single Manhattan Image,” Proc. European Conf. Computer Vision, pp. 175-188, 2002.
[10] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, MIT Press, 1993.
[11] M.A. Fischler and R.C. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Comm. ACM, vol. 24, no. 6, pp. 381-395, 1981.
[12] M. Isard and J. MacCormick, “BraMBLe: A Bayesian Multiple-Blob Tracker,” Proc. Int’l Conf. Computer Vision, vol. 2, pp. 34-41, 2001.
[13] D. Liebowitz, A. Criminisi, and A. Zisserman, “Creating Architectural Models from Images,” Proc. EuroGraphics, vol. 18, pp. 39-50, 1999.
[14] A. Nakatsuji, S. Takahashi, Y. Sugaya, and K. Kanatani, “Stabilizing the Focal Length Computation for 3-D Reconstruction from Two Uncalibrated Views,” Proc. Asian Conf. Computer Vision, vol. 1, pp. 1-6, 2004.
[15] C. Stauffer, K. Tieu, and L. Lee, “Robust Automated Planar Normalization of Tracking Data,” Proc. IEEE Workshop Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 1-8, 2003.
[16] C.R. Wren, A. Azarbayejani, T. Darrell, and A. Pentland, “Pfinder: Real-Time Tracking of the Human Body,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780-785, 1997.
[17] Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
[18] Z. Zhang, “Camera Calibration with One-Dimensional Objects,” Proc. European Conf. Computer Vision, vol. 4, pp. 161-174, 2002.
[19] T. Zhao, R. Nevatia, and F. Lv, “Segmentation and Tracking of Multiple Humans in Complex Situations,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 194-201, 2001.
[20] T. Zhao, R. Nevatia, “Tracking Multiple Humans in Crowded Environment,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 406-413, 2004.
[21] S.T. Su, “Moving Object Detection Based on Two-Staged Background Subtraction Approach,” Master Thesis, Department of Electrical Engineering, National Taiwan University, 2009.
[22] Y.C. Chung, J.M. Wang, and S.W. Chen, “Progressive Background Images Generation,” 15th IPPR Conference on Computer Vision, Graphics and Image Processing, 2002.
[23] Y.J. Wu, “Computer Vision-Based Traffic Identification Technologies for On-board Driving Assistance and Traffic Monitoring,” Master Thesis, Department of Civil Engineering, National Taiwan University, 2004.
[24] C.Y. Cheng, “A Study on Traffic Parameter Extraction from Image Detector at Intersection,” Master Thesis, Department of Civil Engineering, National Taiwan University, 2008.
[33] R. Kasturi, R. C. Jain, and B. G. Schunck, “Machine Vision,” McGraw-Hill International Editions, 1995.
[34] Wikipedia, http://en.wikipedia.org/wiki/.
[35] S.K. Fung, H.C. Yung, and K.H. Pang, “Camera calibration from road lane markings,” Optical Engineering, vol. 42, pp. 2967–2977, 2003.
[36] S. Se, “Zebra-crossing detection for the partially sighted,” In Proc. of CVPR, pp. 211-217, 2000.
[37] I. Fukui, “TV image processing to determine the position of a robot vehicle,” Pattern Recogn., pp. 101-109, 1981.
[38] J.W. Courtney, M.J. Magee, and J.K. Aggarwal, “Robot guidance using computer vision,” Pattern Recogn., pp. 585-592, 1984.
[39] W.H. Chou and W.H. Tsai, “A new approach to robot location by house corners,” Pattern Recogn., pp. 439-451, 1986.
[40] R.Y. Tsai, “A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Trans. Rob. Autom., pp. 323-344, 1987.
[41] I. Junejo, “Using Pedestrians Walking on Uneven Terrains for Camera Calibration,” MVA, 2009.
[42] B. Micusik and T. Pajdla, “Simultaneous surveillance camera calibration and foot-head homology estimation from human detections,” In Proc. of CVPR, pp. 1562-1569, 2010.
[43] Z. Zhang, M. Li, K. Huang, and T. Tan, “Practical camera auto-calibration based on object appearance and motion for traffic scene visual surveillance,” In Proc. of CVPR, pp. 1-8, 2008.
[44] F. Schaffalitzky and A. Zisserman, “Planar grouping for automatic detection of vanishing lines and points,” IVC, pp. 647-658, 2000.
[45] M. Hodlmoser, B. Micusik, and M. Kampel, “Camera auto-calibration using pedestrians and zebra-crossings,” Computer Vision Workshops (ICCV Workshops), 2011.
[46] T. Zhao and R. Nevatia, “Tracking Multiple Humans in Crowded Environment,” IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 406-413, 2004.
[47] Caviar Test Case Scenarios, http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64185
dc.description.abstract (translated from zh_TW): The vanishing point in an image is a necessary parameter for estimating the camera focal length, and the focal length is in turn key information for recovering absolute depth information from the image; an accurately estimated vanishing point therefore helps recover absolute depth relationships in the real world. Accordingly, this thesis proposes a novel algorithm for automatically estimating the vanishing point in an image and the camera focal length. Previous work has focused on finding parallel line segments in the image, or on drawing segments through the head and foot positions of the same person at different moments, to locate the vanishing point and then compute the focal length. The method proposed here differs: it first estimates how the lengths of a person's virtual shadows at different moments vary after a coordinate transformation, and then uses an iterative procedure to infer the optimal position of the vanishing point.
First, foreground detection is used to track the position of a person over time; after tracking, the person's upright postures are filtered out, and virtual shadows are projected for this set of postures. The two endpoints of each virtual shadow are then transformed into the World Image Coordinate System (WICS), and the length variation of the shadow set in the WICS is estimated. This work finds a special relationship between these lengths and the position of the vanishing point: the standard deviation of the virtual-shadow lengths decreases steadily as the candidate point approaches the correct vanishing point. Exploiting this relationship, an algorithm is developed that progressively approaches the optimal vanishing point: it first defines an iterative region from the focal-length range of typical scene cameras, observes the monotonic growth of the virtual-shadow lengths within that region, and combines bisection iteration with sequential iteration to converge quickly on an optimal vanishing point, from which the optimal camera focal length is computed.
The advantages of this work are that the focal length is obtained with a single camera, without any additional calibration object and without restricting the walking path of the people in the video. In addition, its computational complexity is low, which makes it well suited to video surveillance systems that require real-time operation.
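The abstract describes recovering the camera focal length once a vanishing point is known. Under common pinhole assumptions (square pixels, zero skew, known principal point c), the vanishing points v1 and v2 of two orthogonal 3-D directions satisfy (v1 − c)·(v2 − c) + f² = 0. The sketch below is a generic illustration of that standard relation, not the thesis's own procedure:

```python
import math

def focal_from_orthogonal_vps(v1, v2, c):
    """Focal length from the vanishing points of two orthogonal 3-D
    directions, for a pinhole camera with square pixels and principal
    point c: (v1 - c) . (v2 - c) + f^2 = 0."""
    dot = (v1[0] - c[0]) * (v2[0] - c[0]) + (v1[1] - c[1]) * (v2[1] - c[1])
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)

# Synthetic check with f = 1000 and principal point (640, 360):
# the orthogonal directions (1, 0, 1) and (1, 0, -1) project to
v1 = (640 + 1000, 360)
v2 = (640 - 1000, 360)
print(focal_from_orthogonal_vps(v1, v2, (640, 360)))  # 1000.0
```

The synthetic check simply inverts the projection used to build v1 and v2, which is why the true focal length comes back exactly.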
dc.description.abstract (en): The vanishing point in an image is an important parameter for estimating the focal length, and the camera focal length is in turn important information for recovering the absolute depth information in the image. Hence, once the optimal vanishing point is determined, the absolute depth information can be recovered easily. In this thesis, a new method for estimating an optimal focal length and vanishing point in an image is proposed. Previous approaches estimate the vanishing point by finding parallel lines in the image or by using the head-feet positions of a person in different frames. In the proposed approach, a set of virtual shadows of a walking person is created; the length increments of this set are computed after a coordinate transformation, and an optimal vanishing point is then obtained quickly with a special iteration method.
First, the walking person is tracked using a background-subtraction process, and the leg-crossing postures are filtered out frame by frame. A projection vector is then used to create a set of virtual shadows, and the head-feet coordinates of these virtual shadows are transformed into the World Image Coordinate System (WICS). A special relationship between the set of virtual shadows and the vanishing point is observed: the standard deviation of the virtual-shadow lengths decreases as the transformation point approaches the optimal vanishing point.
Using this relationship, an algorithm is developed to approach the optimal vanishing point. Based on the focal-length range of typical cameras, it defines a special iterative region; the length increments of the virtual shadows are observed within this region, and the corresponding vanishing point is obtained. In short, the bisection iteration method and the sequential iteration method are combined to approach an optimal vanishing point.
The features of the proposed approach are that the focal length can be obtained with only one camera, no additional calibration target is needed, and the trajectory of the person is not restricted. Its low computational complexity also makes it applicable to real-time surveillance systems.
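The search outlined in the abstract — a cost (the standard deviation of the transformed virtual-shadow lengths) that shrinks as the candidate approaches the true vanishing point, minimized over an interval derived from plausible focal lengths — can be illustrated with a bisection-style minimizer for a 1-D unimodal cost. The quadratic cost below is a stand-in for the thesis's actual standard-deviation objective, and `minimize_unimodal` is a hypothetical helper, not code from the thesis:

```python
def minimize_unimodal(cost, lo, hi, tol=1e-6, eps=1e-5):
    """Bisection on the slope sign of a unimodal cost over [lo, hi]:
    the minimizer lies where the finite-difference slope changes sign."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        slope = cost(mid + eps) - cost(mid - eps)
        if slope > 0:       # still descending to the left of mid
            hi = mid
        else:               # still descending to the right of mid
            lo = mid
    return 0.5 * (lo + hi)

# Stand-in cost: a std-dev-like bowl whose minimum sits at y = 420,
# searched over an interval playing the role of the iterative region.
cost = lambda y: (y - 420.0) ** 2 + 3.0
print(round(minimize_unimodal(cost, 0.0, 1000.0), 3))  # 420.0
```

The thesis additionally chains a sequential (step-wise) refinement after the bisection stage; the same pattern applies once the real WICS-based cost is substituted for the stand-in.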
dc.description.provenance (en): Made available in DSpace on 2021-06-16T17:33:52Z (GMT). No. of bitstreams: 1
ntu-101-P99921001-1.pdf: 9037813 bytes, checksum: e553028cb60459769034cd15c5a44f9c (MD5)
Previous issue date: 2012
dc.description.tableofcontents:
Acknowledgements i
Chinese Abstract ii
Abstract iv
Contents vi
List of Figures viii
List of Tables xix
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Definition 3
1.3 Proposed Approach 5
1.4 Thesis Overview 7
Chapter 2 Previous Work on Camera Parameter Estimation 8
2.1 Target-Based Calibration Approaches 8
2.2 Target-Less Calibration Approaches 20
2.3 Comparisons and Summary 29
Chapter 3 Virtual Shadow of a Walking Human 32
3.1 Moving Object Detection Process 33
3.2 Walking People Tracking Process 43
3.3 Leg-Crossing Posture Filtering Process 50
3.4 Virtual Shadow Projection Process 61
Chapter 4 Vanishing Point Determination Using Virtual Shadows 65
4.1 Camera Parameters and Coordinate Transformation 66
4.2 Length of Isometric Lines in WICS 71
4.3 Virtual Shadows of Walking People in WICS 81
4.4 Bisection Iteration and Sequential Iteration 91
Chapter 5 Experiment Results 104
5.1 Experiment Setting 104
5.2 Experiment Results 110
Chapter 6 Conclusion and Future Work 153
References 155
dc.language.iso: en
dc.subject: 攝影機焦距 (zh_TW)
dc.subject: 迭代區間 (zh_TW)
dc.subject: 虛擬投影 (zh_TW)
dc.subject: 座標轉換 (zh_TW)
dc.subject: 世界影像座標系統 (zh_TW)
dc.subject: 消失點 (zh_TW)
dc.subject: iterative area (en)
dc.subject: camera focal length (en)
dc.subject: coordinate transformation (en)
dc.subject: virtual shadow (en)
dc.subject: World Image Coordinate System (en)
dc.subject: vanishing point (en)
dc.title: 運用影片中行走人物之虛擬投影進行影像中消失點及攝影機焦距自動校正之研究 (zh_TW)
dc.title: The Vanishing Point and Camera Focal Length Auto-Calibration Technology Using the Virtual Shadows of a Walking People From Video (en)
dc.type: Thesis
dc.date.schoolyear: 100-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 傅立成, 顏家鈺, 連豊力
dc.subject.keyword: 消失點, 攝影機焦距, 座標轉換, 虛擬投影, 世界影像座標系統, 迭代區間 (zh_TW)
dc.subject.keyword: vanishing point, camera focal length, coordinate transformation, virtual shadow, World Image Coordinate System, iterative area (en)
dc.relation.page: 159
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2012-08-15
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering)
Appears in Collections: 電機工程學系

Files in This Item:
File | Size | Format
ntu-101-1.pdf (restricted; not authorized for public access) | 8.83 MB | Adobe PDF
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
