DSpace JSPUI

Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/52069
Full metadata record
dc.contributor.advisor: 傅立成
dc.contributor.author: Chun-Cheng Lin (en)
dc.contributor.author: 林君丞 (zh_TW)
dc.date.accessioned: 2021-06-15T14:06:37Z
dc.date.available: 2018-08-28
dc.date.copyright: 2015-08-28
dc.date.issued: 2015
dc.date.submitted: 2015-08-19
dc.identifier.citation:
[1] S. Sivaraman and M. M. Trivedi, "Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach," IEEE Transactions on Intelligent Transportation Systems, vol. 14, pp. 906-917, 2013.
[2] Y. Quan, A. Thangali, V. Ablavsky, and S. Sclaroff, "Learning a Family of Detectors via Multiplicative Kernels," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, pp. 514-530, 2011.
[3] H. T. Niknejad, A. Takeuchi, S. Mita, and D. McAllester, "On-Road Multivehicle Tracking Using Deformable Object Model and Particle Filter With Improved Likelihood Estimation," IEEE Transactions on Intelligent Transportation Systems, vol. 13, pp. 748-758, 2012.
[4] B.-F. Lin, Y.-M. Chan, L.-C. Fu, P.-Y. Hsiao, L.-A. Chuang, S.-S. Huang, et al., "Integrating Appearance and Edge Features for Sedan Vehicle Detection in the Blind-Spot Area," IEEE Transactions on Intelligent Transportation Systems, vol. 13, pp. 737-747, 2012.
[5] E. Ohn-Bar and M. M. Trivedi, "Learning to Detect Vehicles by Clustering Appearance Patterns," IEEE Transactions on Intelligent Transportation Systems, pp. 1-11, 2015.
[6] C.-H. Kuo and R. Nevatia, "Robust multi-view car detection using unsupervised sub-categorization," in Workshop on Applications of Computer Vision, 2009, pp. 1-8.
[7] M. Mahlisch, R. Schweiger, W. Ritter, and K. Dietmayer, "Sensorfusion Using Spatio-Temporal Aligned Video and Lidar for Improved Vehicle Detection," in IEEE Intelligent Vehicles Symposium, 2006, pp. 424-429.
[8] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, et al., "Towards fully autonomous driving: Systems and algorithms," in IEEE Intelligent Vehicles Symposium, 2011, pp. 163-168.
[9] S. Sato, M. Hashimoto, M. Takita, K. Takagi, and T. Ogawa, "Multilayer lidar-based pedestrian tracking in urban environments," in IEEE Intelligent Vehicles Symposium, 2010, pp. 849-854.
[10] J. Huang and H.-S. Tan, "Vehicle future trajectory prediction with a DGPS/INS-based positioning system," in American Control Conference, 2006, 6 pp.
[11] M. Bertozzi, L. Bombini, P. Cerri, P. Medici, P. C. Antonello, and M. Miglietta, "Obstacle detection and classification fusing radar and vision," in IEEE Intelligent Vehicles Symposium, 2008, pp. 608-613.
[12] R. O. Chavez-Garcia, J. Burlet, V. Trung-Dung, and O. Aycard, "Frontal object perception using radar and mono-vision," in IEEE Intelligent Vehicles Symposium, 2012, pp. 159-164.
[13] J. Fritsch, T. Michalke, A. Gepperth, S. Bone, F. Waibel, M. Kleinehagenbrock, et al., "Towards a human-like vision system for Driver Assistance," in IEEE Intelligent Vehicles Symposium, 2008, pp. 275-282.
[14] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 886-893.
[15] D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the Seventh IEEE International Conference on Computer Vision, 1999, pp. 1150-1157.
[16] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, pp. 119-139, 1997.
[17] Y. Freund and R. Schapire, "A short introduction to boosting," Journal of the Japanese Society for Artificial Intelligence, vol. 14, p. 1612, 1999.
[18] R. E. Schapire and Y. Freund, Boosting: Foundations and Algorithms. MIT Press, 2012.
[19] Y.-F. Kao, Y.-M. Chan, L.-C. Fu, P.-Y. Hsiao, S.-S. Huang, C.-E. Wu, et al., "Comparison of granules features for pedestrian detection," in IEEE Conference on Intelligent Transportation Systems, 2012, pp. 1777-1782.
[20] TRW Automotive. Available: http://www.trw.com/electronic_systems/sensor_technologies/radar
[21] SICK | Indoor laser measurement technology. Available: http://www.sick.com/us/en-us/home/products/product_portfolio/laser_measurement_systems/Pages/indoor_laser_measurement_technology.aspx
[22] K. Dietmayer, J. Sparbert, and D. Streller, "Model based Object Classification and Object Tracking in Traffic Scenes from Range Images," in IEEE Intelligent Vehicles Symposium Proceedings, 2001.
[23] S. Santos, J. Faria, F. Soares, R. Araujo, and U. Nunes, "Tracking of multi-obstacles with laser range data for autonomous vehicles," in National Festival of Robotics Scientific Meeting, 2003, pp. 59-65.
[24] G. A. Borges and M.-J. Aldon, "Line extraction in 2D range images for mobile robotics," Journal of Intelligent and Robotic Systems, vol. 40, pp. 267-297, 2004.
[25] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, pp. 174-188, 2002.
[26] V. Lepetit, F. Moreno-Noguer, and P. Fua, "EPnP: An Accurate O(n) Solution to the PnP Problem," International Journal of Computer Vision, vol. 81, pp. 155-166, 2009.
[27] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object detection with discriminatively trained part-based models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 1627-1645, 2010.
[28] M. Bertozzi, A. Broggi, and A. Fascioli, "Stereo inverse perspective mapping: theory and applications," Image and Vision Computing, vol. 16, pp. 585-590, 1998.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/52069
dc.description.abstract: In recent years, intelligent vehicle systems have developed rapidly. These systems combine several sensing technologies to understand the conditions around the vehicle, provide the driver with sufficient information for safe driving, and ultimately enable automated driving services. All of these functions depend on the system's perception of the environment, from city-wide traffic congestion used for route planning down to sensing the vehicle's immediate surroundings for road-condition information. Among road-condition sensing tasks, detecting obstacles on the road is especially important, and most such obstacles are vehicles, motorcycles, and pedestrians. Detecting road obstacles, analyzing their motion, and determining whether they endanger the driver is a major milestone on the way to automated driving.
Vision-based vehicle detection has matured in recent years, but its applications face constraints in many different environments. In particular, when detecting vehicles across multiple lanes ahead, changes in the relative position between the host vehicle and the vehicles in front cause large variations in the observed vehicle appearance. A single-model image classifier handles this poorly, because such large intra-class appearance variation degrades its performance.
To address this situation, this thesis proposes a multilane vehicle detection system based on the fusion of image and range information. In both modalities, a single-category, multiple-submodel classifier handles the large appearance variation caused by the viewing angle. On the image side, histogram-of-oriented-gradients features combined with a multiple-submodel cascade classifier deal with the intra-class variation in appearance. On the range side, the Inscribe Rectangle Filter Segmentation makes the clustering results robust, and an And-Or Model describes the vehicle's shape, capturing the feature differences and spatial relations induced by the viewing angle. For sensor fusion, the results of the multiple submodels from the image and range classifiers are combined through a probabilistic model of the relationship between the two signal sources, yielding stable detection even in multilane environments.
(zh_TW)
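The image-side feature mentioned in the abstract above, the histogram of oriented gradients (HOG), can be sketched minimally for a single cell. This is an illustrative sketch only: the thesis's actual cell/block layout, bin count, and normalization scheme are not reproduced here, and the toy 8x8 patch is hypothetical.

```python
import numpy as np

def cell_hog(patch, n_bins=9):
    """Unsigned gradient-orientation histogram for one HOG cell (sketch)."""
    patch = patch.astype(float)
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]    # central differences, x
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]    # central differences, y
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation, [0, 180)
    # Magnitude-weighted histogram over orientation bins.
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)   # L2-normalized descriptor

h = cell_hog(np.arange(64.0).reshape(8, 8))       # toy 8x8 cell
print(h.shape)                                    # (9,)
```

In the full pipeline the per-cell histograms would be concatenated over blocks of cells and fed to the cascade classifier; here only the cell-level step is shown.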
dc.description.abstract: In Taiwan, thousands of drivers and passengers are injured in car accidents each year. According to statistics from the Department of Statistics, Ministry of the Interior, Republic of China, 2,612 sedan occupants were killed in class A1 traffic accidents in Taiwan over the last five years, with most fatal crashes involving more than one vehicle. Most accidents are caused by drivers failing to pay attention to the driving conditions ahead. Detecting vehicles on the road is therefore critical to the safety of both drivers and other road users.
Image and range sensor fusion for vehicle detection has been a topic of great interest to researchers over the past decades. A fusion system benefits from both sensors: the range sensor reliably reports the presence of any obstacle with only minor noise, while the image sensor enables recognition, classifying the object of interest against a particular hypothesis or specific feature. Verifying such hypotheses, however, remains very difficult; a computer still cannot understand the environment the way human beings do.
In terms of descriptive power, subcategory models have been shown to describe objects better than a single overall model. We therefore propose a sensor fusion system that uses subcategory models to achieve robust detection. Applying subcategories to range information raises new challenges, so the Inscribe Rectangle Filter Segmentation is proposed to handle them, and an And-Or Model is applied to describe the patterns extracted from the fine segmentation.
(en)
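The fusion of image-side and range-side detector confidences described in the abstract can be sketched as a combination in log-odds space. This is a generic sketch under a naive conditional-independence assumption, not the thesis's actual probabilistic model; the function name, scores, and threshold behavior are hypothetical.

```python
import math

def fuse_scores(p_image, p_range):
    """Fuse image-classifier and range-classifier confidences for one
    candidate (generic sketch; not the thesis's actual model)."""
    def logit(p):
        p = min(max(p, 1e-6), 1.0 - 1e-6)  # clamp to avoid log(0)
        return math.log(p / (1.0 - p))
    # Adding log-odds multiplies the odds, so agreement between the two
    # sensors sharpens the fused confidence and disagreement tempers it.
    return 1.0 / (1.0 + math.exp(-(logit(p_image) + logit(p_range))))

# Hypothetical confidences for one vehicle candidate:
fused = fuse_scores(0.9, 0.7)  # odds 9 * (7/3) = 21, so p = 21/22
print(round(fused, 3))         # 0.955
```

A detection would then be kept when the fused confidence of the best-matching subcategory exceeds a chosen threshold; the thesis additionally transforms and associates confidences across coordinate frames before this step.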
dc.description.provenance: Made available in DSpace on 2021-06-15T14:06:37Z (GMT). No. of bitstreams: 1. ntu-104-R02922121-1.pdf: 3173181 bytes, checksum: 857d01751b0f80ef80c0f14d14b425d5 (MD5). Previous issue date: 2015. (en)
dc.description.tableofcontents:
Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract iv
ABSTRACT v
CONTENTS vi
LIST OF FIGURES ix
LIST OF TABLES xi
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Challenges 3
1.3 Related Work 6
1.4 Contributions 10
1.5 Thesis Organization 11
Chapter 2 Preliminaries 12
2.1 Histogram of Oriented Gradient (HOG) 12
2.1.1 HOG Descriptor 13
2.1.2 HOG Feature Encoding 14
2.2 Cascade Classifier and AdaBoost Algorithm 15
2.2.1 Cascade Classifier 15
2.2.2 AdaBoost Algorithm 17
2.2.3 Classifier Learning 20
2.3 Range Sensor Perception 21
2.3.1 Radar Sensor Characteristic 21
2.3.2 Laser Sensor Characteristic 22
2.3.3 Sensor Strength in Fusion 23
2.4 System Overview 24
Chapter 3 Inscribe Rectangle Filter Segmentation Using Range Sensor 26
3.1 Specific Observation Condition 28
3.1.1 Shape Variation 29
3.1.2 Fragmentation 31
3.2 Inscribe Rectangle Filter Segmentation 32
3.3 And-Or Model 39
3.3.1 Overview of And-Or Model 39
3.3.2 AOM encoding 41
3.4 Temporal Laser Evidence 43
Chapter 4 Image and Range Sensor Fusion Using Subcategory 45
4.1 Overview of Fusion System 46
4.1.1 Image-Laser calibration 47
4.1.2 Coordinate Transformation 51
4.1.3 Image coordinate as platform 52
4.1.4 Range coordinate as platform 52
4.2 Confidence Transformation and Association 54
4.3 Refinement of Detection Results 57
Chapter 5 Experiments 59
5.1 Environment Setting and Performance Measurement 60
5.1.1 Environment Setting 60
5.1.2 Performance Measurement 61
5.2 Training and Testing Dataset 62
5.3 Experimental Results of Inscribe Rectangle Filter Segmentation 65
5.4 Comparison of the Sensor Fusion 68
Chapter 6 Conclusion 72
References 74
dc.language.iso: en
dc.subject: 叢集 (clustering) (zh_TW)
dc.subject: 多車道汽車偵測 (multilane vehicle detection) (zh_TW)
dc.subject: 感知融合 (sensor fusion) (zh_TW)
dc.subject: 子模型分類 (submodel classification) (zh_TW)
dc.subject: Subcategory (en)
dc.subject: Segmentation of Range Data (en)
dc.subject: Vehicle Detection (en)
dc.subject: Sensor Fusion (en)
dc.title: 利用子類別模型之影像暨距離資訊融合於多車道車輛偵測 (zh_TW)
dc.title: Image and Range Sensor Fusion for Multilane Vehicle Detection Using Subcategory Model (en)
dc.type: Thesis
dc.date.schoolyear: 103-2
dc.description.degree: 碩士 (Master's)
dc.contributor.coadvisor: 蕭培墉
dc.contributor.oralexamcommittee: 傅楸善, 黃世勳, 方瓊瑤
dc.subject.keyword: 多車道汽車偵測, 感知融合, 子模型分類, 叢集 (zh_TW)
dc.subject.keyword: Vehicle Detection, Sensor Fusion, Subcategory, Segmentation of Range Data (en)
dc.relation.page: 77
dc.rights.note: 有償授權 (authorized for a fee)
dc.date.accepted: 2015-08-20
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) (zh_TW)
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File: ntu-104-1.pdf (Restricted Access), 3.1 MB, Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Contact Information
No. 1, Sec. 4, Roosevelt Rd., Taipei, Taiwan 10617, R.O.C.
Tel: (02) 3366-2353
Email: ntuetds@ntu.edu.tw
© NTU Library All Rights Reserved