Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/60952

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 連豊力(Feng-Li Lian) | |
| dc.contributor.author | Fei-Hong Chao | en |
| dc.contributor.author | 趙飛竑 | zh_TW |
| dc.date.accessioned | 2021-06-16T10:38:08Z | - |
| dc.date.available | 2016-08-17 | |
| dc.date.copyright | 2013-08-17 | |
| dc.date.issued | 2013 | |
| dc.date.submitted | 2013-08-13 | |
| dc.identifier.citation | [1: Siegwart et al. 2011]
Roland Siegwart, Illah Reza Nourbakhsh, and Davide Scaramuzza, Introduction to Autonomous Mobile Robots, 2nd ed., London, England, The MIT Press, Feb. 18, 2011. [2: Gonzalez & Woods 2008] Rafael C. Gonzalez and Richard Eugene Woods, Digital Image Processing, 3rd ed., Upper Saddle River, New Jersey, Pearson Prentice Hall, 2008. [3: Najjar & Bonnifait 2007] Maan El Badaoui El Najjar and Philippe Bonnifait, “Road Selection Using Multicriteria Fusion for the Road-Matching Problem,” IEEE Transactions on Intelligent Transportation Systems, Vol. 8, No. 2, pp. 279-291, June 2007. [4: Alvarez & Lopez 2011] Jose M. Alvarez and Antonio M. Lopez, “Road Detection Based on Illuminant Invariance,” IEEE Transactions on Intelligent Transportation Systems, Vol. 12, No. 1, pp. 184-193, March 2011. [5: Tarel et al. 2012] Jean-Philippe Tarel, Nicolas Hautiere, Laurent Caraffa, Aurelien Cord, Houssam Halmaoui, and Dominique Gruyer, “Vision Enhancement in Homogeneous and Heterogeneous Fog,” IEEE Intelligent Transportation Systems Magazine, Vol. 4, No. 2, pp. 6-20, Summer 2012. [6: Jie et al. 2009] Xiong Jie, Han Lina, Geng Guohua, and Zhou Mingquan, “Based on HSV Space Real-Color Image Enhanced by Multi-Scale Homomorphic,” in Proceedings of WRI Global Congress on Intelligent Systems, Xiamen, China, pp. 160-165, May 19-21, 2009. [7: Danescu et al. 2012] Radu Danescu, Cosmin Pantilie, Florin Oniga, and Sergiu Nedevschi, “Particle Grid Tracking System Stereovision Based Obstacle Perception in Driving Environments,” IEEE Intelligent Transportation Systems Magazine, Vol. 4, No. 1, pp. 6-20, Spring 2012. [8: Wu et al. 2009] Bing-Fei Wu, Chuan-Tsai Lin, and Yen-Lin Chen, “Dynamic Calibration and Occlusion Handling Algorithms for Lane Tracking,” IEEE Transactions on Industrial Electronics, Vol. 56, No. 5, pp. 1757-1773, May 2009. [9: Kang et al. 
2011] Yousun Kang, Koichiro Yamaguchi, Takashi Naito, and Yoshiki Ninomiya, “Multiband Image Segmentation and Object Recognition for Understanding Road Scenes,” IEEE Transactions on Intelligent Transportation Systems, Vol. 12, No. 4, pp. 1423-1433, Dec. 2011. [10: Wybo et al. 2007] S. Wybo, R. Bendahan, S. Bougnoux, C. Vestri, F. Abad, and T. Kakinami, “Improving Backing-Up Manoeuvre Safety with Vision-Based Movement Detection,” IET Intelligent Transport Systems, Vol. 1, No. 2, pp. 150-158, June 2007. [11: Shinzato et al. 2012] Patrick Y. Shinzato, Valdir Grassi Jr, Fernando S. Osorio, and Denis F. Wolf, “Fast Visual Road Recognition and Horizon Detection Using Multiple Artificial Neural Networks,” in Proceedings of IEEE Intelligent Vehicles Symposium, Alcalá de Henares, Madrid, Spain, pp. 1090-1095, June 3-7, 2012. [12: Wu et al. 2012] Chi-Feng Wu, Cheng-Jian Lin, and Chi-Yung Lee, “Applying a Functional Neurofuzzy Network to Real-Time Lane Detection and Front-Vehicle Distance Measurement,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 42, No. 4, pp. 577-589, July 2012. [13: Doshi & Trivedi 2009] Anup Doshi and Mohan Manubhai Trivedi, “On the Roles of Eye Gaze and Head Dynamics in Predicting Driver's Intent to Change Lanes,” IEEE Transactions on Intelligent Transportation Systems, Vol. 10, No. 3, pp. 453-462, Sept. 2009. [14: Cualain et al. 2012] D. O. Cualain, M. Glavin, and E. Jones, “Multiple-Camera Lane Departure Warning System for the Automotive Environment,” IET Intelligent Transport Systems, Vol. 6, No. 3, pp. 223-234, Sep. 2012. [15: Nie et al. 2010] Yiming Nie, Xiangjing An, Zhenping Sun, Tao Wu, and Hangen He, “Fast Lane Detection Using Direction Kernel Function,” in Proceedings of Chinese Conference on Pattern Recognition, Chongqing, China, pp. 1-5, Oct. 21-23, 2010. [16: Cavallaro et al. 2005] A. Cavallaro, E. Salvador, and T. 
Ebrahimi, “Shadow-Aware Object-Based Video Processing,” IEE Proceedings of Vision, Image and Signal Processing, Vol. 152, No. 4, pp. 398-406, Aug. 5, 2005. [17: Rosebrock & Rilk 2012] Dennis Rosebrock and Markus Rilk, “Real-time Vehicle Detection with a Single Camera Using Shadow Segmentation and Temporal Verification,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 3061-3066, Oct. 7-12, 2012. [18: Cherng et al. 2009] Shen Cherng, Chiung-Yao Fang, Chia-Pei Chen, and Sei-Wang Chen, “Critical Motion Detection of Nearby Moving Vehicles in a Vision-Based Driver-Assistance System,” IEEE Transactions on Intelligent Transportation Systems, Vol. 10, No. 1, pp. 70-82, March 2009. [19: Wang & Lien 2008] Chi-Chen Raxle Wang and Jenn-Jier James Lien, “Automatic Vehicle Detection Using Local Features—A Statistical Approach,” IEEE Transactions on Intelligent Transportation Systems, Vol. 9, No. 1, pp. 83-96, March 2008. [20: Sivaraman & Trivedi 2010] Sayanan Sivaraman and Mohan Manubhai Trivedi, “A General Active-Learning Framework for On-Road Vehicle Recognition and Tracking,” IEEE Transactions on Intelligent Transportation Systems, Vol. 11, No. 2, pp. 267-276, June 2010. [21: Sivaraman & Trivedi 2012] Sayanan Sivaraman and Mohan Manubhai Trivedi, “Real-Time Vehicle Detection Using Parts at Intersections,” in Proceedings of IEEE International Conference on Intelligent Transportation Systems, Anchorage, Alaska, USA, pp. 1519-1524, Sep. 16-19, 2012. [22: Jazayeri et al. 2011] Amirali Jazayeri, Hongyuan Cai, Jiang Yu Zheng, and Mihran Tuceryan, “Vehicle Detection and Tracking in Car Video Based on Motion Model,” IEEE Transactions on Intelligent Transportation Systems, Vol. 12, No. 2, pp. 583-595, June 2011. [23: Xiong & Ding 2012] Bin Xiong and Xiaoqing Ding, “A Generic Object Detection Using a Single Query Image Without Training,” Tsinghua Science and Technology, Vol. 17, No. 2, pp. 194-201, April 2012. 
[24: Li et al. 2011] Yan Li, Leon Gu, and Takeo Kanade, “Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 9, pp. 1860-1876, Sep. 2011. [25: Zheng & Liang 2009] Wei Zheng and Luhong Liang, “Fast Car Detection Using Image Strip Features,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Miami, Florida, USA, pp. 2703-2710, June 20-25, 2009. [26: Karaduman et al. 2012] O. Karaduman, H. Eren, H. Kurum, and M. Celenk, “Approaching Car Detection via Clustering of Vertical-Horizontal Line Scanning Optical Edge Flow,” in Proceedings of IEEE International Conference on Intelligent Transportation Systems, Anchorage, Alaska, USA, pp. 502-507, Sep. 16-19, 2012. [27: Alonso et al. 2008] Javier Diaz Alonso, Eduardo Ros Vidal, Alexander Rotter, and Martin Muhlenberg, “Lane-Change Decision Aid System Based on Motion-Driven Vehicle Tracking,” IEEE Transactions on Vehicular Technology, Vol. 57, No. 5, pp. 2736-2746, Sep. 2008. [28: Klette et al. 2011] Reinhard Klette, Norbert Kruger, Tobi Vaudrey, Karl Pauwels, Marc van Hulle, Sandino Morales, Farid I. Kandil, Ralf Haeusler, Nicolas Pugeault, Clemens Rabe, and Markus Lappe, “Performance of Correspondence Algorithms in Vision-Based Driver Assistance Using an Online Image Sequence Database,” IEEE Transactions on Vehicular Technology, Vol. 60, No. 5, pp. 2012-2026, June 2011. [29: Jung et al. 2010] Ho Gi Jung, Dong Seok Kim, and Jaihie Kim, “Light-Stripe-Projection-Based Target Position Designation for Intelligent Parking-Assist System,” IEEE Transactions on Intelligent Transportation Systems, Vol. 11, No. 4, pp. 942-953, Dec. 2010. [30: Xu et al. 2012] Yanwu Xu, Dong Xu, Stephen Lin, Tony X. Han, Xianbin Cao, and Xuelong Li, “Detection of Sudden Pedestrian Crossings for Driving Assistance Systems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 42, No. 3, pp. 
729-739, June 2012. [31: Ge et al. 2009] Junfeng Ge, Yupin Luo, and Gyomei Tei, “Real-Time Pedestrian Detection and Tracking at Nighttime for Driver-Assistance Systems,” IEEE Transactions on Intelligent Transportation Systems, Vol. 10, No. 2, pp. 283-298, June 2009. [32: Bota & Nedevschi 2008] S. Bota and S. Nedevschi, “Multi-Feature Walking Pedestrians Detection for Driving Assistance Systems,” IET Intelligent Transport Systems, Vol. 2, No. 2, pp. 92-104, June 2008. [33: Liu et al. 2008] Ce Liu, William T. Freeman, Edward H. Adelson, and Yair Weiss, “Human-Assisted Motion Annotation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, Alaska, USA, pp. 1-8, June 23-28, 2008. [34: Liu 2009] Ce Liu, “Beyond Pixels: Exploring New Representations and Applications for Motion Analysis,” Doctoral Dissertation, Massachusetts Institute of Technology, June 2009. [35: Sundararajan 2011] Kalaivani Sundararajan, “Unified Point-Edgelet Feature Tracking,” Master's Thesis, Clemson University, May 2011. [36: Moghadam et al. 2012] Peyman Moghadam, Janusz A. Starzyk, and W. S. Wijesoma, “Fast Vanishing-Point Detection in Unstructured Environments,” IEEE Transactions on Image Processing, Vol. 21, No. 1, pp. 425-430, Jan. 2012. [37: Yu et al. 2011] Hongfei Yu, Wei Liu, Jianghua Pu, Bobo Duan, Huai Yuan, and Hong Zhao, “Lane Recognition Based on Location of Raised Pavement Markers,” in Proceedings of IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany, pp. 1013-1018, June 5-9, 2011. [38: Sun et al. 2006] Zehang Sun, George Bebis, and Ronald Miller, “On-Road Vehicle Detection: A Review,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 5, pp. 694-711, May 2006. [39: Tzomakas & Seelen 1998] Christos Tzomakas and Werner von Seelen, “Vehicle Detection in Traffic Scenes Using Shadows,” Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany, Technical Report IR-INI-98-06, Aug. 1998. [40: Bullkich et al. 
2012] Elad Bullkich, Idan Ilan, Yair Moshe, Yacov Hel-Or, and Hagit Hel-Or, “Moving Shadow Detection by Nonlinear Tone-Mapping,” in Proceedings of International Conference on Systems, Signals and Image Processing, Vienna, Austria, pp. 146-149, April 11-13, 2012. [41: Guizzo 2011] Erico Guizzo. (2011, Oct.). Automation, robotics, artificial intelligence of IEEE Spectrum. Retrieved: July 15, 2013. Available: http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works [42: Kong et al. 2010] Hui Kong, Jean-Yves Audibert, and Jean Ponce, “General Road Detection From a Single Image,” IEEE Transactions on Image Processing, Vol. 19, No. 8, pp. 2211-2220, Aug. 2010. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/60952 | - |
| dc.description.abstract | Perception and control policy are the two most important topics in automation, including robotics and assistance systems: perception provides information for purposeful movement, while control policy carries out that movement under real-time, accurate, and robust conditions.
In assistance systems, advanced safe-driving projects have been proposed in recent years that use machine intelligence to support drivers' recognition, judgment, and control, so as to avoid accidents caused by driver inattention or misjudgment. Among these, road detection is a popular topic and has been widely applied in lane departure warning, vehicle detection, and pedestrian detection systems. By functionality, road detection can be divided into two sub-topics: drivable region detection and on-road object detection. In drivable region detection, because road surfaces are too varied to analyze manually, specific machine learning methods such as neural networks and support vector machines are usually trained on road-region samples, and the trained classifier then recognizes the road surface in subsequent images. In practice, the quality of the result depends on the road images used for training; since it is impossible to enumerate every possible road scene for training, and impractical to ask users to retrain in every new scene, such machine learning approaches are not practical. In on-road object detection, the main concerns are successfully detecting and classifying on-road objects and then avoiding collisions; therefore, estimating the distance to on-road objects and predicting their moving trajectories are essential for collision time estimation. Although distance estimation is challenging for a monocular camera, which provides only color information without depth, we propose a novel method that accomplishes it in three steps: first, region growing constrained by color features extracted from an indicated road region, rather than a specific machine learning mechanism, to recover the complete road region; second, on-road vehicle detection in the non-drivable region using shadow detection and vehicle structure points; and third, a camera model combined with inverse perspective mapping to obtain accurate distance information from the image, thereby building a driver-assistance system that perceives the surroundings of the host vehicle and issues danger warnings. Finally, the detection results of these steps are displayed in a top-view grid map that is easy for drivers to understand. | zh_TW |
| dc.description.abstract | Perception and control policy are the keys to automation, including robotics and human-assistance systems. Perception provides information for purposed movements, and the control policy executes those movements in a real-time, accuracy-robustness-balanced way.
In human-assistance systems, advanced safety vehicle (ASV) projects, also known as driving assistance systems, have been proposed in recent years to support drivers' recognition, judgment, and control with machine intelligence, in order to avoid collisions and accidents caused by drivers' lack of recognition or misjudgment. Road detection is a popular topic and has been widely applied to lane departure warning systems, vehicle detection, and pedestrian detection. By objective, road detection can be divided into two parts: drivable region detection and on-road object recognition. In drivable region detection, specific machine learning algorithms such as neural networks and support vector machines are usually used to classify the road surface region because the characteristics of road surfaces vary widely. However, it is not practical for users to retrain such algorithms every time they enter a scene with different characteristics. In on-road object recognition, the keys are detecting and classifying on-road objects and then avoiding collisions; thus, estimating the distances and predicting the moving trajectories of on-road objects and the host vehicle are important for collision time estimation. Although distance estimation from a monocular camera, which provides only RGB pixel information without a depth sensor, is a challenging issue, we propose a method comprising region growing with color feature restrictions estimated from an indicated drivable region instead of specific machine learning algorithms, detection of on-road vehicles in the non-drivable region by shadow detection and vehicle structure points, and the combination of a camera model and inverse perspective mapping (IPM) applied to a monocular camera image, to achieve a vehicle-surrounding recognition and warning system with accurate distance information for driver assistance. 
Finally, the top-view grid map is chosen to represent all the detection results in an easily understood interface. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T10:38:08Z (GMT). No. of bitstreams: 1 ntu-102-R00921016-1.pdf: 6355579 bytes, checksum: 4c457656d37ee5b4db76d85c17c2a580 (MD5) Previous issue date: 2013 | en |
| dc.description.tableofcontents | Thesis Certification by the Oral Examination Committee i
Acknowledgements ii Chinese Abstract iii ABSTRACT v CONTENTS vii LIST OF FIGURES x LIST OF TABLES xxiv Chapter 1 Introduction 1 1.1 Motivation 1 1.2 Problem Description 2 1.3 Contribution of the Thesis 5 1.4 Organization of the Thesis 6 Chapter 2 Literature Survey 8 2.1 Driving Assistance Systems 8 2.2 Vision-Based Driving Assistance Systems 10 Chapter 3 Background Knowledge 13 3.1 Color Spaces 13 3.1.1 RGB Color Space 14 3.1.2 HSV Color Space 15 3.2 Color Features 20 3.2.1 Mean 20 3.2.2 Standard Deviation (STD) 20 3.2.3 Entropy 21 3.3 Edge Detectors 22 3.3.1 Canny Edge Detector 22 3.3.2 Directional Kernel Edge Detector 24 3.4 Otsu’s Method for Auto Thresholding 24 3.5 Histogram of Oriented Gradients (HOGs) 25 3.6 Pinhole Model and Inverse Perspective Mapping 27 3.7 Kalman Filter 29 Chapter 4 Vision-Based Drivable Region Labeling 32 4.1 System Architecture 34 4.2 Growing-Based Drivable Region Detection 36 4.3 Spatiotemporal-Based Vanishing Point Estimation 68 4.3.1 Vanishing Point Candidates Finding via Line Extraction 69 4.3.2 Spatiotemporal-Based Voting for a Confidence Weighted Vanishing Point from Vanishing Point Candidates 77 4.4 Procedures of Road Condition Labeling 86 4.4.1 Estimation of Camera Model using a Vanishing Point 86 4.4.2 Position Estimation of Points on the Road Surface via Camera Model and Inverse Perspective Mapping 96 4.4.3 Road Condition Labeling using a Top-View Grid Map 100 4.4.4 Region of Interest Estimation 103 4.5 Lane Changing Detection and Warning 105 4.6 Temporal and Spatial Coherencies for Boosting Calculation Efficiency and Accuracy 110 4.6.1 Spatial Coherency of Initial Indicated Region 110 4.6.2 Temporal Coherency of Color Feature Restrictions 117 4.6.3 Spatial Coherency of Region of Interest Estimation 118 Chapter 5 Range Estimation of On-Road Vehicles 119 5.1 System Architecture 122 5.2 On-Road Object Extraction via Shadow Detection 124 5.3 The Contour Estimation from Vehicle Structure Points 143 5.4 Range Estimation of Vehicles 150 5.4.1 Vehicle Matching via Optical Flow 151 5.4.2 Vehicle Tracking via Kalman Filter 152 Chapter 6 Experimental Results and Analysis 155 6.1 The Overall System Architecture 155 6.2 Experimental Scenes Analysis 156 6.3 Vision-Based Drivable Region Labeling 159 6.4 Range Estimation of On-Road Vehicles 204 Chapter 7 Conclusions and Future Works 245 7.1 Conclusions 245 7.2 Future Works 246 REFERENCES 247 APPENDIX A 256 | |
| dc.language.iso | en | |
| dc.subject | top-view grid map | zh_TW |
| dc.subject | driving assistance systems | zh_TW |
| dc.subject | road detection | zh_TW |
| dc.subject | region growing | zh_TW |
| dc.subject | on-road vehicle detection | zh_TW |
| dc.subject | inverse perspective mapping (IPM) | zh_TW |
| dc.subject | monocular camera | zh_TW |
| dc.subject | on-road vehicle detection | en |
| dc.subject | top-view grid map | en |
| dc.subject | inverse perspective mapping (IPM) | en |
| dc.subject | region growing | en |
| dc.subject | road detection | en |
| dc.subject | monocular camera | en |
| dc.subject | driving assistance systems | en |
| dc.title | Drivable Region Analysis and Vehicle Detection Using Monocular Images | zh_TW |
| dc.title | Monocular Vision-Based Drivable Region Labeling and Range Estimation of On-Road Vehicles | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 101-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 簡忠漢,李後燦,黃正民 | |
| dc.subject.keyword | driving assistance systems, monocular camera, road detection, region growing, on-road vehicle detection, inverse perspective mapping (IPM), top-view grid map | zh_TW |
| dc.subject.keyword | driving assistance systems, monocular camera, road detection, region growing, on-road vehicle detection, inverse perspective mapping (IPM), top-view grid map | en |
| dc.relation.page | 257 | |
| dc.rights.note | Paid authorization | |
| dc.date.accepted | 2013-08-13 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Electrical Engineering | zh_TW |
| Appears in Collections: | Department of Electrical Engineering | |
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-102-1.pdf (Restricted access) | 6.21 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
