Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68525
Full metadata record
DC 欄位 | 值 | 語言 |
---|---|---|
dc.contributor.advisor | 連豊力(Feng-Li Lian) | |
dc.contributor.author | Yu-Hsuan Wu | en |
dc.contributor.author | 吳余軒 | zh_TW |
dc.date.accessioned | 2021-06-17T02:24:03Z | - |
dc.date.available | 2022-08-24 | |
dc.date.copyright | 2017-08-24 | |
dc.date.issued | 2017 | |
dc.date.submitted | 2017-08-18 | |
dc.identifier.citation | [1: Zhang et al. 2014] Daiming Zhang, Bin Fang, and Weibin Yang, “Robust inverse perspective mapping based on vanishing point”, in Proceedings of the International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), Oct. 18-19, 2014.
[2: Liu et al. 2006] Liu Hua-jun, Guo Zhi-bo, and Lu Jian-feng, “A Fast Method for Vanishing Point Estimation and Tracking and Its Application in Road Images”, in Proceedings of the 6th International Conference on ITS Telecommunications, Chengdu, China, June 21-23, 2006.
[3: Yang et al. 2017] Weibin Yang, Bin Fang, and Yuan Yan Tang, “Fast and Accurate Vanishing Point Detection and Its Application in Inverse Perspective Mapping of Structured Road”, IEEE Transactions on Systems, Man, and Cybernetics: Systems, pp. 1-12, May 23, 2017.
[4: Yuan et al. 2014] Jun Yuan, Shuming Tang, Xiuqin Pan, and Hong Zhang, “A robust vanishing point estimation method for lane detection”, in Proceedings of the 33rd Chinese Control Conference, Nanjing, China, pp. 4887-4892, July 28-30, 2014.
[5: Borges & Aldon 2004] Geovany Araujo Borges and Marie-Jose Aldon, “Line Extraction in 2D Range Images for Mobile Robotics”, Journal of Intelligent and Robotic Systems, vol. 40, pp. 267-297, July 2004.
[6: Jo et al. 2014] Kichun Jo, Junsoo Kim, Dongchul Kim, Chulhoon Jang, and Myoungho Sunwoo, “Development of Autonomous Car—Part I: Distributed System Architecture and Development Process”, IEEE Transactions on Industrial Electronics, vol. 61, May 1, 2014.
[7: Jo et al. 2015] Kichun Jo, Junsoo Kim, Dongchul Kim, Chulhoon Jang, and Myoungho Sunwoo, “Development of Autonomous Car—Part II: A Case Study on the Implementation of an Autonomous Driving System Based on Distributed Architecture”, IEEE Transactions on Industrial Electronics, vol. 62, Mar. 9, 2015.
[8: Jia et al. 2016] Bingxi Jia, Jian Chen, and Kaixiang Zhang, “Drivable Road Reconstruction for Intelligent Vehicles Based on Two-View Geometry”, IEEE Transactions on Industrial Electronics, Dec. 23, 2016.
[9: Na et al. 2016] Kiin Na, Byungjae Park, and Beomsu Seo, “Drivable space expansion from the ground base for complex structured roads”, in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Oct. 9-12, 2016.
[10: Ilas 2013] Constantin Ilas, “Perception in autonomous ground vehicles”, in Proceedings of the International Conference on Electronics, Computers and Artificial Intelligence (ECAI), June 27-29, 2013.
[11: Bila et al. 2016] Cem Bila, Fikret Sivrikaya, Manzoor A. Khan, and Sahin Albayrak, “Vehicles of the Future: A Survey of Research on Safety Issues”, IEEE Transactions on Intelligent Transportation Systems, pp. 1046-1065, Sep. 2, 2016.
[12: Zhu et al. 2017] Hao Zhu, Ka-Veng Yuen, and Lyudmila Mihaylova, “Overview of Environment Perception for Intelligent Vehicles”, IEEE Transactions on Intelligent Transportation Systems, Feb. 15, 2017.
[13: Phueakjeen et al. 2011] Worawit Phueakjeen, Nattha Jindapetch, and Leang Kuburat, “A study of the edge detection for road lane”, in Proceedings of the 8th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, May 17-19, 2011.
[14: Bottazzi et al. 2013] Vitor S. Bottazzi, Paulo V. K. Borges, and Jun Jo, “A vision-based lane detection system combining appearance segmentation and tracking of salient points”, in Proceedings of the IEEE Intelligent Vehicles Symposium (IV), June 23-26, 2013.
[15: Lin et al. 2010] Qing Lin, Youngjoon Han, and Hernsoo Hahn, “Real-Time Lane Departure Detection Based on Extended Edge-Linking Algorithm”, in Proceedings of the Second International Conference on Computer Research and Development, May 7-10, 2010.
[16: Ogawa & Takagi 2006] T. Ogawa and K. Takagi, “Lane Recognition Using On-vehicle LIDAR”, in Proceedings of the IEEE Intelligent Vehicles Symposium, Tokyo, Japan, June 13-15, 2006.
[17: Wijesoma et al. 2004] W. S. Wijesoma, K. R. S. Kodagoda, and A. P. Balasuriya, “Road-boundary detection and tracking using ladar sensing”, IEEE Transactions on Robotics and Automation, pp. 456-464, June 2004.
[18: Han et al. 2012] Jaehyun Han, Dongchul Kim, Minchae Lee, and Myoungho Sunwoo, “Enhanced Road Boundary and Obstacle Detection Using a Downward-Looking LIDAR Sensor”, IEEE Transactions on Vehicular Technology, vol. 61, no. 3, March 2012.
[19: Li et al. 2014] Qingquan Li, Long Chen, and Ming Li, “A Sensor-Fusion Drivable-Region and Lane-Detection System for Autonomous Vehicle Navigation in Challenging Road Scenarios”, IEEE Transactions on Vehicular Technology, vol. 63, pp. 540-555, Sep. 2014.
[20: Ying & Li 2016] Zhenqiang Ying and Ge Li, “Robust lane marking detection using boundary-based inverse perspective mapping”, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Shanghai, China, March 20-25, 2016.
[21: Gu et al. 2015] Jingchen Gu, Qieshi Zhang, and Sei-ichiro Kamata, “Robust road lane detection using extremal-region enhancement”, in Proceedings of the 3rd IAPR Asian Conference on Pattern Recognition, Nov. 3-6, 2015.
[22: Tan et al. 2014] Huachun Tan, Yang Zhou, Yong Zhu, Danya Yao, and Keqiang Li, “A novel curve lane detection based on improved River Flow and RANSAC”, in Proceedings of the IEEE 17th International Conference on Intelligent Transportation Systems, Qingdao, China, Oct. 8-11, 2014.
[23: Xu et al. 2009] Huarong Xu, Xiaodong Wang, Hongwu Huang, Keshou Wu, and Qiu Fang, “A fast and stable lane detection method based on B-spline curve”, in Proceedings of the IEEE 10th International Conference on Computer-Aided Industrial Design & Conceptual Design, Wenzhou, China, Nov. 26-29, 2009.
[24: Chen & Wang 2006] Qiang Chen and Hong Wang, “A real-time lane detection algorithm based on a Hyperbola-pair model”, in Proceedings of the IEEE Intelligent Vehicles Symposium, Tokyo, Japan, June 13-15, 2006.
[25: Jung & Kelber 2004] C. R. Jung and C. R. Kelber, “A robust linear-parabolic model for lane following”, in Proceedings of the 17th Brazilian Symposium on Computer Graphics and Image Processing, Curitiba, Brazil, Oct. 20, 2004.
[26: Pollard et al. 2011] Evangeline Pollard, Dominique Gruyer, Jean-Philippe Tarel, Sio-Song Ieng, and Aurelien Cord, “Lane Marking Extraction with Combination Strategy and Comparative Evaluation on Synthetic and Camera Images”, in Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems, Washington, DC, USA, Oct. 5-7, 2011.
[27: Ozgunalp & Dahnoun 2014] Umar Ozgunalp and Naim Dahnoun, “Robust Lane Detection & Tracking Based on Novel Feature Extraction and Lane Categorization”, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Florence, Italy, May 4-9, 2014.
[28: Bertozzi & Broggi 1998] M. Bertozzi and A. Broggi, “GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection”, IEEE Transactions on Image Processing, vol. 7, pp. 62-81, Jan. 1998.
[29: Tan et al. 2014] Huachun Tan, Yang Zhou, Yong Zhu, Danya Yao, and Keqiang Li, “A novel curve lane detection based on improved River Flow and RANSAC”, in Proceedings of the IEEE 17th International Conference on Intelligent Transportation Systems, Qingdao, China, Oct. 8-11, 2014.
[30: Lu et al. 2007] Weina Lu, Haifang Wang, and Qingzhu Wang, “A Synchronous Detection of the Road Boundary and Lane Marking for Intelligent Vehicles”, in Proceedings of the Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD), Qingdao, China, July 30-Aug. 1, 2007.
[31: Lu et al. 2008] Weina Lu, Yucai Zheng, YuQuan Ma, and Tao Liu, “An Integrated Approach to Recognition of Lane Marking and Road Boundary”, in Proceedings of the First International Workshop on Knowledge Discovery and Data Mining (WKDD), Adelaide, SA, Australia, Jan. 23-24, 2008.
[32: Tran et al. 2011] Trung-Thien Tran, Hyo-Moon Cho, and Sang-Bock Cho, “A Robust Method for Detecting Lane Boundary in Challenging”, Information Technology Journal, pp. 2300-2307, 2011.
[33: Lim et al. 2012] King Hann Lim, Kah Phooi Seng, and Li-Minn Ang, “River Flow Lane Detection and Kalman Filtering-Based B-Spline Lane Tracking”, International Journal of Vehicular Technology, March 27, 2012.
[34: Zhang et al. 2015] Yihuan Zhang, Jun Wang, Xiaonian Wang, Chaocheng Li, and Liang Wang, “A real-time curb detection and tracking method for UGVs by using a 3D-LIDAR sensor”, in Proceedings of the IEEE Conference on Control Applications, Sydney, Australia, pp. 1020-1025, Sept. 21-23, 2015.
[35: Musicki et al. 1994] D. Musicki, R. Evans, and S. Stankovic, “Integrated probabilistic data association”, IEEE Transactions on Automatic Control, vol. 39, pp. 1237-1241, June 1994.
[36: Besl & McKay 1992] P. J. Besl and N. D. McKay, “A Method for Registration of 3-D Shapes”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, Feb. 1992.
[37: Fischler & Bolles 1981] Martin A. Fischler and Robert C. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography”, Communications of the ACM, vol. 24, pp. 381-395, June 1981.
[38: Kortli et al. 2017] Yassin Kortli, Mehrez Marzougui, and Belgacem Bouallegue, “A novel illumination-invariant lane detection system”, in Proceedings of the 2nd International Conference on Anti-Cyber Crimes, Abha, Saudi Arabia, March 26-27, 2017.
Books
[39: Laganière 2011] R. Laganière, OpenCV 2 Computer Vision Application Programming Cookbook, 1st ed., Packt Publishing Ltd., May 2011.
[40: Gonzalez & Woods 2008] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Pearson, 2008.
[41: Shapiro & Stockman 2001] Linda G. Shapiro and George C. Stockman, Computer Vision, 1st ed., 2001.
Websites
[42: Iterative Closest Point Algorithm 2013] Jakob Wilm, “Iterative Closest Point”, MATLAB Central File Exchange. Available: https://www.mathworks.com/matlabcentral/fileexchange/27804-iterative-closest-point?requestedDomain=www.mathworks.com | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68525 | - |
dc.description.abstract | 自動駕駛輔助系統是近年來相當熱門的主題,為了避免因為駕駛者沒有專心或是錯誤的判斷所導致的車禍意外,因此利用機器的人工智慧、電腦視覺以及影像處理技術的演算法配合安裝在車上的感測器來協助駕駛者的安全認知與判斷,在輔助系統當中,定位與感知是兩項重要的課題。
可駕駛區域分析是駕駛輔助系統對於感知的一項重要依據。在道路上,車子主要行駛於車道標線之內。然而,並非所有環境的道路都有車道標線可以遵循。因此,路面地形的邊界是另一項可駕駛區域的重要特徵,雖然多層雷射測距可以用來感知不同距離的道路邊界,但其結果容易受地形變化影響。此外,道路上的車道標線在影像上的延伸線所得到的消失點,可以用來提高反透視投影轉換的正確性。 定位是另一項輔助駕駛系統的核心之一,全球定位系統 (GPS) 是當前行車定位為不可或缺的技術。然而在複雜的動態環境中行駛,尤其大城市,GPS 多路徑反射的問題會很明顯。這樣得到的 GPS 定位信息很容易有幾米的誤差,因此必須藉助其他感測器來輔助定位。 本篇論文主要利用單眼相機得到的影像來擷取車道標線的資訊,透過初始消失點估測與反透視投影來協助近、遠距離的車道線偵測,本文的車道線偵測主要在高速公路、城市及校院區三個場景分別在不同條件環境,如: 照明不均、障礙物出現和車道的形狀,來進行偵測。在照明不均與轉彎的道路上,車道線皆可以被準確偵測,但在有障礙物出現的場景會有部分的錯誤,在短暫連續的直線的車道場景,車道線偵測的準確率可以達到 80%。 此外,利用多層雷射測距與機率資料結合濾波器 (IPDAF)來追蹤車道路面上地形的邊界,藉由疊代最近點演算法在偵測的地形邊界與已知的地形邊界地圖計算出均方根誤差,在地形邊界的準確率可以達到 80%。根據濾波器得到的追蹤機率,路面邊界的追蹤成功率在第一、第二層雷射可以達到 96%。接著透過座標轉換,結合影像座標上的車道線資訊與雷射極座標上的路面邊界資訊轉換到以車身為參考的平面座標來進行可駕駛區域的分析。最後,將多層雷射偵測得到的路面地形的邊界資訊視為道路上結構化的特徵,與高精度地圖匹配,來進行 GPS 偏移的修正。 | zh_TW |
dc.description.abstract | Automatic driving assistance systems have become a popular research topic in recent years. Algorithms based on machine intelligence, computer vision, and image processing, combined with sensors installed on advanced safety vehicles, have been proposed to avoid collisions and accidents caused by driver inattention or misjudgment. Localization and perception are two important issues for an automatic driving assistance system.
Drivable region analysis is one of the most important foundations for the perception capability of a driving assistance system. Generally, cars are driven on the road between the lane markings on both sides. However, on unstructured roads in some environments there are no lane markings to serve as references. Curbs are therefore another road feature that delimits the drivable region. Although road boundaries at different distances can be detected by a multi-layer laser scanner, the results are affected by variations in terrain. In addition, the vanishing point given by the intersection of the extended lane-marking lines in the image can improve the accuracy of inverse perspective mapping. Localization is another core task of a driving assistance system. The Global Positioning System (GPS) is an indispensable technology for vehicle positioning today. When driving in complex and dynamic environments, especially downtown, multipath interference from satellites causes GPS drift of several meters. It is therefore necessary to combine GPS with other sensors to correct this error. In this thesis, lane-marking information is extracted from the images of a monocular camera. Through vanishing point estimation and inverse perspective mapping, the proposed lane detection method detects lane markings in both the near and the far region. The experimental scenes for lane detection are classified into three categories (freeway, downtown, and campus) under different conditions, including uneven illumination, obstacles on the road, and road shape. Lane markings are detected accurately under uneven illumination and on curved roads, while obstacles on the road cause partial false detections in the far region. Over a sequence of continuous time steps, the accuracy of the proposed lane detection method reaches 80% in scenes with straight lanes.
Besides, a multi-layer laser scanner and an integrated probabilistic data association filter (IPDAF) are used to detect curbs, which are the boundaries of the road surface. The curb detection results are evaluated by the root mean square error of the iterative closest point algorithm between the detected curbs and a prior curb map, with accuracy as high as 80%. According to the tracking probabilities provided by the IPDAF, the road-boundary tracking accuracy of layers 1 and 2 of the multi-layer laser, which cover the near range, reaches 96%. Then, by coordinate transformation, the lane-marking information in the image coordinate frame and the curb information in the polar coordinate frame of the laser are fused into a plane coordinate frame referenced to the ego-vehicle for drivable region analysis. Finally, the curbs detected by the multi-layer laser are treated as structural features of the road; by matching these features against the prior curb map, the drift error of GPS can be corrected. | en |
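The vanishing-point step described in the abstract can be illustrated with a minimal sketch: two lane-marking edges, each given by a pair of image points, intersect at the vanishing point, which is conveniently computed with cross products in homogeneous coordinates. This is not the thesis's implementation (which extracts lines with the Hough transform and accumulates a spatiotemporal weighted map of candidates); the pixel coordinates below are hypothetical.

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Intersect two lane-marking lines, each given as a pair of (x, y)
    image points; returns None if the lines are parallel."""
    def homog(p):
        return np.array([p[0], p[1], 1.0])
    # A line through two points is their cross product in homogeneous coords.
    la = np.cross(homog(line_a[0]), homog(line_a[1]))
    lb = np.cross(homog(line_b[0]), homog(line_b[1]))
    vp = np.cross(la, lb)        # intersection of the two lines
    if abs(vp[2]) < 1e-9:        # parallel lines have no finite intersection
        return None
    return vp[:2] / vp[2]

# Two converging lane edges in a 640x480 image (hypothetical coordinates):
left = ((100, 480), (300, 240))
right = ((540, 480), (340, 240))
vp = vanishing_point(left, right)
print(vp)  # the two edges meet at (320, 216) in this example
```

Averaging such intersections over many line pairs and over time is what makes the estimate robust to spurious edges.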
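The curb-map matching idea for GPS correction can likewise be sketched. A full iterative closest point loop re-pairs points and re-solves for a rigid transform each iteration; the toy below assumes known correspondences and a translation-only drift, so a single least-squares step (centroid alignment) recovers the offset and yields the residual RMSE used for evaluation. The point values are invented for illustration.

```python
import numpy as np

def align_curbs(detected, curb_map):
    """Translation-only least-squares alignment of detected curb points to a
    prior curb map with known correspondences: the best shift is the
    difference of centroids. Returns the shift (the GPS drift estimate)
    and the residual root mean square error."""
    detected = np.asarray(detected, dtype=float)
    curb_map = np.asarray(curb_map, dtype=float)
    offset = curb_map.mean(axis=0) - detected.mean(axis=0)
    residual = curb_map - (detected + offset)
    rmse = np.sqrt((residual ** 2).sum(axis=1).mean())
    return offset, rmse

# Curb points from a prior map, and the same points as "detected" after a
# constant GPS drift of (2.0, -1.5) meters (hypothetical data).
curb_map = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [3.0, 0.3]])
detected = curb_map - np.array([2.0, -1.5])
offset, rmse = align_curbs(detected, curb_map)
print(offset, rmse)  # recovers the drift; residual is near zero
```

In the real pipeline the correspondences are unknown, so ICP alternates this alignment step with a nearest-neighbor matching step until the RMSE converges.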
dc.description.provenance | Made available in DSpace on 2021-06-17T02:24:03Z (GMT). No. of bitstreams: 1 ntu-106-R04921065-1.pdf: 10750169 bytes, checksum: 0821a551328d9c5b46894b0c1f48fcc0 (MD5) Previous issue date: 2017 | en |
dc.description.tableofcontents | 摘要 (Chinese Abstract)
ABSTRACT
CONTENTS
LIST OF FIGURES
LIST OF TABLES
Chapter 1 Introduction
  1.1 Motivation
  1.2 Problem Formulation
  1.3 Contributions
  1.4 Organization of the Thesis
Chapter 2 Background and Literature Survey
  2.1 Autonomous Driving Systems
  2.2 Vision-Based Perception
  2.3 Laser-Based Perception
Chapter 3 Related Algorithms
  3.1 Pinhole Model and Inverse Perspective Mapping (IPM)
  3.2 Sobel Edge Detection
  3.3 Hough Transform
  3.4 Otsu's Method for Auto Thresholding
  3.5 Integrated Probabilistic Data Association Filter (IPDAF)
  3.6 Iterative Closest Point (ICP)
  3.7 Random Sample Consensus (RANSAC)
Chapter 4 Vision-Based Lane Detection
  4.1 System Architecture
  4.2 Vanishing Point Estimation
    4.2.1 Vanishing Line Estimation
    4.2.2 Vanishing Point Candidates Finding via Line Extraction
    4.2.3 Spatiotemporal-Based Weighted Map of Vanishing Points
  4.3 Lane Feature Extraction
    4.3.1 Horizontal Gradient Feature Extraction
    4.3.2 Image Enhancement
    4.3.3 Adaptive Thresholding Column by Column
    4.3.4 Lane Feature Identification
  4.4 Lane Position Mapping
    4.4.1 Estimation of Camera Model Using Position of a Vanishing Point
    4.4.2 Distance Estimation of Points on Road Surface
Chapter 5 Curb Detection and Road Boundary Tracking
  5.1 System Architecture
  5.2 Feature Extraction and Classification
    5.2.1 Break Point Detection
    5.2.2 Line Segment Extraction
    5.2.3 Line Combination and Classification for Curb Detection
  5.3 Road Boundary Tracking
    5.3.1 Measurement Model
    5.3.2 Target Model
    5.3.3 Application of IPDAF
Chapter 6 Experimental Results and Analysis
  6.1 Hardware Platform and Installation
  6.2 Experimental Scenes Analysis
  6.3 Lane Detection
  6.4 Curb Detection and Localization
  6.5 Road Boundary Tracking
  6.6 Drivable Region Classification
  6.7 Drivable Region Detection Analysis
  6.8 Summary of Results
Chapter 7 Conclusions and Future Works
  7.1 Conclusions
  7.2 Future Works
References | |
dc.language.iso | en | |
dc.title | 結合基於單眼影像的車道線偵測與多層雷射的路緣偵測之可駕駛道路區域分析 | zh_TW |
dc.title | Monocular Vision-Based Lane Detection and Multi-Layer Laser Based Curb Detection for Drivable-Region Analysis | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 簡忠漢(Jong-Hann Jean),李後燦(Hou-Tsan Lee),黃正民(Cheng-Ming Huang) | |
dc.subject.keyword | 車道線偵測,道路地形邊界追蹤,可駕駛區域偵測,消失點偵測,機率資料結合濾波器,地圖匹配, | zh_TW |
dc.subject.keyword | Lane detection, curb detection, drivable region detection, vanishing point detection, integrated probabilistic data association filter, map matching | en |
dc.relation.page | 216 | |
dc.identifier.doi | 10.6342/NTU201703689 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2017-08-20 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
Appears in Collections: | 電機工程學系 (Department of Electrical Engineering)
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-106-1.pdf (currently not authorized for public access) | 10.5 MB | Adobe PDF |