Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74941

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 連豊力 | |
| dc.contributor.author | Ching-Yu Lin | en |
| dc.contributor.author | 林景昱 | zh_TW |
| dc.date.accessioned | 2021-06-17T09:10:50Z | - |
| dc.date.available | 2019-11-04 | |
| dc.date.copyright | 2019-11-04 | |
| dc.date.issued | 2019 | |
| dc.date.submitted | 2019-08-13 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74941 | - |
| dc.description.abstract | 精準車輛定位是發展自動駕駛系統中不可或缺的先決條件。其系統必須達到車道級的定位精度以確保安全導航。在目前商用的解決方案中,全球定位系統(GPS)是最為廣泛使用的技術。然而,GPS 可能會受到遮擋、大氣擾動的多路徑效應等影響進而產生無法滿足車道級定位精度的誤差。為了彌補 GPS 不準確的定位結果,近年來基於感測器融合的車輛定位方法被廣泛的研究。其中融合了來自多種感測器(如相機、光達和雷達)的不同測量值以感知自身車輛周圍的特徵,並且將其和已知的地圖比對以進一步的修正 GPS 的誤差。
本文提出了一個基於視覺的感知系統,利用單目相機來偵測道路標記與車道線以提供可靠的視覺測量結果。選擇道路標記與車道線作為欲偵測高階特徵是由於和其他的物體相比,道路標記與車道線具有視覺上獨特的外觀並且能夠容易地被標註在地圖中。路標辨識演算法使用基於模板匹配的方法來辨識路標類型並估計其相對應的位置以及方向。然而,比對每一個可能的位置與方向是非常耗時的。透過結合車輛的動態資訊,提出的演算法可以顯著地減少路標辨識的搜尋時間以滿足系統實時運作的需求。在車道線偵測中,通過本文提出的梯度方向一致性與基於 Inverse Perspective Mapping (IPM) 的空間約束,清晰可見的車道線在初始階段中被準確地初始化。透過先前初始化的結果,基於時序上整合的方法結合了連續時間內的色彩資訊以及前一時刻偵測到的車道線方向。本文提出的偵測方法在具有挑戰性的光影變化以及路面被陰影覆蓋的場景中,也能夠有效地追蹤車道線。最後,感測器融合演算法採用粒子濾波器來有效地融合視覺測量結果、慣性感測器(IMU)與 GPS 以修正 GPS 誤差,其中路標與車道線的測量結果分別提供相對於自身車輛的側向偏移與相對位置資訊,慣性感測器則是用來估計自身車輛的動態行為。本文提出的車輛定位系統在不同光照條件的實際車輛行駛場景下進行評估,實驗結果表明,通過準確的路標辨識與車道線偵測結果,定位的準確度可以達到車道級精度的要求。 | zh_TW |
| dc.description.abstract | Precise vehicle localization is a prerequisite for autonomous driving systems. For safe navigation, lane-level accuracy, with an error within one meter, is required. Among current commercially available solutions, the Global Positioning System (GPS) is the most widely used. However, the GPS position suffers from occlusions, multipath effects, and atmospheric disturbances, producing errors that fail the accuracy required of the localization system. To compensate for the unreliable GPS measurements, sensor fusion-based localization approaches have been extensively researched in recent years. These approaches fuse measurements collected by different sensors, such as cameras, LiDAR, and radar, to perceive the features around the ego-vehicle and match them with a known map to further correct the GPS errors.
In this thesis, the proposed vision-based perception system uses a monocular camera to detect high-level features in the form of road markers and driving lanes, providing reliable visual measurements. Road markers and driving lanes are chosen because of their visually distinctive appearance compared with other objects, and because they are easily annotated in the digital map. For road marker recognition, a template matching-based method is applied to recognize the marker type and to estimate the corresponding position and orientation. However, matching all possible positions and orientations is extremely time-consuming, so the proposed method incorporates the vehicle motion to drastically reduce the search time for road markers and meet the real-time requirement. For driving lane detection, the proposed gradient orientation consistency, combined with Inverse Perspective Mapping (IPM) spatial constraints, is used to initialize the clearly visible lanes in the initial detection stage. With the knowledge of the initialized lanes, a temporal integration method combines the temporal color information with the previously detected lane orientations to track the potential candidates of driving lanes in challenging scenarios, such as varying illumination and pavement covered by cast shadows. Finally, a Particle Filter is employed to complementarily integrate the visual measurements, an Inertial Measurement Unit (IMU), and GPS to correct the GPS errors, where the lane measurement and the road marker measurement provide the lateral offset and the relative position with respect to the ego-vehicle, respectively, and the IMU is used to estimate the dynamic behavior of the ego-vehicle. The proposed vehicle localization system is evaluated in real driving scenarios on the ITRI campus under varying illumination conditions, and the experimental results show that lane-level localization accuracy is achieved by robustly detecting road markers and driving lanes. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T09:10:50Z (GMT). No. of bitstreams: 1 ntu-108-R04921069-1.pdf: 91474299 bytes, checksum: 2a81435999b80a385790dd65bc30f1ca (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | 摘要 vii
ABSTRACT ix
CONTENTS xii
LIST OF FIGURES xiv
LIST OF TABLES xviii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Formulation 3
1.3 Contribution 7
1.4 Organization of the Thesis 9
Chapter 2 Background and Literature Survey 10
2.1 Autonomous Driving Systems 10
2.2 Vision-Based Perception 13
2.2.1 Driving Lane Detection 15
2.2.2 Road Marker Recognition 19
2.3 Localization of Autonomous Vehicle 21
Chapter 3 Related Algorithms 26
3.1 Pinhole Camera Model 26
3.2 Inverse Perspective Mapping (IPM) 28
3.3 Sobel Operator 31
3.4 ORB Feature 32
3.5 Hough Transform 33
3.6 Particle Filter 36
Chapter 4 Road Marker Recognition and Driving Lane Detection 40
4.1 System Architecture 40
4.2 Road Marker Detection 43
4.2.1 Median Local Threshold using Histogram-Based Selection 44
4.2.2 Template Matching-Based Road Marker Recognition 53
4.2.3 Matching Region Reduction Based on Vehicle Motion 60
4.2.4 Reduction of Converting Area of MLT Image 63
4.3 Driving Lane Feature Extraction 68
4.3.1 Image Preprocessing of Lane Detection 69
4.3.2 Gradient Orientation Consistency via Region Growing 73
4.3.3 Feature Clustering 78
4.4 Lane Initialization 81
4.4.1 False Positive Removal using IPM Spatial Constraints 82
4.4.2 Lane Modeling 89
4.5 Temporal-Based Lane Tracking 90
4.5.1 Temporal-Based Discriminative Color Model 91
4.5.2 Generation of Tracking Feature Map 95
4.5.3 Lane Fitting using Tracking Feature Map 101
Chapter 5 Sensor Fusion-Based Localization via Particle Filter 105
5.1 System Architecture 105
5.2 Visual Measurement Extraction 107
5.2.1 Road Marker Recognition 109
5.2.2 Driving Lane Detection 112
5.3 Vehicle Localization using Particle Filter 114
5.3.1 Prediction Model 118
5.3.2 Measurement Update 119
5.3.3 Vehicle Pose Determination and Resampling 121
Chapter 6 Experimental Results and Analysis 123
6.1 Experiment Setup 123
6.1.1 Experiment Hardware Platform 123
6.1.2 ITRI Digital Map 126
6.2 Experimental Scenarios 127
6.3 Road Marker Recognition 133
6.3.1 MLT Parameters and Calculation 134
6.3.2 Threshold of Matching Response 139
6.3.3 Matching Region Reduction 144
6.4 Driving Lane Detection 149
6.5 Analysis of Particle Filter-Based Localization 161
6.5.1 Visual Measurement Result in Simple Scenario 161
6.5.2 Effects of Parameters on Particle Filter 163
6.6 Localization Results in Real Driving Scenario 175
6.6.1 Small-Scale Scene with RTK-GPS Validation 176
6.7 Calculation Time and Hardware Environment 188
6.8 Summary of Experiment Results 191
Chapter 7 Conclusions and Future Works 194
7.1 Conclusions 194
7.2 Future Works 196
References 197 | |
| dc.language.iso | en | |
| dc.subject | 感測器融合 | zh_TW |
| dc.subject | 路面標示辨識 | zh_TW |
| dc.subject | 車輛定位 | zh_TW |
| dc.subject | 車道線偵測 | zh_TW |
| dc.subject | 單眼視覺 | zh_TW |
| dc.subject | 地圖匹配 | zh_TW |
| dc.subject | Map matching | en |
| dc.subject | Lane detection | en |
| dc.subject | Road marker recognition | en |
| dc.subject | Sensor Fusion | en |
| dc.subject | Vehicle localization | en |
| dc.subject | Monocular vision | en |
| dc.title | 利用基於單眼影像的車道線偵測與道路標記辨識之整合型車輛定位系統 | zh_TW |
| dc.title | An Integrated Vehicle Localization System Using Vision-Based Lane Detection and Road Marker Recognition | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 108-1 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 黃正民,李後燦,簡忠漢 | |
| dc.subject.keyword | 單眼視覺,車道線偵測,路面標示辨識,感測器融合,車輛定位,地圖匹配 | zh_TW |
| dc.subject.keyword | Monocular vision, Lane detection, Road marker recognition, Sensor fusion, Vehicle localization, Map matching | en |
| dc.relation.page | 201 | |
| dc.identifier.doi | 10.6342/NTU201804131 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2019-08-14 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| Appears in Collections: | 電機工程學系 | |
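The abstract describes reducing the template-matching search time by using the vehicle motion to predict where a road marker should appear. This is not the thesis code; as an illustration only, with all names and parameters hypothetical, the idea can be sketched as restricting a normalized cross-correlation search to a window around the motion-predicted position:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between a patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def match_in_window(image, template, predicted, radius):
    """Search for the template only inside a window centered on the
    motion-predicted position instead of scanning the full image."""
    th, tw = template.shape
    best, best_pos = -1.0, None
    r0 = max(0, predicted[0] - radius)
    c0 = max(0, predicted[1] - radius)
    r1 = min(image.shape[0] - th, predicted[0] + radius)
    c1 = min(image.shape[1] - tw, predicted[1] + radius)
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# demo: embed a template in a noisy image and recover it near the predicted spot
rng = np.random.default_rng(1)
image = rng.random((120, 160))
template = rng.random((12, 12))
image[40:52, 70:82] = template          # ground-truth location (40, 70)
pos, score = match_in_window(image, template, predicted=(42, 68), radius=6)
print(pos)                              # the best match is the true location (40, 70)
```

Only the window around the prediction is scanned (here 13x13 candidate positions instead of roughly 16,000 for the full image), which is the source of the speed-up the abstract claims.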
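Inverse Perspective Mapping (IPM), which the abstract uses as a spatial constraint for lane initialization, back-projects image pixels onto the road plane. A minimal sketch, assuming a flat road, known intrinsics, camera mounting height, and pitch (all numeric values below are hypothetical, and the axis/sign conventions are an assumption, not the thesis's):

```python
import numpy as np

def pixel_to_ground(u, v, fx, fy, cx, cy, h, pitch):
    """Back-project pixel (u, v) onto the road plane for a forward-looking
    camera at height h (meters) tilted down by `pitch` (radians).
    Camera axes: x right, y down, z forward; flat-road assumption."""
    # viewing ray of the pixel in camera coordinates
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # rotate the ray by the downward pitch about the camera x-axis
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    d = R @ ray
    if d[1] <= 0:
        raise ValueError("ray does not hit the ground (at or above the horizon)")
    t = h / d[1]                       # scale so the ray descends h meters to the road
    return t * d[2], t * d[0]          # (forward distance, lateral offset)

# a pixel below the principal point maps to a point on the road ahead
fwd, lat = pixel_to_ground(640, 500, 800, 800, 640, 360, 1.5, 0.0)
print(round(fwd, 2), round(lat, 2))    # prints: 8.57 0.0
```

Applying this to every pixel of interest yields the bird's-eye-view image in which lane markings become near-parallel lines, which is what makes IPM-based spatial constraints effective for rejecting false lane candidates.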
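The fusion stage in the abstract employs a Particle Filter that predicts the vehicle pose from IMU-style motion input and corrects it with GPS and map-matched visual measurements. A minimal predict/update/resample sketch under a simple odometry motion model and a Gaussian position likelihood (these models and all parameters are assumptions for illustration; the thesis's measurement models differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, v, yaw_rate, dt, motion_noise):
    """Propagate each particle [x, y, heading] with noisy odometry input."""
    n = len(particles)
    v_n = v + rng.normal(0.0, motion_noise[0], n)
    w_n = yaw_rate + rng.normal(0.0, motion_noise[1], n)
    particles[:, 2] += w_n * dt
    particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
    particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
    return particles

def update(particles, z, sigma):
    """Weight particles by a Gaussian likelihood of a position fix
    (a GPS reading or a map-matched visual measurement)."""
    d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2)) + 1e-300
    return w / w.sum()

def resample(particles, w):
    """Systematic (low-variance) resampling."""
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return particles[idx]

# demo: track a vehicle driving straight at 10 m/s with noisy GPS fixes
particles = np.zeros((500, 3))
true_pos = np.zeros(2)
for step in range(20):
    true_pos[0] += 10.0 * 0.1                  # ground truth moves along +x
    particles = predict(particles, 10.0, 0.0, 0.1, (0.5, 0.05))
    gps = true_pos + rng.normal(0.0, 1.0, 2)   # GPS fix with 1 m noise
    w = update(particles, gps, 1.0)
    particles = resample(particles, w)
estimate = particles[:, :2].mean(axis=0)
print(np.round(estimate, 1))                   # close to the true position (20, 0)
```

In the thesis's setting, the update step would be called once per available measurement (GPS fix, lane lateral offset, road marker relative position), which is what lets the filter keep the pose estimate at lane-level accuracy when any single source degrades.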
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-108-1.pdf (Restricted Access) | 89.33 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
