DSpace

The DSpace institutional repository is dedicated to preserving digital materials of all kinds (e.g., text, images, PDF) and making them easy to access.

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88918

Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: 張恆華 (zh_TW)
dc.contributor.advisor: Herng-Hua Chang (en)
dc.contributor.author: 吳祐鴻 (zh_TW)
dc.contributor.author: You-Hong Wu (en)
dc.date.accessioned: 2023-08-16T16:21:19Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-08-16
dc.date.issued: 2023
dc.date.submitted: 2023-08-09
dc.identifier.citation:
[1] Ministry of Transportation and Communications, R.O.C. Traffic accident statistics of the most recent five years. Government of the Republic of China, 18(2):203–211, 2021.
[2] Huifeng Wang, Yunfei Wang, Xiangmo Zhao, Guiping Wang, He Huang, and Jiajia Zhang. Lane detection of curving road for structural highway with straight-curve model on vision. IEEE Transactions on Vehicular Technology, 68(6):5321–5330, 2019.
[3] Bandarage Shehani Sanketha Rathnayake and Lochandaka Ranathunga. Lane detection and prediction under hazy situations for autonomous vehicle navigation. In 2018 18th International Conference on Advances in ICT for Emerging Regions (ICTer), pages 99–106, 2018.
[4] Ravi Kumar Satzoda, Suchitra Sathyanarayana, Thambipillai Srikanthan, and Supriya Sathyanarayana. Hierarchical additive Hough transform for lane detection. IEEE Embedded Systems Letters, 2(2):23–26, 2010.
[5] Umar Ozgunalp, Rui Fan, Xiao Ai, and Naim Dahnoun. Multiple lane detection algorithm based on novel dense vanishing point estimation. IEEE Transactions on Intelligent Transportation Systems, 18(3):621–632, 2017.
[6] Yue Wang, Eam Khwang Teoh, and Dinggang Shen. Lane detection and tracking using B-Snake. Image and Vision Computing, 22(4):269–280, 2004.
[7] Irwin Sobel and Gary Feldman. An isotropic 3x3 image gradient operator. August 2015.
[8] Noor Ibraheem, Mokhtar Hasan, Rafiqul Khan, and Pramod Mishra. Understanding color models: a review. ARPN Journal of Science and Technology, 2(3):265–275, 2012.
[9] George Joblove and Donald Greenberg. Color spaces for computer graphics. In Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’78, pages 20–25, New York, NY, USA, 1978. Association for Computing Machinery.
[10] Felix Woelk, Ingo Schiller, and Reinhard Koch. An airborne Bayesian color tracking system. In IEEE Proceedings. Intelligent Vehicles Symposium, 2005, pages 67–72, 2005.
[11] Richard Duda and Peter Hart. Use of the Hough transformation to detect lines and curves in pictures. Communications of the ACM, 15(1):11–15, January 1972.
[12] Mehrez Marzougui, Areej Alasiry, Yassin Kortli, and Jamel Baili. A lane tracking method based on progressive probabilistic Hough transform. IEEE Access, 8:84893–84905, 2020.
[13] Martin Fischler and Robert Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
[14] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 779–788, 2016.
[15] Jörg Sander, Martin Ester, Hans-Peter Kriegel, and Xiaowei Xu. Density-based clustering in spatial databases: The algorithm GDBSCAN and its applications. Data Mining and Knowledge Discovery, 2:169–194, 1998.
[16] Erich Schubert, Jörg Sander, Martin Ester, Hans Peter Kriegel, and Xiaowei Xu. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Transactions on Database Systems, 42(3), July 2017.
[17] Davy Neven, Bert De Brabandere, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Towards end-to-end lane detection: an instance segmentation approach. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 286–291, 2018.
[18] Jongin Son, Hunjae Yoo, Sanghoon Kim, and Kwanghoon Sohn. Real-time illumination invariant lane detection for lane departure warning system. Expert Systems with Applications, 42(4):1816–1824, 2015.
[19] Yeongho Son, Elijah Lee, and Dongsuk Kum. Robust multi-lane detection and tracking using adaptive threshold and lane classification. Machine Vision and Applications, 30(6):111–124, 2019.
[20] Huifeng Wang, Yunfei Wang, Xiangmo Zhao, Guiping Wang, He Huang, and Jiajia Zhang. Lane detection of curving road for structural highway with straight-curve model on vision. IEEE Transactions on Vehicular Technology, 68(6):5321–5330, 2019.
[21] Dong Wu, Man-Wen Liao, Wei-Tian Zhang, Xing-Gang Wang, Xiang Bai, Wen-Qing Cheng, and Wen-Yu Liu. YOLOP: You only look once for panoptic driving perception. Machine Intelligence Research, 19(6):550–562, 2022.
[22] Government of the Republic of China. Regulations for the installation of road traffic signs, markings, and signals. Law of the Republic of China, 2016.
[23] International Telecommunication Union. BT.601: Studio encoding parameters of digital television for standard 4:3 and wide-screen 16:9 aspect ratios. 2011.
[24] Stuart Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
[25] Ora Yang. Advance LaneFinding. https://github.com/shawshany/Advance_LaneFinding.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88918
dc.description.abstract: 交通安全是生活中關於人身安全的重要議題,隨著近年來交通議題在社會上愈來愈受重視,這也帶動了自駕車及車輛輔助駕駛系統的發展。基於影像處理的車道線偵測系統在自駕車中愈來愈受重視。為了解決車道線檢測系統在行駛中,容易受到附近行進中的車輛影響而檢測錯誤的情況,本論文提出了一種基於影像處理的多車道線偵測方法。首先使用高效率的YOLOP模型過濾掉影像中的車子,接著使用全域的二值化方法找出白色和黃色的車道線,再使用快速的邊緣偵測方法擷取車道線特徵。經由逆透視轉換後,我們使用k-Means來將擷取到的邊緣特徵分類成個別的車道線,最後根據各個車道線類別用拋物線函數來擬合車道線。我們的方法在TuSimple資料集中平均Accuracy、Precision和Recall能達到0.727、0.509和0.407。 (zh_TW)
dc.description.abstract: Traffic safety is an important aspect of personal safety in daily life. With the growing public attention to traffic issues in recent years, demand has risen for self-driving cars and driver assistance systems, and image-based lane line detection is essential to both. To prevent lane line detection from being disrupted by nearby moving vehicles and obstacles, this thesis proposes a multi-lane line detection method based on a series of image processing steps. First, an efficient YOLOP model filters out the cars in the image, and a global thresholding method locates the white and yellow lane lines. A fast edge detection method then extracts the lane line features. After applying inverse perspective mapping, we employ k-Means to classify the extracted edge features into individual lane lines. Finally, a parabolic function is fitted to each lane line cluster. On the TuSimple dataset, the method achieves an average Accuracy of 0.727, a Precision of 0.509, and a Recall of 0.407. (en)
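The abstract's final stages (classifying bird's-eye-view edge features into individual lane lines with k-Means, then fitting a parabola to each) can be sketched as follows. This is a minimal NumPy illustration on synthetic data, not the thesis's implementation; the helper names `kmeans_1d` and `fit_lanes` and the toy lane geometry are assumptions for illustration only.

```python
import numpy as np

def kmeans_1d(xs, k, iters=20, seed=0):
    """Tiny 1-D k-Means on the x-coordinates of edge points.

    After inverse perspective mapping, lane lines are roughly vertical,
    so clustering on x alone is enough to separate them in this sketch.
    """
    rng = np.random.default_rng(seed)
    centers = rng.choice(xs, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute means.
        labels = np.argmin(np.abs(xs[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = xs[labels == j].mean()
    return labels

def fit_lanes(points, k=2):
    """Cluster (x, y) edge points into k lanes; fit x = a*y^2 + b*y + c to each."""
    xs, ys = points[:, 0], points[:, 1]
    labels = kmeans_1d(xs, k)
    fits = [np.polyfit(ys[labels == j], xs[labels == j], deg=2)
            for j in range(k)]
    # Order lanes left-to-right by their intercept at y = 0.
    return sorted(fits, key=lambda f: np.polyval(f, 0.0))

# Synthetic bird's-eye-view edge points for two curved lane lines.
ys = np.linspace(0.0, 100.0, 50)
left = np.stack([0.002 * ys**2 + 100.0, ys], axis=1)
right = np.stack([0.002 * ys**2 + 300.0, ys], axis=1)
fits = fit_lanes(np.vstack([left, right]))
print([float(np.polyval(f, 0.0)) for f in fits])  # intercepts near 100 and 300
```

Fitting x as a function of y (rather than y as a function of x) is the usual choice here, since a lane line in the bird's-eye view may be nearly vertical and would not be a function of x.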
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-16T16:21:19Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2023-08-16T16:21:19Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
Acknowledgements
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
1.3 Thesis Organization
Chapter 2 Literature Review
2.1 Edge Detection
2.1.1 Sobel Operator
2.2 Color Spaces
2.2.1 RGB
2.2.2 HSL
2.3 Geometric Models
2.3.1 Hough Transform
2.3.2 Random Sample Consensus (RANSAC)
2.3.3 YOLO Detection Algorithm
2.3.4 DBSCAN
2.4 Learning-Based Lane Line Detection
2.5 Non-Learning-Based Lane Line Detection
2.5.1 Feature Extraction
2.5.2 Classification Methods
2.5.3 Geometric Model Fitting
Chapter 3 Method Design
3.1 Overview of the Research Framework
3.2 Image Processing
3.2.1 Image Preprocessing
3.2.2 Feature Extraction
3.2.2.1 Color Segmentation
3.2.2.2 Edge Detection
3.2.2.3 Inverse Perspective Mapping
3.2.3 Cluster Analysis
3.2.4 Lane Line Prediction
Chapter 4 Experimental Results and Discussion
4.1 Experimental Setup
4.1.1 Experimental Environment
4.1.2 Dataset
4.2 Evaluation Metrics
4.3 Parameter Analysis
4.3.1 Feature Extraction Parameters
4.3.2 Cluster Analysis Parameters
4.3.3 Model Fitting Parameters
4.4 Experimental Results
4.4.1 Result Demonstration
4.4.2 Performance Analysis
4.4.3 Runtime Analysis
4.5 Additional Analyses
4.5.1 Pipeline Analysis
4.5.2 Evaluation Criteria Analysis
Chapter 5 Conclusions and Future Work
5.1 Conclusions
5.2 Future Work
References
dc.language.iso: zh_TW
dc.subject: 車道線偵測 (zh_TW)
dc.subject: 邊緣偵測 (zh_TW)
dc.subject: YOLO (zh_TW)
dc.subject: 閥值化 (zh_TW)
dc.subject: 聚類分析 (zh_TW)
dc.subject: YOLO (en)
dc.subject: edge detection (en)
dc.subject: thresholding (en)
dc.subject: lane line detection (en)
dc.subject: cluster analysis (en)
dc.title: 基於色彩分離與邊界強化的車道線偵測 (zh_TW)
dc.title: Lane Line Detection Based On Color Separation and Boundary Enhancement (en)
dc.type: Thesis
dc.date.schoolyear: 111-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 丁肇隆;張瑞益;江明彰 (zh_TW)
dc.contributor.oralexamcommittee: Chao-Lung Ting;Ray-I Chang;Ming-Chang Chiang (en)
dc.subject.keyword: 車道線偵測,YOLO,邊緣偵測,閥值化,聚類分析 (zh_TW)
dc.subject.keyword: lane line detection,YOLO,edge detection,thresholding,cluster analysis (en)
dc.relation.page: 52
dc.identifier.doi: 10.6342/NTU202303260
dc.rights.note: 同意授權(限校園內公開) (authorized; campus-only access)
dc.date.accepted: 2023-08-09
dc.contributor.author-college: 工學院 (College of Engineering)
dc.contributor.author-dept: 工程科學及海洋工程學系 (Department of Engineering Science and Ocean Engineering)
dc.date.embargo-lift: 2028-08-07
Appears in Collections: 工程科學及海洋工程學系 (Department of Engineering Science and Ocean Engineering)

Files in This Item:
File: ntu-111-2.pdf (not authorized for public access)
Size: 54.25 MB
Format: Adobe PDF


Except where their copyright terms state otherwise, items in this system are protected by copyright, with all rights reserved.
