NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68290
Full Metadata Record
DC Field: Value (Language)
dc.contributor.advisor: 連豊力 (Feng-Li Lian)
dc.contributor.author: Song-Ren Wang (en)
dc.contributor.author: 王頌仁 (zh_TW)
dc.date.accessioned: 2021-06-17T02:16:45Z
dc.date.available: 2021-01-04
dc.date.copyright: 2018-01-04
dc.date.issued: 2017
dc.date.submitted: 2017-09-25
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68290
dc.description.abstract: In recent years, driver assistance systems have become a popular research topic, aimed at preventing accidents caused by driver inattention or misjudgment. Lane detection plays a key role in such systems: it can warn the driver when the vehicle departs from its lane, or automatically correct a dangerous maneuver.
This thesis proposes a texture-based method. From images captured by a monocular camera, Gabor filters are used to extract image texture; based on three assumptions about lane markings, color information and the texture distribution are combined to select regions that are likely lane markings, and a straight-line model represents their positions.
The method consists of three parts. First, Gabor filters extract texture, the vanishing point is located from the image texture, and irrelevant regions are removed using the color information of the lane markings. Next, by accumulating texture intensity, the representative lane texture orientation is found. Finally, regions matching the lane texture orientation are marked, and the lines representing the lanes are optimized according to texture consistency.
Lane detection was tested in different scenes: freeway, downtown, country, and campus. In most cases the lane markings are detected accurately, but detection fails when red lines are too dark or when lane markings are occluded by obstacles. Under two conditional criteria, the success rate reaches a satisfactory 96.67% on the freeway, while downtown, country, and campus scenes reach only 60%, 80%, and 50%, respectively. (zh_TW)
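The first step described in the abstract, estimating a dominant texture orientation per pixel from a bank of oriented Gabor filters, can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation; the kernel size, wavelength, aspect ratio, and number of orientations are assumed values.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(ksize=11, theta=0.0, sigma=2.0, lam=6.0, gamma=0.5):
    """Real part of a Gabor kernel whose sinusoidal carrier runs along `theta`
    (so it responds to stripes whose intensity varies along that direction)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates into the filter's frame.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / lam)
    return envelope * carrier

def dominant_orientation(image, n_orientations=4, ksize=11):
    """Per-pixel angle of the Gabor filter with the strongest response magnitude,
    plus that magnitude as a confidence map."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    responses = np.stack([
        np.abs(convolve(image, gabor_kernel(ksize, t), mode="nearest"))
        for t in thetas
    ])
    idx = responses.argmax(axis=0)             # winning filter per pixel
    return thetas[idx], responses.max(axis=0)  # orientation map, confidence map
```

Applied to a road image, the confidence map highlights strongly oriented structures such as lane markings; pixels with low confidence would be discarded as untrusted textures.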
dc.description.abstract: In recent years, in order to avoid accidents caused by driver inattention or misjudgment, driver assistance systems have become a popular research topic. Lane detection plays an important role in these systems: the position of the lane lets the system warn the driver when the vehicle departs from its lane or, in the future, automatically correct a dangerous maneuver.
This thesis presents a texture-based method. A Gabor filter is used to estimate texture orientation in images from a monocular camera. Based on three assumptions about lane markings, color cues and the texture orientation distribution are combined to select candidate lane regions. Finally, a line model represents each lane region.
The method is divided into three parts. First, texture is extracted with a Gabor filter, the vanishing point is detected from the estimated texture orientations, and unrelated regions are removed using the lane color information and the detected vanishing point. Next, the representative lane texture orientation is found by accumulating texture intensity. Finally, regions consistent with the lane texture orientation are marked, and the representative line is optimized using the orientation consistent number (OCN).
In this thesis, lanes are detected in different scenes, including freeway, downtown, country, and campus. In most cases the lane is detected accurately, except when a red lane marking is too dark or the lane is covered by objects. As a result, under two conditional limitations, the freeway experiments reach a 96.67% success rate, while downtown, country, and campus scenes reach 60%, 80%, and 50%, respectively. (en)
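The vanishing-point step, in which textured pixels vote for the image point their orientations converge on, can be illustrated with a brute-force voting scheme. This is a simplified stand-in for soft-voting methods such as the locally adaptive soft voting listed in the table of contents, not the thesis's algorithm; the candidate grid step and angular tolerance are assumed values.

```python
import numpy as np

def vanishing_point(orientations, confidence, grid_step=4, tol=np.deg2rad(8)):
    """Return the candidate point (x, y) whose directions from the textured
    pixels best agree with those pixels' texture orientations (angles in [0, pi))."""
    h, w = orientations.shape
    ys, xs = np.nonzero(confidence > 0)   # only confident (textured) pixels vote
    best, best_score = (0, 0), -1.0
    for vy in range(0, h, grid_step):
        for vx in range(0, w, grid_step):
            dy, dx = vy - ys, vx - xs
            ang = np.arctan2(dy, dx) % np.pi          # undirected line angle
            diff = np.abs(ang - orientations[ys, xs])
            diff = np.minimum(diff, np.pi - diff)     # wrap-around angle distance
            votes = (diff < tol) & ((dx != 0) | (dy != 0))
            score = confidence[ys, xs][votes].sum()   # confidence-weighted votes
            if score > best_score:
                best_score, best = score, (vx, vy)
    return best
```

In a lane-detection pipeline, the region below the detected vanishing point would then serve as the region of interest for the later lane-extraction steps.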
dc.description.provenance: Made available in DSpace on 2021-06-17T02:16:45Z (GMT). No. of bitstreams: 1. ntu-106-R04921064-1.pdf: 7591488 bytes, checksum: e095b4905bd23d3b143f87a4300f551e (MD5). Previous issue date: 2017. (en)
dc.description.tableofcontents:
摘要 (Abstract in Chinese) i
ABSTRACT iii
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Description 2
1.3 Contributions 6
1.4 Organization of the Thesis 7
Chapter 2 Background and Literature Survey 8
2.1 Autonomous Driving Systems 8
2.2 Environment Perception 10
2.3 Drivable Region Detection 11
2.3.1 Road Detection 11
2.3.2 Lane Detection 14
Chapter 3 Related Algorithms 17
3.1 Camera Pin-Hole Model 17
3.2 Gabor Filter 19
3.3 Color Space 22
3.4 Mean Shift Clustering 24
Chapter 4 Lane Detection Based On Gabor Filter 26
4.1 Texture Orientation Estimation 28
4.1.1 Eliminate the Untrusted Textures 33
4.2 Vanishing Point Detection (Decide the ROI) 38
4.2.1 Locally Adaptive Soft Voting (LASV) 40
4.3 Lane Detection 44
4.3.1 Texture Orientation Distribution 45
4.3.2 Color Feature Extraction 50
4.3.3 Representative Line Model 53
4.3.4 Orientation Consistent Number (OCN) 54
Chapter 5 Experimental Results and Analysis 59
5.1 Hardware Setup 59
5.2 Experimental Results 63
5.2.1 Freeway Scenes 72
5.2.2 Downtown Road Scenes 91
5.2.3 Country Road Scenes 114
5.2.4 Campus Scenes 137
5.3 Summary of Experimental Results 156
Chapter 6 Conclusions and Future Works 162
6.1 Conclusions 162
6.2 Future Works 163
REFERENCES 165
dc.language.iso: en
dc.subject: 賈柏濾波器 (zh_TW)
dc.subject: 紋理偵測 (zh_TW)
dc.subject: 消失點偵測 (zh_TW)
dc.subject: 車道線偵測 (zh_TW)
dc.subject: 單眼相機 (zh_TW)
dc.subject: monocular camera (en)
dc.subject: vanishing point detection (en)
dc.subject: texture detection (en)
dc.subject: Gabor filter (en)
dc.subject: lane detection (en)
dc.title: 基於賈柏濾波器及色彩資訊進行車道線偵測 (zh_TW)
dc.title: Lane Detection Based On Gabor Filter and Color Cue (en)
dc.type: Thesis
dc.date.schoolyear: 106-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 簡忠漢, 李後燦, 黃正民
dc.subject.keyword: 賈柏濾波器, 紋理偵測, 消失點偵測, 車道線偵測, 單眼相機 (zh_TW)
dc.subject.keyword: Gabor filter, vanishing point detection, texture detection, lane detection, monocular camera (en)
dc.relation.page: 171
dc.identifier.doi: 10.6342/NTU201704028
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2017-09-25
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) (zh_TW)
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)

Files in This Item:
File: ntu-106-1.pdf (not authorized for public access)
Size: 7.41 MB
Format: Adobe PDF


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
