Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73367
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 丁肇隆 | |
dc.contributor.author | Wei-Hong Lin | en |
dc.contributor.author | 林韋宏 | zh_TW |
dc.date.accessioned | 2021-06-17T07:30:51Z | - |
dc.date.available | 2020-07-25 | |
dc.date.copyright | 2019-07-25 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-06-12 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73367 | - |
dc.description.abstract | 人工智慧的發展近年來突飛猛進,幾乎在各個領域都有其發揮的空間而且效果顯著,最明顯的例子之一就是自駕車的發展。車身配備以雷達、電腦視覺、GPS等技術分析車子周遭環境,判斷行進路線是否偏移、與前車的距離、交通標誌變化、及是否有障礙物或其他突發狀況等等。但是現階段離L5級的完全自動駕駛還有一大段距離,就算是L3級自動駕駛,目前也只適合應用在有條件的獨立軌道,如捷運、高鐵等大眾運輸工具,更遑論完全自駕車的普及。現在各車廠推出的新車最多也只有到L2級的程度,行車仍然得依靠駕駛判斷當時的情況做適當的反應。為了減少因駕駛本身疏忽違反交通標誌而釀災,本研究針對紅綠燈和部分交通標誌,以電腦視覺、影像辨識、卷積神經網路(Convolution Neural Network)及Fast R-CNN(Fast Region-based Convolution Neural Network)等技術為基礎,設計一套駕駛輔助系統,以實現交通標誌及燈號的提醒與紅綠燈號轉變時的警告,並且提供簡單有效的過濾方法,降低因偵測錯誤輸出錯誤警告的風險,在道路實測上取得良好成果。 | zh_TW |
dc.description.abstract | The development of artificial intelligence has advanced by leaps and bounds in recent years, and it has been applied with notable success in almost every field; one of the most visible examples is the self-driving car. Such vehicles use radar, computer vision, GPS, and other technologies to analyze the surrounding environment and determine whether the car is drifting off its path, the distance to the vehicle ahead, changes in traffic signs and signals, and whether obstacles or other unexpected situations are present. However, Level 5 fully automated driving is still a long way off. Even Level 3 automation is currently suitable only for conditional, dedicated rights-of-way such as metro and high-speed-rail mass transit systems, let alone the widespread adoption of fully self-driving cars. New cars released by manufacturers today reach at most Level 2, so the driver must still judge the situation and respond appropriately. To reduce accidents caused by drivers inadvertently violating traffic signs and signals, this study targets traffic lights and selected traffic signs and designs a driver assistance system based on computer vision, image recognition, Convolutional Neural Networks (CNN), and Fast R-CNN (Fast Region-based Convolutional Neural Network). The system reminds the driver of traffic signs and signals, warns when the traffic light changes, and provides a simple and effective filtering method that reduces the risk of issuing false warnings due to detection errors. It achieved good results in on-road tests. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T07:30:51Z (GMT). No. of bitstreams: 1 ntu-108-R06525063-1.pdf: 3656000 bytes, checksum: 40c02db1acd17ab339e168361104ea55 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | Oral Examination Committee Approval Form #
Acknowledgements i
Abstract (Chinese) ii
ABSTRACT iii
CONTENTS iv
LIST OF FIGURES vii
LIST OF TABLES x
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Thesis Organization 2
Chapter 2 Literature Review 3
Chapter 3 Methods 6
3.1 Image Analysis and Processing 6
3.1.1 Basic Image Properties 6
3.1.2 Morphological Processing 8
3.1.3 Connected Components 11
3.2 Convolutional Neural Networks 11
3.2.1 Convolutional Layer 12
3.2.2 ReLU Layer 13
3.2.3 Pooling Layer 14
3.2.4 Fully Connected Layer and Softmax 14
3.2.5 AlexNet 15
3.2.6 GoogLeNet 16
3.2.7 Transfer Learning 17
3.3 R-CNN 18
Chapter 4 Research Methods and Workflow 20
4.1 Traffic Signs 20
4.1.1 HSV Channel Combination 22
4.1.2 Fast R-CNN Localization 27
4.1.3 CNN Classification 30
4.1.4 Buffering 31
4.2 Traffic Lights 33
4.2.1 HSV Filtering 35
4.2.2 Morphological Analysis 37
4.2.3 CNN Classification 39
4.2.4 Buffering 40
4.2.5 Light-Change Detection 41
Chapter 5 Experimental Results and Discussion 45
5.1 Experimental Environment and Data Collection 45
5.2 Definition of Detection Results 46
5.3 Experimental Results 46
5.3.1 Traffic Signs 46
5.3.2 Traffic Lights 49
Chapter 6 Conclusion 52
REFERENCE 53 | |
dc.language.iso | zh-TW | |
dc.title | 深度學習應用於交通標誌及燈號偵測 | zh_TW |
dc.title | Deep Learning Applied To Traffic Signs And Signal Detection | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 張瑞益,張恆華,謝傳璋 | |
dc.subject.keyword | 影像辨識,電腦視覺,駕駛輔助,機器學習,交通標誌,紅綠燈, | zh_TW |
dc.subject.keyword | image recognition, computer vision, driving assistance, machine learning, traffic signs, traffic lights | en |
dc.relation.page | 55 | |
dc.identifier.doi | 10.6342/NTU201900847 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2019-06-12 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 工程科學及海洋工程學研究所 | zh_TW |
Appears in Collections: | 工程科學及海洋工程學系
Files in This Item:
File | Size | Format |
---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 3.57 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
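Illustrative sketch (not from the thesis): the abstract and table of contents above describe a traffic-light pipeline of HSV filtering, morphological analysis, CNN classification, and a buffering step that suppresses false warnings. Since the thesis text itself is not part of this record, the following Python/OpenCV code is only a minimal sketch of what such a stage could look like; the HSV thresholds, minimum blob area, buffer window, vote count, and the names `red_light_candidates` and `WarningBuffer` are illustrative assumptions, not values or code taken from the thesis.

```python
import cv2
import numpy as np
from collections import deque

# Hypothetical HSV thresholds for red lamp pixels (red wraps around hue 0/180 in OpenCV).
RED_LOW_1, RED_HIGH_1 = (0, 120, 120), (10, 255, 255)
RED_LOW_2, RED_HIGH_2 = (170, 120, 120), (180, 255, 255)
MIN_BLOB_AREA = 50  # ignore tiny blobs that are almost certainly noise


def red_light_candidates(bgr_frame: np.ndarray) -> list:
    """Return bounding boxes (x, y, w, h) of candidate red-light blobs."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, RED_LOW_1, RED_HIGH_1) | cv2.inRange(hsv, RED_LOW_2, RED_HIGH_2)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove isolated speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes in blobs
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each surviving box would then be cropped and passed to a CNN classifier.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_BLOB_AREA]


class WarningBuffer:
    """Majority vote over recent frames so a single misdetection does not trigger a warning."""

    def __init__(self, window: int = 5, votes_needed: int = 3):
        self.history = deque(maxlen=window)
        self.votes_needed = votes_needed

    def update(self, detected_this_frame: bool) -> bool:
        """Record this frame's result and report whether a warning should be active."""
        self.history.append(bool(detected_this_frame))
        return sum(self.history) >= self.votes_needed
```

In use, each frame would update the buffer with whether the classifier confirmed a red light, and a reminder or light-change warning would be raised only when a majority of recent frames agree, which is one simple way to reduce the false-alarm risk the abstract mentions; the thesis's actual filtering scheme may differ.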