Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72646
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 周家蓓 | |
dc.contributor.author | Kin-Wai Leong | en |
dc.contributor.author | 梁健偉 | zh_TW |
dc.date.accessioned | 2021-06-17T07:02:41Z | - |
dc.date.available | 2022-07-31 | |
dc.date.copyright | 2019-07-31 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-07-30 | |
dc.identifier.citation | [1] AASHTO M247-13. Glass Beads Used in Pavement Markings. American Association of State Highway and Transportation Officials, Washington DC, USA. (2013).
[2] Abdulqadir, Omar Y. 'Distance Measurement Using Dual Laser Source and Image Processing Techniques.' Al-Rafidain University College For Sciences 35 (2015): 266-286.
[3] Ahmad, Touqeer, et al. 'Symbolic road marking recognition using convolutional neural networks.' 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017.
[4] Burns, David M., Thomas P. Hedblom, and Terry W. Miller. 'Modern Pavement Marking Systems: Relationship Between Optics and Nighttime Visibility.' Transportation Research Record 2056.1 (2008): 43-51.
[5] Babic, Darko, Mario Fiolic, and Petar Prusa. 'Evaluation of Road Markings Retroreflection Measuring Methods.' European Scientific Journal, ESJ 10.7 (2014).
[6] Carlson, Paul J., and Matt S. Lupes. Methods for Maintaining Traffic Sign Retroreflectivity. No. FHWA-HRT-08-026. 2007.
[7] Carlson, Paul. 'Evaluation of sign retroreflectivity measurements from the advanced mobile asset collection (AMAC) system.' College Station, TX: Texas Transportation Institute (2011).
[8] Chen, Xiaojun, et al. 'TW-k-means: Automated two-level variable weighting clustering algorithm for multiview data.' IEEE Transactions on Knowledge and Data Engineering 25.4 (2011): 932-944.
[9] Chaple, Girish N., R. D. Daruwala, and Manoj S. Gofane. 'Comparisions of Robert, Prewitt, Sobel operator based edge detection methods for real time uses on FPGA.' 2015 International Conference on Technologies for Sustainable Development (ICTSD). IEEE, 2015.
[10] Diamandouros, Konstandinos, and Michael Gatscha. 'Rainvision: the impact of road markings on driver behaviour–wet night visibility.' Transportation Research Procedia 14 (2016): 4344-4353.
[11] Gibbons, Ronald Bruce, Jonathan M. Hankey, and Irena Pashaj. Wet Night Visibility of Pavement Markings. Virginia Center for Transportation Innovation and Research, 2004.
[12] Gibbons, Ronald Bruce. Pavement Marking Visibility Requirements During Wet Night Conditions. Virginia Center for Transportation Innovation and Research, 2006.
[13] Gibbons, Ronald B., et al. 'Development of visual model for exploring relationship between nighttime driving behavior and roadway visibility features.' Transportation Research Record 2298.1 (2012): 96-103.
[14] Gibbons, Ronald B., Brian Williams, and Benjamin Cottrell. 'Refinement of drivers' visibility needs during wet night conditions.' Transportation Research Record 2272.1 (2012): 113-120.
[15] Gruener, Markus, and Ulrich Ansorge. 'Mobile Eye Tracking During Real-World Night Driving: A Selective Review of Findings and Recommendations for Future Research.' Journal of Eye Movement Research 10.2 (2017).
[16] Hautiere, Nicolas, Raphaël Labayrade, and Didier Aubert. 'Detection of visibility conditions through use of onboard cameras.' IEEE Intelligent Vehicles Symposium, 2005. IEEE, 2005.
[17] Hautière, Nicolas, Raphaël Labayrade, and Didier Aubert. 'Estimation of the visibility distance by stereovision: A generic approach.' IEICE Transactions on Information and Systems 89.7 (2006): 2084-2091.
[18] He, Kaiming, et al. 'Deep residual learning for image recognition.' Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
[19] He, Kaiming, et al. 'Identity mappings in deep residual networks.' European Conference on Computer Vision. Springer, Cham, 2016.
[20] He, Kaiming, et al. 'Mask R-CNN.' Proceedings of the IEEE International Conference on Computer Vision. 2017.
[21] Jo, Youngtae, and Seungki Ryu. 'Pothole detection system using a black-box camera.' Sensors 15.11 (2015): 29316-29331.
[22] Jeong, Jinyong, Younggun Cho, and Ayoung Kim. 'Road-SLAM: Road marking based SLAM with lane-level accuracy.' 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2017.
[23] Johnson, Jeremiah W. 'Adapting Mask R-CNN for automatic nucleus segmentation.' arXiv preprint arXiv:1805.00500 (2018).
[24] Kawano, Makoto, et al. 'Road marking blur detection with drive recorder.' 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017.
[25] Long, Jonathan, Evan Shelhamer, and Trevor Darrell. 'Fully convolutional networks for semantic segmentation.' Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
[26] Liu, Weifeng, et al. 'Road detection by using a generalized Hough transform.' Remote Sensing 9.6 (2017): 590.
[27] Lee, Seokju, et al. 'VPGNet: Vanishing point guided network for lane and road marking detection and recognition.' Proceedings of the IEEE International Conference on Computer Vision. 2017.
[28] Mathibela, Bonolo, Paul Newman, and Ingmar Posner. 'Reading the road: road marking classification and interpretation.' IEEE Transactions on Intelligent Transportation Systems 16.4 (2015): 2072-2081.
[29] Neven, Davy, et al. 'Towards end-to-end lane detection: an instance segmentation approach.' 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018.
[30] Ozgunalp, Umar, and Sertan Kaymak. 'Lane detection by estimating and using restricted search space in Hough domain.' Procedia Computer Science 120 (2017): 148-155.
[31] Olaverri-Monreal, Cristina, et al. 'Collaborative approach for a safe driving distance using stereoscopic image processing.' Future Generation Computer Systems 95 (2019): 880-889.
[32] Pollefeys, Marc, et al. 'Detailed real-time urban 3D reconstruction from video.' International Journal of Computer Vision 78.2-3 (2008): 143-167.
[33] Pike, Adam M., Shamanth P. Kuchangi, and Robert J. Benz. 'Quantitative versus Qualitative Assessment of Pavement Marking Visibility.' Transportation Research Record 2169.1 (2010): 88-94.
[34] Redmon, Joseph, et al. 'You only look once: Unified, real-time object detection.' Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
[35] Redmon, Joseph, and Ali Farhadi. 'YOLO9000: Better, faster, stronger.' Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
[36] Schnell, Thomas, Fuat Aktan, and Yi-Ching Lee. 'Nighttime visibility and retroreflectance of pavement markings in dry, wet, and rainy conditions.' Transportation Research Record 1824.1 (2003): 144-155.
[37] Salman, Yasir Dawood, Ku Ruhana Ku-Mahamud, and Eiji Kamioka. 'Distance measurement for self-driving cars using stereo camera.' Proceedings of the 6th International Conference on Computing and Informatics. No. 105. 2017.
[38] Vokhidov, Husan, et al. 'Recognition of damaged arrow-road markings by visible light camera sensor based on convolutional neural network.' Sensors 16.12 (2016): 2160.
[39] Yuheng, Song, and Yan Hao. 'Image segmentation algorithms overview.' arXiv preprint arXiv:1707.02051 (2017).
[40] Ye, Fan, and Adam Pike. 'Studying the Nighttime Visibility Performance of Retroreflective Pavement Markers.' Transportation Research Record (2019): 0361198119846105.
[41] Zhou, Zongwei, et al. 'UNet++: A nested U-Net architecture for medical image segmentation.' Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, Cham, 2018. 3-11.
[42] Zohourian, Farnoush, et al. 'Superpixel-based Road Segmentation for Real-time Systems using CNN.' VISIGRAPP (5: VISAPP). 2018.
[43] 周家蓓, 黃壬信, 廖子凱. '自動化鋪面即時影像偵測之研究.' 中華民國運輸學會第八屆論文研討會論文集 (1993): 641-648.
[44] 周家蓓. '整合影像處理與線型雷射掃瞄技術之鋪面三維輪廓檢測系統開發.' (2004).
[45] 蔡全義. '國內外熱拌塑膠反光標線之發展與現況探討.' 屏東科技大學土木工程系所學位論文 (2009): 1-106.
[46] 周家蓓, 李柏. '鋪面剖面掃描儀應用於自動化鋪面檢測之研究.' 鋪面工程 (2012): 11-18.
[47] 陳傑琪. '基於車道線辨識之前車偵測及加速.' 國立中山大學資訊工程學系碩士論文 (2013).
[48] 陳佶宏. '應用電腦視覺於道路標線品質評鑑之研究.' 義守大學土木與生態工程學系學位論文 (2014): 1-102.
[49] 行政院工程會. 施工綱要規範, 第 02898 章標線, 7.0 版 (2011).
[50] 交通部, 內政部. '道路交通標誌標線號誌設置規則.' 民國 83 年三版 (2011).
[51] 呂昀軒. '玻璃珠材料對熱處理聚酯標線反光性能影響探討.' 臺灣大學土木工程學研究所學位論文 (2017): 1-164.
[52] 經濟部標準檢驗局, 中華民國國家標準 CNS 15834:2015, 道路標線使用性能 (2015).
[53] 經濟部標準檢驗局, 中華民國國家標準 CNS 4342:2016, 交通反光標誌用玻璃珠 (2016). | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72646 | - |
dc.description.abstract | 因應目前國內外對道路安全的重視程度以及要求都在不斷的提高,而道路標線作為道路的重要組成部份,相關之驗收及檢測標準目前卻非常缺少。目前國際上常用於標線反光度的評估指標主要有兩個:一為擴散照明下之輝度係數(Qd),另一為回歸反射輝度係數(RL),然而該反光度指標的量測方法以及要求較高,難以在一般道路上進行廣泛且具有恆常性之檢測,也因此導致道路上的不同標線反光度能力參差不齊。有鑑於此,本研究提出運用車載相機進行影像辨識建立具有鑑別效果的標線反光能力檢測系統,應用Mask R-CNN模型找出影像中標線所在的位置,把該位置中的像素點亮度取平均作為該標線之照片亮度,並使用標線在影像之位置利用雙鏡頭之視差應用前方交會法進行距離測量,最後以上述之數據與標線反光度指標RL進行比較,訂定一個能夠分辨標線合乎最低標線反光能力要求之照片亮度值作為門檻值,以供日後標線養護單位養護之參考。研究建立的標線照片亮度值與標線反光度指標RL在不同分級的趨勢具有一致性,能夠相當的反映標線的反光能力以及可視程度。
而本研究之另一部份為受試者實驗,用以探討駕駛者在實際駕駛時對標線之可視狀況與影像辨識之數據差異、不同標線類型、標線所在之鋪面類型、車輛行進狀況、外部光源對受試者之直接影響等等。根據實驗結果顯示,受試者駕駛時標線之可視狀況基本都比影像辨識要來得好,不同標線之可視距離與RL值大小對應相符,但不同年齡層、車輛行進狀況對於標線之可視距離的影響都不大。本研究為首次利用雙鏡頭影像取得標線夜間之照片亮度值與反光能力關係,為道路管理單位在使用中標線之反光性能檢測功能奠定新的發展方向。 | zh_TW |
dc.description.abstract | In view of the growing domestic and international emphasis on road safety, and although road markings are an essential component of the road, relevant acceptance and inspection standards remain scarce. Two evaluation indexes are commonly used internationally to grade road marking retroreflectivity: the luminance coefficient under diffuse illumination (Qd) and the coefficient of retroreflected luminance (RL). However, both indexes are demanding to measure, which makes extensive and routine inspection on ordinary roads difficult; as a result, the reflective capability of markings in service varies widely. In view of this, our research uses an on-board camera and image recognition to build a road marking reflective capability detection system with discriminative power. A Mask R-CNN model locates each road marking in the image, and the average brightness of the pixels within the detected region is taken as the photo brightness of that marking. The maximum visible distance of the marking is measured by the forward intersection method, using the disparity between the marking's positions in the two lenses. Finally, these data are compared against the retroreflectivity index RL to set a photo brightness threshold that distinguishes markings meeting the minimum reflective capability requirement, as a reference for future road marking maintenance. The photo brightness values established by this study are consistent with the trend of the RL index across grades, and thus adequately reflect the reflective capability and visibility of road markings.
The other part of the research is a human subject experiment exploring the difference between drivers' actual visual perception of markings while driving and the image recognition data, covering marking type, the pavement type on which the marking lies, vehicle motion, and the direct effect of external light sources on the subjects. The experimental results show that the subjects' visual range of markings while driving is generally better than that of image recognition. The visible distance of different markings corresponds to their RL values, and markings on flexible pavement are generally visible at greater distances than those on rigid pavement. Age group and vehicle motion, however, have little effect on the visible distance of markings. When an external light source is present, the visible distance of the materials selected in this study exceeds the minimum reaction distance required under the general road speed limit, and the photo brightness of the markings analyzed by the system likewise exceeds the threshold the system sets. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T07:02:41Z (GMT). No. of bitstreams: 1 ntu-108-R06521521-1.pdf: 3469788 bytes, checksum: 883b038bc5fd2cdc0354d319b2b89c66 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | 中文摘要 I
ABSTRACT V
目錄 VII
圖目錄 IX
表目錄 XI
第1章 緒論 1
1.1 研究動機 1
1.2 研究內容與方法 3
1.3 研究流程 4
第2章 文獻回顧 6
2.1 影像辨識 6
2.1.1 閾值分割方法 6
2.1.2 區域增長法 7
2.1.3 邊緣檢測分割方法 8
2.1.4 基於卷積神經網路(Convolutional neural network, CNN)的弱監督學習方法 9
2.2 標線反光能力 13
2.2.1 光線的反射類型 14
2.2.2 標線反光性能指標 15
2.2.3 相關的標線可見度研究 19
2.3 距離測量 20
2.4 文獻回顧小結 22
第3章 演算法 23
3.1 預處理 24
3.2 標線辨識 26
3.2.1 Mask R-CNN 28
3.2.2 模型訓練 29
3.3 距離測量 33
3.4 影像之標線數值選取及方法 35
A. 影像辨識之特徵點選用 35
B. 辨識距離之選用 36
C. 影像回歸值之計算 38
第4章 實驗方法 40
4.1 實驗儀器 40
4.1.1 回歸反射輝度儀 40
4.1.2 車載影像記錄系統 41
4.2 實驗場及標線分佈 43
4.3 受試者實驗設計 49
4.3.1 實驗變量 50
4.3.2 實驗設計 50
4.3.3 受試者實驗 53
第5章 實驗結果分析 55
5.1 影像辨識之變量討論 55
5.1.1 標線 55
5.1.2 鋪面類型 60
5.2 受試者之變量討論 61
5.2.1 標線 61
5.2.2 鋪面類型 66
5.2.3 車輛行進狀況 67
5.2.4 參與者年齡 68
5.3 其他實驗 70
5.3.1 椰林大道開燈實驗 70
第6章 結論與建議 73
6.1 結論與貢獻 73
6.2 建議 74
參考文獻 76 | |
dc.language.iso | zh-TW | |
dc.title | 應用數位影像檢測道路標線可見度 | zh_TW |
dc.title | Application of digital image on road marking | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 韓仁毓,廖敏志 | |
dc.subject.keyword | 道路標線, 反光度, 影像辨識, 雙鏡頭, 受試者實驗 | zh_TW |
dc.subject.keyword | Road Marking, Reflective Capability, Image Recognition, Dual Lens, Subject Experiment | en |
dc.relation.page | 80 | |
dc.identifier.doi | 10.6342/NTU201902237 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2019-07-31 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 土木工程學研究所 | zh_TW |
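The measurement pipeline described in the abstract, averaging the pixel brightness inside the Mask R-CNN marking mask and recovering distance from the disparity between the two lenses, can be sketched as below. This is a minimal illustration and not the thesis's actual code: the focal length, baseline, and pixel values are hypothetical, and a real run would take the mask from a trained Mask R-CNN model and the parameters from stereo calibration.

```python
def marking_photo_brightness(gray_image, mask):
    """Mean brightness of the pixels inside the detected marking mask.

    gray_image: 2-D list of pixel intensities (0-255);
    mask: 2-D list of booleans of the same shape, True on the marking.
    """
    values = [p for img_row, mask_row in zip(gray_image, mask)
                for p, m in zip(img_row, mask_row) if m]
    return sum(values) / len(values)


def stereo_distance(x_left, x_right, focal_px, baseline_m):
    """Depth of a marking point on a rectified stereo pair via the
    forward-intersection (triangulation) relation Z = f * B / d."""
    disparity = x_left - x_right          # horizontal offset in pixels
    return focal_px * baseline_m / disparity


# Toy example with hypothetical numbers: a 2x2 bright marking patch on a
# dark road, and a stereo rig with f = 1400 px and baseline B = 0.12 m.
img = [[30, 30, 30], [30, 180, 180], [30, 180, 180]]
mask = [[False, False, False], [False, True, True], [False, True, True]]
brightness = marking_photo_brightness(img, mask)          # -> 180.0
distance = stereo_distance(652.0, 640.0, 1400.0, 0.12)    # ~ 14.0 m
```

A per-marking brightness can then be compared against the threshold the thesis derives from RL measurements; the threshold value itself is not reproduced here.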
Appears in collections: | 土木工程學系
Files in this item:
File | Size | Format |
---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 3.39 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.