Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92014

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 吳日騰 | zh_TW |
| dc.contributor.advisor | Rih Teng Wu | en |
| dc.contributor.author | 范淳皓 | zh_TW |
| dc.contributor.author | Chun-Hao Fan | en |
| dc.date.accessioned | 2024-02-27T16:33:10Z | - |
| dc.date.available | 2024-03-18 | - |
| dc.date.copyright | 2024-03-16 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-02-18 | - |
| dc.identifier.citation | [1] 建築物結構老化與劣化非破壞性檢測技術之建置. 2006. [2] T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama. Optuna: A next-generation hyperparameter optimization framework. In The 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623–2631, 2019. [3] R. Bellman. Dynamic programming. Science, 153(3731):34–37, 1966. [4] T. A. Carr, M. D. Jenkins, M. I. Iglesias, T. Buggy, and G. Morison. Road crack detection using a single stage detector based deep neural network. In 2018 IEEE Workshop on Environmental, Energy, and Structural Monitoring Systems (EESMS), pages 1–5, 2018. [5] W. Choi and Y.-J. Cha. Sddnet: Real-time crack segmentation. IEEE Transactions on Industrial Electronics, 67(9):8016–8025, 2020. [6] K. Gopalakrishnan, S. K. Khaitan, A. Choudhary, and A. Agrawal. Deep convolutional neural networks with transfer learning for computer vision-based data-driven pavement distress detection. Construction and Building Materials, 157:322–330, 2017. [7] W. Hammouch, C. Chouiekh, G. Khaissidi, and M. Mrabti. Crack detection and classification in moroccan pavement using convolutional neural network. Infrastructures, 7(11), 2022. [8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition, 2015. [9] Y.-A. Hsieh and Y. J. Tsai. Machine learning for crack detection: Review and model performance comparison. Journal of Computing in Civil Engineering, 34(5):04020038, 2020. [10] P. Iakubovskii. Segmentation models pytorch. https://github.com/qubvel/segmentation_models.pytorch, 2019. [11] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996. [12] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization, 2017. [13] S. L. H. Lau, E. K. P. Chong, X. Yang, and X. Wang. Automated pavement crack segmentation using u-net-based convolutional neural network. IEEE Access, 8:114892–114899, 2020. [14] L.-J. Lin. Reinforcement learning for robots using neural networks. Carnegie Mellon University, 1992. [15] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection, 2018. [16] Y. Liu, J. Yao, X. Lu, R. Xie, and L. Li. Deepcrack: A deep hierarchical feature learning architecture for crack segmentation. Neurocomputing, 338:139–153, 2019. [17] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation, 2015. [18] K. Ma, M. Hoai, and D. Samaras. Large-scale continual road inspection: Visual infrastructure assessment in the wild. Proceedings of the British Machine Vision Conference 2017, 2017. [19] S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos. Image segmentation using deep learning: A survey, 2020. [20] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning, 2013. [21] K. O'Shea and R. Nash. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458, 2015. [22] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation, 2015. [23] E. Salcedo, M. Jaber, and J. Requena Carrión. A novel road maintenance prioritisation system based on computer vision and crowdsourced reporting. Journal of Sensor and Actuator Networks, 11(1), 2022. [24] S. S. M. Salehi, D. Erdogmus, and A. Gholipour. Tversky loss function for image segmentation using 3d fully convolutional deep networks. In Q. Wang, Y. Shi, H.-I. Suk, and K. Suzuki, editors, Machine Learning in Medical Imaging, pages 379–387, Cham, 2017. Springer International Publishing. [25] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms, 2017. [26] F. U. A. Shaikh. Effect of cracking on corrosion of steel in concrete. International Journal of Concrete Structures and Materials, 12(1):3, Jan. 2018. [27] W. Tang, R.-T. Wu, and M. Jahanshahi. Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and bayesian data fusion. Smart Structures and Systems, 29:221–235, 01 2022. [28] H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. [29] C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279–292, 1992. [30] H. Xu, X. Su, Y. Wang, H. Cai, K. Cui, and X. Chen. Automatic bridge crack detection using a convolutional neural network. Applied Sciences, 9(14), 2019. [31] S. Yokoyama and T. Matsumoto. Development of an automatic detector of cracks in concrete using machine learning. Procedia Engineering, 171:1250–1255, 2017. The 3rd International Conference on Sustainable Civil Engineering Structures and Construction Materials - Sustainable Structures for Future Generations. [32] G. Yu, J. Dong, Y. Wang, and X. Zhou. Ruc-net: A residual-unet-based convolutional neural network for pixel-level pavement crack segmentation. Sensors, 23(1), 2023. [33] L. Zhang, F. Yang, Y. Daniel Zhang, and Y. J. Zhu. Road crack detection using deep convolutional neural network. In 2016 IEEE International Conference on Image Processing (ICIP), pages 3708–3712, 2016. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92014 | - |
| dc.description.abstract | 傳統的裂縫檢測依賴人工,耗時且需要大量勞力,近年來由於AI技術的迅速發展,有許多自動化裂縫偵測的研究出現,其中裂縫分割為近期發展的重點,透過卷積神經網路,能將裂縫的特徵萃取出來,把裂縫與影像中的背景進行分類,重構成分割後的影像,此方法是像素等級的裂縫辨識,然而,裂縫分割缺乏了圖像蒐集的自動化,迄今為止的裂縫分割研究多是透過現場拍照或網路上的圖片做為資料集,代表即使有了這項技術,依然要透過人力蒐集照片。 因此,本研究提出了基於深度雙Q類神經網路 (Double Deep Q-Network, DDQN) 的裂縫偵測模型,它是一種強化學習演算法,透過與環境的互動建構出一套行為模式,使機器人能自動抓取裂縫影像,本研究以預訓練的U-Net影像分割模型即時的偵測裂縫並提供獎勵,DDQN透過這項獎勵系統更新網路並不斷優化自己的行為,藉此偵測出最多的裂縫,本研究以過往裂縫分割研究使用的影像資料集作為模擬環境,結果顯示,在訓練環境中,機器人展現了它對行為選擇的準確性,在偵測到一部分的裂縫後便能順著裂縫的輪廓偵測出整條裂縫,且在測試環境,機器人也有良好的適應性,代表此強化學習架構的成功。 | zh_TW |
| dc.description.abstract | Traditional crack detection relies on manual inspection, which is time-consuming and labor-intensive. In recent years, owing to the rapid development of artificial intelligence (AI), much research has emerged on automating crack detection. Among these studies, crack segmentation has been a major focus of recent development: convolutional neural networks extract crack features, allowing cracks to be segmented from the image background, which amounts to pixel-level crack classification. However, crack segmentation still lacks automation in data collection. Most existing crack segmentation studies rely on on-site photography or online images as datasets, meaning that manual effort is still needed to gather photos. Therefore, this study proposes a crack detection model based on the Double Deep Q-Network (DDQN), a reinforcement learning algorithm in which an agent learns a behavior policy through interactions with the environment, enabling a robot to capture crack images automatically. The study uses a pre-trained U-Net image segmentation model to detect cracks in real time and provide rewards; the DDQN updates its network through this reward system and continuously optimizes its behavior so as to capture as many cracks as possible. Using an image dataset from previous crack segmentation studies as the simulated environment, the results demonstrate the robot's accuracy in action selection within the training environment: after detecting part of a crack, it can follow the crack's contour and gather images along the entire crack. The robot retains this capability in the testing environment, indicating the success of the reinforcement learning framework in automating the collection of crack images. (An illustrative sketch of the DDQN update described here follows the metadata table below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-02-27T16:33:10Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-02-27T16:33:10Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 摘要; Abstract; 目次; 圖次; 表次; 第一章 緒論; 1.1 研究背景與動機; 1.2 研究目的; 1.3 研究範圍; 1.4 研究流程; 第二章 文獻探討; 2.1 裂縫影像分割; 2.2 深度強化學習演算法; 第三章 研究方法設計; 3.1 資料前處理與模擬環境建立; 3.2 裂縫影像分割; 3.2.1 CNN; 3.2.2 FCN; 3.2.3 U-Net; 3.2.3.1 理論; 3.2.3.2 模型架構; 3.3 強化學習演算法; 3.3.1 Q 學習法; 3.3.2 深度 Q 類神經網路; 3.3.3 深度雙 Q 類神經網路; 3.3.3.1 理論; 3.3.3.2 模型架構; 3.3.4 人為控制項; 第四章 實驗結果與討論; 4.1 訓練過程; 4.1.1 裂縫影像分割; 4.1.1.1 優化器與超參數設定; 4.1.1.2 訓練指標; 4.1.1.3 損失函數; 4.1.2 深度強化學習; 4.1.2.1 訓練指標; 4.1.2.2 超參數設定與損失函數; 4.2 訓練結果分析; 4.2.1 裂縫影像分割; 4.2.2 深度強化學習; 第五章 結果與未來展望; 5.1 結論; 5.2 限制; 5.3 未來展望; 參考文獻 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 深度雙Q類神經網路 | zh_TW |
| dc.subject | 裂縫偵測 | zh_TW |
| dc.subject | 強化學習 | zh_TW |
| dc.subject | U-Net | zh_TW |
| dc.subject | 語意分割 | zh_TW |
| dc.subject | Crack Detection | en |
| dc.subject | Reinforcement Learning | en |
| dc.subject | U-Net | en |
| dc.subject | Semantic Segmentation | en |
| dc.subject | Double Deep Q-Learning | en |
| dc.title | 基於深度強化學習神經網路之自動化裂縫分割與偵測 | zh_TW |
| dc.title | A Deep Reinforcement Learning-based Approach for Autonomous Crack Segmentation and Exploration | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-1 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 張國鎮;歐昱辰 | zh_TW |
| dc.contributor.oralexamcommittee | Kuo-Chen Zhang;Yu Chen Ou | en |
| dc.subject.keyword | 裂縫偵測,語意分割,U-Net,強化學習,深度雙Q類神經網路 | zh_TW |
| dc.subject.keyword | Crack Detection,Semantic Segmentation,U-Net,Reinforcement Learning,Double Deep Q-Learning | en |
| dc.relation.page | 40 | - |
| dc.identifier.doi | 10.6342/NTU202400710 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2024-02-18 | - |
| dc.contributor.author-college | 工學院 | - |
| dc.contributor.author-dept | 土木工程學系 | - |
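
As a rough illustration of the mechanism the abstract describes, the sketch below shows a Double DQN target computation in PyTorch in which the reward is derived from a pre-trained U-Net's segmentation output. This is not the thesis code: the network layout, the four-action move set, and the `crack_reward` definition (fraction of patch pixels the U-Net labels as crack) are assumptions made purely for illustration.

```python
# Illustrative sketch only (assumed names and shapes, not the thesis implementation).
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small CNN mapping an RGB image patch to Q-values for 4 assumed camera moves."""
    def __init__(self, n_actions: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

def crack_reward(unet: nn.Module, patch: torch.Tensor) -> torch.Tensor:
    """Hypothetical reward: fraction of patch pixels the pre-trained U-Net marks as crack.
    Assumes the U-Net outputs a single-channel logit map for the patch."""
    with torch.no_grad():
        mask = torch.sigmoid(unet(patch)) > 0.5
    return mask.float().mean()

def ddqn_target(online: QNet, target: QNet, reward, next_state, done, gamma: float = 0.99):
    """Double DQN target: the online net selects the greedy action,
    the separate target net evaluates it."""
    with torch.no_grad():
        next_a = online(next_state).argmax(dim=1, keepdim=True)   # action selection
        next_q = target(next_state).gather(1, next_a).squeeze(1)  # action evaluation
        return reward + gamma * (1.0 - done) * next_q

# One training step (batch tensors s, a, r, s2, d assumed to come from a replay buffer):
#   q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
#   loss = nn.functional.smooth_l1_loss(q, ddqn_target(online, target, r, s2, d))
```

The point of the Double DQN split is that decoupling action selection (online network) from action evaluation (target network) reduces the Q-value overestimation of plain deep Q-learning, which matches the role the abstract assigns to the DDQN agent.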
| Appears in Collections: | 土木工程學系 | |
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-112-1.pdf (Restricted Access) | 15.56 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
