Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96478
| Title: | Autonomous Adaptive UAV Exploration System: Motion Planning through Deep Reinforcement Learning with Intrinsic Curiosity Module |
| Authors: | 李俊昇 Chun-Sheng Lee |
| Advisor: | 陳俊杉 Chuin-Shan Chen |
| Keyword: | Autonomous UAV Exploration System, Twin Delayed Deep Deterministic Policy Gradient (TD3), Intrinsic Curiosity Module (ICM), Semantic SLAM, YOLOv7 semantic segmentation |
| Publication Year: | 2025 |
| Degree: | Master's |
| Abstract: | The core contribution of this research is the development of a fully autonomous UAV exploration system that achieves autonomous flight control and rapidly generates semantic 3D point-cloud maps for subsequent mission planning, without GPS or prior map information. The system uses advanced techniques to let the UAV perceive and interact with diverse environments autonomously and thereby carry out complex tasks. In the simulated motion-control learning phase, we used Cartographer SLAM for environmental modeling and UAV localization, replacing the traditional reliance on GPS, and trained the motion-control policy with the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm combined with an Intrinsic Curiosity Module (ICM). Using the information provided by SLAM, the system learns to control flight velocity to reach exploration points and expand unknown map regions. This approach performed strongly in simulation, achieving 93.48% map coverage, compared with only 71.23% for TD3 alone. After the simulations succeeded, the trained model was transferred to a physical UAV for real-world testing, during which YOLOv7 provided real-time object recognition to assist flight. In addition, offline image deblurring with the NAFNet model was applied to strengthen AI recognition: it resolves the motion blur caused by in-flight vibration, so that clear RGB color information and segmentation-based recognition results can be projected back onto the point cloud. The enhanced 3D point cloud substantially aids the understanding of internal structures and real-world details, supporting subsequent mission planning and analysis. This research demonstrates the potential of integrating SLAM, deep reinforcement learning, and AI-based image processing to create a robust, adaptable UAV exploration system that not only navigates and maps its environment autonomously but also processes and integrates real-time data to improve operational efficiency. The high coverage achieved in simulation and the successful transfer to real-world operation demonstrate the system's practical applicability, and the system is expected to find wide application across engineering fields. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96478 |
| DOI: | 10.6342/NTU202500026 |
| Fulltext Rights: | Open access authorized (worldwide) |
| Embargo Lift Date: | 2030-01-04 |
| Appears in Collections: | Department of Civil Engineering |
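
The abstract above describes the TD3 + ICM reward design only at a high level. The sketch below shows, under stated assumptions, how an Intrinsic Curiosity Module can add a forward-model prediction-error bonus to TD3's extrinsic reward; the layer sizes, feature dimension, `eta` weight, and observation/action dimensions are illustrative placeholders, not the thesis implementation.

```python
# Minimal ICM sketch for continuous-action TD3 (hedged: dimensions and
# architecture are assumptions, not the thesis code).
import torch
import torch.nn as nn

class ICM(nn.Module):
    def __init__(self, obs_dim, act_dim, feat_dim=64, eta=0.1):
        super().__init__()
        self.eta = eta
        # Feature encoder phi(s)
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        # Forward model: predicts phi(s') from (phi(s), a)
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim))
        # Inverse model: predicts a from (phi(s), phi(s')); its loss shapes
        # the encoder toward action-relevant features
        self.inverse_model = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim))

    def forward(self, obs, action, next_obs):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        phi_next_pred = self.forward_model(torch.cat([phi, action], dim=-1))
        act_pred = self.inverse_model(torch.cat([phi, phi_next], dim=-1))
        # Curiosity bonus = forward-model prediction error per transition
        err = (phi_next_pred - phi_next.detach()).pow(2).sum(dim=-1)
        r_int = 0.5 * self.eta * err
        icm_loss = err.mean() + (act_pred - action).pow(2).mean()
        return r_int, icm_loss

# Usage: augment the extrinsic reward with the curiosity bonus
icm = ICM(obs_dim=24, act_dim=4)
obs, act, next_obs = torch.randn(8, 24), torch.randn(8, 4), torch.randn(8, 24)
r_int, icm_loss = icm(obs, act, next_obs)
# r_total = r_extrinsic + r_int.detach()  # fed to the TD3 critic targets
```

Because TD3 learns off-policy from a replay buffer, the curiosity bonus would typically be recomputed (or stored) per sampled transition and added to the extrinsic reward before forming the critic targets.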
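
The projection of deblurred RGB/segmentation output back onto the point cloud is likewise described only in prose. Below is a minimal pinhole-projection sketch: `K`, the synthetic points, and the random segmentation map are hypothetical stand-ins for the UAV's calibrated camera and the YOLOv7 result; a real pipeline would first transform points from the sensor frame into the camera frame.

```python
# Hedged sketch: label 3D points with per-pixel classes from a (deblurred)
# frame via a pinhole camera model. All values below are placeholders.
import numpy as np

def label_points(points_cam, labels, K):
    """points_cam: (N,3) points in the camera frame; labels: (H,W) class map."""
    H, W = labels.shape
    valid = points_cam[:, 2] > 1e-6        # keep points in front of the camera
    uv = (K @ points_cam[valid].T).T       # perspective projection
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out = np.full(points_cam.shape[0], -1, dtype=int)  # -1 = unlabeled
    idx = np.flatnonzero(valid)[inside]
    out[idx] = labels[v[inside], u[inside]]
    return out

K = np.array([[525.0, 0, 319.5],           # assumed intrinsics (fx, fy, cx, cy)
              [0, 525.0, 239.5],
              [0, 0, 1.0]])
pts = np.random.rand(1000, 3) * [4, 3, 5] - [2, 1.5, 0]   # synthetic points
seg = np.random.randint(0, 3, size=(480, 640))            # stand-in YOLOv7 mask
point_labels = label_points(pts, seg, K)
```

The same projection maps RGB values instead of class IDs; choosing nearest-pixel lookup keeps the sketch simple, at the cost of ignoring occlusion between points.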
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-1.pdf (embargoed until 2030-01-04) | 68.68 MB | Adobe PDF |
