Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96478
Full metadata record
DC field: value [language]
dc.contributor.advisor: 陳俊杉 [zh_TW]
dc.contributor.advisor: Chuin-Shan Chen [en]
dc.contributor.author: 李俊昇 [zh_TW]
dc.contributor.author: Chun-Sheng Lee [en]
dc.date.accessioned: 2025-02-19T16:09:09Z
dc.date.available: 2025-02-20
dc.date.copyright: 2025-02-19
dc.date.issued: 2025
dc.date.submitted: 2025-01-10
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96478
dc.description.abstract [zh_TW]:
本研究的核心貢獻在於開發了一套完全自主的無人機探索系統,能夠在無GPS及不需要先備地圖資訊的情況下,透過深度強化學習結合內在好奇心模組實現自主飛行控制並快速生成語義化的3D點雲地圖,以進行後續的任務規劃。此系統旨在以先進技術輔助無人機在各種環境中進行自我感知和互動,從而完成各類複雜任務。

在模擬學習運動控制端,我們使用Cartographer SLAM進行環境建模和自身無人機定位,取代了傳統的GPS需求,並採用了雙延遲深度確定性策略梯度算法結合內在好奇心模組進行訓練,透過取得SLAM提供資訊,學習如何控制飛行速度到達探索點並擴展未知地圖區域。此方法在模擬環境中表現卓越,達到了93.48%的覆蓋率,相較之下,單純使用雙延遲深度確定性策略梯度算法僅能達到71.23%的覆蓋率。成功完成模擬後,我們將訓練好的模型轉移至實體無人機上進行真實環境測試。實際飛行中,無人機會搭配YOLOv7進行即時物體辨識輔佐飛行。此外,後續採用NAFNet模型進行影像去模糊化以增強AI辨識能力。此做法解決了因飛行晃動過程中拍攝的影像模糊無法準確辨識的問題,使得清晰的RGB色彩資訊或基於分割的辨識結果能夠投影回點雲圖上。這樣增強的3D點雲對理解內部結構和真實資訊具有很大的幫助,有利於後續的任務規劃和分析。

這項研究展示了結合SLAM、深度強化學習和基於AI的影像處理技術,創建了一個穩健且適應性強的無人機探索系統的潛力。此系統不僅能夠自主導航並繪製環境地圖,還能處理和整合即時數據以提高運行效率。其在模擬中取得高覆蓋率,並成功轉移至實際應用,顯示了所開發系統的實用性。未來,這一套無人機系統將廣泛應用於更多工程領域。
dc.description.abstract [en]:
The core contribution of this research is the development of a fully autonomous UAV exploration system that achieves autonomous flight control and quickly generates semantic 3D point-cloud maps for subsequent mission planning, without GPS or prior map information. The system uses advanced techniques to let UAVs perceive and interact with their environments autonomously, enabling them to complete a variety of complex tasks.

In the simulation phase for motion-control learning, we used Cartographer SLAM for environment modeling and UAV localization, replacing the traditional reliance on GPS. The motion-control module was trained with the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm combined with an Intrinsic Curiosity Module (ICM). Using SLAM-provided information, the system learns to control flight velocity so as to reach exploration points and expand the unknown regions of the map. This approach performed strongly in simulation, achieving a coverage rate of 93.48%, compared with only 71.23% for TD3 alone. Following successful simulations, the trained model was transferred to a physical UAV for real-world testing, where the UAV used YOLOv7 for real-time object recognition to assist navigation. An offline deblurring step based on the NAFNet model was then applied to strengthen AI recognition: it addressed the motion-induced blur in images captured during flight that had prevented accurate recognition, allowing clear RGB color information or segmentation-based recognition results to be projected back onto the point cloud. The enhanced 3D point cloud significantly aids understanding of internal structures and real-world details, facilitating subsequent mission planning and analysis.
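As a concrete illustration of the curiosity-driven reward shaping described above, here is a minimal sketch of a generic Intrinsic Curiosity Module in PyTorch. It is not the thesis's implementation: the layer sizes, feature dimension, and reward scale `eta` are illustrative assumptions.

```python
# Minimal generic ICM sketch (PyTorch). All dimensions and the reward
# scale `eta` are illustrative assumptions, not values from the thesis.
import torch
import torch.nn as nn

class ICM(nn.Module):
    def __init__(self, state_dim, action_dim, feat_dim=64, eta=0.1):
        super().__init__()
        self.eta = eta
        # Encoder: maps the raw state (e.g., SLAM-derived features) to a latent code.
        self.encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        # Forward model: predicts the next latent code from the current code and action.
        self.forward_model = nn.Sequential(nn.Linear(feat_dim + action_dim, 128),
                                           nn.ReLU(), nn.Linear(128, feat_dim))
        # Inverse model: predicts the action from consecutive latent codes.
        self.inverse_model = nn.Sequential(nn.Linear(2 * feat_dim, 128),
                                           nn.ReLU(), nn.Linear(128, action_dim))

    def forward(self, s, a, s_next):
        phi, phi_next = self.encoder(s), self.encoder(s_next)
        phi_next_pred = self.forward_model(torch.cat([phi, a], dim=-1))
        a_pred = self.inverse_model(torch.cat([phi, phi_next], dim=-1))
        # Intrinsic reward: forward-model prediction error (novelty signal).
        r_int = self.eta * 0.5 * (phi_next_pred - phi_next.detach()).pow(2).sum(-1)
        return r_int, phi_next_pred, phi_next, a_pred

    def losses(self, s, a, s_next):
        # Forward and inverse models are trained jointly on replayed transitions;
        # the inverse loss keeps the encoder focused on agent-controllable features.
        # MSE on actions suits the continuous velocity commands of a UAV.
        _, phi_pred, phi_next, a_pred = self.forward(s, a, s_next)
        fwd_loss = 0.5 * (phi_pred - phi_next.detach()).pow(2).sum(-1).mean()
        inv_loss = nn.functional.mse_loss(a_pred, a)
        return fwd_loss, inv_loss
```

In a TD3 + ICM setup of this kind, the intrinsic term is typically added to the extrinsic (coverage) reward when forming the critic targets, so states the forward model predicts poorly, i.e. unexplored regions, earn a bonus.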

This research demonstrates the potential of integrating SLAM, deep reinforcement learning, and AI-based image processing to create a robust, adaptable UAV exploration system. The system not only navigates and maps its environment autonomously but also processes and integrates real-time data to improve operational efficiency. The high coverage rate achieved in simulation, together with the successful transfer of these capabilities to real-world operation, highlights the practical applicability of the developed system. In the future, this UAV system is expected to find wide application across engineering fields.
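To make the back-projection step concrete, the sketch below shows one standard way to paint a point cloud from an image using a pinhole camera model: points in the map frame are transformed into the camera frame, projected with the intrinsic matrix, and assigned the color of the pixel they land in. The function name and the placeholders `K` (intrinsics) and `T_cam_from_map` (extrinsics) are assumptions for illustration, not values from the thesis.

```python
# Hedged sketch of RGB back-projection onto a point cloud (pinhole model).
# Ignores occlusion; a real pipeline would add a z-buffer or visibility test.
import numpy as np

def colorize_point_cloud(points_map, image, K, T_cam_from_map):
    """points_map: (N, 3) points in the map frame; image: (H, W, 3) deblurred RGB."""
    # Transform points from the map frame into the camera frame.
    pts_h = np.hstack([points_map, np.ones((len(points_map), 1))])
    pts_cam = (T_cam_from_map @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0                      # keep points in front of the camera
    # Perspective projection into pixel coordinates.
    uvw = (K @ pts_cam[in_front].T).T
    uv = np.rint(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points_map), 3), dtype=image.dtype)
    idx = np.flatnonzero(in_front)[valid]
    colors[idx] = image[uv[valid, 1], uv[valid, 0]]   # sample RGB at projected pixels
    return colors
```

The same sampling works for segmentation output: writing class IDs instead of RGB values yields a semantically labeled point cloud.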
dc.description.provenance [en]: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-19T16:09:09Z. No. of bitstreams: 0
dc.description.provenance [en]: Made available in DSpace on 2025-02-19T16:09:09Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements iii
摘要 v
Abstract vii
Contents ix
List of Figures xiii
List of Tables xv
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Research Objectives 2
1.3 Organization of Thesis 3
Chapter 2 Literature Review 5
2.1 Autonomous UAV system 5
2.1.1 Applications of UAV 5
2.1.2 Comprehensive Overview of UAV 6
2.2 SLAM 6
2.2.1 Visual SLAM 6
2.2.2 LiDAR-based SLAM 8
2.2.3 Multi-sensor Fusion 9
2.3 Deep Reinforcement Learning 10
2.3.1 Algorithms 10
2.3.2 Applications in Robotics Domain 11
2.4 AI Recognition 12
2.4.1 Image-Based 12
2.4.2 Point Cloud-Based 13
Chapter 3 Methodology 15
3.1 Research Framework 15
3.2 System Architecture 15
3.3 Cartographer SLAM 17
3.3.1 Input Sensor Data 19
3.3.2 Pose Extrapolator 19
3.3.3 Local SLAM 19
3.3.4 Global SLAM 20
3.3.5 Occupancy Grid Map 20
3.4 Deep Reinforcement Learning 23
3.4.1 Exploration Strategy 23
3.4.2 State Setup 25
3.4.3 Reward Setup 26
3.4.4 Twin Delayed Deep Deterministic Policy Gradient (TD3) 27
3.5 Intrinsic Curiosity Module (ICM) 30
3.5.1 Sparse Reward Problem 30
3.5.2 ICM components 31
3.5.2.1 Encoder Network 32
3.5.2.2 Forward Network 32
3.5.2.3 Inverse Network 33
3.6 Training Setup 34
3.6.1 TD3 with ICM 34
3.6.2 Loss Function 36
3.7 AI Recognition 38
3.8 Back Projection 39
3.8.1 Map Reconstruction 40
3.8.2 Image Deblurring 40
3.8.3 Projective Transformation 41
Chapter 4 Experiments 45
4.1 Motion Control Learning 45
4.1.1 Training Model and Worlds 45
4.1.2 Hyperparameters Setting 47
4.1.3 Training Results 49
4.1.4 Evaluation 52
4.2 Real World Transferring 53
4.2.1 UAV Hardware Configuration 53
4.2.2 SLAM Results 56
4.2.3 Motion Control 57
4.2.4 AI Recognition 60
4.2.5 Back Projection 63
Chapter 5 Conclusion and Future Work 69
5.1 Conclusion 69
5.2 Future Work 71
References 73
dc.language.iso: en
dc.subject: 同時定位與地圖構建 [zh_TW]
dc.subject: YOLOv7 語意分割 [zh_TW]
dc.subject: 內在好奇心模組 [zh_TW]
dc.subject: 雙延遲深度確定性策略梯度 [zh_TW]
dc.subject: 自主探索無人機系統 [zh_TW]
dc.subject: YOLOv7 [en]
dc.subject: Autonomous UAV Exploration System [en]
dc.subject: Twin Delayed Deep Deterministic Policy Gradient (TD3) [en]
dc.subject: Intrinsic Curiosity Module (ICM) [en]
dc.subject: Semantic SLAM [en]
dc.title: 自主適應無人機探索系統:透過深度強化學習結合內在好奇心模組進行運動規劃 [zh_TW]
dc.title: Autonomous Adaptive UAV Exploration System: Motion Planning through Deep Reinforcement Learning with Intrinsic Curiosity Module [en]
dc.type: Thesis
dc.date.schoolyear: 113-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 簡韶逸;林之謙 [zh_TW]
dc.contributor.oralexamcommittee: Shao-Yi Chien;Jacob J. Lin [en]
dc.subject.keyword: 自主探索無人機系統,雙延遲深度確定性策略梯度,內在好奇心模組,同時定位與地圖構建,YOLOv7 語意分割 [zh_TW]
dc.subject.keyword: Autonomous UAV Exploration System,Twin Delayed Deep Deterministic Policy Gradient (TD3),Intrinsic Curiosity Module (ICM),Semantic SLAM,YOLOv7 [en]
dc.relation.page: 82
dc.identifier.doi: 10.6342/NTU202500026
dc.rights.note: 同意授權(全球公開) (open access worldwide)
dc.date.accepted: 2025-01-13
dc.contributor.author-college: 工學院 (College of Engineering)
dc.contributor.author-dept: 土木工程學系 (Department of Civil Engineering)
dc.date.embargo-lift: 2030-01-04
Appears in collections: 土木工程學系 (Department of Civil Engineering)

Files in this item:
File: ntu-113-1.pdf (available online after 2030-01-04)
Size: 68.68 MB
Format: Adobe PDF

