Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93687

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 黃漢邦 | zh_TW |
| dc.contributor.advisor | Han-Pang Huang | en |
| dc.contributor.author | 趙鈺麟 | zh_TW |
| dc.contributor.author | Yu-Lin Zhao | en |
| dc.date.accessioned | 2024-08-07T16:25:23Z | - |
| dc.date.available | 2024-08-08 | - |
| dc.date.copyright | 2024-08-07 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-02 | - |
| dc.identifier.citation | [1] "Ceres Solver: Tutorial & Reference," accessed 12/01 2023. <http://ceres-solver.org/>
[2] B. Alsadik and S. Karam, "The Simultaneous Localization and Mapping (Slam)-an Overview," Journal of Applied Science and Technology Trends, vol. 2, no. 02, pp. 147 - 158, 2021. [3] V. Badrinarayanan, A. Kendall, and R. Cipolla, "Segnet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481-2495, 2017. [4] 2021, "Onnx: Open Neural Network Exchange," accessed 12/01 2023. <https://github.com/onnx/onnx> [5] B. Balasuriya, B. Chathuranga, B. Jayasundara, N. Napagoda, S. Kumarawadu, D. Chandima, and A. Jayasekara, "Outdoor Robot Navigation Using Gmapping Based Slam Algorithm," 2016 Moratuwa Engineering Research Conference (MERCon), Moratuwa, Sri Lanka, pp. 403-408, 2016. [6] T.D. Barfoot, State Estimation for Robotics. Cambridge University Press, 2017. [7] B. Bescos, C. Campos, J.D. Tardós, and J. Neira, "Dynaslam Ii: Tightly-Coupled Multi-Object Tracking and Slam," IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 5191-5198, 2021. [8] B. Bescos, J.M. Fácil, J. Civera, and J. Neira, "Dynaslam: Tracking, Mapping, and Inpainting in Dynamic Scenes," IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 4076-4083, 2018. [9] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, "Simple Online and Realtime Tracking," 2016 IEEE International Conference on Image Processing (ICIP), pp. 3464-3468, 2016. [10] I.E. Bouazzaoui, S.A.R. Florez, and A.E. Ouardi, "Enhancing Rgb-D Slam Performances Considering Sensor Specifications for Indoor Localization," IEEE Sensors Journal, vol. 22, no. 6, pp. 4970-4977, 2022. [11] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J.J. Leonard, "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309-1332, 2016. [12] C. Campos, R. Elvira, J.J.G. Rodríguez, J.M.M. Montiel, and J.D. 
Tardós, "Orb-Slam3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap Slam," IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874-1890, 2021. [13] Z. Cao, G. Hidalgo, T. Simon, S.E. Wei, and Y. Sheikh, "Openpose: Realtime Multi-Person 2d Pose Estimation Using Part Affinity Fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 1, pp. 172-186, 2021. [14] P. Chanprakon, T. Sae-Oung, T. Treebupachatsakul, P. Hannanta-Anan, and W. Piyawattanametha, "An Ultra-Violet Sterilization Robot for Disinfection," 2019 5th International Conference on Engineering, Applied Sciences and Technology (ICEAST): IEEE, pp. 1-4, 2019. [15] C. Chen, H. Zhu, M. Li, and S. You, "A Review of Visual-Inertial Simultaneous Localization and Mapping from Filtering-Based and Optimization-Based Perspectives," Robotics and Autonomous Systems, vol. 7, no. 3, p. 45, 2018. [16] T.-L. Chen, Y.-H. Chen, Y.-L. Zhao, and P.-C. Chiang, "Application of Gaseous Clo2 on Disinfection and Air Pollution Control: A Mini Review," Aerosol and Air Quality Research, vol. 20, no. 11, pp. 2289-2298, 2020. [17] J. Cheng, L. Zhang, Q. Chen, X. Hu, and J. Cai, "A Review of Visual Slam Methods for Autonomous Driving Vehicles," Engineering Applications of Artificial Intelligence, vol. 114, p. 104992, 2022. [18] S. Choi, J. Park, and W. Yu, "Resolving Scale Ambiguity for Monocular Visual Odometry," 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI): IEEE, pp. 604-608, 2013. [19] S.-Y. Chung and H.-P. Huang, "Slammot-Sp: Simultaneous Slammot and Scene Prediction," Advanced Robotics, vol. 24, no. 7, pp. 979-1002, 2010. [20] Y. Cong, C. Gu, T. Zhang, and Y. Gao, "Underwater Robot Sensing Technology: A Survey," Fundamental Research, vol. 1, no. 3, pp. 337-345, 2021. [21] J. Coughlan and A.L. 
Yuille, "The Manhattan World Assumption: Regularities in Scene Statistics Which Enable Bayesian Inference," Advances in Neural Information Processing Systems vol. 13, 2000. [22] A.J. Davison, I.D. Reid, N.D. Molton, and O. Stasse, "Monoslam: Real-Time Single Camera Slam," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052-1067, 2007. [23] R. De Maesschalck, D. Jouan-Rimbaud, and D.L. Massart, "The Mahalanobis Distance," Chemometrics and Intelligent Laboratory Systems, vol. 50, no. 1, pp. 1-18, 2000. [24] L. Du, A.T.S. Ho, and R. Cong, "Perceptual Hashing for Image Authentication: A Survey," Signal Processing: Image Communication, vol. 81, p. 115713, 2020. [25] H. Durrant-Whyte and T. Bailey, "Simultaneous Localization and Mapping: Part I," IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99-110, 2006. [26] F. Farahi and H.S. Yazdi, "Probabilistic Kalman Filter for Moving Object Tracking," Signal Processing: Image Communication, vol. 82, p. 115751, 2020. [27] Q.C. Feng and X. Wang, "Design of Disinfection Robot for Livestock Breeding," Procedia Computer Science, vol. 166, pp. 310-314, 2020. [28] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision Meets Robotics: The Kitti Dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013. [29] G. Grisetti, C. Stachniss, and W. Burgard, "Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters," IEEE transactions on Robotics, vol. 23, no. 1, p. 34, 2007. [30] "Evo: Python Package for the Evaluation of Odometry and Slam," accessed 2023/12/1. <https://github.com/MichaelGrupp/evo> [31] M. Guettari, I. Gharbi, and S. Hamza, "Uvc Disinfection Robot," Environmental Science Pollution Research, pp. 1-6, 2020. [32] A. Gupta and X. Fernando, "Simultaneous Localization and Mapping (Slam) and Data Fusion in Unmanned Aerial Vehicles: Recent Advances and Challenges," vol. 6, no. 4, p. 85, 2022. [33] H. Gupta, D. Bhardwaj, H. 
Agrawal, V.A. Tikkiwal, and A. Kumar, "An Iot Based Air Pollution Monitoring System for Smart Cities," 2019 IEEE International Conference on Sustainable Energy Technologies (ICSET), Bhubaneswar, India, pp. 173-177, 2019. [34] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-Cnn," Proceedings of the IEEE international conference on computer vision, Venice, Italy, pp. 2961-2969, 2017. [35] W. Hess, D. Kohler, H. Rapp, and D. Andor, "Real-Time Loop Closure in 2d Lidar Slam," 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1271-1278, 2016. [36] Y.-T. Hong and H.-P. Huang, "A Comparison of Outdoor 3d Reconstruction between Visual Slam and Lidar Slam," CACS2023, pp. 1-6, 2023. [37] X. Hou, Y. Wang, and L.P. Chau, "Vehicle Tracking Using Deep Sort with Low Confidence Track Filtering," 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1-6, 2019. [38] W. Hu, X. Li, W. Luo, X. Zhang, S. Maybank, and Z. Zhang, "Single and Multiple Object Tracking Using Log-Euclidean Riemannian Subspace and Block-Division Appearance Model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 12, pp. 2420-2440, 2012. [39] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, and A. Fitzgibbon, "Kinectfusion: Real-Time 3d Reconstruction and Interaction Using a Moving Depth Camera," Proceedings of the 24th annual ACM symposium on User interface software and technology, pp. 559–568, 2011. [40] "Ultralytics Yolo (Version 8.0.0) " accessed 01/10 2024. <https://github.com/ultralytics/ultralytics> [41] R. Jonker and T. Volgenant, "Improving the Hungarian Assignment Algorithm," Operations Research Letters, vol. 5, no. 4, pp. 171-175, 1986. [42] G. Kim and A. 
Kim, "Scan Context: Egocentric Spatial Descriptor for Place Recognition within 3d Point Cloud Map," 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4802-4809, 2018. [43] B.M. Kitt, J. Rehder, A.D. Chambers, M. Schonbein, H. Lategahn, and S. Singh, "Monocular Visual Odometry Using a Planar Road Model to Solve Scale Ambiguity," 2011. [44] G. Klein and D. Murray, "Parallel Tracking and Mapping for Small Ar Workspaces," Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality: IEEE Computer Society, pp. 1-10, 2007. [45] J.J. Kuffner and S.M. LaValle, "Rrt-Connect: An Efficient Approach to Single-Query Path Planning," Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), vol. 2, pp. 995-1001 vol.2, 2000. [46] C.H. Kuo, C. Huang, and R. Nevatia, "Multi-Target Tracking by on-Line Learned Discriminative Appearance Models," 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 685-692, 2010. [47] R.B. Langley, "Rtk Gps," GPS World, vol. 9, no. 9, pp. 70-76, 1998. [48] N. Le, A. Heili, and J.-M. Odobez, "Long-Term Time-Sensitive Costs for Crf-Based Tracking by Detection," Computer Vision – ECCV 2016 Workshops, Cham, G. Hua and H. Jégou, Eds.: Springer International Publishing, pp. 43-51, 2016. [49] Y. Lee, K.A. Kozar, and K.R. Larsen, "The Technology Acceptance Model: Past, Present, and Future," Communications of the Association for Information Systems, vol. 12, no. 1, p. 50, 2003. [50] J.J. Leonard and H.F. Durrant-Whyte, "Mobile Robot Localization by Tracking Geometric Beacons," IEEE Transactions on Robotics and Automation, vol. 7, no. 3, pp. 376-382, 1991. [51] Q. Li, R. Li, K. Ji, and W. Dai, "Kalman Filter and Its Application," 2015 8th International Conference on Intelligent Networks and Intelligent Systems (ICINIS), pp. 74-77, 2015. [52] L. Lin, Y. Lu, C. Li, H. Cheng, and W.
Zuo, "Detection-Free Multiobject Tracking by Reconfigurable Inference with Bundle Representations," IEEE Transactions on Cybernetics, vol. 46, no. 11, pp. 2447-2458, 2016. [53] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C.L. Zitnick, "Microsoft Coco: Common Objects in Context," Computer Vision – ECCV 2014, Cham, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds.: Springer International Publishing, pp. 740-755, 2014. [54] Y. Liu and J. Miura, "Rds-Slam: Real-Time Dynamic Slam Using Semantic Segmentation Methods," IEEE Access, vol. 9, pp. 23772-23785, 2021. [55] Y. Liu, Z. Ning, Y. Chen, M. Guo, Y. Liu, N.K. Gali, L. Sun, Y. Duan, J. Cai, and D. Westerdahl, "Aerodynamic Analysis of Sars-Cov-2 in Two Wuhan Hospitals," Nature, vol. 582, no. 7813, pp. 557-560, 2020. [56] Z. Liu and F. Zhang, "Balm: Bundle Adjustment for Lidar Mapping," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3184-3191, 2021. [57] D.G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004. [58] S.-R. Lu and H.P. Huang, "Implementation of Pre-Engagement Detection on Human-Robot Interaction in Complex Environments," Graduate Institute of Mechanical Engineering, National Taiwan University, 2020. [59] W. Luo, J. Xing, A. Milan, X. Zhang, W. Liu, and T.-K. Kim, "Multiple Object Tracking: A Literature Review," Artificial Intelligence, vol. 293, p. 103448, 2021. [60] A. Martinelli, "Closed-Form Solution of Visual-Inertial Structure from Motion," International Journal of Computer Vision, vol. 106, no. 2, pp. 138-152, 2014. [61] M. Mohammed, I.S. Arif, S. Al-Zubaidi, S.H.K. Bahrain, A. Sairah, and Y. Eddy, "Design and Development of Spray Disinfection System to Combat Coronavirus (Covid-19) Using Iot Based Robotics Technology," Revista Argentina de Clínica Psicológica, vol. 29, no. 5, p. 228, 2020. [62] R. Munoz-Salinas and R.
Medina-Carnicer, "Ucoslam: Simultaneous Localization and Mapping by Fusion of Keypoints and Squared Planar Markers," arXiv preprint, 2019. [63] R. Mur-Artal, J.M.M. Montiel, and J.D. Tardos, "Orb-Slam: A Versatile and Accurate Monocular Slam System," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015. [64] R. Mur-Artal and J.D. Tardós, "Orb-Slam2: An Open-Source Slam System for Monocular, Stereo, and Rgb-D Cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017. [65] G. Nützi, S. Weiss, D. Scaramuzza, and R. Siegwart, "Fusion of Imu and Vision for Absolute Scale Estimation in Monocular Slam," Journal of Intelligent and Robotic Systems, vol. 61, no. 1-4, pp. 287-299, 2011. [66] T. Qin, P. Li, and S. Shen, "Vins-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004-1020, 2018. [67] M. Quigley, J. Faust, T. Foote, and J. Leibs, "Ros: An Open-Source Robot Operating System," 2009 ICRA workshop on open source software, Kobe, Japan, vol. 3, p. 5, 2009. [68] K. Reif, S. Gunther, E. Yaz, and R. Unbehauen, "Stochastic Stability of the Discrete-Time Extended Kalman Filter," IEEE Transactions on Automatic Control, vol. 44, no. 4, pp. 714-728, 1999. [69] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-Cnn: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 2017. [70] G. Rodola, "Psutil Documentation," 2023. [Online]. Available: https://psutil.readthedocs.io/en/latest/ [71] A. Roshan Zamir, A. Dehghan, and M. Shah, "Gmcp-Tracker: Global Multi-Object Tracking Using Generalized Minimum Clique Graphs," Computer Vision – ECCV 2012, Berlin, Heidelberg, A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, and C. Schmid, Eds.: Springer Berlin Heidelberg, pp. 343-356, 2012. [72] R.B. Rusu and S.
Cousins, "3d Is Here: Point Cloud Library (Pcl)," 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, pp. 1-4, 2011. [73] D. Schubert, T. Goll, N. Demmel, V. Usenko, J. Stückler, and D. Cremers, "The Tum Vi Benchmark for Evaluating Visual-Inertial Odometry," 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1680-1687, 2018. [74] A. Segal, D. Haehnel, and S. Thrun, "Generalized-Icp," Robotics: Science and Systems, vol. 2, no. 4, p. 435, 2009. [75] T. Shan and B. Englot, "Lego-Loam: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain," 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4758-4765, 2018. [76] T. Shan, B. Englot, C. Ratti, and D. Rus, "Lvi-Sam: Tightly-Coupled Lidar-Visual-Inertial Odometry Via Smoothing and Mapping," 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 5692-5698, 2021. [77] S. Särkkä, A. Vehtari, and J. Lampinen, "Rao-Blackwellized Particle Filter for Multiple Target Tracking," Information Fusion, vol. 8, no. 1, pp. 2-15, 2007. [78] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A Benchmark for the Evaluation of Rgb-D Slam Systems," 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 573-580, 2012. [79] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA: MIT Press, 2005. [80] L. Tiseni, D. Chiaradia, M. Gabardi, M. Solazzi, D. Leonardis, and A. Frisoli, "Uv-C Mobile Robots with Optimized Path Planning: Algorithm Design and on-Field Measurements to Improve Surface Disinfection against Sars-Cov-2," IEEE Robotics & Automation Magazine, vol. 28, no. 1, pp. 59-70, 2021. [81] M. Trajković and M. Hedley, "Fast Corner Detection," Image and Vision Computing, vol. 16, no. 2, pp. 75-87, 1998. [82] B. Triggs, P.F. McLauchlan, R.I. Hartley, and A.W.
Fitzgibbon, "Bundle Adjustment—a Modern Synthesis," International workshop on vision algorithms: Springer, pp. 298-372, 1999. [83] V.M. Trinh, M.-H. Yuan, Y.-H. Chen, C.-Y. Wu, S.-C. Kang, P.-C. Chiang, T.-C. Hsiao, H.-P. Huang, Y.-L. Zhao, J.-F. Lin, C.-H. Huang, J.-H. Yeh, and D.-M. Lee, "Chlorine Dioxide Gas Generation Using Rotating Packed Bed for Air Disinfection in a Hospital," Journal of Cleaner Production, vol. 320, p. 128885, 2021. [84] S. Umeyama, "Least-Squares Estimation of Transformation Parameters between Two Point Patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 04, pp. 376-380, 1991. [85] C.-C. Wang, C. Thorpe, S. Thrun, M. Hebert, and H. Durrant-Whyte, "Simultaneous Localization, Mapping and Moving Object Tracking," The International Journal of Robotics Research, vol. 26, no. 9, pp. 889-916, 2007. [86] W.-T. Weng, H.-P. Huang, Y.-L. Zhao, and C.-Y. Lin, "Development of a Visual Perception System on a Dual-Arm Mobile Robot for Human-Robot Interaction," Sensors, vol. 22, no. 23, 2022. [87] W.T. Weng, H.P. Huang, and Y.L. Zhao, "Camera Calibration Deployed in Mobile Robots," International Journal of iRobotics vol. 5, no. 4, pp. 1-8, 2022. [88] N. Wojke and A. Bewley, "Deep Cosine Metric Learning for Person Re-Identification," 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 748-756, 2018. [89] G. Xue, J. Wei, R. Li, and J. Cheng, "Lego-Loam-Sc: An Improved Simultaneous Localization and Mapping Method Fusing Lego-Loam and Scan Context for Underground Coalmine," Sensors, vol. 22, no. 2, 2022. [90] H. Ye, Y. Chen, and M. Liu, "Tightly Coupled 3d Lidar Inertial Odometry and Mapping," 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, pp. 3144-3150, 2019. [91] L. Ye, W. Li, L. Zheng, and Y. Zeng, "Lightweight and Deep Appearance Embedding for Multiple Object Tracking," IET Computer Vision, vol. 16, no. 6, pp. 489-503, 2022. [92] C. Yu, Z. Liu, X.J. Liu, F. 
Xie, Y. Yang, Q. Wei, and Q. Fei, "Ds-Slam: A Semantic Visual Slam Towards Dynamic Environments," 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1168-1174, 2018. [93] J. Zhang and S. Singh, "Loam: Lidar Odometry and Mapping in Real-Time," Robotics: Science and Systems vol. 2, p. 9, 2014. [94] J. Zhang and S. Singh, "Visual-Lidar Odometry and Mapping: Low-Drift, Robust, and Fast," 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 2174-2181, 2015. [95] Y.-L. Zhao, Y.-T. Hong, and H.-P. Huang, "Comprehensive Performance Evaluation between Visual Slam and Lidar Slam for Mobile Robots: Theories and Experiments," Applied Sciences, vol. 14, no. 9, p. 3945, 2024. [96] Y.-L. Zhao, H.-P. Huang, P.-C. Chiang, J.-H. Chen, Y.-C. Chen, and H.-T. Wang, "Development of Robots for People with Dementia: Technologies and Applications," JCSME vol. 44, no. 4, pp. 351-360, 2023. [97] Y.L. Zhao, H.P. Huang, T.L. Chen, P.C. Chiang, Y.H. Chen, J.H. Yeh, C.H. Huang, J.F. Lin, and W.T. Weng, "A Smart Sterilization Robot System with Chlorine Dioxide for Spray Disinfection," IEEE Sensors Journal, vol. 21, no. 19, pp. 22047-22057, 2021. [98] H. Zhou, W. Ouyang, J. Cheng, X. Wang, and H. Li, "Deep Continuous Conditional Random Fields with Asymmetric Inter-Object Constraints for Online Multi-Object Tracking," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 4, pp. 1011-1022, 2019. [99] H. Zhu, M. Zhou, and R. Alkins, "Group Role Assignment Via a Kuhn–Munkres Algorithm-Based Solution," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 42, no. 3, pp. 739-750, 2011. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93687 | - |
| dc.description.abstract | 保持機器人建圖和定位的高精度是極具挑戰性的工作,尤其是在複雜多變的環境中,如動態環境、缺少明顯特徵的室內環境以及不平整的室外環境。傳統的單感測器SLAM(同步建圖與定位)方法面臨諸多困難,需要採用更為先進和多樣化的技術來應對這些挑戰。
本論文致力於提升機器人SLAM的強健性、穩定性和準確性。首先,我們建立了搭載深度相機和雷射雷達的智慧型機器人平臺,並分別應用於防疫消毒和協助失智症老人的照護。其次,針對動態環境,我們設計並實現了支援多目標跟蹤的動態環境語義SLAM系統,該系統能夠在未知環境中進行機器人狀態、環境特徵和多目標狀態的聯合估計。隨後,我們對室內外視覺SLAM和雷射雷達SLAM的定位精度、建圖效果和性能進行了綜合評估。最後,我們提出了基於相機與雷射雷達融合的SLAM方法,充分利用兩種儀器的互補優勢。 這些演算法均經過公開資料集或實際場景的測試,結果表明,本論文提出的SLAM系統具有顯著的強健性、穩定性和準確性。 | zh_TW |
| dc.description.abstract | Maintaining high accuracy in robotic mapping and localization is a highly challenging task, especially in complex and dynamic environments, such as those with moving objects, featureless indoor spaces, and uneven outdoor terrains. Traditional single-sensor SLAM (Simultaneous Localization and Mapping) methods face numerous difficulties and require more advanced and diverse technologies to address these challenges.
This research aims to enhance the robustness, stability, and accuracy of SLAM. An intelligent robotic platform equipped with depth cameras and 3D LiDAR is developed and applied to both pandemic disinfection and the care of elderly people with dementia. The platform implements a purpose-built semantic SLAM system for dynamic environments that supports multi-object tracking and jointly estimates the robot's state, environmental features, and the states of multiple objects in unknown environments. In addition, a comprehensive evaluation of the localization accuracy, mapping quality, and runtime performance of indoor and outdoor visual SLAM and LiDAR SLAM is conducted. Finally, a SLAM method based on the fusion of camera and LiDAR data is proposed, leveraging the complementary advantages of the two sensors. All of these algorithms are tested on public datasets and in real-world scenarios; the results demonstrate that the proposed SLAM system achieves significant robustness, stability, and accuracy. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-07T16:25:22Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-07T16:25:23Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 誌謝 (Acknowledgements) i
摘要 (Chinese Abstract) ii
Abstract iii
List of Tables viii
List of Figures ix
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Contributions 2
1.3 Organization of Dissertation 3
Chapter 2 Literature Review 6
2.1 Simultaneous Localization and Mapping 6
2.2 Visual SLAM 10
2.3 LiDAR SLAM 13
2.4 Visual LiDAR SLAM 16
2.5 Benchmark and Evaluation 17
Chapter 3 Creating a Robot Platform 20
3.1 Fundamentals of Robotics 20
3.1.1 Hardware 21
3.1.2 Software 22
3.2 Disinfection Robot 23
3.2.1 Hardware 24
3.2.2 Software 30
3.3 Dual-Arm Mobile Robot 36
3.3.1 Hardware 37
3.3.2 Software 39
3.4 Summary 42
Chapter 4 Development of Semantic SLAM for Dynamic Environments 43
4.1 Introduction 43
4.1.1 Object Detection 44
4.1.2 Dynamic SLAM 45
4.1.3 Multiple Object Tracking (MOT) 47
4.2 Methods 49
4.2.1 Semantic SLAM System 49
4.2.2 Development of MOT 51
4.3 Experimental Results 57
4.3.1 SLAM Evaluation 58
4.3.2 MOT Evaluation 60
4.4 Summary 61
Chapter 5 Comparison of Visual SLAM and LiDAR SLAM for Mobile Robots 63
5.1 Introduction 63
5.2 Methods 65
5.2.1 Gmapping 65
5.2.2 ORB-SLAM3 66
5.2.3 SC-LeGO-LOAM 68
5.2.4 Cartographer 70
5.2.5 3D Reconstruction 71
5.3 Performance Evaluation 72
5.3.1 Benchmark 73
5.3.2 Indoor Environment 74
5.3.3 Outdoor Environment 78
5.3.4 Computational Resources 82
5.4 Summary 84
Chapter 6 Multi-Sensor Fusion SLAM 87
6.1 Introduction 87
6.2 System Overview 88
6.3 Methods 89
6.3.1 Alignment Essentials 89
6.3.2 LiDAR Odometry 92
6.3.3 Visual Odometry 95
6.4 Experimental Results 96
6.5 Summary 98
Chapter 7 Conclusions and Future Work 100
7.1 Summary 100
7.1.1 Creating a Robot Platform 100
7.1.2 Development of Semantic SLAM for Dynamic Environments 101
7.1.3 Comparison of Visual SLAM and LiDAR SLAM for Mobile Robots 102
7.1.4 Multi-Sensor Fusion SLAM 102
7.2 Future Work 103
References 105
Biography 113 | - |
| dc.language.iso | en | - |
| dc.subject | 感測器融合 | zh_TW |
| dc.subject | 移動機器人 | zh_TW |
| dc.subject | 多目標跟蹤 | zh_TW |
| dc.subject | SLAM | zh_TW |
| dc.subject | 三維重建 | zh_TW |
| dc.subject | 視覺SLAM | zh_TW |
| dc.subject | 光達SLAM | zh_TW |
| dc.subject | LiDAR SLAM | en |
| dc.subject | SLAM | en |
| dc.subject | 3D reconstruction | en |
| dc.subject | visual SLAM | en |
| dc.subject | multiple object tracking | en |
| dc.subject | mobile robots | en |
| dc.subject | sensor fusion | en |
| dc.title | 實時語義同步定位與地圖建構融合系統開發與在動態環境中移動機器人的應用 | zh_TW |
| dc.title | Real-time Semantic SLAM Fusion for Mobile Robots in Dynamic Environments | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 博士 | - |
| dc.contributor.oralexamcommittee | 蔣本基;傅楸善;程登湖;蔡清池;郭重顯 | zh_TW |
| dc.contributor.oralexamcommittee | Pen-Chi Chiang;Chiou-Shann Fuh;Teng-Hu Cheng;Ching-Chih Tsai;Chung-Hsien Kuo | en |
| dc.subject.keyword | SLAM,移動機器人,多目標跟蹤,視覺SLAM,光達SLAM,感測器融合,三維重建 | zh_TW |
| dc.subject.keyword | SLAM,mobile robots,multiple object tracking,visual SLAM,LiDAR SLAM,sensor fusion,3D reconstruction | en |
| dc.relation.page | 114 | - |
| dc.identifier.doi | 10.6342/NTU202402897 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2024-08-06 | - |
| dc.contributor.author-college | 工學院 | - |
| dc.contributor.author-dept | 機械工程學系 | - |
| Appears in Collections: | Department of Mechanical Engineering | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf (Restricted Access) | 4.95 MB | Adobe PDF | |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
