Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90699
Full metadata record
DC 欄位 | 值 | 語言 |
---|---|---|
dc.contributor.advisor | 傅立成 | zh_TW |
dc.contributor.advisor | Li-Chen Fu | en |
dc.contributor.author | 游祖霖 | zh_TW |
dc.contributor.author | Zu Lin Ewe | en |
dc.date.accessioned | 2023-10-03T17:14:15Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-10-03 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-06 | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90699 | - |
dc.description.abstract | Driven by technological advances, demographic change, and the demand for automation and contactless interaction, the use of service robots in mass public applications is expected to grow substantially in the near future. However, effective navigation is a key requirement for successfully deploying robots in unexplored environments. Existing methods rely primarily on autonomous or teleoperated exploration and map building to understand the environment, which can be challenging for users without technical expertise. Humans, in contrast, demonstrate a remarkable ability to navigate using abstract floorplans, exploiting the high-level spatial information they provide. This raises the question of whether service robots can employ floorplan-based navigation to improve their performance and usability in unfamiliar environments. By using a floorplan as a navigation aid, a robot could reduce the need for extensive exploration and map creation, simplifying operation and interaction for the user. Although prior work on floorplan localization and navigation exists, most existing methods consider only floorplans with accurate measurements or scale, which can be difficult to obtain and may require the robot to explore the environment beforehand. This limitation hampers their practicality, especially for service robots intended for large-scale deployment. New methods are therefore needed that can exploit floorplan information without relying on precise scale or extensive prior exploration.
This master's thesis addresses the above challenges by investigating the feasibility of floorplan-based navigation for service robots in unexplored environments. Specifically, we propose a novel floorplan localization method that exploits scale-invariant geometric features of the floorplan, enabling navigation without reliance on precise scale information. In addition, we introduce an incremental graph augmentation method that extracts spatial and object-level semantic information from robot observations, enriching the floorplan's graph representation into an accurate and comprehensive understanding of the environment. Finally, we develop an efficient navigation framework that exploits both the inherent structure of the floorplan and real-time observations. Through extensive experiments and evaluations, we compare our method against baselines in terms of environment representation quality, localization accuracy, and navigation performance. The outcomes of this research facilitate the deployment of service robots in unexplored environments, especially where extensive exploration and map building may be impractical or technically challenging for users. By exploiting the advantages of abstract floorplan navigation, we aim to promote the widespread adoption of service robots and enable them to integrate seamlessly into diverse public scenarios. | zh_TW |
dc.description.abstract | Technological advancements, shifting demographics, and the drive for automation and contactless interactions are prompting an expected surge in service robot use for mass public applications. Nevertheless, effective navigation in unfamiliar environments remains a critical challenge for successful deployment. Current navigation methods, which rely on autonomous or teleoperated exploration and map building, pose technical difficulties for end-users. In contrast, humans can effectively navigate using abstract floorplans, suggesting the potential for service robots to leverage similar techniques. The practical application of floorplan-based navigation, however, is currently limited by methods that require exact measurements or scale and significant pre-exploration.
This thesis addresses these challenges by investigating the feasibility of floorplan-based navigation for service robots in unexplored environments. Specifically, we propose a novel scale-invariant floorplan localization method, enabling navigation without reliance on precise scale information. Furthermore, we introduce an incremental graph augmentation approach that enriches the floorplan representation with traversability and semantic information derived from robot observations. Finally, we develop an efficient navigation framework that exploits both the inherent structure of the floorplan and real-time observations. Experimental results demonstrate that our scale-invariant floorplan localization method outperforms baseline methods in most cases when floorplan scale information is unavailable, and that our graph-based navigation system achieves higher success rates and efficiency than grid-based counterparts. Qualitative analyses further confirm that our method accurately reflects real-time environmental conditions across diverse settings. The outcomes of this research advance service robot deployment in unexplored environments, particularly in scenarios where extensive exploration and map building are impractical or technically challenging for end-users. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-10-03T17:14:14Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-10-03T17:14:15Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Oral Defense Committee Certification i
Acknowledgements ii
Chinese Abstract iv
Abstract vi
Contents viii
List of Figures xi
List of Tables xiii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Research Objectives 3
1.3 Contributions 5
1.4 Thesis Overview 7
Chapter 2 Related Work 8
2.1 Floorplan Localization and Navigation 8
2.2 Scene Graph Representation 10
Chapter 3 Methodology 13
3.1 System Overview 13
3.2 Floorplan Interpreter 14
3.2.1 Preliminaries 16
3.2.1.1 Graph Representation 16
3.2.1.2 Nearest Neighbour Search 16
3.2.2 Topology Graph Extraction 18
3.2.2.1 Vertex Extraction 19
3.2.2.2 Edge Following 19
3.2.2.3 Graph Pruning 20
3.2.3 Laserscan Simulation 22
3.3 Perception 24
3.3.1 Camera 25
3.3.2 3D LiDAR 26
3.3.2.1 Boundaries Extraction 27
3.3.2.2 Egocentric Waypoint Subgraph Extraction 28
3.4 Localization 32
3.4.1 Geometric Features Similarity via Contrastive Learning 33
3.4.2 SIFT Pose Estimation 36
3.4.3 Particle Filter Smoothing 40
3.5 Graph Builder 42
3.5.1 Graph Overview 42
3.5.2 Incremental Graph Augmentation 45
3.6 Navigation 50
Chapter 4 Experiments 53
4.1 Experiment Setup 53
4.2 Localization 56
4.2.1 Contrastive Learning for Geometric Features Similarity 56
4.2.2 Top-K Accuracy for Closest Node Recognition 58
4.2.3 Overall Localization Result 59
4.3 Graph Construction 63
4.4 Navigation 68
4.5 Real-world Demonstration 71
Chapter 5 Conclusion 74
References 76 | - |
dc.language.iso | en | - |
dc.title | Spatial Graph-based Localization and Navigation on a Floorplan of Unknown Scale | zh_TW |
dc.title | Spatial Graph-based Localization and Navigation on Scaleless Floorplan | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | Master | - |
dc.contributor.oralexamcommittee | 宋開泰;張文中;林沛群;郭重顯 | zh_TW |
dc.contributor.oralexamcommittee | Kai-Tai Song;Wen-Chung Chang;Pei-Chun Lin;Chung-Hsien Kuo | en |
dc.subject.keyword | Floorplan Localization, Floorplan Navigation, Mobile Robots, Scene Graph Navigation, Graph-based Spatial Representation | zh_TW |
dc.subject.keyword | Floorplan Localization, Floorplan Navigation, Mobile Robots, Graph-based Navigation, Spatial Graph Representation | en |
dc.relation.page | 81 | - |
dc.identifier.doi | 10.6342/NTU202303013 | - |
dc.rights.note | Authorized (worldwide open access) | - |
dc.date.accepted | 2023-08-08 | - |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
dc.contributor.author-dept | Department of Electrical Engineering | - |
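The abstract and table of contents describe a navigation framework that plans over a topology graph extracted from the floorplan (Section 3.6 of the thesis). As a minimal illustrative sketch of the kind of heuristic graph search such a framework could build on, here is A* over a small waypoint graph. The function name `astar`, the toy graph, and the Euclidean straight-line heuristic are assumptions for illustration, not taken from the thesis:

```python
import heapq
import itertools
import math

def astar(graph, coords, start, goal):
    """A* shortest path over a waypoint graph.

    graph:  dict mapping node -> iterable of (neighbour, edge_cost)
    coords: dict mapping node -> (x, y), used for the straight-line
            distance heuristic (admissible if costs are metric)
    Returns the node list from start to goal, or None if unreachable.
    """
    counter = itertools.count()  # heap tie-breaker, avoids comparing nodes

    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    # Heap entries: (f = g + h, tie-breaker, node, parent, g)
    open_set = [(h(start), next(counter), start, None, 0.0)]
    came_from = {}
    best_g = {start: 0.0}
    while open_set:
        _, _, node, parent, g = heapq.heappop(open_set)
        if node in came_from:
            continue  # already expanded with a cheaper cost
        came_from[node] = parent
        if node == goal:
            # Walk parents back to the start and reverse
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nbr, cost in graph.get(node, ()):
            ng = g + cost
            if ng < best_g.get(nbr, math.inf):
                best_g[nbr] = ng
                heapq.heappush(
                    open_set, (ng + h(nbr), next(counter), nbr, node, ng)
                )
    return None
```

A hypothetical three-waypoint graph `{"a": [("b", 1.0)], "b": [("a", 1.0), ("c", 1.0)], "c": [("b", 1.0)]}` with coordinates `{"a": (0, 0), "b": (1, 0), "c": (1, 1)}` yields the path `["a", "b", "c"]` from `a` to `c`. The thesis's framework additionally augments the graph incrementally from observations; this sketch covers only the search step.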
Appears in Collections: | Department of Electrical Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf (available online after 2026-08-06) | 24.42 MB | Adobe PDF |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.