Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70241

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 羅仁權 | |
| dc.contributor.author | Michael Chiou | en |
| dc.contributor.author | 邱名彥 | zh_TW |
| dc.date.accessioned | 2021-06-17T04:24:41Z | - |
| dc.date.available | 2023-08-18 | |
| dc.date.copyright | 2018-08-18 | |
| dc.date.issued | 2018 | |
| dc.date.submitted | 2018-08-15 | |
| dc.identifier.citation | [1] R. Goeddel and E. Olson, "Learning semantic place labels from occupancy grids using CNNs," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3999–4004, 2016.
[2] N. Sünderhauf, S. Shirazi, F. Dayoub, B. Upcroft, and M. Milford, "On the performance of ConvNet features for place recognition," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4297–4304, Sept. 2015.
[3] A. Bubeck, F. Weisshardt, T. Sing, U. Reiser, M. Hägele, and A. Verl, "Implementing best practices for systems integration and distributed software development in service robotics - the Care-O-bot robot family," in IEEE/SICE International Symposium on System Integration (SII), pp. 609–614, 2012.
[4] T. Miquel, J. P. Condomines, R. Chemali, and N. Larrieu, "Design of a robust controller/observer for TCP/AQM network: First application to intrusion detection systems for drone fleet," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1707–1712, Sept. 2017.
[5] K. Öfjäll, M. Felsberg, and A. Robinson, "Visual autonomous road following by symbiotic online learning," in IEEE Intelligent Vehicles Symposium (IV), pp. 136–143, June 2016.
[6] P. Harmo, T. Taipalus, J. Knuuttila, J. Vallet, and A. Halme, "Needs and solutions - home automation and service robots for the elderly and disabled," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3201–3206, 2005.
[7] E. Benowitz, "The Curiosity Mars rover's fault protection engine," in IEEE International Conference on Space Mission Challenges for Information Technology, pp. 62–66, Sept. 2014.
[8] "History of artificial intelligence." https://en.wikipedia.org/wiki/History_of_artificial_intelligence#The_golden_years_1956%E2%80%931974. Accessed: 2018-05-01.
[9] J. Manyika, M. Chui, B. Brown, J. Bughin, R. Dobbs, C. Roxburgh, and A. H. Byers, "Big data: The next frontier for innovation, competition, and productivity," tech. rep., McKinsey Global Institute, June 2011.
[10] L. Zhang, S. Wang, and B. Liu, "Deep learning for sentiment analysis: A survey," CoRR, vol. abs/1801.07883, 2018.
[11] Allied Market Research, "Robotics technology market." https://www.alliedmarketresearch.com/robotics-technology-market, 2015.
[12] "Pepper the robot fired from grocery store for not being up to the job." https://www.digitaltrends.com/cool-tech/pepper-robot-grocery-store/. Accessed: 2018-06-23.
[13] "Three Chinese restaurants fired their robot workers." http://www.businessinsider.com/three-chinese-restaurants-fired-their-robot-workers-2016-4. Accessed: 2017-11-23.
[14] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.
[15] M. Labbé and F. Michaud, "Appearance-based loop closure detection for online large-scale and long-term operation," IEEE Transactions on Robotics, vol. 29, no. 3, pp. 734–745, 2013.
[16] M. Labbé and F. Michaud, "Online global loop closure detection for large-scale multi-session graph-based SLAM," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2661–2666, 2014.
[17] G. Grisetti, C. Stachniss, and W. Burgard, "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling," in IEEE International Conference on Robotics and Automation (ICRA), pp. 2432–2437, April 2005.
[18] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, pp. 34–46, Feb. 2007.
[19] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. J. Kelly, and A. J. Davison, "SLAM++: Simultaneous localisation and mapping at the level of objects," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1352–1359, 2013.
[20] A. Pronobis and P. Jensfelt, "Large-scale semantic mapping and reasoning with heterogeneous modalities," in IEEE International Conference on Robotics and Automation (ICRA), pp. 3515–3522, 2012.
[21] R. Capobianco, J. Serafin, J. Dichtl, G. Grisetti, L. Iocchi, and D. Nardi, "A proposal for semantic map representation and evaluation," in European Conference on Mobile Robots (ECMR), pp. 1–6, 2015.
[22] C. Galindo, A. Saffiotti, S. Coradeschi, P. Buschka, J. A. Fernández-Madrigal, and J. González, "Multi-hierarchical semantic maps for mobile robotics," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2278–2283, 2005.
[23] M. Hanheide, M. Göbelbecker, G. S. Horn, A. Pronobis, K. Sjöö, A. Aydemir, P. Jensfelt, C. Gretton, R. Dearden, M. Janicek, H. Zender, G.-J. Kruijff, N. Hawes, and J. L. Wyatt, "Robot task planning and explanation in open and uncertain worlds," Artificial Intelligence, vol. 247, pp. 119–150, 2017. Special Issue on AI and Robotics.
[24] H. Zender, O. M. Mozos, P. Jensfelt, G.-J. Kruijff, and W. Burgard, "Conceptual spatial representations for indoor mobile robots," Robotics and Autonomous Systems, vol. 56, no. 6, pp. 493–502, 2008. From Sensors to Human Spatial Concepts.
[25] F. Werner, F. Maire, J. Sitte, H. Choset, S. Tully, and G. Kantor, "Topological SLAM using neighbourhood information of places," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4937–4942, 2009.
[26] N. Sünderhauf, F. Dayoub, S. McMahon, B. Talbot, R. Schulz, P. Corke, G. Wyeth, B. Upcroft, and M. Milford, "Place categorization and semantic mapping on a mobile robot," in IEEE International Conference on Robotics and Automation (ICRA), pp. 5729–5736, 2016.
[27] D. W. Ko, C. Yi, and I. H. Suh, "Semantic mapping and navigation: A Bayesian approach," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2630–2636, 2013.
[28] Y. Furuta, K. Wada, M. Murooka, S. Nozawa, Y. Kakiuchi, K. Okada, and M. Inaba, "Transformable semantic map based navigation using autonomous deep learning object segmentation," in IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 614–620, 2016.
[29] R. Vázquez-Martín, P. Núñez, A. Bandera, and F. Sandoval, "Spectral clustering for feature-based metric maps partitioning in a hybrid mapping framework," in IEEE International Conference on Robotics and Automation (ICRA), pp. 4175–4181, 2009.
[30] R. Bormann, F. Jordan, W. Li, J. Hampp, and M. Hägele, "Room segmentation: Survey, implementation, and analysis," in IEEE International Conference on Robotics and Automation (ICRA), pp. 1019–1026, 2016.
[31] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, "Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309–1332, 2016.
[32] "Robot Operating System - Wikipedia, the free encyclopedia." https://en.wikipedia.org/wiki/Robot_Operating_System. Accessed: 2016-07-20.
[33] M. Quigley, E. Berger, A. Y. Ng, et al., "STAIR: Hardware and software architecture," in AAAI 2007 Robotics Workshop, Vancouver, BC, pp. 31–37, 2007.
[34] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: An open-source Robot Operating System," in ICRA Workshop on Open Source Software, 2009.
[35] S. Chitta, E. Marder-Eppstein, W. Meeussen, V. Pradeep, A. Rodríguez Tsouroukdissian, J. Bohren, D. Coleman, B. Magyar, G. Raiola, M. Lüdtke, and E. Fernández Perdomo, "ros_control: A generic and simple control framework for ROS," The Journal of Open Source Software, 2017.
[36] "tf - ROS Wiki." http://wiki.ros.org/tf. Accessed: 2017-07-01.
[37] "urdf - ROS Wiki." http://wiki.ros.org/urdf. Accessed: 2016-06-23.
[38] "SolidWorks to URDF exporter." http://wiki.ros.org/sw_urdf_exporter. Accessed: 2017-02-02.
[39] Blender Online Community, "Blender - a 3D modelling and rendering package." http://www.blender.org. Accessed: 2017-02-03.
[40] "gmapping - ROS Wiki." http://wiki.ros.org/gmapping. Accessed: 2017-02-01.
[41] K. Konolige, E. Marder-Eppstein, and B. Marthi, "Navigation in hybrid metric-topological maps," in IEEE International Conference on Robotics and Automation (ICRA), pp. 3041–3047, 2011.
[42] J. Ma and J. Zhao, "Robust topological navigation via convolutional neural network feature and sharpness measure," IEEE Access, vol. 5, pp. 20707–20715, 2017.
[43] J. Blanco, J. Fernández-Madrigal, and J. González, "A new approach for large-scale localization and mapping: Hybrid metric-topological SLAM," in IEEE International Conference on Robotics and Automation (ICRA), pp. 2061–2067, 2007.
[44] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788, 2016.
[45] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525, 2017.
[46] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár, "Microsoft COCO: Common objects in context," arXiv:1405.0312 [cs.CV], 2014.
[47] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv:1804.02767, 2018. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70241 | - |
| dc.description.abstract | As service robots become increasingly common in public places such as hotels and hospitals, the way robots interact with people has shifted. Rather than being directly controlled by trained personnel, robots are beginning to interact with the public autonomously. For meaningful and successful human-robot interaction, a service robot needs to understand both the geometric and the semantic properties of its surroundings.
For example, a hospital service robot is told to deliver medicine to Patient 2 in Ward 1. The robot must understand not only how to navigate to the different wards but also which patient is in which ward. If it does not understand the semantic properties of the environment, in this case Patient 2 and Ward 1, it must systematically traverse every area and visit every patient. Given that any environment can contain numerous areas and hundreds of patients, such a systematic search is grossly inadequate for any service task in human-robot interaction. This example illustrates a new challenge: facilitating human-robot interaction through spoken language, the most common and intuitive means of communication. For a robot to understand a person's words and expressions, it must perceive the world using the same spatial and semantic concepts that people use. The first step is to learn the environment the way a person does, by building a semantic map that shares common concepts such as a living room or an office. Current semantic mapping works can identify only a single level of abstraction, such as distinguishing doorways, corridors, and rooms; other works produce incomplete semantic maps or inaccurate semantic labels; still others rely on large convolutional neural networks trained for scene recognition. Most semantic mapping methods run offline on desktop computers with high-power hardware, which is unsuitable for mobile robots with limited resources, and other methods provide too little semantic information to work effectively in large, dynamic environments. This thesis proposes a semantic mapping system that combines a convolutional neural network (ConvNet) trained for object recognition (not scene recognition), a room segmentation method, and a hybrid metric-topological map. Using a ConvNet trained for object recognition lets us shrink the network enough for a mobile system to run it while eliminating the training bias found in scene recognition training data; scene recognition training requires extremely large amounts of data and can become biased toward recognizing only specific scenes, whereas object recognition is more general. To prevent gross mislabeling or inaccurate labels in metric space, we use a metric-topological map, storing the semantic information generated by the object recognition ConvNet in topological nodes. To classify each room using only the relevant semantic information, we use room segmentation to create spatio-temporal coherence among the semantic observations. This allows the robot to identify rooms more reliably by collecting more information before labeling, instead of labeling from a single data point as a ConvNet trained for scene recognition does. We organize the semantic information hierarchically so the robot can perceive abstract concepts such as rooms. The method is tested on a service robot with an embedded computing platform in many simulated indoor environments and in a real environment. The experimental results show that the semantic mapping algorithm is lightweight enough to run on the robot system and gives the robot enough semantic awareness to execute different types of service commands, with results comparable to other works. In summary, this thesis contributes the following: compared with conventional approaches using ConvNets trained for scene recognition, we obtain better semantic mapping results with a smaller ConvNet trained for object recognition; the proposed method is lightweight enough to run fully online on a real mobile robot platform; and our semantic map gives the robot the ability to recognize abstract concepts while retaining object-level knowledge, such as the location of a knife in a kitchen. | zh_TW |
| dc.description.abstract | With service robots becoming more and more common in public areas such as hotels and hospitals, there has been a transition in how robots interact with people. Robots are beginning to interact with the public autonomously instead of being directly controlled by trained personnel. For meaningful and successful interactions, service robots need to understand both the geometric and semantic properties of their surroundings.
For example, a hospital service robot is told to deliver medicine to Patient Two in Ward One. The robot needs to understand not only how to navigate to different wards but also which patient is in which ward. If the service robot does not understand the environment's semantic properties, in this case Patient Two and Ward One, it must systematically navigate through every single area and visit every patient. Given that any environment can possess numerous areas and hundreds of patients, a systematic search method is grossly inadequate for any service task in human-robot interaction. This example illustrates the need for a semantic map representation that facilitates human-robot interaction by allowing robots to perform tasks using human-centric terms instead of map coordinates. How to effectively create a semantic map with labels such as reception, ward, and hallway has been a long-standing interest in the robotics community. Current semantic mapping works are capable of identifying only a single level of abstraction, such as differentiating between doorways, corridors, and rooms; other works provide semantic maps that are incomplete or inaccurately labeled; still others rely on large convolutional neural networks trained for scene recognition. Most semantic mapping methods are performed offline on desktop computers with high-power hardware, which is unsuitable for mobile robots with finite resources. Other methods provide too little semantic information to work effectively in large, dynamic environments. This thesis proposes a semantic mapping system that combines a convolutional neural network (ConvNet) trained for object recognition (not scene recognition), room segmentation methods, and a hybrid metric-topological map. Using a ConvNet trained for object recognition reduces the size of the network so that mobile systems are capable of running it, while simultaneously removing the training bias found in scene recognition training data. Scene recognition training requires extremely large amounts of data and can be biased toward recognizing only specific scenes, whereas object recognition is more generalized. To prevent gross mislabeling or inaccurate labeling in metric space, we use a metric-topological map: the semantic information generated by the object recognition ConvNet is stored in topological nodes. To classify rooms correctly using only the relevant semantic information, we create spatial-temporal coherence between semantic observations by using room segmentation methods to assist room classification (see the illustrative sketch after the metadata table below). This allows the robot to identify rooms more accurately by collecting more information before labeling, rather than labeling from a single data point as a ConvNet trained for scene recognition would. We organize our semantic information in a hierarchical structure to allow for awareness of abstract concepts such as rooms. The method is tested in many simulated indoor environments and in a real environment on a service robot with an embedded computing platform. The experimental results show that the semantic mapping algorithms are lightweight enough to run on the robot system and provide the robot with enough semantic awareness to perform different types of service commands, with results comparable to other works.
To summarize, our thesis contributes the following: we achieve better semantic mapping results using a smaller convolutional neural network trained for object recognition, as opposed to conventional methods using ConvNets trained for scene recognition. The proposed method is lightweight enough to run completely online on a real mobile robot platform. Our semantic map provides robots with the ability to recognize abstract concepts while retaining knowledge of objects, such as the location of a knife in a kitchen. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T04:24:41Z (GMT). No. of bitstreams: 1 ntu-107-R05921086-1.pdf: 40660994 bytes, checksum: 9f203858ada4d0518cfacc065efbaaa3 (MD5) Previous issue date: 2018 | en |
| dc.description.tableofcontents | Oral Examination Committee Certification
ACKNOWLEDGEMENTS
Acknowledgements (in Chinese)
Abstract (in Chinese)
ABSTRACT
List of Figures
List of Tables
1 Introduction
1.1 Background Information
1.1.1 The Origin and History of Modern Robotics
1.1.2 A History of Artificial Intelligence
1.1.3 Introduction of Intelligent Mobile Service Robots
1.1.4 What is Semantic Mapping
1.2 Problem Definition
1.3 Failed Service Robot Case Scenarios
1.4 Literature Review
1.5 Key Challenges
1.6 Research Objective and Scope
1.7 Thesis Organization
2 Service Robot System Design
2.1 Internal Mechanical Structure
2.2 Mobile Platform
2.3 Onboard Sensors
2.4 Laser Range Finder Description
2.5 RGB-D Sensor Description
2.6 Power Supply
2.7 Hardware Computing Platform
3 Robot Operating System (ROS)
3.1 Control System
3.2 TF and Unified Robot Description Format (URDF)
3.3 Simulation Software
3.3.1 The Necessity of Simulations
3.3.2 The Role of Simulation
3.3.3 Gazebo
4 Semantic Mapping Framework
4.1 System Overview
4.2 Semantic Information Structure
4.3 Hybrid Map Representations
4.3.1 Metric Mapping Paradigm
4.3.2 Rao-Blackwellized Particle Filter
4.3.3 Topological Mapping Paradigm
4.3.4 Topological Map Concepts
4.3.5 Graph Theory Applications in Topological Maps
4.3.5.1 Breadth-First Search Algorithm
4.3.5.2 Depth-First Search Algorithm
4.3.6 Hybrid Mapping Paradigm
4.4 Convolutional Neural Networks Application
4.4.1 Introduction
4.4.2 Training and Evaluating a General ConvNet
4.4.2.1 Convolution Operation
4.4.2.2 Non-Linearity Operators/Activation Function
4.4.2.3 Spatial Subsampling
4.4.2.4 Fully Connected Layers
4.4.2.5 Training a Convolutional Neural Network
4.4.3 YOLOv2 Architecture
4.5 Distance Transform Based Room Segmentation
4.5.1 Introduction
4.5.2 Distance Transformation
4.5.3 Distance Equation Definitions
4.5.4 Morphological Operators
4.5.5 Algorithm Description
4.6 Aggregation of Spatial-Semantic Information
4.7 Probabilistic Room Label Classification
4.7.1 Introduction
4.7.2 Room Classification
5 Experimental Design, Validation, and Discussion
5.1 Experimental Setup
5.2 Hybrid Map and Room Segmentation Results
5.3 Semantic Classification Results
5.4 Semantic Mapping Performance Analysis
5.5 Semantic Navigation Results
5.6 Comparison with Other Works
5.7 Contributions and Overall Analysis
6 Conclusion
Bibliography
Vita | |
| dc.language.iso | en | |
| dc.subject | Simultaneous Localization and Mapping | zh_TW |
| dc.subject | Knowledge Representation | zh_TW |
| dc.subject | Human-Robot Interaction | zh_TW |
| dc.subject | Object Recognition | zh_TW |
| dc.subject | Topological Maps | zh_TW |
| dc.subject | Semantic Maps | zh_TW |
| dc.subject | Service Robots | zh_TW |
| dc.subject | Convolutional Neural Networks | en |
| dc.subject | Semantic mapping | en |
| dc.subject | Human-Robot Interactions | en |
| dc.subject | SLAM | en |
| dc.subject | Service Robots | en |
| dc.title | Implementation of Hierarchical Semantic Maps Using Deep Convolutional Neural Networks for Service Robot Applications | zh_TW |
| dc.title | Hierarchical Semantic Mapping using Deep Convolutional Neural Networks for Autonomous Mobile Service Robot | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 106-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 顏炳郎,鄒杰烔 | |
| dc.subject.keyword | Service Robots, Semantic Maps, Topological Maps, Object Recognition, Human-Robot Interaction, Knowledge Representation, Simultaneous Localization and Mapping | zh_TW |
| dc.subject.keyword | Service Robots, Semantic Mapping, Convolutional Neural Networks, SLAM, Human-Robot Interactions | en |
| dc.relation.page | 103 | |
| dc.identifier.doi | 10.6342/NTU201802896 | |
| dc.rights.note | Paid authorization | |
| dc.date.accepted | 2018-08-15 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Electrical Engineering | zh_TW |
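
The abstracts above describe storing object detections from the recognition ConvNet in topological nodes, then classifying each segmented room from the detections aggregated over that room's nodes. As an illustration only, here is a minimal sketch of such a classifier; the object and room label sets, likelihood values, and class names below are hypothetical assumptions for the example, not code or data from the thesis:

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical object-given-room likelihoods (illustrative values only).
ROOM_OBJECT_LIKELIHOOD: Dict[str, Dict[str, float]] = {
    "kitchen": {"knife": 0.30, "microwave": 0.25, "sink": 0.25, "chair": 0.20},
    "office":  {"laptop": 0.35, "keyboard": 0.25, "chair": 0.25, "book": 0.15},
    "bedroom": {"bed": 0.50, "book": 0.20, "chair": 0.15, "laptop": 0.15},
}

@dataclass
class TopologicalNode:
    """One node of a hybrid map: a pose in the metric frame plus the
    object detections accumulated while the robot was near that pose."""
    node_id: int
    position: Tuple[float, float]
    detections: Counter = field(default_factory=Counter)

def classify_room(nodes: List[TopologicalNode],
                  smoothing: float = 1e-3) -> Dict[str, float]:
    """Aggregate detections over all nodes that room segmentation assigned
    to one room, then score each room label naive-Bayes style."""
    aggregated: Counter = Counter()
    for node in nodes:
        aggregated.update(node.detections)
    scores: Dict[str, float] = {}
    for room, likelihood in ROOM_OBJECT_LIKELIHOOD.items():
        score = 1.0
        for obj, count in aggregated.items():
            # Objects unseen for this room type get a small smoothing floor.
            score *= likelihood.get(obj, smoothing) ** count
        scores[room] = score
    total = sum(scores.values())
    return {room: s / total for room, s in scores.items()}

# Two nodes that room segmentation placed in the same room:
n1 = TopologicalNode(1, (0.5, 2.0), Counter({"knife": 1, "sink": 1}))
n2 = TopologicalNode(2, (1.2, 2.4), Counter({"microwave": 1, "chair": 2}))
print(classify_room([n1, n2]))  # "kitchen" should receive the highest score
```

The point of aggregating over several nodes before normalizing mirrors the abstract's argument: committing to a room label only after collecting evidence across a segmented region is more robust than labeling from a single observation, as a scene recognition ConvNet would.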
| Appears in Collections: | Department of Electrical Engineering | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-107-1.pdf (Restricted: not publicly accessible) | 39.71 MB | Adobe PDF | |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
