Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2300
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 羅仁權 | |
dc.contributor.author | Chang-Jiun Chen | en |
dc.contributor.author | 陳長鈞 | zh_TW |
dc.date.accessioned | 2021-05-13T06:39:02Z | - |
dc.date.available | 2020-08-24 | |
dc.date.available | 2021-05-13T06:39:02Z | - |
dc.date.copyright | 2017-08-24 | |
dc.date.issued | 2017 | |
dc.date.submitted | 2017-08-16 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2300 | - |
dc.description.abstract | 近年的研究已經讓服務型機器人具備能在複雜的室內環境中移動之功能。然而,這些技術往往需要根基於事先建立好的環境地圖,因而無法應用在未知的環境中。與此相對,人類在進入未知的環境時,常依靠問路這一方法,來得知如何抵達某一地點,並進一步移動到該處。目前的移動型機器人,尚缺乏這種依據接收到的口頭指令,在未知的環境中導航的能力。 在本研究中,我們的目標是將於未知環境中導航的功能,實作在移動型機器人上。我們以室內環境作為主體,利用遞迴類神經網路的方法,讓機器人學習人類導航的方法。我們設計了一個導航系統,並以人類的導航紀錄和相對應的導航指令來訓練此系統。我們將導航指令進行分割,並將每個切割出來的簡單指令分類到十個我們所定義的基本指令集當中;而每筆人類的導航紀錄,都是根據某一類基本指令來進行收集的。在訓練類神經網路模型的過程中,我們更提出一驗證的方法來檢驗訓練之模型的有效性。最後,我們在模擬和實際的環境中測試此導航系統。我們在搬運機器人「企鵝」上實作我們的系統,並實驗其是否能根據不同的導航指令,移動到對應的地點。我們將機器人的移動路徑,與接受同樣指令的人類所走出來的路徑進行比較;而結果顯示,基於此一導航系統的移動型機器人,能達到接近於人類的導航表現。 | zh_TW |
dc.description.abstract | Recent research has made service robots capable of navigating through complex and cluttered indoor environments. However, such techniques rely on prebuilt maps and cannot be applied to unknown environments. By contrast, when entering an unknown environment, humans can ask someone for directions to a specific location and then navigate to the destination by following the instructions. Mobile robots currently lack the ability to navigate unknown environments according to such verbal instructions. In this research, we aim to give mobile robots the ability to navigate through unknown environments. We focus on indoor environments and use recursive neural networks to let robots learn navigation behavior from humans. We design a navigation system trained on human-controlled navigation records paired with the corresponding instructions. Each instruction is split into simple clauses, which are classified into ten basic instruction classes we define, and each navigation record is collected according to one of these classes. During training, we propose a validation method to evaluate the effectiveness of our models. Finally, we test the system in both simulated and real environments. We implement it on a warehouse robot called 'Penguin' and test whether it can reach the desired positions given different instructions. We compare the navigation paths of our mobile robot with those of humans following the same verbal instructions; the results show that our robot achieves performance close to that of humans. | en |
dc.description.provenance | Made available in DSpace on 2021-05-13T06:39:02Z (GMT). No. of bitstreams: 1 ntu-106-R04921101-1.pdf: 2644318 bytes, checksum: f7d1290f6bf372891a33ec269937b336 (MD5) Previous issue date: 2017 | en |
dc.description.tableofcontents | 誌謝 i 中文摘要 ii ABSTRACT iii CONTENTS v LIST OF FIGURES viii LIST OF TABLES x Chapter 1 Introduction 1 1.1 Problem Statement 1 1.2 Related Works 4 1.3 Research Objective 6 1.4 Thesis Structure 7 Chapter 2 System Architecture 8 2.1 Hardware Specifications 8 2.1.1 Motors 9 2.1.2 Sensor 10 2.1.3 Central Control Computer 14 2.1.4 Power Supply System 16 2.2 Software Architecture 18 2.2.1 Overview 18 2.2.2 Laser Range Finder Layer 20 2.2.2.1 Interpolation 20 2.2.2.2 Preprocessing 22 2.2.3 Instruction Layer 23 2.2.3.1 Speech Recognition 24 2.2.3.2 Conversion 24 2.2.4 Neural Network Model 25 2.2.5 Post-processing Layer 26 2.2.5.1 Speed Adjusting Function 26 2.2.5.2 Halting Counter 27 Chapter 3 Training Data Set 29 3.1 Instruction 29 3.2 Basic Instruction Sets 32 3.3 Human-Controlled Navigating Records 36 3.3.1 Database 36 3.3.2 Teleoperation Program 37 3.4 Features and Advantages 39 Chapter 4 Training and Experiments 41 4.1 Training Models 41 4.1.1 Implementation 41 4.1.2 Validation 42 4.1.3 Monitors 43 4.2 Experiments and Results 45 4.2.1 Simulation 45 4.2.2 Real Environment 45 4.2.3 Interpolation 46 4.2.4 Comparisons 52 Chapter 5 Conclusions and Future Works 55 REFERENCE 57 VITA 62 | |
dc.language.iso | en | |
dc.title | 智慧服務機器人基於遞迴類神經網路進行未知室內語意導航之研究 | zh_TW |
dc.title | Unknown Indoor Semantic Navigation Based on Recursive Neural Network for Intelligent Service Robotics | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 康仕仲,顏炳郎 | |
dc.subject.keyword | 服務型機器人,移動型機器人,語意式導航,機器人深度學習,人機互動, | zh_TW |
dc.subject.keyword | service robotics, mobile robotics, semantic navigation, deep learning in robotics and automation, human-robot interaction | en |
dc.relation.page | 62 | |
dc.identifier.doi | 10.6342/NTU201703341 | |
dc.rights.note | Authorized for release (open access worldwide) | |
dc.date.accepted | 2017-08-17 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
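The abstract describes an instruction-conversion step: each verbal instruction is split into simple clauses, and each clause is mapped to one of ten basic instruction classes before being fed to the neural network. A minimal Python sketch of that idea follows; the class names and keyword rules here are illustrative assumptions, since the thesis's actual class definitions and conversion method are not reproduced in this record.

```python
import re

# Hypothetical names for the ten basic instruction classes
# (the thesis defines its own set, not shown in this record).
BASIC_CLASSES = [
    "go_straight", "turn_left", "turn_right", "u_turn", "enter_room",
    "leave_room", "follow_corridor", "cross_intersection", "approach", "stop",
]

# Illustrative keyword-to-class rules; a real system would use the
# trained conversion described in the thesis, not keyword matching.
KEYWORDS = {
    "straight": "go_straight",
    "forward": "go_straight",
    "left": "turn_left",
    "right": "turn_right",
    "around": "u_turn",
    "enter": "enter_room",
    "leave": "leave_room",
    "corridor": "follow_corridor",
    "intersection": "cross_intersection",
    "stop": "stop",
}

def split_instruction(text):
    """Split a compound verbal instruction into simple clauses."""
    parts = re.split(r",|;|\bthen\b|\band\b", text.lower())
    return [p.strip() for p in parts if p.strip()]

def classify(clause):
    """Map one simple clause to a basic class (default: go_straight)."""
    for word, cls in KEYWORDS.items():
        if word in clause:
            return cls
    return "go_straight"

instruction = "Go straight, then turn left and enter the second room"
classes = [classify(c) for c in split_instruction(instruction)]
print(classes)  # → ['go_straight', 'turn_left', 'enter_room']
```

Each resulting class would then select (or condition) the corresponding navigation behavior learned from the human-controlled records.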
Appears in Collections: | Department of Electrical Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-106-1.pdf | 2.58 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.