NTU Theses and Dissertations Repository › 電機資訊學院 (College of Electrical Engineering and Computer Science) › 電機工程學系 (Department of Electrical Engineering)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70299
Full metadata record (DC field: value, with language tag where recorded):
dc.contributor.advisor: 羅仁權
dc.contributor.author: Wei Shih (en)
dc.contributor.author: 石崴 (zh_TW)
dc.date.accessioned: 2021-06-17T04:25:28Z
dc.date.available: 2018-08-16
dc.date.copyright: 2018-08-16
dc.date.issued: 2018
dc.date.submitted: 2018-08-15
dc.identifier.citation:
[1] S. Zagoruyko and N. Komodakis, “Learning to compare image patches via convolutional neural networks,” in Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pp. 4353–4361, IEEE, 2015.
[2] G. Grisetti, C. Stachniss, and W. Burgard, “Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling,” in Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on, pp. 2432–2437, IEEE, 2005.
[3] G. Grisetti, C. Stachniss, and W. Burgard, “Improved techniques for grid mapping with Rao-Blackwellized particle filters,” IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34–46, 2007.
[4] W. Hess, D. Kohler, H. Rapp, and D. Andor, “Real-time loop closure in 2D LIDAR SLAM,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on, pp. 1271–1278, IEEE, 2016.
[5] D. Ball, S. Heath, J. Wiles, G. Wyeth, P. Corke, and M. Milford, “OpenRatSLAM: an open source brain-based SLAM system,” Autonomous Robots, vol. 34, no. 3, pp. 149–176, 2013.
[6] M. Milford and G. Wyeth, “Persistent navigation and mapping using a biologically inspired SLAM system,” The International Journal of Robotics Research, vol. 29, no. 9, pp. 1131–1153, 2010.
[7] M. Labbé and F. Michaud, “Online global loop closure detection for large-scale multi-session graph-based SLAM,” in Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pp. 2661–2666, IEEE, 2014.
[8] M. Labbé and F. Michaud, “Appearance-based loop closure detection for online large-scale and long-term operation,” IEEE Transactions on Robotics, vol. 29, no. 3, pp. 734–745, 2013.
[9] R. C. Luo, V. W. Ee, and C.-K. Hsieh, “3D point cloud based indoor mobile robot in 6-DoF pose localization using fast scene recognition and alignment approach,” in Multisensor Fusion and Integration for Intelligent Systems (MFI), 2016 IEEE International Conference on, pp. 470–475, IEEE, 2016.
[10] D. Fassbender, M. Kusenbach, and H.-J. Wuensche, “Landmark-based navigation in large-scale outdoor environments,” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pp. 4445–4450, IEEE, 2015.
[11] A. Mousavian, J. Košecká, and J.-M. Lien, “Semantically guided location recognition for outdoors scenes,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 4882–4889, IEEE, 2015.
[12] T. Gokhool, R. Martins, P. Rives, and N. Despré, “A compact spherical RGBD keyframe-based representation,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 4273–4278, IEEE, 2015.
[13] R. Arroyo, P. F. Alcantarilla, L. M. Bergasa, and E. Romera, “Towards life-long visual localization using an efficient matching of binary sequences from images,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 6328–6335, IEEE, 2015.
[14] E. Garcia-Fidalgo and A. Ortiz, “Vision-based topological mapping and localization methods: A survey,” Robotics and Autonomous Systems, vol. 64, pp. 1–20, 2015.
[15] F. Fraundorfer, C. Engels, and D. Nistér, “Topological mapping, localization and navigation using image collections,” in Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, pp. 3872–3877, IEEE, 2007.
[16] A. Pronobis and P. Jensfelt, “Large-scale semantic mapping and reasoning with heterogeneous modalities,” in Robotics and Automation (ICRA), 2012 IEEE International Conference on, pp. 3515–3522, IEEE, 2012.
[17] B. C. Akdeniz and H. I. Bozma, “Exploration and topological map building in unknown environments,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, pp. 1079–1084, IEEE, 2015.
[18] D. W. Ko, Y. N. Kim, J. H. Lee, and I. H. Suh, “A scene-based dependable indoor navigation system,” in Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, pp. 1530–1537, IEEE, 2016.
[19] N. Sünderhauf, F. Dayoub, S. McMahon, B. Talbot, R. Schulz, P. Corke, G. Wyeth, B. Upcroft, and M. Milford, “Place categorization and semantic mapping on a mobile robot,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on, pp. 5729–5736, IEEE, 2016.
[20] F. Blöchliger, M. Fehr, M. Dymczyk, T. Schneider, and R. Siegwart, “Topomap: Topological mapping and navigation based on visual SLAM maps,” arXiv preprint arXiv:1709.05533, 2017.
[21] R. Drouilly, P. Rives, and B. Morisset, “Hybrid metric-topological-semantic mapping in dynamic environments,” in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pp. 5109–5114, IEEE, 2015.
[22] R. C. Luo and C.-J. Chen, “Recursive neural network based semantic navigation of an autonomous mobile robot through understanding human verbal instructions,” in Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pp. 1519–1524, IEEE, 2017.
[23] K. Konolige, E. Marder-Eppstein, and B. Marthi, “Navigation in hybrid metric-topological maps,” in Robotics and Automation (ICRA), 2011 IEEE International Conference on, pp. 3041–3047, IEEE, 2011.
[24] B. Talbot, O. Lam, R. Schulz, F. Dayoub, B. Upcroft, and G. Wyeth, “Find my office: Navigating real space from semantic descriptions,” in Robotics and Automation (ICRA), 2016 IEEE International Conference on, pp. 5782–5787, IEEE, 2016.
[25] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[26] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” in European Conference on Computer Vision, pp. 404–417, Springer, 2006.
[27] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[28] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[29] D. Fox, W. Burgard, F. Dellaert, and S. Thrun, “Monte Carlo localization: Efficient position estimation for mobile robots,” in AAAI/IAAI, pp. 343–349, 1999.
[30] D. Fox, W. Burgard, and S. Thrun, “The dynamic window approach to collision avoidance,” IEEE Robotics & Automation Magazine, vol. 4, no. 1, pp. 23–33, 1997.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70299
dc.description.abstract: Mobility is an essential capability for a service robot. For example, we may ask a service robot to deliver items or fetch objects from another room; in a hotel, guests can order meals from their rooms, and the service robot must carry the meal from the kitchen to the designated room; in a hospital, service robots must help transport chemicals or medicine containers. We can imagine that many service robots will operate in public spaces in the future; in these scenarios, a service robot must understand its environment and be capable of autonomous navigation. Current navigation methods mostly rely on metric maps, which are generally built by SLAM (Simultaneous Localization and Mapping) methods. A robot can use a metric map to plan the shortest path to a destination. However, these automatically planned paths are not necessarily the ones humans find most acceptable: some paths may bring the robot too close to corners or pass through areas we do not want the robot to enter. In other words, a metric map records only whether space is occupied by obstacles; human-preferred paths are not recorded. This drawback becomes apparent as soon as we want to deploy a service robot quickly in a given environment. In addition, metric maps lack semantic information. A metric map generally consists of many precise coordinates; when we want the robot to go somewhere, we must provide that destination's coordinates. This is like giving a person latitude and longitude when they want to go to a place. The robot will then try its best to reach the coordinate, so if an obstacle blocks it from reaching the exact goal point, the robot keeps believing it has not arrived, even if it is already at the intended place, such as the "kitchen". To exploit the advantages of both metric and topological maps, we propose a hybrid map in which the topological map records semantics and the metric map is used for navigation. We store the images the robot sees in the topological map and use them for robot localization, and the paths the robot travels during mapping are recorded as edges of the topological map, so that during subsequent navigation the robot can plan paths humans find more acceptable. We use a neural network to compare the robot's current view with the images stored in the topological map, and we propose an image-based particle filter that produces a "semantic pose", giving the robot more flexibility in localization. The topological map and the semantic pose reduce navigation time and raise the navigation success rate. We tested our algorithm in an 800-square-meter indoor environment, recording elapsed time and success rate; the experimental results show a success rate 16% higher than that of traditional navigation, indicating that our method enables more robust robot navigation. (zh_TW)
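The mapping scheme the abstract describes, where topological nodes hold images captured during mapping and the robot's actually-driven paths become edges, can be sketched as a small graph structure with shortest-path planning restricted to those recorded edges. This is an illustrative sketch only, not the thesis's implementation; all class, method, and node names here are hypothetical.

```python
import heapq

class TopoNode:
    """A place in the topological map, holding image references captured at mapping time."""
    def __init__(self, name, images=None):
        self.name = name
        self.images = images or []

class TopoMap:
    """Topological layer: nodes are places, edges are paths the robot actually drove."""
    def __init__(self):
        self.nodes = {}
        self.edges = {}              # name -> list of (neighbor, cost)

    def add_node(self, name, images=None):
        self.nodes[name] = TopoNode(name, images)
        self.edges.setdefault(name, [])

    def add_edge(self, a, b, cost):
        # Undirected: a path driven during mapping is assumed acceptable both ways.
        self.edges[a].append((b, cost))
        self.edges[b].append((a, cost))

    def shortest_path(self, start, goal):
        """Dijkstra over recorded edges only, so plans stay on human-driven routes."""
        dist = {start: 0.0}
        prev = {}
        pq = [(0.0, start)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, c in self.edges[u]:
                nd = d + c
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
        if goal != start and goal not in prev:
            return None                      # no recorded route reaches the goal
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]
```

Because planning never leaves the recorded edges, a direct but never-driven shortcut (e.g. cutting across an open area) is simply not available to the planner, which is the behavior the abstract argues for.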
dc.description.abstract: The ability to navigate is a necessity for service robots. For example, we may ask a service robot to deliver objects or fetch something from another room. In hotels, guests may order meals to their rooms, and the service robot must carry the meal from the kitchen to the room. In hospitals, service robots need to help deliver medicine or chemical containers. We can imagine service robots around us in every public area in the future. In these scenarios, service robots need to understand their environments and navigate to destinations safely and robustly. Navigation methods today are mainly based on metric maps, which are normally generated by SLAM (Simultaneous Localization and Mapping) methods. Given a metric map, a robot can easily plan the shortest path to a destination. However, these planned paths may not be the ones humans prefer: some may pass too close to corners or cut through unwanted areas. In other words, metric maps record only space-occupancy information; available and human-preferred paths are not included. This is a disadvantage when we want a service robot to be deployed quickly in an indoor environment. Moreover, metric maps lack semantic meaning. They are normally composed of precise coordinates, so when we want a service robot to navigate, we must also give it a set of coordinates. This is like providing longitude and latitude when someone wants to go to a place: semantic meaning is ignored, which is unsuitable for intrinsic understanding. Without semantic meaning, a service robot may struggle to reach a precise coordinate. For example, if we want the robot to go to the "living room" using a metric map, we must provide a coordinate, and the robot will try its best to reach it. If an obstacle stops the robot from reaching that exact coordinate, the robot will judge that it has not reached the goal, even if it is already in the "living room". Although metric maps are rich in detail and well suited to navigation, it is hard to label abstract concepts on them; for instance, it is difficult to define an appropriate region on a metric map to represent a "living room". On the other hand, topological maps can store arbitrary data in their nodes, such as images or object labels, which makes it easier to integrate semantic meaning into the map. Nevertheless, topological maps are not precise enough: compared with metric maps they cannot express exact spatial information, and it is almost impossible to navigate with a topological map alone. To take advantage of both, we propose a hybrid metric-topological map. In this hybrid map, the topological map records semantic meaning, and the metric map is used for collision avoidance. We store images in topological nodes and use them for robot localization. The paths traveled during the mapping stage are recorded as topological edges, which helps the robot navigate in a more human-preferred way in the future. We use a neural network to compare the robot's current view with the images stored in the topological map and generate similarity values, and we propose an image-based particle filter that generates a "semantic pose", which gives the localization result more flexibility. With the help of the topological map and the semantic pose, our proposed method shortens navigation time and achieves a higher success rate. We tested our algorithm in an 800-square-meter indoor environment, recording the elapsed time and success rate for the five paths in the environment. The experimental results show that the navigation success rate of our proposed method exceeds that of traditional navigation methods by about 16%, indicating that our method helps service robots navigate more robustly. (en)
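The image-based particle filter and the resulting "semantic pose" described above can be illustrated with a toy predict/update/resample cycle whose particles live on topological nodes rather than metric coordinates. This is a minimal sketch under assumed details (a fixed per-step drift probability, the CNN similarity score abstracted to a lookup table); the function name and parameters are hypothetical and do not reproduce the thesis's actual code.

```python
import random
from collections import Counter

def semantic_localize(particles, similarity, neighbors, rng):
    """One predict/update/resample cycle of an image-based particle filter
    over topological nodes.
    particles:  list of node names (location hypotheses)
    similarity: node -> similarity of the current camera view to the node's stored image
    neighbors:  node -> adjacent nodes in the topological map (crude motion model)
    rng:        a random.Random instance, seedable for reproducibility
    """
    # Predict: a particle may have drifted one edge since the last cycle.
    moved = [rng.choice(neighbors[p]) if neighbors[p] and rng.random() < 0.3 else p
             for p in particles]
    # Update: weight each hypothesis by how well the live image matches that node.
    weights = [max(similarity.get(p, 0.0), 1e-6) for p in moved]
    # Resample: draw a new particle set proportionally to the weights.
    resampled = rng.choices(moved, weights=weights, k=len(moved))
    # The "semantic pose" is the node carrying the most particle mass --
    # a place label ("kitchen") rather than an exact coordinate.
    pose, _ = Counter(resampled).most_common(1)[0]
    return pose, resampled
```

The point of returning a node label instead of a coordinate is exactly the flexibility the abstract argues for: reaching the goal means the particle mass concentrates on the goal node, not that the robot stands on one precise point.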
dc.description.provenance: Made available in DSpace on 2021-06-17T04:25:28Z (GMT). No. of bitstreams: 1; ntu-107-R05921004-1.pdf: 28086427 bytes, checksum: 610160d1bb4d3098013b53da4645220e (MD5); Previous issue date: 2018 (en)
dc.description.tableofcontents: Acknowledgements iii
Chinese Abstract v
ABSTRACT vii
LIST OF FIGURES xi
LIST OF TABLES xvi
1 Introduction 1
1.1 Objective 1
1.2 Background 2
1.3 Problem Statement 3
1.4 Thesis Organization 6
2 Penguin Shaped Warehouse Robot and Renbo-S Service Robot 9
2.1 Introduction 9
2.2 Mechanism 10
2.2.1 Motors and Controllers 10
2.2.2 Mainboard and Sensors 12
2.2.3 Power Supply System 14
2.3 Software Architecture 16
2.3.1 Robot Operating System 16
2.3.2 TF and URDF Robot Model Expression 19
2.3.3 ROS Control Library 21
2.3.4 Gazebo Simulations 23
3 Real Time Control Environment 27
3.1 EtherCAT Communication 27
3.2 IGH EtherCAT Library 29
3.3 SOEM EtherCAT Library 30
3.4 Xenomai Real Time Environment 32
4 Intrinsic Navigation for Autonomous Mobile Service Robot 35
4.1 System Overview 35
4.2 Mapping Stage 36
4.2.1 Metric Maps and Topological Maps 36
4.2.2 Gmapping 38
4.2.3 Constant Distance Mapping 38
4.3 Image Comparison Mechanisms 40
4.3.1 Introduction 40
4.3.2 SIFT and SURF Features 41
4.3.3 AlexNet and ResNet 44
4.3.4 Convolutional Neural Networks Image Comparisons 45
4.4 Semantic Pose Localization 45
4.4.1 Introduction 45
4.4.2 Similarity Model 47
4.4.3 Particle Filter 48
4.4.4 Image Based Particle Filter 49
4.5 Navigation Process 51
4.5.1 Shortest Path Navigation 51
4.5.2 DWA Local Planner 53
4.5.3 Auxiliary Mechanisms 54
5 Experimental Results and Discussion 57
5.1 Environment I: Floor 1 in iCeiRA Laboratory 57
5.2 Environment II: Floor 3 in iCeiRA Laboratory 58
5.3 Approach I: Warehouse Robot Pure Topological Map Navigation 60
5.4 Approach II: Warehouse Robot Hybrid Map Navigation 63
5.5 Approach III: Renbo-S Hybrid Map Navigation 66
6 Conclusion, Contributions and Future Works 69
Bibliography 71
VITA 75
dc.language.iso: en
dc.subject: 捲積類神經網路 (zh_TW)
dc.subject: 自主移動機器人 (zh_TW)
dc.subject: 服務型機器人 (zh_TW)
dc.subject: 拓樸式地圖 (zh_TW)
dc.subject: 視覺式導航 (zh_TW)
dc.subject: Autonomous Mobile Robot (en)
dc.subject: Service Robot (en)
dc.subject: Topological Map (en)
dc.subject: Visual Based Navigation (en)
dc.subject: Convolutional Neural Network (en)
dc.title: 結合視覺與雷射建立拓樸式地圖應用於自主移動服務型機器人直覺式導航 (zh_TW)
dc.title: Hybrid Visual/Laser Range Finding in Topological Map Generation for Intrinsic Navigation of an Autonomous Mobile Service Robot (en)
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: Master (碩士)
dc.contributor.oralexamcommittee: 張帆人, 顏炳郎
dc.subject.keyword: 自主移動機器人, 服務型機器人, 拓樸式地圖, 視覺式導航, 捲積類神經網路 (zh_TW)
dc.subject.keyword: Autonomous Mobile Robot, Service Robot, Topological Map, Visual Based Navigation, Convolutional Neural Network (en)
dc.relation.page: 75
dc.identifier.doi: 10.6342/NTU201803482
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2018-08-15
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) (zh_TW)
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)

Files in This Item:
File | Size | Format
ntu-107-1.pdf (not authorized for public access) | 27.43 MB | Adobe PDF

