Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/22527

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 羅仁權 | |
| dc.contributor.author | Yu-Chih Lin | en |
| dc.contributor.author | 林瑜智 | zh_TW |
| dc.date.accessioned | 2021-06-08T04:19:58Z | - |
| dc.date.copyright | 2010-07-27 | |
| dc.date.issued | 2010 | |
| dc.date.submitted | 2010-07-20 | |
| dc.identifier.citation | [1] L. E. Parker and B. A. Emmons, "Cooperative multi-robot observation of multiple moving targets," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 1997), Albuquerque, New Mexico, April 1997.
[2] H. Kobayashi and M. Yanagida, "Moving object detection by an autonomous guard robot," Proceedings of the 4th IEEE International Workshop on Robot and Human Communication, Tokyo, July 5-7, 1995.
[3] http://www.alsok.co.jp/
[4] Y. Shimosasa, J. Kanemoto, K. Hakamada, H. Horii, T. Ariki, Y. Sugawara, F. Kojio, A. Kimura, and S. Yuta, "Some results of the test operation of a security service system with autonomous guard robot," 26th Annual Conference of the IEEE Industrial Electronics Society (IECON 2000), Nagoya, Japan, October 2000.
[5] Y. Shimosasa, J. Kanemoto, K. Hakamada, H. Horii, T. Ariki, Y. Sugawara, F. Kojio, A. Kimura, and S. Yuta, "Security service system using autonomous mobile robot," Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC 1999), Tokyo, Japan, 1999.
[6] T. Kajiwara, J. Yamaguchi, and J. Kanemoto, "A Security Guard Robot Which Patrols Map Information," Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and Systems (IROS 1989), Tsukuba, Japan, September 4-6, 1989.
[7] A. Bradshaw, "The UK Security and Fire Fighting Advanced Robot project," IEE Colloquium on Advanced Robotic Initiatives, UK, 1991.
[8] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237-267, February 2002.
[9] S. Thrun, "Robotic Mapping: A Survey," in Exploring Artificial Intelligence in the New Millennium, Morgan Kaufmann, February 2002.
[10] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, MIT Press, 2005.
[11] H. Choset, W. Burgard, S. Hutchinson, G. Kantor, L. E. Kavraki, K. Lynch, and S. Thrun, Principles of Robot Motion: Theory, Algorithms, and Implementation, MIT Press, April 2005.
[12] A. Elfes, Occupancy Grids: A Probabilistic Framework for Robot Perception and Navigation, Ph.D. thesis, Department of Electrical and Computer Engineering, Carnegie Mellon University, 1989.
[13] H. P. Moravec and A. Elfes, "High Resolution Maps from Wide Angle Sonar," Proceedings of the IEEE International Conference on Robotics and Automation, pp. 116-121, 1985.
[14] H. Choset and K. Nagatani, "Topological simultaneous localization and mapping (SLAM): toward exact localization without explicit localization," IEEE Transactions on Robotics and Automation, vol. 17, no. 2, pp. 125-137, 2001.
[15] H. Choset, I. Konukseven, and A. Rizzi, "Sensor based planning: a control law for generating the generalized Voronoi graph," Proceedings of the International Conference on Advanced Robotics, pp. 333-338, July 7-9, 1997.
[16] S. Se, D. Lowe, and J. Little, "Vision-based mobile robot localization and mapping using scale-invariant features," Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 2051-2058, 2001.
[17] S. Ahn and W. K. Chung, "Efficient SLAM algorithm with hybrid visual map in an indoor environment," International Conference on Control, Automation and Systems, pp. 663-667, October 17-20, 2007.
[18] Chih-Chung Chang and Chih-Jen Lin, LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
[19] P. E. Hart, N. J. Nilsson, and B. Raphael, "A Formal Basis for the Heuristic Determination of Minimum Cost Paths," IEEE Transactions on Systems Science and Cybernetics, 1968.
[20] R. Dechter and J. Pearl, "Generalized best-first search strategies and the optimality of A*," Journal of the ACM, 1985.
[21] R. Chatila and J. Laumond, "Position referencing and consistent world modeling for mobile robots," Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 138-145, March 1985.
[22] S. Engelson and D. McDermott, "Error Correction in Mobile Robot Map Learning," Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2555-2560, 1992.
[23] T. Watanabe and T. Oshitani, "Parallel recognition of roads from urban maps on generation/verification paradigm of hypotheses," Proceedings of the International Conference on Document Analysis and Recognition, pp. 1225-1234, 2001.
[24] S. Thrun, "Learning Occupancy Grid Maps with Forward Sensor Models," Computer Science Department, Stanford University, Stanford, CA 94305, USA.
[25] W. Burgard, D. Fox, D. Hennig, and T. Schmidt, "Estimating the Absolute Position of a Mobile Robot Using Position Probability Grids," Proceedings of the Fourteenth National Conference on Artificial Intelligence, Germany, 1996.
[26] S. Thrun and A. Bucken, "Integrating Grid-Based and Topological Maps for Mobile Robot Navigation," Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI), Portland, Oregon, August 1996.
[27] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, "Monte Carlo Localization for Mobile Robots," Proceedings of the International Conference on Robotics and Automation, 1999.
[28] B. Kuipers, J. Modayil, P. Beeson, M. MacMahon, and F. Savelli, "Local Metrical and Global Topological Maps in the Hybrid Spatial Semantic Hierarchy," Proceedings of the IEEE International Conference on Robotics and Automation, 2004.
[29] R. Sim, P. Elinas, M. Griffin, and J. J. Little, "Design and analysis of a framework for real-time vision-based SLAM using Rao-Blackwellised particle filters," The 3rd Canadian Conference on Computer and Robot Vision, 2006.
[30] M. Ballesta, A. Gil, O. Martinez, and M. O. Reinoso, "Local Descriptors for Visual SLAM."
[31] T. Lemaire, C. Berger, I.-K. Jung, and S. Lacroix, "Vision-Based SLAM: Stereo and Monocular Approaches," International Journal of Computer Vision, vol. 74, no. 3, pp. 343-364, 2007.
[32] R. Munguia and A. Grau, "Monocular SLAM for Visual Odometry," IEEE International Symposium on Intelligent Signal Processing, 2007.
[33] R. Ozawa, Y. Takaoka, Y. Kida, K. Nishiwaki, J. Chestnutt, J. Kuffner, S. Kagami, H. Mizoguchi, and H. Inoue, "Using Visual Odometry to Create 3D Maps for Online Footstep Planning," IEEE International Conference on Systems, Man and Cybernetics, 2005.
[34] Y. H. Choi and S. Y. Oh, "Grid-Based Visual SLAM in Complex Environments," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
[35] N. Karlsson, E. di Bernardo, J. Ostrowski, L. Goncalves, P. Pirjanian, and M. E. Munich, "The vSLAM Algorithm for Robust Localization and Mapping," Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005).
[36] G. Oriolo, G. Ulivi, and M. Vendittelli, "Real-Time Map Building and Navigation for Autonomous Robots in Unknown Environments," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 1998.
[37] D. Pangercic, R. B. Rusu, and M. Beetz, "3D-Based Monocular SLAM for Mobile Agents Navigating in Indoor Environments," IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2008).
[38] J. Ryu, Z. Deng, and T. Nishimura, "Robust Feature Detection for a Mobile Robot using a Multi-View Single Camera," IEEE/SICE International Symposium on System Integration, 2008.
[39] A. Kalay and I. Ulusoy, "Vision-Based Simultaneous Localization and Map Building: Stereo and Mono SLAM," IEEE 17th Signal Processing and Communications Applications Conference (SIU 2009).
[40] K. Celik, S. J. Chung, and A. Somani, "Mono-Vision Corner SLAM for Indoor Navigation," IEEE International Conference on Electro/Information Technology (EIT 2008).
[41] J.-P. Tardif, Y. Pavlidis, and K. Daniilidis, "Monocular Visual Odometry in Urban Environments Using an Omnidirectional Camera," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008).
[42] C. H. Chen and Y. P. Chan, "SIFT-based Monocular SLAM with Inverse Depth Parameterization for Robot Localization," IEEE Workshop on Advanced Robotics and Its Social Impacts (ARSO 2007).
[43] A. Gil, O. Reinoso, O. M. Mozos, C. Stachniss, and W. Burgard, "Improving Data Association in Vision-based SLAM," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
[44] E. Wu, W. Zhou, G. Dai, and Q. Wang, "Monocular Vision SLAM for Large Scale Outdoor Environment," International Conference on Mechatronics and Automation (ICMA 2009).
[45] F. Bertolli, P. Jensfelt, and H. I. Christensen, "SLAM using Visual Scan-Matching with Distinguishable 3D Points," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
[46] S. Se, D. Lowe, and J. Little, "Vision-based Mobile Robot Localization and Mapping using Scale-Invariant Features," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2001).
[47] X. Song, H. K. Lee, and H. Cho, "A Sensor Fusion Method for Mobile Robot Navigation," International Joint Conference SICE-ICASE, 2006.
[48] L. Shapiro and G. Stockman, Computer Vision, Prentice Hall, pp. 69-73, 2002.
[49] D. Abdelfatah, A. M. Hafez, and M. L. Rabeh, "An Algorithm for Vision-Based Navigation of an Indoor Mobile Robot," Information and Communication Technologies, Damascus, Syria, April, vol. 1, pp. 1800-1803.
[50] D. Franken and A. Hupper, "Improved Fast Covariance Intersection for Distributed Data Fusion," 7th International Conference on Information Fusion (FUSION), 2005. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/22527 | - |
| dc.description.abstract | 本論文所討論的是一種新的建立地圖及機器人導航的演算法,我們稱之為直覺式地圖擷取導航演算法(Intuitive Map Extraction Navigation algorithm, IMEN algorithm)。使用的機器人搭載有三種感測器:超音波陣列、視覺攝影機、及馬達的編碼器。機器人可以利用它們自主地辨識一個未探索過環境的樓層平面圖,並往目的地前進。首先,機器人透過攝影機擷取環境中已知位置的平面圖影像,透過影像處理、樣板比對演算法(Template Matching)、支持向量機(Support Vector Machine)來取得未探索環境的格子地圖(Grid Map)及各房間的相對關係,再透過修改過的A*演算法做路徑規劃,並規劃往操作者給定的目的地前進的路徑。其中,如何將與現實環境中不成比例的樓層平面圖化為機器人可用的機器人地圖,並藉由不準確的相對特徵點間關係來規劃路徑並進行導航及定位機器人的技術,是本論文探討的最大挑戰處。在實驗結果中,我們完成了門牌辨識及地圖辨識的演算法,並成功地導航機器人至目的地處,與傳統利用雷射測距儀花費長時間建立地圖的演算法相比,利用有領先知識的樓層平面圖,此演算法可以在一~兩分鐘之內完成地圖建立及路徑規劃。 | zh_TW |
| dc.description.abstract | The thesis presents a new algorithm for robot mapping and navigation, which we call the IMEN (Intuitive Map Extraction Navigation) algorithm. The experiments use a robot platform equipped with three kinds of sensors: a camera, an ultrasonic array, and wheel encoders. The robot recognizes the floor plan map of an unstructured environment and navigates itself to the destination, localizing its own pose along the way. First, it captures the floor plan image with the camera and obtains a grid map and features through image processing, template matching, and a Support Vector Machine. Then, a modified A* algorithm plans a path to the destination given by the operator. The most challenging problem is how to extract a usable map and plan a path from a floor plan that is not drawn to scale, and how to localize the robot from the resulting inaccurate relative feature positions. In our experimental results, we completed the room-plate recognition and map recognition algorithms, and the robot was navigated to the destination successfully. Compared with traditional SLAM mapping, which takes a long time, the robot exploits prior knowledge from the floor plan of the unstructured environment, and the algorithm can complete map building and path planning within one to two minutes. (A minimal illustrative sketch of grid-based A* planning appears after the metadata table below.) | en |
| dc.description.provenance | Made available in DSpace on 2021-06-08T04:19:58Z (GMT). No. of bitstreams: 1 ntu-99-R97921044-1.pdf: 5725725 bytes, checksum: 57538d47ef9d0833dde978410a394709 (MD5) Previous issue date: 2010 | en |
| dc.description.tableofcontents | 誌謝 i
中文摘要 iii
Abstract iv
Table of Contents v
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
1.1. Thesis architecture 1
1.2. Motivation 2
1.3. Objective 4
1.4. Literature review 6
Chapter 2 Principal Theories 13
2.1. Kinematic model 13
2.2. Image processing 15
2.3. Pattern recognition 30
2.4. Path planning 34
Chapter 3 IMEN Algorithm (Intuitive Map Extraction Navigation) 36
3.1. Problem statement 36
3.2. Map recognition 38
3.3. Landmark recognition 57
3.4. Robot navigation and localization 68
Chapter 4 Sensor Fusion 73
4.1. Introduction 73
4.2. Covariance intersection algorithm 75
4.3. Obstacle detection by sensor fusion 82
Chapter 5 Experimental Results 93
5.1. Robot platform 93
5.2. Experimental environment 101
5.3. Experimental results 102
Chapter 6 Conclusions & Contributions 110
6.1. Conclusions 112
6.2. Contributions 114
Reference 116 | |
| dc.language.iso | en | |
| dc.subject | 感測器融合 | zh_TW |
| dc.subject | 機器人導航 | zh_TW |
| dc.subject | 支持向量機 | zh_TW |
| dc.subject | 同步建地圖及定位(SLAM) | zh_TW |
| dc.subject | SLAM | en |
| dc.subject | Support Vector Machine | en |
| dc.subject | Sensor fusion | en |
| dc.subject | Robot navigation | en |
| dc.title | 自走機器人應用樓層平面圖資訊於未建構環境下實現導航及定位 | zh_TW |
| dc.title | Mobile Robot Navigation and Localization Based on Floor Plan Map Information under Unknown environment | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 98-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 張帆人,蘇國嵐 | |
| dc.subject.keyword | 機器人導航,同步建地圖及定位(SLAM),感測器融合,支持向量機, | zh_TW |
| dc.subject.keyword | Robot navigation,SLAM,Sensor fusion,Support Vector Machine, | en |
| dc.relation.page | 120 | |
| dc.rights.note | 未授權 | |
| dc.date.accepted | 2010-07-21 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| Appears in Collections: | 電機工程學系 | |
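The abstract above outlines the IMEN pipeline: a floor plan image is converted into an occupancy grid map, and a modified A* search plans a path over that grid. The record does not describe the thesis's specific modification, so the following is only a minimal sketch of plain A* on a small binary grid; the function name `astar`, the 4-connected moves, the Manhattan heuristic, and the toy `floor` map are illustrative assumptions, not the thesis's implementation.

```python
import heapq
from itertools import count


def astar(grid, start, goal):
    """Plain A* over a 2D occupancy grid (0 = free, 1 = occupied).

    Cells are (row, col) tuples; moves are 4-connected with unit cost and a
    Manhattan-distance heuristic, which is admissible for this move set.
    """
    rows, cols = len(grid), len(grid[0])

    def heuristic(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = count()                        # breaks ties so the heap never compares cells
    open_set = [(heuristic(start), next(tie), start)]
    came_from = {start: None}            # parent pointers for path reconstruction
    g_cost = {start: 0}                  # best known cost from start to each cell

    while open_set:
        _, _, current = heapq.heappop(open_set)
        if current == goal:
            path = []
            while current is not None:   # walk parent pointers back to the start
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_cost[current] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = current
                    heapq.heappush(open_set, (ng + heuristic(nxt), next(tie), nxt))
    return None                          # goal unreachable on this grid


if __name__ == "__main__":
    # Hypothetical 3x5 grid standing in for a thresholded floor plan.
    floor = [[0, 0, 0, 1, 0],
             [1, 1, 0, 1, 0],
             [0, 0, 0, 0, 0]]
    print(astar(floor, (0, 0), (0, 4)))
```

On the toy map the planner routes around the wall column and returns the cell sequence from start to goal; in the thesis's setting the grid would instead come from the recognized floor plan.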
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-99-1.pdf (Restricted Access) | 5.59 MB | Adobe PDF |
