  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Department of Electrical Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/47104
Full metadata record
DC field | Value | Language
dc.contributor.advisor: 連豊力 (Feng-Li Lian)
dc.contributor.author: Pei-Yi Liu [en]
dc.contributor.author: 劉沛怡 [zh_TW]
dc.date.accessioned: 2021-06-15T05:47:41Z
dc.date.available: 2020-09-17
dc.date.copyright: 2010-08-20
dc.date.issued: 2010
dc.date.submitted: 2010-08-18
dc.identifier.citation: Books:
[1: Gonzalez and Woods 2008]
R. C. Gonzalez and R. E. Woods, “Digital Image Processing,” 3rd adapted ed., Editor: S. G. Miaou, Taiwan: Pearson, June 2008.
[2: Bradski and Kaehler 2008]
G. Bradski and A. Kaehler, “Learning OpenCV,” 1st ed., Editor: M. Loukides, U.S.A.: O’Reilly, pp. 144-158, Sep. 2008.
[3: Duda et al. 2000]
R. O. Duda, P. E. Hart, and D. G. Stork, “Pattern Classification,” 2nd ed., U.S.A.: John Wiley and Sons, Nov. 2000.
[4: Hartley and Zisserman 2004]
R. Hartley and A. Zisserman, “Multiple View Geometry in Computer Vision,” 2nd ed., U.K.: Cambridge University Press, Mar. 2004.

Papers:
[5: Yagi et al. 2005]
Y. Yagi, K. Imai, K. Tsuji, and M. Yachida, “Iconic Memory-based Omnidirectional Route Panorama Navigation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 1, pp. 78-87, Jan. 2005.
[6: DeSouza and Kak 2002]
G. N. DeSouza and A. C. Kak, “Vision for Mobile Robot Navigation: A Survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 2, pp. 237-267, Feb. 2002.
[7: Delahoche et al. 1997]
L. Delahoche, C. Pegard, B. Marhic, and P. Vasseur, “A Navigation System Based on an Omnidirectional Vision Sensor,” in Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems, Grenoble, France, Vol. 2, pp. 718-724, Sep. 1997.
[8: Zalzal and Cohen 2006]
V. Zalzal and P. Cohen, “Mutual Localization of Mobile Robotic Platforms using Kalman Filtering,” in Proceedings of the 32nd Annual Conference of the IEEE Industrial Electronics Society, IECON 2006, Paris, France, pp. 4540-4545, Nov. 2006.
[9: Drocourt et al. 1999]
C. Drocourt, L. Delahoche, C. Pegard, and C. Cauchois, “Localization Method Based on Omnidirectional Stereoscopic Vision and Dead-Reckoning,” in Proceedings of 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju, Korea, Vol. 2, pp. 960-965, Oct. 1999.
[10: Clerentin et al. 2001]
A. Clerentin, L. Delahoche, E. Brassart, and C. Pegard, “Omnidirectional Sensors Cooperation for Multi-Target Tracking,” in Proceedings of International Conference on Multisensor Fusion and Integration for Intelligent Systems, Baden-Baden, Germany, pp. 335-340, 2001.
[11: Matsumoto et al. 1999]
Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, “Visual Navigation using Omnidirectional View Sequence,” in Proceedings of 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju, Korea, Vol. 1, pp. 317-322, Oct. 1999.
[12: Krose et al. 2004]
B. Krose, R. Bunschoten, S. T. Hagen, B. Terwijn, and N. Vlassis, “Household Robots Look and Learn: Environment Modeling and Localization from an Omnidirectional Vision System,” IEEE Robotics and Automation Magazine, Vol. 11, No. 4, pp. 45-52, Dec. 2004.
[13: Gaspar et al. 2000]
J. Gaspar, N. Winters, and J. Santos-Victor, “Vision-Based Navigation and Environmental Representations with an Omnidirectional Camera,” IEEE Transactions on Robotics and Automation, Vol. 16, No. 6, pp. 890-898, Dec. 2000.
[14: Goedeme et al. 2007]
T. Goedeme, M. Nuttin, T. Tuytelaars, and L. V. Gool, “Omnidirectional Vision Based Topological Navigation,” International Journal of Computer Vision, Vol. 74, No. 3, pp. 219-236, Jan. 2007.
[15: Valgren et al. 2006]
C. Valgren, A. Lilienthal, and T. Duckett, “Incremental Topological Mapping using Omnidirectional Vision,” in Proceedings of 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, pp. 3441-3447, Oct. 2006.
[16: Bonev et al. 2007]
B. Bonev, M. Cazorla and F. Escolano, “Robot Navigation Behaviors based on Omnidirectional Vision and Information Theory,” Journal of Physical Agents, Vol. 1, No. 1, pp. 27-36, July 2007.
[17: Takiguchi et al. 2001]
J. Takiguchi, A. Takeya, K. Nishiguchi, H. Yano, T. Yamaishi, M. Iyoda, and T. Hashizume, “A Study of Autonomous Mobile System in Outdoor Environment. Part 5. Development of a Self-Positioning System with an Omnidirectional Vision System,” in Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), Vol. 2, pp. 1117-1123, Seoul, Korea, May 2001.
[18: Feng et al. 2008]
W. Feng, Y. Liu, and Z. Cao, “Omnidirectional Vision Tracking and Positioning for Vehicles,” in Proceedings of the ICNC’08 Fourth International Conference on Natural Computation, Vol. 6, pp. 183-187, Jinan, China, Oct. 2008.
[19: Baker and Nayar 1999]
S. Baker and S. K. Nayar, “A Theory of Single-Viewpoint Catadioptric Image Formation,” International Journal of Computer Vision, Vol. 35, No. 2, pp. 175-196, Nov.-Dec. 1999.
[20: Grassi and Okamoto 2006]
V. Grassi Jr. and J. Okamoto Jr., “Development of an omnidirectional vision system,” Journal of the Brazilian Society of Mechanical Sciences and Engineering, Vol. 28, No.1, pp. 58-68, Jan.-Mar. 2006.
[21: 周家至 et al. 2006]
周家至, 張津魁, 薛博文, and 程啟正, “A Study of Omnidirectional Vision Image Techniques Based on Optical Flow” (in Chinese), Proceedings of the 23rd National Conference of the Chinese Society of Mechanical Engineers, Tainan, Taiwan, Paper No. B7-002, Nov. 2006.
[22: Yagi and Yachida 2004]
Y. Yagi and M. Yachida, “Real-Time Omnidirectional Image Sensors,” International Journal of Computer Vision, Vol. 58, No. 3, pp. 173-207, July-Aug. 2004.
[23: Zhang and Kleeman 2009]
A. M. Zhang and L. Kleeman, “Robust Appearance Based Visual Route Following for Navigation in Large-scale Outdoor Environments,” International Journal of Robotics Research archive, Vol. 28, No. 3, pp. 331-356, Mar. 2009.
[24: Kato et al. 1998]
K. Kato, S. Tsuji, and H. Ishiguro, “Representing Environment through Target-Guided Navigation,” in Proceedings of the 14th International Conference on Pattern Recognition, Brisbane, Qld., Australia, Vol. 2, pp.1794-1798, Aug. 1998.
[25: Lopez-Franco and Bayro-Corrochano 2006]
C. Lopez-Franco and E. Bayro-Corrochano, “Omnidirectional Vision and Invariant Theory for Robot Navigation using Conformal Geometric Algebra,” in Proceedings of the 18th International Conference on Pattern Recognition, Hong Kong, China, Vol. 1, pp. 570-573, Aug. 2006.
[26: Cao et al. 2007]
Z. Cao, S. Liu and J. Roning, “Omnidirectional Vision Localization Based on Particle Filter,” in Proceedings of the 4th International Conference on Image and Graphics, Sichuan, China, pp. 478-483, Aug. 2007.
[27: Demonceaux et al. 2006]
C. Demonceaux, P. Vasseur, and C. Pegard, “Omnidirectional Vision on UAV for Attitude Computation,” in Proceedings of 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, U.S.A., pp. 2842-2847, May 2006.
[28: Matsumoto et al. 2000]
Y. Matsumoto, K. Ikeda, M. Inaba, and H. Inoue, “Exploration and Navigation in Corridor Environment Based on Omni-View Sequence,” in Proceedings of 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems, Takamatsu, Japan, Vol. 2, pp. 1505-1510, Oct.-Nov. 2000.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/47104
dc.description.abstract: Localization is an important topic in modern robotic applications. A robot localizes itself using information from internal or external sensors; a commonly used sensor is a vision system such as a camera. Compared with a conventional camera, an omnidirectional camera with a full field of view acquires complete environmental information more easily.
This thesis proposes a localization method that uses a single visual sensor. With the consecutive images captured by an omnidirectional camera mounted on a mobile robot, the robot can localize itself. The only initial condition required is the distances between the landmarks.
First, the bearing angles of the landmarks relative to the robot must be obtained from the omnidirectional images. The consecutive omnidirectional images are therefore geometrically transformed and sampled, and then composed in temporal order into a single trajectory map of the landmarks. The bearing angles read from this trajectory map serve as the input of the localization method, which estimates the robot's position. The basic principle of the method is the variation of the relative directions of three landmarks observed by the robot at different positions.
Simulation results for various robot moving paths demonstrate the feasibility of the localization method. The actual experimental results show better localization accuracy for larger rotation angles and longer translations, whereas small rotation angles and short translations produce larger errors, because the latter cases are more strongly affected by the camera not being mounted at the robot's center of rotation.
Under the limited condition that only the distances between landmarks are known, the vision-based localization method proposed in this thesis requires only the images captured by a single omnidirectional camera, without assistance from any other sensor.
[zh_TW]
dc.description.abstract: Localization is an important issue in the robotic field. Robots localize themselves using their internal or external sensor data. A common sensor is a vision system, such as a camera. To acquire complete environment information, the omnidirectional camera is the best choice of visual sensor for robot self-localization.
A vision-based localization method with a single visual sensor is proposed in this thesis. Robots can localize themselves from a series of omnidirectional images; the only required initial condition is the distances between the landmarks.
First, the landmark directions relative to the robot have to be obtained from the omnidirectional images; this direction information is the input of the localization method. The continuous omnidirectional images are therefore transformed into panoramic images, and a composite image, called the omnidirectional route panorama map, is composed from the sampled data of the panoramic images. The robot can then self-localize from the variation of the directions of three landmarks observed at different positions.
Simulation results for different moving routes demonstrate the correctness of the method. The experimental results show that cases with larger rotation angles and longer translations have higher accuracy, while cases with smaller rotation angles and shorter translations have lower accuracy. The latter results are more severely affected than the former by the fact that the camera center is not at the rotation center of the mobile robot.
With the limited initial conditions, namely the distances between the landmarks, the localization method proposed in this thesis requires only a single omnidirectional camera.
[en]
dc.description.provenance: Made available in DSpace on 2021-06-15T05:47:41Z (GMT). No. of bitstreams: 1
ntu-99-R97921014-1.pdf: 5922562 bytes, checksum: 789bf27018815a308a60344418d1162d (MD5)
Previous issue date: 2010
en
dc.description.tableofcontents: Abstract (in Chinese) i
ABSTRACT iii
Contents v
List of figures vii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Description 3
1.3 Contribution of the Thesis 4
1.4 Organization of the Thesis 5
Chapter 2 Literature Survey 6
2.1 Vision-Based Localization 6
2.2 Omnidirectional Vision-Based Localization 7
2.3 Summary 10
Chapter 3 Line Following Method 11
3.1 System Architecture 12
3.2 Line Extraction 13
3.3 Canny Edge Detection 14
3.4 Hough Line Transform 15
3.5 Motion Control Strategy 18
3.6 Summary 20
Chapter 4 Omnidirectional Vision-Based Localization Method 22
4.1 System Architecture 23
4.2 Omnidirectional Vision System 25
4.3 Omnidirectional Route Panorama Map 26
4.4 Analytic Geometry Relation between Landmarks and Robot 48
4.5 Localization Method 50
4.5.1 Analytic Geometry with a Sequence of Robot Positions 51
4.5.2 Rotation Angle of Different Local Coordinates 60
4.6 Summary 62
Chapter 5 Experimental Results and Analysis 63
5.1 Simulation Results and Analysis 64
5.1.1 Straight Lines 64
5.1.2 Translations and Rotations with known rotating angle 65
5.1.3 Translations and Rotations without known rotating angle 66
5.2 Experimental Results and Analysis 68
5.2.1 Scenarios 68
5.2.2 Straight Lines 71
5.2.3 Translations and Rotations with known rotating angle 76
5.2.4 Translations and Rotations without known rotating angle 78
5.2.5 Causes of Error 81
5.3 Summary 82
Chapter 6 Conclusions and Future Works 84
6.1 Conclusions 84
6.2 Future Works 85
References 86
dc.language.iso: en
dc.subject: 標的物追蹤 [zh_TW]
dc.subject: 全景影像軌跡 [zh_TW]
dc.subject: 全景視覺定位系統 [zh_TW]
dc.subject: 全景攝影機 [zh_TW]
dc.subject: omnidirectional camera [en]
dc.subject: vision localization system [en]
dc.subject: omnidirectional route panoramic [en]
dc.subject: landmark tracking [en]
dc.title: 利用靜態標的物全景影像軌跡資訊之室內定位系統 [zh_TW]
dc.title: Indoor Localization System using Omnidirectional Route Panoramic Information of Static Landmarks [en]
dc.type: Thesis
dc.date.schoolyear: 98-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 簡忠漢 (Jong-Hann Jean), 李後燦 (Hou-Tsan Lee)
dc.subject.keyword: 全景攝影機, 全景視覺定位系統, 全景影像軌跡, 標的物追蹤 [zh_TW]
dc.subject.keyword: omnidirectional camera, vision localization system, omnidirectional route panoramic, landmark tracking [en]
dc.relation.page: 92
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2010-08-19
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) [zh_TW]
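The localization principle stated in the abstracts above — estimating the robot pose from the bearing angles of three landmarks whose positions are fixed by the known inter-landmark distances — amounts to a classical three-point resection. The sketch below is an illustrative reconstruction under that reading, not the thesis's actual algorithm; the function names (`localize`, `wrap`, `solve3`), the Gauss-Newton formulation, and all parameters are assumptions of this example.

```python
import math

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def solve3(J, r):
    """Solve the 3x3 linear system J @ d = -r by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(J)
    b = [-v for v in r]
    d = []
    for k in range(3):
        # Replace column k of J with b and take the determinant ratio.
        M = [row[:] for row in J]
        for i in range(3):
            M[i][k] = b[i]
        d.append(det3(M) / D)
    return d

def localize(landmarks, bearings, guess=(0.0, 0.0, 0.0), iters=20):
    """Estimate the robot pose (x, y, theta) from the bearing angles
    (measured relative to the robot heading) of three landmarks with
    known positions, via Gauss-Newton iteration.

    Residual per landmark i:  wrap(atan2(ly - y, lx - x) - theta - b_i)
    """
    x, y, th = guess
    for _ in range(iters):
        J, r = [], []
        for (lx, ly), b in zip(landmarks, bearings):
            dx, dy = lx - x, ly - y
            r2 = dx * dx + dy * dy
            r.append(wrap(math.atan2(dy, dx) - th - b))
            # Partial derivatives of the residual w.r.t. (x, y, theta).
            J.append([dy / r2, -dx / r2, -1.0])
        ddx, ddy, dth = solve3(J, r)
        x, y, th = x + ddx, y + ddy, th + dth
    return x, y, th
```

With three landmarks the system is exactly determined, so the iteration behaves as Newton's method and converges quadratically for non-degenerate configurations (the robot not on the landmarks' circumscribed circle); the thesis additionally recovers the bearing inputs from the omnidirectional route panorama map rather than assuming them given.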
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)

Files in This Item:
File | Size | Format
ntu-99-1.pdf (not authorized for public access) | 5.78 MB | Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
