NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64428
Full metadata record (DC field: value, language):
dc.contributor.advisor: 連豊力
dc.contributor.author: Chun-An Liang (en)
dc.contributor.author: 梁俊安 (zh_TW)
dc.date.accessioned: 2021-06-16T17:46:36Z
dc.date.available: 2012-08-18
dc.date.copyright: 2012-08-18
dc.date.issued: 2012
dc.date.submitted: 2012-08-13
dc.identifier.citation:
[1: Fiala & Ufkes 2011]
M. Fiala and A. Ufkes, “Visual odometry using 3-dimensional video input,” in Proceedings of Canadian Conference on Computer and Robot Vision, St. John's, Canada, pp. 86-93, May 25-27, 2011
[2: Kitt et al. 2010]
B. Kitt, A. Geiger, and H. Lategahn, “Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme,” in Proceedings of IEEE Intelligent Vehicles Symposium, San Diego, USA, pp. 486-492, Jun. 21-24, 2010
[3: Paz et al. 2008]
L. M. Paz, P. Pinies, J. D. Tardos, and J. Neira, “Large-scale 6-DOF SLAM with stereo-in-hand,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 946-957, Oct. 2008
[4: Milella & Siegwart 2006]
A. Milella and R. Siegwart, “Stereo-based ego-motion estimation using pixel tracking and Iterative Closest Point,” in Proceedings of IEEE International Conference on Computer Vision Systems, New York City, USA, p. 21, Jan. 04-07, 2006
[5: Ericson & Astrand 2008]
S. Ericson and B. Astrand, “Stereo visual odometry for mobile robots on uneven terrain,” in Proceedings of Advances in Electrical and Electronics Engineering Special Edition of the World Congress on Engineering and Computer Science, San Francisco, USA, pp. 150-157, Oct. 22-24, 2008
[6: Piyathilaka & Munasinghe 2010]
L. Piyathilaka and R. Munasinghe, “Multi-camera visual odometry for skid steered field robot,” in Proceedings of International Conference on Information and Automation for Sustainability, Colombo, Sri Lanka, pp. 189-194, Dec. 17-19, 2010
[7: Nourani-Vatani et al. 2009]
N. Nourani-Vatani, J. Roberts, and M. V. Srinivasan, “Practical visual odometry for car-like vehicles,” in Proceedings of IEEE International Conference on Robotics and Automation, Kobe, Japan, pp. 3551-3557, May 12-17, 2009
[8: Pagel 2009]
F. Pagel, “Robust monocular egomotion estimation based on an IEKF,” in Proceedings of Canadian Conference on Computer and Robot Vision, Kelowna, Canada, pp. 213-220, May 25-27, 2009
[9: Scaramuzza et al. 2009]
D. Scaramuzza, F. Fraundorfer, and R. Siegwart, “Real-time monocular visual odometry for on-road vehicles with 1-point RANSAC,” in Proceedings of IEEE International Conference on Robotics and Automation, Kobe, Japan, pp. 4293-4299, May 12-17, 2009
[10: Scaramuzza & Siegwart 2008]
D. Scaramuzza and R. Siegwart, “Appearance-guided monocular omnidirectional visual odometry for outdoor ground vehicles,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1015-1026, Oct. 2008
[11: Nakada et al. 2010]
T. Nakada, T. Ohkubo, K. Kobayashi, K. Watanabe, and Y. Kurihara, “A study of visual odometry for mobile robots using omnidirectional camera,” in Proceedings of SICE Annual Conference, Taipei, Taiwan, pp. 2957-2959, Aug. 18-21, 2010
[12: Badino 2004]
H. Badino, “A robust approach for ego-motion estimation using a mobile stereo platform,” in Proceedings of International Workshop on Complex Motion, Günzburg, Germany, pp. 198-208, Oct. 12-14, 2004
[13: Campbell et al. 2005]
J. Campbell, R. Sukthankar, I. Nourbakhsh, and A. Pahwa, “A robust visual odometry and precipice detection system using consumer-grade monocular vision,” in Proceedings of IEEE International Conference on Robotics and Automation, Barcelona, Spain, pp. 3421-3427, Apr. 18-22, 2005
[14: Koch et al. 2010]
O. Koch, M. R. Walter, A. S. Huang, and S. Teller, “Ground robot navigation using uncalibrated cameras,” in Proceedings of IEEE International Conference on Robotics and Automation, Anchorage, Alaska, pp. 2423-2430, May 3-7, 2010
[15: Scaramuzza & Fraundorfer 2011]
D. Scaramuzza and F. Fraundorfer, “Visual odometry [tutorial],” IEEE Robotics and Automation Magazine, vol. 18, no. 4, pp. 80-92, Dec. 2011
[16: Harris & Stephens 1988]
C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of Alvey Vision Conference, Manchester, United Kingdom, pp. 147-151, Aug. 31-Sep. 2, 1988
[17: Lowe 1999]
D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of IEEE International Conference on Computer Vision, Kerkyra, Corfu, Greece, vol. 2, pp. 1150-1157, Sep. 20-27, 1999
[18: Lucas & Kanade 1981]
B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of International Joint Conference on Artificial Intelligence, Vancouver, Canada, pp. 674-679, Aug. 24-28, 1981
[19: Tomasi & Kanade 1991]
C. Tomasi and T. Kanade, “Detection and tracking of point features,” Carnegie Mellon University, Pittsburgh, PA, Tech. Rep. CMU-CS-91-132, Apr. 1991
[20: Bay et al. 2008]
H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “SURF: speeded up robust features,” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, Sep. 2008
[21: Fraundorfer & Scaramuzza 2012]
F. Fraundorfer and D. Scaramuzza, “Visual odometry: part II: matching, robustness, optimization, and applications,” IEEE Robotics and Automation Magazine, vol. 19, no. 2, pp. 78-90, Jun. 2012
[22: Tamura et al. 2009]
Y. Tamura, M. Suzuki, A. Ishii, and Y. Kuroda, “Visual odometry with effective feature sampling for untextured outdoor environment,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, USA, pp. 3492-3497, Oct. 10-15, 2009
[23: Zhu et al. 2007]
Z. Zhu, T. Oskiper, S. Samarasekera, R. Kumar, and H. S. Sawhney, “Ten-fold improvement in visual odometry using landmark matching,” in Proceedings of IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil, pp. 1-8, Oct. 14-21, 2007
[24: Nister et al. 2004]
D. Nister, O. Naroditsky, and J. Bergen, “Visual odometry,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, vol. 1, pp. I-652-I-659, June 27-July 2, 2004
[25: Cheng et al. 2006]
Y. Cheng, M. W. Maimone, and L. Matthies, “Visual odometry on the Mars exploration rovers - a tool to ensure accurate driving and science imaging,” IEEE Robotics and Automation Magazine, vol. 13, no. 2, pp. 54-62, Jun. 2006
[26: Lovegrove et al. 2011]
S. Lovegrove, A. J. Davison, and J. Ibanez-Guzman, “Accurate visual odometry from a rear parking camera,” in Proceedings of Intelligent Vehicles Symposium, Baden-Baden, Germany, pp. 788-793, Jun. 5-9, 2011
[27: Yu et al. 2011]
Y. Yu, C. Pradalier, and G. Zong, “Appearance-based monocular visual odometry for ground vehicles,” in Proceedings of IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Budapest, Hungary, pp. 862-867, Jul. 3-7, 2011
[28: Kazik & Goktogan 2011]
T. Kazik and A. H. Goktogan, “Visual odometry based on the Fourier-Mellin transform for a rover using a monocular ground-facing camera,” in Proceedings of IEEE International Conference on Mechatronics (ICM), Istanbul, Turkey, pp. 469-474, Apr. 13-15, 2011
[29: Zaman 2007]
M. Zaman, “High precision relative localization using a single camera,” in Proceedings of IEEE International Conference on Robotics and Automation, Roma, Italy, pp. 3908-3914, Apr. 10-14, 2007
[30: Fischler & Bolles 1981]
M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381-395, Jun. 1981
[31: Kaess et al. 2009]
M. Kaess, K. Ni, and F. Dellaert, “Flow separation for fast and robust stereo odometry,” in Proceedings of IEEE International Conference on Robotics and Automation, Kobe, Japan, pp. 3539-3544, May 12-17, 2009
[32: Munguia & Grau 2007]
R. Munguia and A. Grau, “Monocular SLAM for visual odometry,” in Proceedings of IEEE International Symposium on Intelligent Signal Processing, Alcala de Henares, Spain, pp. 1-6, Oct. 3-5, 2007
[33: Spence et al. 2005]
L. E. Spence, A. J. Insel, and S. H. Friedberg, Elementary Linear Algebra, Taipei, Taiwan: Pearson Education Taiwan Ltd., 2005, pp. 381-383
[34: Duda et al. 2001]
R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed., New York, NY: Wiley-Interscience, 2001, pp. 161-174
[35: Crowley & Reignier 1992]
J. L. Crowley and P. Reignier, “Asynchronous control of rotation and translation for a robot vehicle,” Robotics and Autonomous Systems, vol. 10, pp. 243-251, 1992
[36: OpenSURF for Matlab]
OpenSURF (including Image Warp), [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/28300-opensurf-including-image-warp (accessed 2011/10/14)
[37: Harris Corner Detector for Matlab]
Harris Corner Detector in Matlab, [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/9272-harris-corner-detector (accessed 2012/03/17)
[38: Euclidean Distance]
Euclidean distance in Wikipedia, [Online]. Available: http://en.wikipedia.org/wiki/Euclidean_distance (accessed 2012/07/15)
[39: Camera Calibration Toolbox for Matlab]
Camera Calibration Toolbox for Matlab, [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib_doc/ (accessed 2011/11/03)
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/64428
dc.description.abstract: For mobile robots, ego-motion estimation and trajectory reconstruction are two key problems in localizing themselves within their operational environments. Many kinds of sensors and techniques are used for robot localization, such as wheel encoders, inertial measurement units (IMU), GPS, laser range finders (LRF), and visual sensors. Compared with other sensors, visual sensors provide information-rich observations of the environment at low cost, which makes them a good option for robot localization. This thesis proposes two visual odometry methods that use ground image sequences. In the first method, the image sequence is captured by a well-calibrated monocular camera. Because the geometric relationship between the ground and the camera can be reconstructed from the calibration results, the imaged scene can be back-projected onto the ground to recover its real-world positions. The proposed visual odometry method with a calibrated camera consists of three main steps. First, positional correspondences between two consecutive images are established by feature extraction and matching. The extracted features are then projected onto the ground plane. Finally, the robot motion is estimated with a Gaussian kernel density voting scheme that rejects outliers. The second method uses two un-calibrated cameras mounted on the lateral sides of the robot. Since the intrinsic and extrinsic camera parameters are assumed to be unknown, the geometric relationship between image coordinates and world coordinates is hard to obtain. To overcome this problem, only a small central patch of each frame is used to extract motion quantities, which reduces the effect of radial distortion and simplifies the situation to an ordinary wheel odometry problem. The proposed method with un-calibrated cameras consists of four steps. First, multiple motion vectors are extracted by block matching. Then, unreliable vectors are detected and discarded based on the spatial and temporal distribution of the motion vectors. Next, the vectors are normalized to the form required by the motion model. Finally, the motion at each frame is calculated and the trajectory is reconstructed. Both methods are validated by computer simulations and real-environment experiments. (en)
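To make the two pipelines concrete, the following is a minimal Python sketch written only from the abstract above; it is an illustration under stated assumptions, not the author's implementation, and every function name, parameter, and bandwidth value in it is hypothetical. The first part assumes a flat ground plane and a 3x3 image-to-ground homography H recovered from a calibration step such as [39]; Gaussian kernel density voting then selects the densest motion hypothesis, which is what suppresses outlier correspondences.

    import numpy as np

    def back_project(H, pts):
        # Back-project (N, 2) pixel coordinates onto the ground plane with an
        # assumed 3x3 image-to-ground homography H (valid for ground points only).
        homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        ground = (H @ homog.T).T
        return ground[:, :2] / ground[:, 2:3]             # divide out the scale

    def kde_vote(samples, bandwidth):
        # Return the sample with the highest Gaussian kernel density: clusters
        # of mutually consistent hypotheses outvote scattered outliers.
        d = samples[:, None] - samples[None, :]
        scores = np.exp(-0.5 * (d / bandwidth) ** 2).sum(axis=1)
        return samples[np.argmax(scores)]

    def estimate_planar_motion(H, feats_prev, feats_curr, n_pairs=200):
        # Estimate the 2-D rotation and translation between two frames from
        # matched ground features (bandwidths are hypothetical, for illustration).
        p = back_project(H, feats_prev)   # ground positions at frame k-1
        q = back_project(H, feats_curr)   # ground positions at frame k
        rng = np.random.default_rng(0)
        i, j = rng.integers(0, len(p), size=(2, n_pairs))
        ok = i != j
        # Each pair of correspondences votes for one rotation hypothesis: the
        # change in orientation of the segment joining the two ground points.
        v_prev = p[j[ok]] - p[i[ok]]
        v_curr = q[j[ok]] - q[i[ok]]
        ang = (np.arctan2(v_curr[:, 1], v_curr[:, 0])
               - np.arctan2(v_prev[:, 1], v_prev[:, 0]))
        ang = np.arctan2(np.sin(ang), np.cos(ang))        # wrap to [-pi, pi]
        theta = kde_vote(ang, bandwidth=0.01)
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        # With the rotation fixed, every correspondence votes on the translation.
        t_hyp = q - p @ R.T
        t = np.array([kde_vote(t_hyp[:, 0], bandwidth=0.02),
                      kde_vote(t_hyp[:, 1], bandwidth=0.02)])
        return theta, t

For the second pipeline, once each side camera yields a per-frame ground displacement, the stated reduction to "an ordinary wheel odometry problem" corresponds to the standard differential-drive update sketched below (cf. the "Wheel Odometry Model" section in the table of contents); track_width, the lateral distance between the two camera footprints, is an assumed, externally measured constant.

    import math

    def integrate_track(d_left, d_right, track_width, x=0.0, y=0.0, heading=0.0):
        # Reconstruct a planar trajectory from per-frame left/right ground
        # displacements, treating the two side cameras like wheel encoders.
        trajectory = [(x, y)]
        for dl, dr in zip(d_left, d_right):
            heading += (dr - dl) / track_width   # rotation increment
            step = 0.5 * (dl + dr)               # translation increment
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            trajectory.append((x, y))
        return trajectory

Composing (theta, t) or the per-frame increments over the whole image sequence reconstructs the trajectory, as the abstract describes for both methods.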
dc.description.provenance: Made available in DSpace on 2021-06-16T17:46:36Z (GMT). No. of bitstreams: 1. ntu-101-R99921014-1.pdf: 1226061 bytes, checksum: 84aff46d6e203f1230cc7569176854d3 (MD5). Previous issue date: 2012 (en)
dc.description.tableofcontents:
摘要 (Chinese Abstract) I
ABSTRACT III
LIST OF FIGURES VIII
CHAPTER 1 INTRODUCTION 1
1.1 Motivation 1
1.2 Problem Formulation 4
1.3 Contribution of the Thesis 6
1.4 Organization of the Thesis 7
CHAPTER 2 LITERATURE SURVEY 8
2.1 Image Motion Information Extraction Techniques 9
2.1.1 Feature-Based Approach 10
2.1.2 Appearance-Based Approach 11
2.2 Motion Estimation Techniques 13
2.2.1 Motion Estimation with Range-Bearing Data 15
2.2.2 Motion Estimation with Bearing-Only Data 17
CHAPTER 3 PRELIMINARY ALGORITHMS AND MATHEMATICS 19
3.1 Perspective Projection 19
3.2 Kernel Density Estimation 21
3.3 Wheel Odometry Model 24
CHAPTER 4 VISUAL ODOMETRY WITH CALIBRATED CAMERA 28
4.1 System Overview 28
4.2 Image Motion Information Extraction 32
4.2.1 Feature Extraction 33
4.2.2 Feature Matching 37
4.3 Image-to-Ground Projection 45
4.4 Motion Estimation 52
4.4.1 Rotation Estimation 57
4.4.2 Translation Estimation 60
4.4.3 Trajectory Reconstruction 62
CHAPTER 5 VISUAL ODOMETRY WITH COOPERATED UN-CALIBRATED CAMERAS 63
5.1 System Overview 63
5.2 Motion Vector Extraction 66
5.3 Data Filtering 68
5.3.1 Spatial Filtering 69
5.3.2 Temporal Filtering 71
5.4 Data Normalization 73
5.5 Vehicle Motion Estimation 75
CHAPTER 6 SIMULATIONS AND EXPERIMENTS 77
6.1 Simulations of Visual Odometry with Calibrated Camera 77
6.1.1 Simulation of Motion Estimation by Points 85
6.1.2 Simulation of Image Motion Information Extraction by Pictures 91
6.2 Experiments of Visual Odometry with Calibrated Camera 97
6.2.1 Experimental Platform 98
6.2.2 Experimental Setup 100
6.2.3 Experimental Results of Case 1: Approximately Pure Translation 107
6.2.4 Experimental Results of Case 2: Approximately Pure Rotation 109
6.2.5 Experimental Results of Case 3: General Trajectory 111
6.2.6 Summary 113
6.3 Simulations of Visual Odometry with Cooperated Un-Calibrated Cameras 114
6.3.1 Simulation of Different Pitch Angles 118
6.3.2 Simulation of Different Yaw Angles 120
6.4 Experiments of Visual Odometry with Cooperated Un-Calibrated Cameras 124
CHAPTER 7 CONCLUSION AND FUTURE WORKS 129
7.1 Conclusion 129
7.2 Future Works 130
REFERENCES 132
dc.language.iso: en
dc.title: 校正及未校正相機地面影像序列之視覺里程計演算法 (zh_TW)
dc.title: Visual Odometry Algorithms Using Ground Image Sequence from Calibrated Camera and Cooperated Un-Calibrated Cameras (en)
dc.type: Thesis
dc.date.schoolyear: 100-2
dc.description.degree: Master (碩士)
dc.contributor.oralexamcommittee: 簡忠漢, 李後燦, 黃正民
dc.subject.keyword: Visual Odometry, Kernel Density Estimation, Feature Extraction, Feature Matching, Wheel Odometry (en)
dc.relation.page: 140
dc.rights.note: Licensed for a fee (有償授權)
dc.date.accepted: 2012-08-14
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 電機工程學研究所 (Graduate Institute of Electrical Engineering) (zh_TW)
Appears in collections: 電機工程學系 (Department of Electrical Engineering)

Files in this item:
File: ntu-101-1.pdf | Size: 1.2 MB | Format: Adobe PDF | Currently not authorized for public access


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
