Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3954

Full metadata record

DC Field | Value | Language
dc.contributor.advisor | 郭振華 |
dc.contributor.author | YI-LUN CHIU | en
dc.contributor.author | 邱奕倫 | zh_TW
dc.date.accessioned | 2021-05-13T08:39:10Z |
dc.date.available | 2021-04-15 |
dc.date.available | 2021-05-13T08:39:10Z |
dc.date.copyright | 2016-04-15 |
dc.date.issued | 2016 |
dc.date.submitted | 2016-03-31 |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3954 |
dc.description.abstract (zh_TW, translated): This thesis addresses two fundamental problems in underwater robotics: constructing the vehicle's dynamic model and tracking the vehicle's position. The vehicle uses a rotatable propeller that allows lateral turns at high speed, and is equipped with pectoral fins for hard braking and depth control. To predict the vehicle's motion in advance, we derive its dynamic model from the Lagrangian principle. An overhead camera system is used to track and record position markers on the hull, from which surge, sway, and yaw-rate data are obtained, and the model coefficients are estimated by nonlinear optimization. Comparison of simulation with experiment shows that the dynamic model is accurate and reliable.
For pose tracking, we adopt a monocular vision method to track the vehicle's position and heading. Observations come from a forward-looking camera, and we propose a novel real-time optimizing estimation method. In an environment with a known map, a particle filter estimates the vehicle's position. In particular, augmented-reality techniques are applied in the measurement model, which supplies the computation of the importance factors for the filter. The method has been validated over long underwater cruises; experiments show it is robust and efficient, providing the vehicle's position and attitude in real time.
dc.description.abstract (en): This work investigates the development of a highly maneuverable AUV capable of performing power turns. Two fundamental problems are addressed: dynamic modeling of the AUV and vision-based pose tracking. The vehicle has a rotatable stern propeller for horizontal turning at high speed and two paddles for braking and ascending/descending. A motion model is first derived to predict the motion of the body; the dynamic equations follow from the Lagrange principle, and added-mass coefficients are estimated with the equivalent-ellipsoid method. A tank environment with an overhead camera system records marker positions on the vehicle body, and the iterative Lucas-Kanade method is applied to track the AUV.
To track the vehicle's position and orientation for autonomous navigation, we introduce a monocular image-based approach developed for underwater environments with few features and low visibility. We present a novel real-time optimizing estimation method based on observations from a forward-looking camera. A sequential Monte Carlo method estimates the pose of the body. In particular, an augmented-reality technique is incorporated into the measurement process, providing a reliable estimate of the importance factor. Our approach was verified by long cruises in a water tank; experimental data indicate that it is robust and efficient for real-time position tracking of the robot.
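The first abstract paragraph describes a Lagrangian rigid-body model with added mass and damping whose coefficients are identified from tank data. As a minimal sketch of the kind of model being identified, the following simulates a 3-DOF (surge, sway, yaw) body in the style of Fossen-type equations M*nu_dot + C(nu)*nu + D*nu = tau; every coefficient and the thrust input are invented placeholders for illustration, not the thesis's identified values.

```python
# Illustrative 3-DOF AUV model: M*nu_dot + C(nu)*nu + D*nu = tau.
# All coefficients below are made-up placeholders, not the thesis's values.
M = [30.0, 45.0, 8.0]   # mass + added mass per axis: [kg, kg, kg*m^2]
D = [12.0, 20.0, 5.0]   # linear hydrodynamic damping per axis

def step(nu, tau, dt=0.01):
    """Advance body-frame velocities nu = [u, v, r] one explicit-Euler step."""
    u, v, r = nu
    # Coriolis/centripetal coupling for a diagonal 3-DOF mass matrix.
    c = [-M[1] * v * r, M[0] * u * r, (M[1] - M[0]) * u * v]
    return [nu[i] + dt * (tau[i] - c[i] - D[i] * nu[i]) / M[i] for i in range(3)]

# Constant surge thrust from rest: u converges toward tau_u / D_u = 2.0 m/s.
nu = [0.0, 0.0, 0.0]
for _ in range(5000):               # 50 s of simulated time
    nu = step(nu, [24.0, 0.0, 0.0])
```

In the identification described above, coefficients such as these would be fit by nonlinear optimization so the simulated trajectories match the marker tracks recorded by the overhead camera.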
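The second paragraph describes sequential Monte Carlo (particle filter) pose estimation in which the measurement likelihood comes from an augmented-reality comparison. The sketch below shows the generic predict/weight/resample cycle on a 1-D toy problem; the Gaussian likelihood is a stand-in for the thesis's AR template comparison, and every constant is an assumption for illustration.

```python
import math
import random

random.seed(0)

def pf_step(particles, control, measurement, sigma=0.1):
    """One predict/weight/resample cycle of a 1-D particle filter."""
    # Predict: push each particle through the motion model plus process noise.
    moved = [p + control + random.gauss(0.0, sigma) for p in particles]
    # Weight: importance factor = likelihood of the measurement given the
    # particle (a plain Gaussian here, standing in for an AR image comparison).
    w = [math.exp(-0.5 * ((measurement - p) / sigma) ** 2) for p in moved]
    total = sum(w)
    w = [x / total for x in w]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=w, k=len(moved))

# Toy run: the vehicle advances 0.1 m per step along a line, starting at 2.0 m.
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
true_pos = 2.0
for _ in range(30):
    true_pos += 0.1
    particles = pf_step(particles, 0.1, true_pos)
estimate = sum(particles) / len(particles)
```

In the thesis's setting the scalar state would be the full vehicle pose, and the Gaussian term would be replaced by an image-similarity score between the forward camera frame and an AR-rendered view of the known map.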
dc.description.provenance | Made available in DSpace on 2021-05-13T08:39:10Z (GMT). No. of bitstreams: 1. ntu-105-R97525068-1.pdf: 3485718 bytes, checksum: d7f6bc65f736bdda2c521ab5bfa53d33 (MD5). Previous issue date: 2016 | en
dc.description.tableofcontents:
摘要 III
ABSTRACT IV
CONTENTS VI
LIST OF FIGURES IX
LIST OF TABLES XIII
LIST OF SYMBOLS XIV
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Related Work 6
1.3 System Architecture 9
Chapter 2 Computer Vision Background 12
2.1 Camera Projection Model 12
2.2 Extrinsic Matrix Estimation 16
2.3 Homography Transform 18
2.4 Lab color space 20
Chapter 3 Vehicle Motion Model 22
3.1 Vehicle Dynamic Modeling 22
3.2 Parameters Estimation 27
3.2.1 Thrust Estimation 27
3.2.2 Added Mass 29
3.2.3 Motion Data Gathering 34
3.2.4 Nonlinear Grey-Box Model Identification 41
3.3 Simulation 43
3.3.1 Test 1: L-Shaped Path 44
3.3.2 Test 2: S-Shaped Path 46
3.3.3 Comparison to the experimental data 48
Chapter 4 Pose Tracking 52
4.1 Particles Filter Algorithm 52
4.2 Measurement Comparing Template 55
4.3 Observation Model 57
4.4 Summary 59
Chapter 5 Experiment 60
5.1 Experiment 1 – The Comparison of EKF and PF 61
5.1.1 Extended Kalman Filter 62
5.1.2 Particle Filter Localization 65
5.2 Testing in NMMST’s Tank 67
5.3 The Kidnapped Problem 69
Chapter 6 Conclusion 72
Appendix 74
Reference 75
dc.language.iso | en |
dc.subject | 序列式蒙特卡羅定位演算法 | zh_TW
dc.subject | 自主式水下載具 | zh_TW
dc.subject | 水下導航 | zh_TW
dc.subject | 動力模型 | zh_TW
dc.subject | 單眼視覺 | zh_TW
dc.subject | autonomous underwater vehicle | en
dc.subject | augmented reality | en
dc.subject | Sequential Monte Carlo algorithm | en
dc.subject | monocular vision | en
dc.subject | dynamic modeling | en
dc.title | 高速轉彎自主式水下載具動力模型鑑定與單眼視覺導航研究 | zh_TW
dc.title | Dynamic Modeling and Monocular Image-Based Pose Tracking for an AUV in Power Turn | en
dc.type | Thesis |
dc.date.schoolyear | 104-2 |
dc.description.degree | 碩士 [Master's] |
dc.contributor.oralexamcommittee | 江茂雄, 邱逢琛 |
dc.subject.keyword | 自主式水下載具, 水下導航, 動力模型, 單眼視覺, 序列式蒙特卡羅定位演算法 | zh_TW
dc.subject.keyword | autonomous underwater vehicle, dynamic modeling, monocular vision, Sequential Monte Carlo algorithm, augmented reality | en
dc.relation.page | 79 |
dc.identifier.doi | 10.6342/NTU201600174 |
dc.rights.note | 同意授權(全球公開) [authorized for release, open access worldwide] |
dc.date.accepted | 2016-03-31 |
dc.contributor.author-college | 工學院 [College of Engineering] | zh_TW
dc.contributor.author-dept | 工程科學及海洋工程學研究所 [Department of Engineering Science and Ocean Engineering] | zh_TW
Appears in Collections: 工程科學及海洋工程學系 [Department of Engineering Science and Ocean Engineering]

Files in This Item:
File | Size | Format
ntu-105-1.pdf | 3.4 MB | Adobe PDF

Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
