Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3954
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 郭振華 | |
dc.contributor.author | YI-LUN CHIU | en |
dc.contributor.author | 邱奕倫 | zh_TW |
dc.date.accessioned | 2021-05-13T08:39:10Z | - |
dc.date.available | 2021-04-15 | |
dc.date.available | 2021-05-13T08:39:10Z | - |
dc.date.copyright | 2016-04-15 | |
dc.date.issued | 2016 | |
dc.date.submitted | 2016-03-31 | |
dc.identifier.citation | [1] J.-Y. Bouguet, "Pyramidal implementation of the affine Lucas Kanade feature tracker: description of the algorithm," Intel Corporation, vol. 5, pp. 1-10, 2001.
[2] C. Connolly and T. Fliess, "A study of efficiency and accuracy in the transformation from RGB to CIELAB color space," IEEE Transactions on Image Processing, vol. 6, pp. 1046-1048, Jul. 1997.
[3] H. Shin et al., "Motion analysis by free-running model test," The Twelfth International Offshore and Polar Engineering Conference, International Society of Offshore and Polar Engineers, 2002.
[4] V. Kopman, J. Laut, F. Acquaviva, A. Rizzo, and M. Porfiri, "Dynamic modeling of a robotic fish propelled by a compliant tail," IEEE Journal of Oceanic Engineering, 40(1), 209-221, 2015.
[5] T. I. Fossen, Nonlinear Modelling and Control of Underwater Vehicles, 1991.
[6] 郭振華, 王傑智, "自主式水下載具流體動力模式與運動控制," 國立臺灣大學造船及海洋工程研究所碩士論文, 1996.
[7] A. I. Korotkin, Added Masses of Ship Structures, vol. 88, Springer Science & Business Media, 2008.
[8] 施生達, "潛艇操縱性," 國防工業出版社, 1992.
[9] 郭振華, 邱柏昇, "仿生型自主式水下載具利用雙魚眼攝影機在已知環境中之導航," 國立臺灣大學工程科學及海洋工程研究所碩士論文, 2012.
[10] N. Gracias and J. Santos-Victor, "Underwater video mosaics as visual navigation maps," Computer Vision and Image Understanding, vol. 79, no. 1, pp. 66-91, 2000.
[11] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2003.
[12] S. Shah and J. K. Aggarwal, "Depth estimation using stereo fisheye lenses," IEEE International Conference on Image Processing, vol. 2, pp. 740-744, 1994.
[13] R. M. Eustice, O. Pizarro, and H. Singh, "Visually augmented navigation for autonomous underwater vehicles," IEEE Journal of Oceanic Engineering, 33(2), 103-122, 2008.
[14] W. Narzt et al., "Augmented reality navigation systems," Universal Access in the Information Society, 4(3), 177-187, 2006.
[15] R. T. Azuma, "A survey of augmented reality," Presence: Teleoperators and Virtual Environments, 6(4), 355-385, 1997.
[16] D. W. F. van Krevelen and R. Poelman, "A survey of augmented reality technologies, applications and limitations," International Journal of Virtual Reality, 9(2), 1, 2010.
[17] J. Wolf, W. Burgard, and H. Burkhardt, "Robust vision-based localization by combining an image-retrieval system with Monte Carlo localization," IEEE Transactions on Robotics, 21(2), 208-216, 2005.
[18] C. Roehrig and C. Kirsch, "Particle filter based sensor fusion of range measurements from wireless sensor network and laser range finder," in Proc. 41st Int. Symp. Robot. and 6th German Conf. Robot., Jun. 2010, pp. 1-8.
[19] R. Karlsson and F. Gustafsson, "Particle filtering for underwater terrain navigation," in IEEE Statistical Signal Processing Workshop, St. Louis, MO, pp. 526-529, Oct. 2003.
[20] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, "Robust Monte Carlo localization for mobile robots," Artificial Intelligence Journal, 128(1-2), 99-141, 2001.
[21] Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," International Conference on Computer Vision, 1999.
[22] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, The MIT Press, London, England, 2005.
[23] 郭振華, 王偉翰, "使用側掃聲納掃描線輔助無人水下載具建立導航地圖," 國立臺灣大學工程科學及海洋工程研究所碩士論文, 2009.
[24] F. Bonin-Font, A. Ortiz, and G. Oliver, "Visual navigation for mobile robots: a survey," Journal of Intelligent and Robotic Systems, 53(3), 263-296, 2008.
[25] D. Nistér, O. Naroditsky, and J. Bergen, "Visual odometry for ground vehicle applications," Journal of Field Robotics, 23(1), 3-20, 2006.
[26] R. Haywood, "Acquisition of a micro scale photographic survey using an autonomous submersible," in Proc. of the OCEANS Conf., vol. 5, pp. 1423-1426, 1986.
[27] R. L. Marks, S. M. Rock, and M. J. Lee, "Real-time video mosaicking of the ocean floor," IEEE Journal of Oceanic Engineering, vol. 20, no. 3, pp. 229-241, 1995.
[28] T. Schöps, J. Engel, and D. Cremers, "Semi-dense visual odometry for AR on a smartphone," in Mixed and Augmented Reality (ISMAR), 2014 IEEE International Symposium on, pp. 145-150, 2014.
[29] S. B. Han, J. H. Kim, and H. Myung, "Landmark-based particle localization algorithm for mobile robots with a fish-eye vision system," IEEE/ASME Transactions on Mechatronics, 18(6), 1745-1756, 2013.
[30] W. Yuan and J. Katupitiya, "A time-domain grey-box system identification procedure for scale model helicopters," in Proceedings of the 2011 Australasian Conference on Robotics and Automation, Dec. 2011.
[31] C. E. Shannon, "A mathematical theory of communication," ACM SIGMOBILE Mobile Computing and Communications Review, 5(1), 3-55, 2001. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3954 | - |
dc.description.abstract | 本論文探討兩項水下機器人基本問題:動力模型的建立以及載具位置的追蹤定位。本載具使用轉軸推進器,可於高速下執行橫向轉彎,並具備胸鰭進行急煞與深度控制動作。為了能夠預先估測載具動作,我們基於拉格朗日原理推導本載具動力模型。本文研究利用上方攝影系統來追蹤紀錄載具船體上的位置標記。進而取得載具的縱移、橫移與轉角速度等等數據,並應用非線性最佳化方法估測係數。經由比較模擬與實驗可知此動力模型具有相當的準確度與可靠性。
在載具的追蹤定位方面,我們採用單眼視覺方法來追蹤載具的位置與方向。我們使用前視鏡頭進行觀測,並提出一個新穎的即時最佳化估算方法。在已知地圖的環境中,使用粒子濾波器方法估測載具位置。特別的是將擴增實境的技術應用於量測模型中,這個量測方法為濾波器提供重要因子的計算方式。我們的方法已經過長時間水下巡航的驗證,實驗顯示該方法為穩定且高效能,即時提供水下載具位置與姿態。 | zh_TW |
dc.description.abstract | This thesis investigates the development of a highly maneuverable AUV capable of performing power turns. Two fundamental problems are addressed: dynamic modeling of the AUV and pose tracking with a vision system. The vehicle has a rotatable stern propeller for horizontal turning at high speed and two paddles for braking and ascending/descending. A motion model is first derived to predict the motion of the body. The dynamic equations are derived from the Lagrange principle, and the added mass coefficients are estimated using the equivalent ellipsoid method. A tank environment with an overhead camera system is used to record marker positions on the vehicle body, and the iterative Lucas-Kanade method is applied to track the AUV.
To track the vehicle's position and orientation for autonomous navigation, we introduce a monocular image-based approach developed for underwater environments with few features and low visibility. We present a novel real-time optimizing estimation method based on observations from a forward-looking camera, and use a sequential Monte Carlo method to estimate the pose of the body. In particular, an augmented reality technique is incorporated into the measurement process; this measurement method provides a reliable estimate of the importance factor for the filter. Our approach was verified by long cruises in a water tank. The experimental data indicate that it is robust and efficient for real-time position tracking of the robot. | en |
dc.description.provenance | Made available in DSpace on 2021-05-13T08:39:10Z (GMT). No. of bitstreams: 1 ntu-105-R97525068-1.pdf: 3485718 bytes, checksum: d7f6bc65f736bdda2c521ab5bfa53d33 (MD5) Previous issue date: 2016 | en |
dc.description.tableofcontents | 摘要 III
ABSTRACT IV
CONTENTS VI
LIST OF FIGURES IX
LIST OF TABLES XIII
LIST OF SYMBOLS XIV
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Related Work 6
1.3 System Architecture 9
Chapter 2 Computer Vision Background 12
2.1 Camera Projection Model 12
2.2 Extrinsic Matrix Estimation 16
2.3 Homography Transform 18
2.4 Lab Color Space 20
Chapter 3 Vehicle Motion Model 22
3.1 Vehicle Dynamic Modeling 22
3.2 Parameter Estimation 27
3.2.1 Thrust Estimation 27
3.2.2 Added Mass 29
3.2.3 Motion Data Gathering 34
3.2.4 Nonlinear Grey-Box Model Identification 41
3.3 Simulation 43
3.3.1 Test 1: L-Shaped Path 44
3.3.2 Test 2: S-Shaped Path 46
3.3.3 Comparison to the Experimental Data 48
Chapter 4 Pose Tracking 52
4.1 Particle Filter Algorithm 52
4.2 Measurement Comparing Template 55
4.3 Observation Model 57
4.4 Summary 59
Chapter 5 Experiment 60
5.1 Experiment 1 – The Comparison of EKF and PF 61
5.1.1 Extended Kalman Filter 62
5.1.2 Particle Filter Localization 65
5.2 Testing in NMMST's Tank 67
5.3 The Kidnapped Problem 69
Chapter 6 Conclusion 72
Appendix 74
Reference 75 | |
dc.language.iso | en | |
dc.title | 高速轉彎自主式水下載具動力模型鑑定與單眼視覺導航研究 | zh_TW |
dc.title | Dynamic Modeling and Monocular Image-Based Pose Tracking for an AUV in Power Turn | en |
dc.type | Thesis | |
dc.date.schoolyear | 104-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 江茂雄,邱逢琛 | |
dc.subject.keyword | 自主式水下載具,水下導航,動力模型,單眼視覺,序列式蒙特卡羅定位演算法 | zh_TW |
dc.subject.keyword | autonomous underwater vehicle,dynamic modeling,monocular vision,Sequential Monte Carlo algorithm,augmented reality | en |
dc.relation.page | 79 | |
dc.identifier.doi | 10.6342/NTU201600174 | |
dc.rights.note | 同意授權(全球公開) | |
dc.date.accepted | 2016-03-31 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 工程科學及海洋工程學研究所 | zh_TW |
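The sequential Monte Carlo (particle filter) pose tracker described in the abstract can be illustrated with a minimal sketch. This is not the thesis's implementation: the planar `[x, y, heading]` state, the unicycle motion model, and the range-to-origin observation (a stand-in for the augmented-reality template-matching likelihood that the thesis uses to weight particles) are all illustrative assumptions.

```python
import numpy as np

def pf_step(particles, weights, control, z_range, rng,
            motion_noise=(0.05, 0.02), meas_noise=0.3):
    """One predict/update/resample cycle of a particle filter tracking a
    planar pose [x, y, heading]. `control` = (v, omega) per time step;
    `z_range` is a noisy range-to-origin measurement (an illustrative
    stand-in for the thesis's AR template-matching likelihood)."""
    n = len(particles)
    v, omega = control
    p = particles.copy()
    # Predict: propagate every hypothesis through a unicycle motion
    # model plus Gaussian process noise.
    p[:, 2] += omega + rng.normal(0.0, motion_noise[1], n)
    step = v + rng.normal(0.0, motion_noise[0], n)
    p[:, 0] += step * np.cos(p[:, 2])
    p[:, 1] += step * np.sin(p[:, 2])
    # Update: the Gaussian likelihood of the observation gives each
    # particle's importance factor.
    expected = np.hypot(p[:, 0], p[:, 1])
    w = weights * np.exp(-0.5 * ((z_range - expected) / meas_noise) ** 2)
    w = w + 1e-300                 # guard against total weight collapse
    w /= w.sum()
    # Systematic resampling when the effective sample size drops.
    if 1.0 / np.sum(w ** 2) < n / 2:
        edges = (np.arange(n) + rng.random()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(w), edges), n - 1)
        p, w = p[idx], np.full(n, 1.0 / n)
    return p, w

# Track a simulated vehicle circling the origin at radius ~5 m.
rng = np.random.default_rng(1)
particles = rng.normal([5.0, 0.0, np.pi / 2], [0.5, 0.5, 0.2], (500, 3))
weights = np.full(500, 1.0 / 500)
truth = np.array([5.0, 0.0, np.pi / 2])
for _ in range(50):
    truth[2] += 0.1
    truth[0] += 0.5 * np.cos(truth[2])
    truth[1] += 0.5 * np.sin(truth[2])
    z = np.hypot(truth[0], truth[1]) + rng.normal(0.0, 0.3)
    particles, weights = pf_step(particles, weights, (0.5, 0.1), z, rng)
estimate = np.average(particles[:, :2], weights=weights, axis=0)
error = float(np.hypot(*(estimate - truth[:2])))
print(f"final position error: {error:.2f} m")
```

In the thesis the importance factor instead comes from comparing a rendered view of the known map (the augmented-reality template) against the forward-looking camera image; the resample-on-low-effective-sample-size structure is the same.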
Appears in Collections: | 工程科學及海洋工程學系 |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-105-1.pdf | 3.4 MB | Adobe PDF | View/Open |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.