Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72682
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 李綱(Kang Li) | |
dc.contributor.author | Han-Yuan Chang | en |
dc.contributor.author | 張瀚元 | zh_TW |
dc.date.accessioned | 2021-06-17T07:03:31Z | - |
dc.date.available | 2022-08-05 | |
dc.date.copyright | 2019-08-05 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-07-29 | |
dc.identifier.citation | [1] Guizzo, Eric. 'Three engineers, hundreds of robots, one warehouse.' IEEE Spectrum 45.7 (2008): 26-34.
[2] Andreasson, Henrik, et al. 'Autonomous transport vehicles: Where we are and what is missing.' IEEE Robotics & Automation Magazine 22.1 (2015): 64-75.
[3] Toyota (2018). BT Staxio SAE160 AGV Autopilot Stacker, from https://www.toyotamaterialhandling.com.au/products/product-search/automatic-guided-vehicles/toyota-bt-staxio-sae160-agv-autopilot-stacker/
[4] Linde Robotics (2018). L-MATIC, from https://www.linde-mh.com/en/Products/Automated-Trucks/L-Matic/
[5] Song, Guangming, et al. 'Automatic docking system for recharging home surveillance robots.' IEEE Transactions on Consumer Electronics 57.2 (2011): 428-435.
[6] Meena, M., and P. Thilagavathi. 'Automatic Docking System with Recharging and Battery Replacement for Surveillance Robot.' International Journal of Electronics and Computer Science Engineering (2012): 1148-1154.
[7] Wei, Hongxing, et al. 'Sambot: A self-assembly modular robot system.' IEEE/ASME Transactions on Mechatronics 16.4 (2010): 745-757.
[8] Umer, Syed Muhammad, Yongsheng Ou, and Wei Feng. 'A novel localization and navigation approach for an indoor autonomous mobile surveillance robot.' 2014 4th IEEE International Conference on Information Science and Technology. IEEE, 2014.
[9] Jiang, Guolai, et al. 'A novel approach for localization of an indoor autonomous mobile surveillance robot.' Proceedings of the 11th World Congress on Intelligent Control and Automation. IEEE, 2014.
[10] Moubarak, Paul, and Pinhas Ben-Tzvi. 'Modular and reconfigurable mobile robotics.' Robotics and Autonomous Systems 60.12 (2012): 1648-1663.
[11] Li, Dazhai, Hualei Fu, and Wei Wang. 'Ultrasonic based autonomous docking on plane for mobile robot.' 2008 IEEE International Conference on Automation and Logistics. IEEE, 2008.
[12] Wang, Wei, et al. 'An autonomous docking method based on ultrasonic sensors for self-reconfigurable mobile robot.' 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2009.
[13] Wang, Wei, Wenpeng Yu, and Houxiang Zhang. 'JL-2: A mobile multi-robot system with docking and manipulating capabilities.' International Journal of Advanced Robotic Systems 7.1 (2010): 9.
[14] Luo, Ren C., et al. 'Automatic docking and recharging system for autonomous security robot.' 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2005.
[15] Lee, Jooho, Joohyun Woo, and Nakwan Kim. 'Vision and 2D LiDAR based autonomous surface vehicle docking for identify symbols and dock task in 2016 Maritime RobotX Challenge.' 2017 IEEE Underwater Technology (UT). IEEE, 2017.
[16] Guangrui, Fan, and Wang Geng. 'Vision-based autonomous docking and re-charging system for mobile robot in warehouse environment.' 2017 2nd International Conference on Robotics and Automation Engineering (ICRAE). IEEE, 2017.
[17] Phamduy, Paul, Jayhwan Cheong, and Maurizio Porfiri. 'An autonomous charging system for a robotic fish.' IEEE/ASME Transactions on Mechatronics 21.6 (2016): 2953-2963.
[18] Wang, John, and Edwin Olson. 'AprilTag 2: Efficient and robust fiducial detection.' 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016.
[19] Dementhon, Daniel F., and Larry S. Davis. 'Model-based object pose in 25 lines of code.' International Journal of Computer Vision 15.1-2 (1995): 123-141.
[20] Martin, Jon, et al. 'An autonomous transport vehicle in an existing manufacturing facility with focus on the docking maneuver task.' 2017 3rd International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2017.
[21] Won, Seong-hoon Peter, Wael William Melek, and Farid Golnaraghi. 'A Kalman/particle filter-based position and orientation estimation method using a position sensor/inertial measurement unit hybrid system.' IEEE Transactions on Industrial Electronics 57.5 (2009): 1787-1798.
[22] Neto, Pedro, J. Norberto Pires, and António Paulo Moreira. '3-D position estimation from inertial sensing: Minimizing the error from the process of double integration of accelerations.' IECON 2013 - 39th Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2013.
[23] Zhao, He, and Zheyao Wang. 'Motion measurement using inertial sensors, ultrasonic sensors, and magnetometers with extended Kalman filter for data fusion.' IEEE Sensors Journal 12.5 (2011): 943-953.
[24] Besl, Paul J., and Neil D. McKay. 'Method for registration of 3-D shapes.' Sensor Fusion IV: Control Paradigms and Data Structures. Vol. 1611. International Society for Optics and Photonics, 1992.
[25] Censi, Andrea. 'An ICP variant using a point-to-line metric.' 2008 IEEE International Conference on Robotics and Automation. IEEE, 2008: 19-25.
[26] Gonzalez, Javier, Anthony Stentz, and Anibal Ollero. 'A mobile robot iconic position estimator using a radial laser scanner.' Journal of Intelligent and Robotic Systems 13.2 (1995): 161-179.
[27] Segal, Aleksandr, Dirk Haehnel, and Sebastian Thrun. 'Generalized-ICP.' Robotics: Science and Systems. Vol. 2. No. 4. 2009.
[28] Jaimez, Mariano, Javier G. Monroy, and Javier Gonzalez-Jimenez. 'Planar odometry from a radial laser scanner. A range flow-based approach.' 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016.
[29] Ahmad, Norhafizan, et al. 'Reviews on various inertial measurement unit (IMU) sensor applications.' International Journal of Signal Processing Systems 1.2 (2013): 256-262.
[30] Scandaroli, Glauco Garcia, and Pascal Morin. 'Nonlinear filter design for pose and IMU bias estimation.' 2011 IEEE International Conference on Robotics and Automation. IEEE, 2011.
[31] Bloesch, Michael, et al. 'Robust visual inertial odometry using a direct EKF-based approach.' 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2015.
[32] Bloesch, Michael, et al. 'Iterated extended Kalman filter based visual-inertial odometry using direct photometric feedback.' The International Journal of Robotics Research 36.10 (2017): 1053-1072.
[33] Ruiz, Antonio Ramón Jiménez, et al. 'Accurate pedestrian indoor navigation by tightly coupling foot-mounted IMU and RFID measurements.' IEEE Transactions on Instrumentation and Measurement 61.1 (2011): 178-189.
[34] Alatise, Mary, and Gerhard P. Hancke. 'Pose estimation of a mobile robot using monocular vision and inertial sensors data.' 2017 IEEE AFRICON. IEEE, 2017.
[35] Alatise, Mary, and Gerhard Hancke. 'Pose estimation of a mobile robot based on fusion of IMU data and vision data using an extended Kalman filter.' Sensors 17.10 (2017): 2164.
[36] Chen, Jing, et al. 'Fusion of inertial and vision data for accurate tracking.' Fourth International Conference on Machine Vision (ICMV 2011): Machine Vision, Image Processing, and Pattern Analysis. Vol. 8349. International Society for Optics and Photonics, 2012.
[37] Tao, Yaqin, Huosheng Hu, and Huiyu Zhou. 'Integration of vision and inertial sensors for 3D arm motion tracking in home-based rehabilitation.' The International Journal of Robotics Research 26.6 (2007): 607-624.
[38] Nyqvist, Hanna E., et al. 'Pose estimation using monocular vision and inertial sensors aided with ultra wide band.' 2015 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, 2015.
[39] Erdem, Arif Tanju, and Ali Özer Ercan. 'Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.' IEEE Transactions on Image Processing 24.2 (2014): 538-548.
[40] Mourikis, Anastasios I., and Stergios I. Roumeliotis. 'A multi-state constraint Kalman filter for vision-aided inertial navigation.' Proceedings 2007 IEEE International Conference on Robotics and Automation. IEEE, 2007.
[41] Li, Mingyang, and Anastasios I. Mourikis. 'Improving the accuracy of EKF-based visual-inertial odometry.' 2012 IEEE International Conference on Robotics and Automation. IEEE, 2012.
[42] Li, Mingyang, and Anastasios I. Mourikis. 'High-precision, consistent EKF-based visual-inertial odometry.' The International Journal of Robotics Research 32.6 (2013): 690-711.
[43] Li, Juan, et al. 'A novel system for object pose estimation using fused vision and inertial data.' Information Fusion 33 (2017): 15-28.
[44] Won, Peter Seong-hoon, Mohammad Biglarbegian, and William Melek. 'Development and performance comparison of extended Kalman filter and particle filter for self-reconfigurable mobile robots.' 2014 IEEE Symposium on Robotic Intelligence in Informationally Structured Space (RiiSS). IEEE, 2014.
[45] Won, Peter, Mohammad Biglarbegian, and William Melek. 'Development of an effective docking system for modular mobile self-reconfigurable robots using extended Kalman filter and particle filter.' Robotics 4.1 (2015): 25-49.
[46] Quilez, Roberto, et al. 'Docking autonomous robots in passive docks with infrared sensors and QR codes.' 2015.
[47] Trawny, Nikolas, and Stergios I. Roumeliotis. 'Indirect Kalman filter for 3D attitude estimation.' University of Minnesota, Dept. of Comp. Sci. & Eng., Tech. Rep. 2 (2005).
[48] Sola, Joan. 'Quaternion kinematics for the error-state KF.' Laboratoire d'Analyse et d'Architecture des Systèmes-Centre National de la Recherche Scientifique (LAAS-CNRS), Toulouse, France, Tech. Rep. (2012).
[49] Hu, Jwu-Sheng, and Ming-Yuan Chen. 'A sliding-window visual-IMU odometer based on tri-focal tensor geometry.' 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72682 | - |
dc.description.abstract | 慣性感測器與視覺感測器的組合常用於姿態估測以及機器人導航,在本研究中,為了解決機器人自主對接以及充電問題,提出了一套以擴展型卡爾曼濾波器為基底之慣性-視覺目標六維姿態估測器,透過計算模型亞可比陣列可將精準度高但非線性量測模型應用於線性的卡爾曼濾波器,結合複數個影像針對同一特徵之幾何限制以及待測目標之六維姿態作為量測模型,此方法的優點為系統狀態不需要估測三維特徵位置。換句話說,這個方法不需要重建環境,大幅度地減少演算法複雜度,更符合即時運算的構想。要如何正確地結合兩種感測器資訊以及提高整體估測器的精準度,兩感測器之間的六維相對姿態精準度佔有極為重要的角色,錯誤的校正過程會導致偏移誤差並且降低估測的精準度,甚至導致估測器量測發散,本研究為使用線上運算的概念即時校正感測器之間的六維姿態,在目標估測的平均誤差可達到2.754公分以及0.702度。最後由實際運行結果顯示結合慣性視覺感測器之目標六維姿態估測器的精準度,並且分析有無感測器自主校正在目標估測上的影響,不僅僅能提升在於快速運動下的目標估測精準度,也同時能夠正確的校正感測器之間的相對姿態。 | zh_TW |
dc.description.abstract | Inertial and visual sensors are commonly used for pose estimation and robot navigation. This research presents an extended Kalman filter (EKF) based visual-inertial 6-DoF target pose estimator for autonomous robot docking and recharging. The measurement model is nonlinear but highly accurate; by computing its Jacobian, it can be applied within the linear Kalman filter framework without losing accuracy. The measurement model combines the geometric constraints among multi-camera views with the target's 6-DoF pose, and it does not require including 3D feature positions in the state vector. In other words, the method does not need to reconstruct the environment, which reduces algorithmic complexity and makes it more suitable for real-time execution. Correct data fusion, and hence overall estimator accuracy, depends on accurate calibration of the 6-DoF relative pose between the two sensors. Errors in the IMU-camera extrinsic calibration introduce biases that reduce estimation accuracy and can even cause divergence of any estimator processing measurements from both sensors. This research describes an algorithm that calibrates the sensor-to-sensor relative pose online. The average estimation error is 2.754 cm and 0.702 degrees. Experimental results demonstrate the accuracy of the visual-inertial target pose estimator and analyze the impact of online sensor-to-sensor self-calibration, which not only improves target estimation accuracy during high-speed motion but also accurately calibrates the relative pose between the sensors. | en |
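The abstract describes applying an accurate but nonlinear measurement model to the linear Kalman filter by computing its Jacobian. The following is a minimal generic sketch of that idea, not the thesis's actual estimator: the pinhole measurement function, state dimensions, and noise values here are all hypothetical placeholders.

```python
import numpy as np

def h(x):
    """Hypothetical nonlinear measurement: pinhole projection of a 3D point."""
    X, Y, Z = x
    return np.array([X / Z, Y / Z])

def jacobian_h(x):
    """Analytic Jacobian of h with respect to the state x = (X, Y, Z)."""
    X, Y, Z = x
    return np.array([[1 / Z, 0.0, -X / Z**2],
                     [0.0, 1 / Z, -Y / Z**2]])

def ekf_update(x, P, z, R):
    """One EKF measurement update: linearize h about the current estimate,
    then apply the standard linear Kalman correction."""
    H = jacobian_h(x)                   # linearization of the measurement model
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y                   # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new

x = np.array([0.1, 0.2, 2.0])           # prior state estimate (hypothetical)
P = np.eye(3) * 0.05                    # prior covariance
z = np.array([0.06, 0.11])              # observed image coordinates
R = np.eye(2) * 1e-3                    # measurement noise covariance
x_post, P_post = ekf_update(x, P, z, R)
```

The update shrinks the state covariance along the directions the camera actually observes, which is the mechanism the abstract relies on for fusing visual measurements without reconstructing the environment.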
dc.description.provenance | Made available in DSpace on 2021-06-17T07:03:31Z (GMT). No. of bitstreams: 1 ntu-108-R05522828-1.pdf: 4316075 bytes, checksum: c09f872c20a38f3631d6f76c1ed85734 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | Chinese Abstract i
Abstract ii
Table of Contents iii
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.2 Literature Review 4
1.3 Contributions 7
Chapter 2 Quaternion Kinematics 8
2.1 Definition of Quaternions 8
2.2 Properties of Quaternions 9
2.2.1 Quaternion Addition and Subtraction 9
2.2.2 Quaternion Product 9
2.2.3 Unit Quaternions 10
2.2.4 Conjugate Quaternions 10
2.2.5 Quaternion Norm 10
2.2.6 Inverse Quaternions 11
2.2.7 Power Series of Pure Quaternions 11
2.2.8 Exponential of Pure Quaternions 11
2.2.9 Exponential of Quaternions 11
2.3 Quaternions and Rotation 12
2.3.1 3D Vector Rotation 12
2.3.2 The Rotation Group 13
2.3.3 The Rotation Group and Rotation Matrices 13
2.3.4 Rotation Matrices and the Exponential Map 14
2.3.5 Rodrigues' Rotation Formula 15
2.3.6 The Rotation Group and Quaternions 16
2.3.7 Quaternions and the Exponential Map 16
2.3.8 Rotation Matrices and Quaternions 17
2.3.9 Composition of Rotations 18
2.4 Quaternion Calculus 19
2.4.1 Perturbation of Rotations 19
2.4.2 Time Derivative of Quaternions 20
2.4.3 Jacobians of Rotations 20
2.4.4 Time Integration of Quaternions 21
Chapter 3 System Model and Estimator Design 22
3.1 System Coordinate Frames 22
3.2 IMU Dynamic Model 23
3.2.1 System States 23
3.2.2 System Noise Model 23
3.2.3 Continuous-Time IMU Dynamics 25
3.3 System Measurement Model 28
3.3.1 Pinhole Camera Model 28
3.4 Epipolar Geometry Model 31
3.5 Tri-focal Tensor 33
3.6 Sensor Fusion 36
3.6.1 System States 36
3.6.2 System Prediction Model 36
3.6.3 Discrete-Time System Dynamics 37
3.6.4 Filter Measurement Update 40
Chapter 4 Experiments and Discussion 41
4.1 Experimental Setup 41
4.2 Results and Analysis 42
4.2.1 Sensor Self-Calibration and Target Estimation 42
4.2.2 Without Sensor Self-Calibration 57
Chapter 5 Conclusions and Future Work 60
5.1 Conclusions 60
5.2 Future Work 60
Appendix 61
References 62 | |
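Chapter 2 of the table of contents develops quaternion kinematics and rotation. As a minimal illustration of the core rotation operation those sections cover, rotating a vector v by a unit quaternion q via q ⊗ v ⊗ q*, here is a generic sketch assuming the Hamilton (w, x, y, z) convention; this is not the thesis's code.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Conjugate: negate the vector part; equals the inverse for unit quaternions."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v):
    """Rotate 3D vector v by unit quaternion q: vector part of q * (0, v) * q*."""
    vq = np.concatenate([[0.0], v])
    return quat_mul(quat_mul(q, vq), quat_conj(q))[1:]

# A 90-degree rotation about the z-axis maps the x-axis onto the y-axis.
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
v = np.array([1.0, 0.0, 0.0])
print(rotate(q, v))  # ≈ [0, 1, 0]
```

The half-angle in q reflects the double cover of the rotation group by unit quaternions, one of the relationships the Chapter 2 outline (rotation group, exponential map, composition of rotations) works through.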
dc.language.iso | zh-TW | |
dc.title | 卡爾曼濾波器之視覺慣性目標姿態估測與感測器間自我校正 | zh_TW |
dc.title | Kalman Filter Based Visual Inertial Target Pose Estimation with Sensor-to-sensor Self Calibration | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 詹魁元(Kuei-Yuan Chan),陳亮嘉(Liang-Chia Chen) | |
dc.subject.keyword | 慣性感測器,視覺幾何,擴展型卡爾曼濾波器,自我校正,姿態估測, | zh_TW |
dc.subject.keyword | inertial sensor,visual geometry,Extended Kalman filter,self-calibration,pose estimation, | en |
dc.relation.page | 66 | |
dc.identifier.doi | 10.6342/NTU201902155 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2019-07-30 | |
dc.contributor.author-college | 工學院 | zh_TW |
dc.contributor.author-dept | 機械工程學研究所 | zh_TW |
Appears in Collections: | Department of Mechanical Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 4.21 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.