Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3855
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 洪一平 | |
dc.contributor.author | Qiao Liang | en |
dc.contributor.author | 梁橋 | zh_TW |
dc.date.accessioned | 2021-05-13T08:37:36Z | - |
dc.date.available | 2018-08-02 | |
dc.date.available | 2021-05-13T08:37:36Z | - |
dc.date.copyright | 2016-08-02 | |
dc.date.issued | 2016 | |
dc.date.submitted | 2016-07-26 | |
dc.identifier.citation | [1] Jakob Engel, Thomas Schops, and Daniel Cremers. LSD-SLAM: Large-scale direct monocular SLAM. In European Conference on Computer Vision, pages 834–849. Springer, 2014.
[2] Stephan Weiss and Roland Siegwart. Real-time metric state estimation for modular vision-inertial systems. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages 4531–4537. IEEE, 2011.
[3] Raul Mur-Artal, J. M. M. Montiel, and Juan D. Tardos. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5):1147–1163, 2015.
[4] DJI. DJI Phantom series. http://www.dji.com/cn/products/phantom.
[5] Ted Driver. Long-term prediction of GPS accuracy: Understanding the fundamentals. In ION GNSS, 2007.
[6] Marko Modsching, Ronny Kramer, and Klaus ten Hagen. Field trial on GPS accuracy in a medium size city: The influence of built-up. In 3rd Workshop on Positioning, Navigation and Communication, pages 209–218, 2006.
[7] Microsoft. Microsoft Kinect. https://developer.microsoft.com/en-us/windows/kinect.
[8] Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, and Andrew Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In Mixed and Augmented Reality (ISMAR), 2011 10th IEEE International Symposium on, pages 127–136. IEEE, 2011.
[9] Thomas Whelan, Stefan Leutenegger, Renato F. Salas-Moreno, Ben Glocker, and Andrew J. Davison. ElasticFusion: Dense SLAM without a pose graph. Proc. Robotics: Science and Systems, Rome, Italy, 2015.
[10] Andrew Howard. Real-time stereo visual odometry for autonomous ground vehicles. In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3946–3952. IEEE, 2008.
[11] David Schleicher, Luis M. Bergasa, Manuel Ocana, Rafael Barea, and Elena Lopez. Real-time hierarchical stereo visual SLAM in large-scale environments. Robotics and Autonomous Systems, 58(8):991–1002, 2010.
[12] Andrew J. Davison. Real-time simultaneous localisation and mapping with a single camera. In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pages 1403–1410. IEEE, 2003.
[13] Georg Klein and David Murray. Parallel tracking and mapping for small AR workspaces. In Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on, pages 225–234. IEEE, 2007.
[14] Andrew J. Davison, Ian D. Reid, Nicholas D. Molton, and Olivier Stasse. MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1052–1067, 2007.
[15] Richard A. Newcombe, Steven J. Lovegrove, and Andrew J. Davison. DTAM: Dense tracking and mapping in real-time. In 2011 International Conference on Computer Vision, pages 2320–2327. IEEE, 2011.
[16] Kuan-Wen Chen, Chun-Hsin Wang, Xiao Wei, Qiao Liang, Chu-Song Chen, Ming-Hsuan Yang, and Yi-Ping Hung. Vision-based positioning for internet-of-vehicles. IEEE Transactions on Intelligent Transportation Systems, 2016.
[17] Anastasios I. Mourikis and Stergios I. Roumeliotis. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings 2007 IEEE International Conference on Robotics and Automation, pages 3565–3572. IEEE, 2007.
[18] Jonathan Kelly and Gaurav S. Sukhatme. Visual-inertial simultaneous localization, mapping and sensor-to-sensor self-calibration. In Computational Intelligence in Robotics and Automation (CIRA), 2009 IEEE International Symposium on, pages 360–368. IEEE, 2009.
[19] Stefan Leutenegger, Simon Lynen, Michael Bosse, Roland Siegwart, and Paul Furgale. Keyframe-based visual–inertial odometry using nonlinear optimization. The International Journal of Robotics Research, 34(3):314–334, 2015.
[20] Stephan Weiss, Markus W. Achtelik, Margarita Chli, and Roland Siegwart. Versatile distributed pose estimation and sensor self-calibration for an autonomous MAV. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, pages 31–38. IEEE, 2012.
[21] x-io Technologies. x-IMU. http://www.x-io.co.uk/products/x-imu/.
[22] Javier Civera, Andrew J. Davison, and J. M. Martinez Montiel. Inverse depth parametrization for monocular SLAM. IEEE Transactions on Robotics, 24(5):932–945, 2008.
[23] Ethan Eade and Tom Drummond. Scalable monocular SLAM. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 1, pages 469–476. IEEE, 2006.
[24] Jan Stuhmer, Stefan Gumhold, and Daniel Cremers. Real-time dense geometry from a handheld camera. In Joint Pattern Recognition Symposium, pages 11–20. Springer, 2010.
[25] Matia Pizzoli, Christian Forster, and Davide Scaramuzza. REMODE: Probabilistic, monocular dense reconstruction in real time. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 2609–2616. IEEE, 2014.
[26] Jakob Engel, Jurgen Sturm, and Daniel Cremers. Semi-dense visual odometry for a monocular camera. In Proceedings of the IEEE International Conference on Computer Vision, pages 1449–1456, 2013.
[27] Thomas Schops, Jakob Engel, and Daniel Cremers. Semi-dense visual odometry for AR on a smartphone. In Mixed and Augmented Reality (ISMAR), 2014 IEEE International Symposium on, pages 145–150. IEEE, 2014.
[28] Jakob Engel, Jurgen Sturm, and Daniel Cremers. Scale-aware navigation of a low-cost quadrocopter with a monocular camera. Robotics and Autonomous Systems, 62(11):1646–1656, 2014.
[29] Mingyang Li and Anastasios I. Mourikis. High-precision, consistent EKF-based visual–inertial odometry. The International Journal of Robotics Research, 32(6):690–711, 2013.
[30] Roland Brockers, Sara Susca, David Zhu, and Larry Matthies. Fully self-contained vision-aided navigation and landing of a micro air vehicle independent from external sensor inputs. In SPIE Defense, Security, and Sensing, pages 83870Q–83870Q. International Society for Optics and Photonics, 2012.
[31] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative to SIFT or SURF. In 2011 International Conference on Computer Vision, pages 2564–2571. IEEE, 2011.
[32] Vicon. Vicon Bonita. http://www.vicon.com/products/camera-systems/bonita. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3855 | - |
dc.description.abstract | 隨著飛行攝影機的日益普及,自我定位技術作為保障其功能性與安全性的關鍵技術之一,其重要性與日俱增。單目攝影機和慣性測量單元 (IMU) 因為其低成本、輕重量等特點,非常適合用於飛行攝影機的自我定位。此篇論文從視覺定位和視覺慣性傳感器融合兩個方面分別進行研究,結合單目攝影機和慣性測量單元提出一種飛行攝影機自我定位之方式。本文對三種目前較為先進的用於車輛定位的單目視覺定位方法進行不同條件下的實驗,分析將其用於飛行攝影機定位時可能產生的問題,并討論各種方法的適用場景和優缺點。考慮到視覺定位的固有限制,本文引入一種基於寬鬆耦合方式的傳感器融合方法,將視覺和慣性測量相結合,並在實驗結果中驗證了方法的有效性。 | zh_TW |
dc.description.abstract | In this thesis, a low-cost monocular camera and an inertial measurement unit (IMU) are combined for ego-positioning on flying cameras. We first survey state-of-the-art monocular visual positioning approaches, such as Simultaneous Localization and Mapping (SLAM) and Model-Based Localization (MBL). Three of the most representative methods, ORB-SLAM, LSD-SLAM, and MBL, all originally designed for vehicles, are evaluated in different scenarios. Based on the experimental results, we analyze the pros and cons of each method. Considering the limitations of vision-only approaches, we fuse visual positioning with an inertial sensor in a loosely-coupled framework. The experimental results demonstrate the benefits of visual-inertial sensor fusion; a minimal illustrative sketch of such a loosely-coupled filter follows the metadata table below. | en |
dc.description.provenance | Made available in DSpace on 2021-05-13T08:37:36Z (GMT). No. of bitstreams: 1 ntu-105-R03944046-1.pdf: 6280630 bytes, checksum: 7e35de8736d1a64f77d11ab98789671b (MD5) Previous issue date: 2016 | en |
dc.description.tableofcontents | 誌謝 i
中文摘要 ii
Abstract iii
Contents iv
List of Figures vi
List of Tables viii
1 Introduction 1
1.1 Motivation 1
1.2 Positioning Techniques for Flying Cameras 1
1.3 Visual Positioning 2
1.4 Inertial Sensor 3
2 Related Works 5
2.1 Monocular Visual Positioning 5
2.2 Visual-Inertial Sensor Fusion 7
3 Visual Positioning for Flying Cameras 8
3.1 Model-Based Localization 8
3.1.1 Training Phase 8
3.1.2 Ego-Positioning Phase 10
3.2 LSD-SLAM 10
3.2.1 Tracking 10
3.2.2 Depth Map Estimation 11
3.2.3 Map Optimization 11
3.3 ORB-SLAM 11
3.3.1 Tracking 11
3.3.2 Local Mapping 12
3.3.3 Loop Closing 13
4 Visual-Inertial Sensor Fusion 14
4.1 Framework Overview 14
4.2 Method 14
4.2.1 State Representation 15
4.2.2 Prediction Step 16
4.2.3 Update Step 16
5 Experiments 18
5.1 Evaluation for Different Visual Positioning Methods 19
5.1.1 Indoor 19
5.1.2 Outdoor 20
5.1.3 Pure Rotation 22
5.1.4 Fast-Moving 22
5.1.5 Blurry 23
5.1.6 Comparison 24
5.2 Evaluation for Sensor Fusion Results 25
6 Conclusion and Future Works 28
6.1 Conclusion and Future Work 28
Bibliography 29 | |
dc.language.iso | en | |
dc.title | 基于視覺和慣性測量之飛行攝影機自我定位 | zh_TW |
dc.title | Visual-Inertial Ego-Positioning for Flying Cameras | en |
dc.type | Thesis | |
dc.date.schoolyear | 104-2 | |
dc.description.degree | Master's | |
dc.contributor.oralexamcommittee | 陳祝嵩,陳冠文,傅楸善,徐繼聖 | |
dc.subject.keyword | 飛行攝影機,自我定位,單目視覺,視覺定位,視覺與慣性傳感器融合 | zh_TW |
dc.subject.keyword | Flying Cameras, Ego-Positioning, Monocular Vision, Visual Positioning, Visual-Inertial Sensor Fusion | en |
dc.relation.page | 32 | |
dc.identifier.doi | 10.6342/NTU201601311 | |
dc.rights.note | Authorized (worldwide open access) | |
dc.date.accepted | 2016-07-27 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
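The loosely-coupled fusion summarized in the abstract above pairs an IMU-driven prediction step with corrections from a visual positioning module (cf. Sections 4.2.1 through 4.2.3 in the table of contents). As a minimal sketch only, not the thesis's actual implementation, a Kalman-filter version of that loop might look like the following Python; the state layout, noise magnitudes, and all names here are assumptions.

```python
import numpy as np

class LooselyCoupledEKF:
    """Illustrative loosely-coupled visual-inertial filter (hypothetical).

    State: position (3) and velocity (3) in the world frame. IMU
    acceleration drives the prediction step; absolute positions from a
    visual positioning module, treated as a black box, drive the update
    step. Dimensions and noise values are assumed for illustration.
    """

    def __init__(self):
        self.x = np.zeros(6)               # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                 # state covariance
        self.Q = np.eye(6) * 1e-3          # process noise (assumed)
        self.R = np.eye(3) * 1e-2          # visual measurement noise (assumed)

    def predict(self, accel_world, dt):
        """Prediction step: propagate state with a world-frame acceleration."""
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * dt         # position integrates velocity
        self.x = F @ self.x
        self.x[3:] += accel_world * dt     # velocity integrates acceleration
        self.P = F @ self.P @ F.T + self.Q

    def update(self, visual_position):
        """Update step: correct with an absolute position from the visual module."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        y = visual_position - H @ self.x              # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
```

A caller would feed high-rate IMU samples to `predict()` and lower-rate visual fixes to `update()`:

```python
ekf = LooselyCoupledEKF()
for _ in range(10):                                       # e.g. 100 Hz IMU samples
    ekf.predict(accel_world=np.array([0.0, 0.0, 0.1]), dt=0.01)
ekf.update(visual_position=np.array([0.05, 0.0, 0.02]))   # slower visual fix
```

In a loosely-coupled design the visual front end (ORB-SLAM, LSD-SLAM, or MBL in the thesis) only emits poses, so the filter never touches image features; this is what makes the visual module swappable, at the cost of the feature-level information that tightly-coupled methods exploit.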
Appears in Collections: | 資訊網路與多媒體研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-105-1.pdf | 6.13 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.