Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3855

| Title: | Visual-Inertial Ego-Positioning for Flying Cameras (基于視覺和慣性測量之飛行攝影機自我定位) |
| Authors: | Qiao Liang (梁橋) |
| Advisor: | Yi-Ping Hung (洪一平) |
| Keywords: | Flying Cameras, Ego-Positioning, Monocular Vision, Visual Positioning, Visual-Inertial Sensor Fusion |
| Publication Year: | 2016 |
| Degree: | Master's |
| Abstract: | With the growing popularity of flying cameras, ego-positioning has become one of the key technologies for ensuring both their functionality and their safety. A monocular camera and an inertial measurement unit (IMU) are well suited to this task because of their low cost and light weight. In this thesis, a low-cost monocular camera and an IMU are combined for ego-positioning on flying cameras. We first survey state-of-the-art monocular visual positioning approaches, namely Simultaneous Localization and Mapping (SLAM) and Model-Based Localization (MBL). Three representative methods, ORB-SLAM, LSD-SLAM, and MBL, all originally designed for vehicle localization, are evaluated under different conditions, and the pros, cons, and suitable scenarios of each method are analyzed, along with the issues that arise when they are applied to flying cameras. Considering the inherent limitations of vision-only approaches, we fuse the visual positioning result with inertial measurements in a loosely-coupled framework (a minimal illustrative sketch of such a filter is given after the record metadata below). The experimental results demonstrate the benefits of visual-inertial sensor fusion. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3855 |
| DOI: | 10.6342/NTU201601311 |
| Fulltext Rights: | Authorized (open access worldwide) |
| Appears in Collections: | Graduate Institute of Networking and Multimedia |
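The abstract describes fusing the visual positioning output with IMU data in a loosely-coupled framework. The snippet below is a minimal, hypothetical sketch of one such loosely-coupled scheme: a linear Kalman filter whose prediction step is driven by IMU acceleration and whose update step consumes the position estimate produced by the visual module. The class name, noise parameters, and sensor rates are illustrative assumptions and do not reflect the thesis implementation.

```python
# Illustrative sketch only, not the thesis implementation.
# State x = [px, py, pz, vx, vy, vz]; the IMU acceleration (gravity removed)
# drives the prediction, and the lower-rate visual position fix is the update.
import numpy as np

class LooselyCoupledFusion:
    def __init__(self, accel_noise=0.5, vision_noise=0.05):
        self.x = np.zeros(6)                      # position and velocity
        self.P = np.eye(6)                        # state covariance
        self.q = accel_noise ** 2                 # assumed IMU acceleration noise
        self.R = np.eye(3) * vision_noise ** 2    # assumed visual position noise

    def predict(self, accel, dt):
        """Propagate the state with one IMU acceleration sample."""
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * dt                # constant-acceleration kinematics
        B = np.vstack([np.eye(3) * 0.5 * dt * dt, np.eye(3) * dt])
        self.x = F @ self.x + B @ accel
        Q = B @ B.T * self.q                      # process noise from accel noise
        self.P = F @ self.P @ F.T + Q

    def update(self, vision_pos):
        """Correct the state with a position fix from the visual module."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        y = vision_pos - H @ self.x               # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P

# Usage sketch: 200 Hz IMU prediction with 20 Hz visual updates (rates assumed).
if __name__ == "__main__":
    fusion = LooselyCoupledFusion()
    for k in range(200):
        fusion.predict(accel=np.array([0.1, 0.0, 0.0]), dt=0.005)
        if k % 10 == 0:
            fusion.update(vision_pos=fusion.x[:3])  # placeholder measurement
    print("fused position:", fusion.x[:3])
```

In a loosely-coupled design like this, the visual pipeline (e.g. a SLAM or model-based localizer) runs as a black box and only its pose output enters the filter, which keeps the two subsystems independent at the cost of discarding the raw feature measurements that a tightly-coupled formulation would exploit.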
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-105-1.pdf | 6.13 MB | Adobe PDF |
