Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86259
Title: | Visual-Inertial-UWB Fusion for UAV Localization |
Authors: | Hsiu-Jui Chang (張修瑞) |
Advisor: | Yi-Ping Hung (洪一平), hung@csie.ntu.edu.tw |
Keyword: | Visual-Inertial Odometry, Ultra-wideband, Sensor Fusion, Unmanned Aerial Vehicle, Deep Learning |
Publication Year: | 2022 |
Degree: | Master's |
Abstract: | Camera, inertial measurement unit (IMU), and ultra-wideband (UWB) sensors are common solutions to unmanned aerial vehicle localization problems. By integrating the observations from different sensors, the accuracy of the localization system can be further improved. In this thesis, we propose a learning-based indoor localization method that fuses vision, IMU, and UWB measurements. Our model consists of a visual-inertial (VI) branch and a UWB branch, and combines the estimation results of both branches to predict global poses. To evaluate our method, we add UWB simulations to a public VI dataset and conduct a real-world experiment. The experimental results show that our method provides more robust and accurate localization than VI-only or UWB-only methods. |
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86259 |
DOI: | 10.6342/NTU202201914 |
Fulltext Rights: | Authorized (open access worldwide) |
Embargo Lift Date: | 2022-09-05 |
Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:
File | Size | Format
---|---|---
U0001-3107202217384000.pdf | 4.86 MB | Adobe PDF
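The abstract describes a two-branch design: a visual-inertial branch that estimates egomotion, and a UWB branch that recovers global position from range measurements to fixed anchors, with the two results combined into a global pose estimate. The thesis's actual model is learning-based; the snippet below is only a minimal classical sketch of that fusion idea, using least-squares trilateration for the UWB branch and a fixed-weight blend with a hypothetical VI position estimate. All function names, the anchor layout, and the fusion weight are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def uwb_trilaterate(anchors, ranges):
    """Least-squares position from UWB ranges to known anchor positions.

    Linearizes |p - a_i|^2 = r_i^2 by subtracting the first anchor's
    equation, giving a linear system 2 (a_i - a_0) . p = b_i.
    """
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

def fuse(vi_pos, uwb_pos, w_vi=0.5):
    """Toy fusion: fixed-weight average of the two branch estimates
    (the thesis instead learns the combination)."""
    return w_vi * vi_pos + (1.0 - w_vi) * uwb_pos

# Illustrative anchor layout and ground-truth position (assumptions).
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
true_pos = np.array([1.0, 2.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # simulated UWB ranges

uwb_estimate = uwb_trilaterate(anchors, ranges)
vi_estimate = np.array([1.1, 2.1])  # hypothetical drifted VI output
fused = fuse(vi_estimate, uwb_estimate)
```

In this toy setup the UWB estimate anchors the drift-free global frame, while the VI estimate supplies a locally smooth position; the fused result lies between them.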