
DSpace

The institutional repository DSpace system is dedicated to preserving digital materials of all kinds (e.g., text, images, PDFs) and making them easy to access.

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86259
Full metadata record
DC Field: Value [Language]

dc.contributor.advisor: 洪一平 (Yi-Ping Hung)
dc.contributor.author: Hsiu-Jui Chang [en]
dc.contributor.author: 張修瑞 [zh_TW]
dc.date.accessioned: 2023-03-19T23:45:21Z
dc.date.copyright: 2022-09-05
dc.date.issued: 2022
dc.date.submitted: 2022-08-29
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86259
dc.description.abstract: Cameras, inertial measurement units (IMUs), and ultra-wideband (UWB) sensors are often used to solve the localization problem of unmanned aerial vehicles. By integrating the observations of different sensors, the accuracy of the localization system can be further improved. In this thesis, we propose a deep-learning-based localization method that fuses vision, IMU, and UWB. Our model consists of a visual-inertial (VI) branch and a UWB branch, and combines the results of both branches to predict global positions. To evaluate the method, we add UWB simulation to a public visual-inertial dataset and conduct real-world experiments. The results show that our method provides more robust and accurate localization than VI-only or UWB-only approaches. [zh_TW]
dc.description.abstract: Cameras, inertial measurement units (IMUs), and ultra-wideband (UWB) sensors are common solutions to the unmanned aerial vehicle (UAV) localization problem. By integrating the observations from different sensors, the performance of the localization system can be further improved. In this thesis, we propose a learning-based indoor localization method that fuses vision, IMU, and UWB. Our model consists of a visual-inertial (VI) branch and a UWB branch, and we combine the estimation results of both branches to predict global poses. To evaluate our method, we add UWB simulations to a public VI dataset and conduct a real-world experiment. The experimental results show that our method provides more robust and accurate results than VI-only or UWB-only localization. [en]
dc.description.provenance: Made available in DSpace on 2023-03-19T23:45:21Z (GMT). No. of bitstreams: 1. U0001-3107202217384000.pdf: 4977022 bytes, checksum: 55c31a194399e2307a7730519ab64fbf (MD5). Previous issue date: 2022 [en]
dc.description.tableofcontents:
Acknowledgements ... i
摘要 (Abstract in Chinese) ... ii
Abstract ... iii
Contents ... iv
List of Figures ... vi
List of Tables ... viii
Chapter 1  Introduction ... 1
Chapter 2  Related Work ... 4
  2.1  Deep Learning for Localization ... 4
    2.1.1  Visual Sensors ... 4
    2.1.2  Inertial Sensors ... 4
    2.1.3  Ultra-Wideband Sensors ... 5
  2.2  Sensor Fusion ... 5
    2.2.1  Traditional Methods ... 5
    2.2.2  Learning-Based Methods ... 6
Chapter 3  Proposed Method ... 9
  3.1  VI Branch ... 9
  3.2  UWB Branch ... 10
  3.3  VI-UWB Fusion ... 11
  3.4  Loss Functions ... 12
  3.5  Training Process and Implementation Details ... 13
Chapter 4  Experiments ... 16
  4.1  Evaluation Metrics ... 16
  4.2  EuRoC Dataset ... 17
  4.3  Real-World Dataset ... 21
Chapter 5  Conclusions ... 31
References ... 32
dc.language.iso: en
dc.title: 可用於無人飛行載具定位之視覺-慣性-超寬頻融合技術 [zh_TW]
dc.title: Visual-Inertial-UWB Fusion for UAV Localization [en]
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 陳冠文 (Kuan-Wen Chen), 吳瑞北 (Ruey-Beei Wu)
dc.subject.keyword: 視覺慣性里程計, 超寬頻, 感測器融合, 無人飛行載具, 深度學習 [zh_TW]
dc.subject.keyword: Visual-Inertial Odometry, Ultra-Wideband, Sensor Fusion, Unmanned Aerial Vehicle, Deep Learning [en]
dc.relation.page: 36
dc.identifier.doi: 10.6342/NTU202201914
dc.rights.note: Authorization granted (open access worldwide)
dc.date.accepted: 2022-08-30
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
dc.date.embargo-lift: 2022-09-05
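To make the two-branch design described in the abstract concrete, below is a minimal PyTorch sketch. It is an illustration only: the module names (VIBranch, UWBBranch, VIUWBFusion), the layer sizes, the number of UWB anchors, and the concatenation-based fusion are all assumptions, not the architecture actually specified in Chapter 3 of the thesis.

# Minimal sketch (not the thesis's actual design): a VI branch encodes an
# image pair plus an IMU window, a UWB branch encodes ranges to fixed
# anchors, and a fusion head regresses a global pose.
import torch
import torch.nn as nn

class VIBranch(nn.Module):
    """Encodes two stacked RGB frames and an IMU window into one feature."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # Visual encoder over the 6-channel concatenation of two RGB frames.
        self.visual = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Inertial encoder over a window of 6-DoF IMU samples (gyro + accel).
        self.inertial = nn.LSTM(input_size=6, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64 + 64, feat_dim)

    def forward(self, img_pair, imu_seq):
        v = self.visual(img_pair)            # (B, 64)
        _, (h, _) = self.inertial(imu_seq)   # h: (num_layers, B, 64)
        return self.fc(torch.cat([v, h[-1]], dim=1))

class UWBBranch(nn.Module):
    """Encodes range measurements to a fixed set of UWB anchors."""
    def __init__(self, num_anchors=4, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_anchors, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, ranges):
        return self.mlp(ranges)

class VIUWBFusion(nn.Module):
    """Concatenates both branch features and regresses a global pose:
    3D position plus a quaternion (left unnormalized in this sketch)."""
    def __init__(self):
        super().__init__()
        self.vi, self.uwb = VIBranch(), UWBBranch()
        self.head = nn.Linear(256 + 64, 7)

    def forward(self, img_pair, imu_seq, ranges):
        fused = torch.cat([self.vi(img_pair, imu_seq), self.uwb(ranges)], dim=1)
        return self.head(fused)

# Shape check with dummy inputs: 2 samples, 128x128 image pairs,
# 50 IMU samples per window, ranges to 4 anchors.
model = VIUWBFusion()
pose = model(torch.randn(2, 6, 128, 128), torch.randn(2, 50, 6), torch.randn(2, 4))
print(pose.shape)  # torch.Size([2, 7])

Concatenating the branch features before a single regression head is the simplest possible fusion; the thesis's actual VI-UWB fusion scheme is described in Section 3.3 and may differ.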
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: U0001-3107202217384000.pdf, Size: 4.86 MB, Format: Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
