Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98411

| Title: | Distributed Feature-Based Monocular Visual Localization with Server-Edge Collaboration in the Wild |
| Authors: | 陳欣妤 Hsin-Yu Chen |
| Advisor: | 簡韶逸 Shao-Yi Chien |
| Keywords: | Visual localization, Visual odometry |
| Publication Year: | 2025 |
| Degree: | Master |
| Abstract: | Precise 6-Degree-of-Freedom (6-DoF) camera localization is crucial for a wide range of applications, including autonomous driving, mobile robotics, and augmented reality, as it ensures reliable operation and enhances overall system effectiveness. Single-image localization determines the 6-DoF camera pose for a given query image within a known scene. However, achieving accurate localization in large-scale environments with significant appearance changes is particularly challenging. Methods that provide robust results often require intensive computational resources, making them difficult to run on edge devices. Visual odometry can recover relative camera poses from consecutive frames in real time; while traditional methods achieve real-time relative pose estimation, they require loop closure and bundle adjustment to mitigate drift. In this thesis, we propose a feature-based monocular visual localization system that combines single-image localization on the server with visual odometry on an edge device. By leveraging the continuity of consecutive frames provided by the edge device, our system mitigates the risk of single-image localization failures and uses the global map to prevent the drift associated with visual odometry. Our system demonstrates efficiency while maintaining accuracy comparable to state-of-the-art localization algorithms. Furthermore, deep learning models are employed exclusively for feature detection and matching, eliminating the need to train new models for different scenes. This characteristic enhances the system's adaptability, enabling easy deployment in new environments. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98411 |
| DOI: | 10.6342/NTU202501939 |
| Full-Text Permission: | Not authorized |
| Electronic Full-Text Release Date: | N/A |
| Appears in Collections: | Graduate Institute of Electronics Engineering |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (Restricted Access) | 29.61 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
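The server-edge collaboration described in the abstract can be illustrated with a minimal sketch. This is not the thesis's actual implementation; all function names are hypothetical and the relative/global poses are toy values. The edge device integrates relative poses from visual odometry (which drifts over time), and each global pose returned by the server's single-image localizer re-anchors the trajectory to the global map:

```python
import numpy as np

def vo_step(pose, rel):
    """Compose the current absolute pose with a relative VO pose (4x4 SE(3) matrices)."""
    return pose @ rel

def server_correction(vo_pose, server_pose):
    """Hypothetical drift correction: compute the rigid transform that maps the
    drifted VO estimate onto the server's globally localized pose."""
    return server_pose @ np.linalg.inv(vo_pose)

# Edge-side loop (sketch): integrate relative poses; when a server result
# arrives, re-anchor the trajectory to the global map.
pose = np.eye(4)                     # start at the map origin
rel = np.eye(4); rel[0, 3] = 1.0     # toy relative motion: 1 m along x per frame
for _ in range(5):
    pose = vo_step(pose, rel)        # VO accumulates (and drifts in practice)

server_pose = np.eye(4); server_pose[0, 3] = 4.8   # global fix from the server
correction = server_correction(pose, server_pose)
pose = correction @ pose             # trajectory now agrees with the global map
```

The same correction transform would also be applied to subsequent VO outputs until the next server fix arrives, which is how a global map can suppress odometry drift without running the heavy localization model on the edge device.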
