NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92147
Title: Error-State Kalman Filter Based Visual-Inertial Odometry and Distributive Formation Control of Unmanned Aerial Vehicles for Tracking Human Target in Aerial Cinematography
Author: Chao-Wei Chang (張晁維)
Advisor: Feng-Li Lian (連豊力)
Keywords: Kalman Filter, Visual Inertial Odometry, Formation Control, Geometric Control, Autonomous Aerial Cinematography
Publication Year: 2024
Degree: Master's
Abstract:
In recent years, rapid progress in the low-level autonomy and control of drones has brought aerial cinematography into the film industry, most visibly in capturing dynamic scenes with a single camera-equipped drone. However, multiple drones are seldom deployed simultaneously to capture a dynamic scene from multiple perspectives, a common practice in outstanding film productions, owing to the limited maturity of multi-drone aerial cinematography techniques. This thesis therefore proposes a multi-drone autonomous aerial cinematography system that addresses three core issues of the multi-drone aerial cinematography problem: vision-based robocentric state estimation, camera positioning parameters that define a desired camera formation, and a formation tracking controller that drives the drones along the desired camera trajectories.
Existing work on multi-drone aerial cinematography mostly relies on accurate external positioning facilities in indoor environments. To overcome the restrictions imposed by such facilities, an error-state Kalman filter (ESKF) based visual-inertial odometry (VIO) is applied to the drones and the human actor to derive smooth estimates of their motion without external equipment. Existing machine-learning-based computer vision techniques for face recognition, facial landmark detection, and marker pose estimation are adopted to obtain relative pose measurements from the onboard cameras.
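To make the filter structure concrete, here is a minimal sketch of the error-state Kalman filter cycle, reduced to position and velocity states with a direct position fix from the vision pipeline. The thesis's full filter also estimates orientation and IMU biases; the class name, state layout, and noise values below are illustrative assumptions, not the thesis's implementation.

    import numpy as np

    class MiniESKF:
        """Toy error-state KF over position/velocity only (illustrative)."""

        def __init__(self, p0, v0):
            self.p = np.asarray(p0, dtype=float)  # nominal position
            self.v = np.asarray(v0, dtype=float)  # nominal velocity
            self.P = np.eye(6) * 0.1              # error covariance, x = [dp, dv]
            self.Q = np.eye(6) * 1e-3             # process noise (assumed tuning)
            self.R = np.eye(3) * 1e-2             # vision measurement noise (assumed)

        def predict(self, acc, dt):
            # Propagate the nominal state with the measured acceleration.
            self.p = self.p + self.v * dt + 0.5 * acc * dt ** 2
            self.v = self.v + acc * dt
            # Error-state transition for the [dp, dv] block.
            F = np.eye(6)
            F[:3, 3:] = np.eye(3) * dt
            self.P = F @ self.P @ F.T + self.Q * dt

        def correct(self, p_meas):
            # The vision pipeline observes position directly: H = [I 0].
            H = np.hstack([np.eye(3), np.zeros((3, 3))])
            S = H @ self.P @ H.T + self.R
            K = self.P @ H.T @ np.linalg.inv(S)
            dx = K @ (p_meas - self.p)             # estimated error state
            self.p += dx[:3]                       # inject error into nominal state
            self.v += dx[3:]
            self.P = (np.eye(6) - K @ H) @ self.P  # ...and reset the error to zero

    # Usage: predict at IMU rate, correct whenever a vision fix arrives.
    f = MiniESKF([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
    f.predict(np.array([0.0, 0.0, 0.1]), dt=0.01)
    f.correct(np.array([0.01, 0.0, 0.0]))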
Whereas camera positioning parameters in the existing literature are defined only for a single human actor and a single drone, this thesis defines a set of camera positioning parameters that assign the desired relative pose between the human actor and each drone, emulating the desired camera formations described in the cinematographic literature. A sketch of one such mapping follows.
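As one way such parameters could map to per-drone camera poses, the sketch below places each camera on a sphere around the actor using a shot distance, an azimuth measured from the actor's heading, and an elevation angle, then aims the optical axis back at the actor. The function and parameter names (rho, theta, phi) and the spherical convention are assumptions for illustration, not the thesis's parameter set.

    import numpy as np

    def desired_camera_pose(p_actor, yaw_actor, rho, theta, phi):
        """Desired camera position and viewing direction for one drone.
        rho: shot distance, theta: azimuth about the actor's heading,
        phi: elevation (tilt) angle -- all names are illustrative."""
        a = yaw_actor + theta  # azimuth follows the actor's heading
        offset = rho * np.array([np.cos(phi) * np.cos(a),
                                 np.cos(phi) * np.sin(a),
                                 np.sin(phi)])
        p_cam = np.asarray(p_actor, dtype=float) + offset
        look = np.asarray(p_actor, dtype=float) - p_cam  # optical axis aims at actor
        return p_cam, look / np.linalg.norm(look)

    # Three drones sharing one distance/elevation, spread evenly in azimuth:
    for theta in (0.0, 2 * np.pi / 3, -2 * np.pi / 3):
        print(desired_camera_pose([0.0, 0.0, 1.7], 0.0,
                                  rho=3.0, theta=theta, phi=np.deg2rad(20)))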
For formation tracking, a distributive gradient-based formation controller with cost functions defined on $\mathfrak{se}(3)$ is proposed to track the desired camera formation relative to the human actor.
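The following sketch illustrates the gradient-descent idea under simplifying assumptions: the pose error is packed into a twist-like vector xi (SO(3) logarithm for rotation, decoupled translation), the quadratic cost 0.5 * ||xi||^2 has gradient xi, and each drone steps its pose along -xi. The thesis's controller defines its cost on $\mathfrak{se}(3)$ and runs distributively across drones; the decoupled error, gains, and example poses here are illustrative.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def se3_error(T_cur, T_des):
        """Pose error as a 6-vector [translation, rotation]: the rotation part
        is the SO(3) logarithm; the translation part is decoupled, which is a
        simplification of the full SE(3) logarithm used in the thesis."""
        omega = Rotation.from_matrix(T_des[:3, :3].T @ T_cur[:3, :3]).as_rotvec()
        rho = T_cur[:3, 3] - T_des[:3, 3]
        return np.concatenate([rho, omega])

    def formation_step(T_cur, T_des, gain=1.0, dt=0.05):
        """One descent step: the cost 0.5 * ||xi||^2 has gradient xi, so step
        the pose along -xi (translation additively, rotation via the exp map)."""
        xi = se3_error(T_cur, T_des)
        T_new = T_cur.copy()
        T_new[:3, 3] = T_cur[:3, 3] - gain * dt * xi[:3]
        T_new[:3, :3] = T_cur[:3, :3] @ Rotation.from_rotvec(-gain * dt * xi[3:]).as_matrix()
        return T_new

    # One drone converging to an assumed desired pose relative to the actor:
    T_des = np.eye(4); T_des[:3, 3] = [3.0, 0.0, 2.0]
    T = np.eye(4); T[:3, :3] = Rotation.from_euler('z', 1.0).as_matrix()
    for _ in range(200):
        T = formation_step(T, T_des)
    print(np.linalg.norm(se3_error(T, T_des)))  # -> near zero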
To verify the ESKF-based VIO on the motions of three commercial drones and a human actor, experiments are conducted that compare the estimated positions against ground truth from a space-fixed LiDAR. The proposed distributive formation controller is validated in numerical simulations over a variety of scenarios, using the same numbers of drones and actors and the same initial states as the real-world experiments. Experimental results demonstrate that the proposed ESKF-based VIO outperforms the original ESKF in both position and orientation estimates for a single drone, and that the estimated odometry of the human actor recovers smooth and accurate position estimates after detection failures. In addition, convergence of the formation tracking under a noisy line-of-action (LoA) linear velocity of the human actor in the numerical simulations demonstrates the feasibility of the proposed distributive formation controller.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92147
DOI: 10.6342/NTU202400587
Full-Text License: Not authorized
Appears in Collections: Department of Electrical Engineering

Files in This Item:
File: ntu-112-1.pdf (not authorized for public access)
Size: 50.57 MB
Format: Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
