Please use this Handle URI to cite this document:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72056
Title: | Stabilization of Acquired Environmental Information for the Claw Wheel Robot |
Author: | Szu-Yu Lin (林思妤) |
Advisor: | Jui Jen Chou (周瑞仁) |
Keywords: | Obstacle surmounting, search, rescue, robot, video stabilization |
Publication Year: | 2018 |
Degree: | Master's |
Abstract: | This research develops a computer-vision-based video stabilization system, aided by an IMU (Inertial Measurement Unit), for the stair-climbing "Claw-Wheel" robot, intended for search and rescue in dangerous or disaster sites. The Claw-Wheel Robot is designed for exploration in scenarios that are hazardous or inaccessible to humans, such as damaged buildings, disaster sites, and wilderness. It features a "folding transformation mechanism" that lets it switch between two motion modes: a "wheel mode" for moving rapidly across flat ground, and a "claw mode" for climbing and ascending rough terrain or stairs. The robot also has a simple structure and few actuators, which keeps its system and control architecture relatively simple.

The Claw-Wheel Robot has two major functionalities. The first is dynamic mobility, which enables the robot to enter disaster or dangerous sites. Over several years of development and refinement, our research team has made the robot capable of maneuvering across various natural and artificial terrains, including flat ground, rugged terrain, stairs, and amphibious environments. After reaching a site, the robot relies on its second functionality: capturing scene footage with a video device and returning it to the remote workstation to aid the operators controlling the robot. However, because of random terrain, obstacles, and the geometric shape of the climbing claws, the raw footage captured by the on-board camera shakes considerably, so the video must be stabilized before being sent back to the workstation.

Therefore, this research develops a real-time video stabilization system that combines robot pose estimation with feature-point tracking. The stabilization framework has two stages. The first stage compensates for predictable perspective deviation between frames using sensor feedback: readings from the IMU and motor encoders, together with the robot's geometric configuration, claw pose-angle functions, motion characteristics, and a Kalman filter, are used to estimate the approximate position and orientation of the camera mounted at the center of the rear body, and the measurable perspective variation is then compensated. The second stage handles arbitrary, unpredictable shaking, such as fine terrain irregularities and mechanical play in the chassis. We detect feature points in the captured image sequence, track their positions and relative displacements across consecutive frames, and apply the RANSAC algorithm to obtain an accurate "motion field" of the image content. Finally, a "cropping window", which becomes the output video, is moved along the motion field so that the relative positions of the feature points and frame content stay constant, reducing shake and stabilizing the video.

This research has successfully constructed the real-time video stabilization framework and system for the Claw-Wheel Robot, and has also upgraded the on-board mechatronics by installing a video device, an IMU, and motor control modules to meet mission requirements. Under several experimental scenarios, the real-time video stabilization system significantly reduces the shaking in the captured video. The system can also be adapted to similar mobile platforms equipped with motion sensor feedback and video devices. |
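The first stage described in the abstract fuses IMU and encoder feedback through a Kalman filter to estimate the camera's pose. As a rough illustration only (not the thesis's implementation: the one-dimensional pitch state, function name, and noise parameters are all assumptions), a minimal Kalman filter that fuses a gyro rate with a noisy accelerometer-derived angle might look like:

```python
import numpy as np

def kalman_pitch(gyro_rates, accel_angles, dt=0.02, q=1e-4, r=1e-2):
    """Estimate camera pitch over time with a 1-D Kalman filter.

    Predict by integrating the gyro rate; update with the noisy
    angle derived from the accelerometer. q and r are assumed
    process and measurement noise variances."""
    angle, p = accel_angles[0], 1.0      # initial state and covariance
    estimates = []
    for w, z in zip(gyro_rates, accel_angles):
        # Predict: integrate the gyro rate over one time step
        angle += w * dt
        p += q
        # Update: correct with the accelerometer angle measurement
        k = p / (p + r)                  # Kalman gain
        angle += k * (z - angle)
        p *= (1.0 - k)
        estimates.append(angle)
    return np.array(estimates)
```

In the thesis, the estimated camera pose would then drive the perspective compensation; here the filter simply smooths a noisy angle toward the true value.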
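The second stage tracks feature points, estimates the motion field with RANSAC, and moves a cropping window along it. A minimal sketch of that idea, assuming a pure-translation motion model and pre-matched point correspondences (the function names and parameters are illustrative, not from the thesis):

```python
import numpy as np

def ransac_translation(prev_pts, curr_pts, iters=200, thresh=2.0, seed=0):
    """Estimate the dominant frame-to-frame translation ("motion field")
    from tracked feature points, rejecting outliers with RANSAC."""
    rng = np.random.default_rng(seed)
    flows = curr_pts - prev_pts                 # per-feature motion vectors
    best_inliers = np.zeros(len(flows), dtype=bool)
    for _ in range(iters):
        cand = flows[rng.integers(len(flows))]  # hypothesis from one sample
        inliers = np.linalg.norm(flows - cand, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return flows[best_inliers].mean(axis=0)     # refit on the inlier set

def move_crop_window(origin, motion):
    """Shift the cropping window along the estimated motion so the
    cropped output keeps the scene content at a fixed position."""
    return origin + motion
```

Points on independently moving objects produce flow vectors far from the dominant translation, so RANSAC discards them; the crop window then follows only the camera's own shake.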
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72056 |
DOI: | 10.6342/NTU201803987 |
Full-text License: | Fee-based authorization |
Appears in Department: | Department of Bio-Industrial Mechatronics Engineering |
Files in this item:
File | Size | Format |
---|---|---|
ntu-107-1.pdf (not currently authorized for public access) | 8.05 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.