Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91723
Title: | Development of Multi-Camera Visual Servo System for a Three-Axial Pyramidal Pneumatic Parallel Manipulator |
Author: | 陳冠廷 Kuan-Ting Chen |
Advisor: | 江茂雄 Mao-Hsiung Chiang |
Keywords: | Multi-camera system, Image-based object detection, Three-dimensional position localization, Visual servo control, Trajectory planning |
Publication Year: | 2024 |
Degree: | Master's |
Abstract: | With the advancement of computer vision and robotics technology, the application of multi-camera systems to object recognition and position localization has gained increasing attention. This thesis focuses on using a multi-camera system to acquire the three-dimensional position of target objects, and integrates a visual servo controller to achieve picking and placing of static objects as well as tracking of dynamic objects. Based on the camera's position relative to the robotic arm, the system is categorized into "Eye to hand" (camera mounted at a fixed location) and "Eye in hand" (camera mounted at the arm's end effector). In the "Eye to hand" configuration, an RGB-D (Red, Green, Blue & Depth) camera is employed and object detection is performed with color-space methods: the RGB color space is converted to HSV (Hue, Saturation, Value), and a morphological closing operation is applied to establish the complete contour of the target object.
Subsequently, the camera-frame three-dimensional coordinates of the target object's center are determined from the pixel coordinates of the contour's centroid and the corresponding depth from the RGB-D camera. In the "Eye in hand" configuration, two industrial cameras form a stereo vision setup, and object detection is based on ORB (Oriented FAST and Rotated BRIEF) feature points. Descriptors of the detected feature points undergo KNN brute-force matching, and a projective transformation matrix is robustly estimated using RANSAC (RANdom SAmple Consensus). The camera-frame three-dimensional coordinates of the target object's center are then calculated with the stereo disparity method. To integrate the target's three-dimensional position with the three-axis pneumatic parallel robotic arm, a series of experiments is conducted. First, a hand-eye calibration experiment transforms the camera-estimated positions into the coordinate system of the robotic arm's base, and an Extended Kalman Filter (EKF) fuses the estimates from the multiple cameras into a reference position. Next, a position-based visual servo controller generates reference velocities for the robotic arm's end effector. Finally, considering the difference between the processing frequency of the vision system (25 Hz) and the control frequency of the robotic arm (1000 Hz), a fifth-order Bézier curve is employed for path planning, generating a time-parameterized trajectory that tracks the reference velocity. The feasibility of the proposed multi-camera visual servo system is validated on a three-axial pyramidal pneumatic parallel manipulator. The experiments consist of two parts: first, recognizing and localizing target objects with RGB-D vision and stereo vision.
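The stereo disparity step can be illustrated with the standard rectified-stereo and pinhole-camera relations. The focal length, baseline, and pixel coordinates below are illustrative assumptions, not the thesis's calibration values:

```python
import numpy as np

def stereo_point_3d(uL, vL, uR, fx, fy, cx, cy, baseline):
    """Triangulate a rectified stereo correspondence into camera-frame XYZ.

    For rectified cameras, depth follows from the disparity d = uL - uR:
        Z = fx * baseline / d
    and the pinhole model back-projects the left-image pixel:
        X = (uL - cx) * Z / fx,   Y = (vL - cy) * Z / fy
    """
    d = uL - uR
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = fx * baseline / d
    X = (uL - cx) * Z / fx
    Y = (vL - cy) * Z / fy
    return np.array([X, Y, Z])

# Illustrative parameters: 600 px focal length, 0.10 m baseline.
p = stereo_point_3d(uL=350.0, vL=260.0, uR=320.0,
                    fx=600.0, fy=600.0, cx=320.0, cy=240.0, baseline=0.10)
```

With a 30 px disparity this yields a depth of 2.0 m; matched ORB keypoint pairs (after the RANSAC filtering mentioned above) supply the `uL`/`uR` correspondences.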
Second, integrating the visual servo system through hand-eye calibration, multi-camera estimation, visual servo control, and fifth-order Bézier curve trajectory planning. Finally, the system's feasibility is successfully demonstrated in two visual servo applications: "static object picking and placing" and "dynamic object tracking." Multi-camera estimation gives the system the ability to handle temporary occlusion and loss of the tracked object. |
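The multi-camera fusion step can be sketched as follows: when each camera directly observes the target position (identity observation model), the EKF measurement update reduces to the linear Kalman update below. The prior, measurements, and noise covariances are illustrative assumptions:

```python
import numpy as np

def kf_update(x, P, z, R):
    """One Kalman measurement update with H = I (camera observes position)."""
    S = P + R                        # innovation covariance
    K = P @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - x)          # corrected state estimate
    P_new = (np.eye(len(x)) - K) @ P # reduced uncertainty
    return x_new, P_new

# Vague prior, then two camera measurements (metres) with different noise.
x, P = np.zeros(3), np.eye(3) * 1.0
z_rgbd,   R_rgbd   = np.array([0.50, 0.20, 0.80]), np.eye(3) * 0.01
z_stereo, R_stereo = np.array([0.52, 0.19, 0.82]), np.eye(3) * 0.02
x, P = kf_update(x, P, z_rgbd, R_rgbd)      # fuse the RGB-D estimate
x, P = kf_update(x, P, z_stereo, R_stereo)  # fuse the stereo estimate
```

Because each camera's update is independent, a temporarily occluded camera simply contributes no measurement that cycle while the other keeps the estimate alive, which is the occlusion-handling behavior the abstract describes.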
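The fifth-order Bézier planning step can be illustrated as follows; the control points are illustrative assumptions (repeating the first two and last two control points yields zero velocity at the trajectory's boundaries):

```python
import numpy as np
from math import comb

def quintic_bezier(ctrl, t):
    """Evaluate a fifth-order Bezier curve at parameter values t in [0, 1].

    B(t) = sum_{i=0}^{5} C(5, i) * (1 - t)^(5 - i) * t^i * P_i
    """
    ctrl = np.asarray(ctrl, float)            # shape (6, dim)
    t = np.atleast_1d(np.asarray(t, float))
    B = np.zeros((len(t), ctrl.shape[1]))
    for i in range(6):
        basis = comb(5, i) * (1 - t) ** (5 - i) * t ** i
        B += basis[:, None] * ctrl[i]
    return B

# Illustrative control points from (0, 0, 0) to (1, 1, 0.5); duplicated
# endpoints make B'(0) = B'(1) = 0 for smooth starts and stops.
P = [(0, 0, 0), (0, 0, 0), (0.2, 0.5, 0.1),
     (0.8, 0.5, 0.4), (1, 1, 0.5), (1, 1, 0.5)]
traj = quintic_bezier(P, np.linspace(0.0, 1.0, 11))
```

Sampling `t` on the arm's 1000 Hz control grid between consecutive 25 Hz vision updates produces the time-parameterized trajectory the abstract describes.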
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91723 |
DOI: | 10.6342/NTU202400501 |
Full-text license: | Not authorized |
Appears in Collections: | Department of Engineering Science and Ocean Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-112-1.pdf (currently not authorized for public access) | 10.01 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated in their license terms.