Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49161
Title: | 具時間延遲補償之視覺伺服於人型移動式機械臂 (Visual Servoing with Time-delay Compensation for Humanoid Mobile Manipulator) |
Author: | Jiang-Yuan Chang (張江元) |
Advisor: | 傅立成 (Li-Chen Fu) |
Keywords: | visual servo control, mobile manipulator, time-delay compensation, indirect velocity control |
Publication Year: | 2016 |
Degree: | Master's |
Abstract: | Visual servoing is a form of control that directly combines visual feedback with motion control, coupling the robot's actions robustly with its perception of the environment. It is therefore advisable to incorporate visual servoing when commanding a mobile manipulator to perform a complex task, say, moving to and picking up objects in a household environment. In practice, a visual servo system usually suffers from time delay, likely caused by long image processing and data transmission times due to limited computational capability and tight communication bandwidth. For our specific humanoid mobile manipulator, the visual servo system is subject to two main limitations: one is the large time delay due to image transmission, and the other is the inability to directly command each joint velocity as the visual servo's control input. In this thesis, we propose a novel visual servo system to solve the problems mentioned above: we use the time integral of velocity, together with image estimation and prediction based on the kinematic model and the motion of the base, to compensate for the time delay. We also propose a framework for the visual servo system on an omnidirectional-wheel robot to govern the movements of approaching and picking up a target object on a table. For object detection and localization, we employ a model-based approach from the open-source ViSP library together with an RGB camera.
However, due to the limited performance of object localization, we also use the depth image in addition to the RGB image to acquire a rough position estimate when the robot is far from the object. When the object is within the viewing range of the robot's RGB camera, two types of control commands are generated: one is head control using image-based visual servoing to keep the object within the camera's field of view throughout the approaching phase, and the other is hand control leveraging the solution of the inverse kinematics problem so that the hand moves to the desired position and orientation to grasp the object. To address the uncertainty of the kinematic parameters and to achieve higher grasping accuracy, we deliberately place a landmark on the manipulator's hand to compensate online for grasping errors caused by uncalibrated mechanism errors. We evaluate the proposed approach and framework through several experiments on a real wheeled humanoid robot. |
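The delay-compensation idea described in the abstract — correcting a delayed measurement by integrating the velocities commanded during the delay — can be illustrated with a minimal sketch. All names and numbers here are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def predict_current_position(delayed_position, velocity_buffer, dt):
    """Predict the current position from a measurement delayed by
    len(velocity_buffer) control cycles, by adding the integral
    (here, a discrete sum) of the velocities commanded meanwhile."""
    correction = np.sum(np.asarray(velocity_buffer), axis=0) * dt
    return np.asarray(delayed_position) + correction

# Example: measurement delayed by 3 cycles of 0.1 s, with a constant
# commanded velocity of 0.2 m/s along x during the delay.
delayed = [1.0, 0.0]
commands = [[0.2, 0.0], [0.2, 0.0], [0.2, 0.0]]
predicted = predict_current_position(delayed, commands, 0.1)
print(predicted)  # predicted position, approximately [1.06, 0.0]
```

In the thesis the correction additionally uses the kinematic model and the base motion to predict the image features themselves; the sketch above only shows the underlying velocity-integration step.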
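The head control uses image-based visual servoing (IBVS), whose classic form drives the image-plane error e = s − s* to zero with a velocity command v = −λ·L⁺·e. As a hedged sketch, assume a simplified 2-DoF pan/tilt head for which the interaction matrix reduces to the identity; the function and gain names are assumptions for illustration:

```python
import numpy as np

def ibvs_gaze_command(feature, desired, gain=0.5):
    """Proportional IBVS law v = -gain * (s - s*), assuming the
    interaction matrix of a 2-DoF pan/tilt head is the identity.
    Driving this error to zero keeps the target centered in view."""
    error = np.asarray(feature, dtype=float) - np.asarray(desired, dtype=float)
    return -gain * error

# Target currently at (0.2, -0.1) in normalized image coordinates,
# desired at the image center (0, 0):
cmd = ibvs_gaze_command([0.2, -0.1], [0.0, 0.0])
print(cmd)  # pan/tilt rate command, approximately [-0.1, 0.05]
```

The real system would use the full point-feature interaction matrix (as provided by ViSP) rather than the identity, but the proportional structure of the law is the same.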
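The hand control solves an inverse kinematics problem to place the hand at a desired pose. As a toy stand-in for the manipulator IK (the thesis uses the robot's full arm model; the 2-link planar arm below is purely illustrative), the closed-form elbow solution looks like:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a planar 2-link arm:
    returns joint angles (theta1, theta2) reaching point (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # clamp against rounding error
    theta2 = math.acos(c2)
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Verify by forward kinematics: reach the target (1.2, 0.5).
t1, t2 = two_link_ik(1.2, 0.5)
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
print(round(fx, 3), round(fy, 3))  # -> 1.2 0.5
```

The landmark on the hand mentioned in the abstract then serves to correct, online, the residual error between this model-based solution and the actual mechanism.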
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49161 |
DOI: | 10.6342/NTU201603284 |
Full-text Authorization: | Paid authorization |
Appears in Collections: | Department of Electrical Engineering |
Files in This Item:
File | Size | Format |
---|---|---|
ntu-105-1.pdf (currently not authorized for public access) | 9.41 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their individual license terms.