Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49161

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 傅立成(Li-Chen Fu) | |
| dc.contributor.author | Jiang-Yuan Chang | en |
| dc.contributor.author | 張江元 | zh_TW |
| dc.date.accessioned | 2021-06-15T11:17:50Z | - |
| dc.date.available | 2019-11-02 | |
| dc.date.copyright | 2016-11-02 | |
| dc.date.issued | 2016 | |
| dc.date.submitted | 2016-08-18 | |
| dc.identifier.citation | [1] Ying Wang, Haoxiang Lang, and Clarence W. de Silva. A hybrid visual servo controller for robust grasping by wheeled mobile robots. IEEE/ASME Transactions on Mechatronics, 15(5):757–769, 2010.
[2] Moslem Kazemi, Kamal Gupta, and Mehran Mehrandezh. Path planning for image-based control of wheeled mobile manipulators. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5306–5312, 2012.
[3] William G. Pence, Fabian Farelo, Redwan Alqasemi, Yu Sun, and Rajiv Dubey. Visual servoing control of a 9-DoF WMRA to perform ADL tasks. In IEEE International Conference on Robotics and Automation (ICRA), pages 916–922, 2012.
[4] Amine Abou Moughlbay, Enric Cervera, and Philippe Martinet. Real-time model based visual servoing tasks on a humanoid robot. In Intelligent Autonomous Systems 12, pages 321–333. Springer, 2013.
[5] Ying Wang, Guan-lu Zhang, Haoxiang Lang, Bashan Zuo, and Clarence W. de Silva. A modified image-based visual servo controller with hybrid camera configuration for robust robotic grasping. Robotics and Autonomous Systems, 62(10):1398–1407, 2014.
[6] Freek Liefhebber and Joris Sijs. Vision-based control of the Manus using SIFT. In IEEE 10th International Conference on Rehabilitation Robotics, pages 854–861, 2007.
[7] Yong Jiang, Ning Xi, Qin Zhang, and Yunyi Jia. Target object identification and localization in mobile manipulations. In IEEE International Conference on Robotics and Biomimetics (ROBIO), pages 144–149, 2011.
[8] Haiyan Wu, Lei Lou, Chih-Chung Chen, Sandra Hirche, and Kolja Kühnlenz. A framework of networked visual servo control system with distributed computation. In IEEE 11th International Conference on Control Automation Robotics & Vision (ICARCV), pages 1466–1471, 2010.
[9] Haiyan Wu, Chih-Chung Chen, Jiayun Feng, Kolja Kühnlenz, and Sandra Hirche. A switching control law for a networked visual servo control system. In IEEE International Conference on Robotics and Automation (ICRA), pages 5556–5563, 2010.
[10] Haiyan Wu, Lei Lou, Chih-Chung Chen, Sandra Hirche, and Kolja Kühnlenz. Performance-oriented networked visual servo control with sending rate scheduling. In IEEE International Conference on Robotics and Automation (ICRA), pages 6180–6185, 2011.
[11] Haiyan Wu, Lei Lou, Chih-Chung Chen, Sandra Hirche, and Kolja Kühnlenz. Cloud-based networked visual servo control. IEEE Transactions on Industrial Electronics, 60(2):554–566, 2013.
[12] Itsushi Kinbara, Satoshi Komada, and Junji Hirai. Visual servo of active cameras and manipulators by time delay compensation of image features with simple on-line calibration. In IEEE/SICE-ICASE International Joint Conference, pages 5317–5322, 2006.
[13] Fei Li and Hua-Long Xie. Sliding mode variable structure control for visual servoing system. International Journal of Automation and Computing, 7(3):317–323, 2010.
[14] Haifeng Li, Jingtai Liu, Yan Li, Xiang Lu, and Lei Sun. Visual servo of uncalibrated eye-in-hand system with time-delay compensation. In IEEE 8th World Congress on Intelligent Control and Automation (WCICA), pages 1322–1328, 2010.
[15] Chang Liu, Xinhan Huang, and Min Wang. Target tracking for visual servoing systems based on an adaptive Kalman filter. International Journal of Advanced Robotic Systems, 9, 2012.
[16] Chung-Yen Lin, Cong Wang, and Masayoshi Tomizuka. Visual tracking with sensing dynamics compensation using the Expectation-Maximization algorithm. In IEEE American Control Conference, pages 6281–6286, 2013.
[17] Binh Minh Nguyen, Wataru Ohnishi, Yafei Wang, Hiroshi Fujimoto, Yoichi Hori, Kiyoto Ito, Masaki Odai, Hironori Ogawa, Erii Takano, Tomohiro Inoue, et al. Dual rate Kalman filter considering delayed measurement and its application in visual servo. In IEEE 13th International Workshop on Advanced Motion Control (AMC), pages 494–499, 2014.
[18] François Chaumette and Seth Hutchinson. Visual servo control, Part I: Basic approaches. IEEE Robotics & Automation Magazine, 13(4):82–90, 2006.
[19] François Chaumette and Seth Hutchinson. Visual servo control, Part II: Advanced approaches. IEEE Robotics & Automation Magazine, 14(1):109–118, 2007.
[20] Éric Marchand, Fabien Spindler, and François Chaumette. ViSP for visual servoing: a generic software platform with a wide class of robot control skills. IEEE Robotics & Automation Magazine, 12(4):40–52, 2005.
[21] SoftBank Group. Pepper. https://www.ald.softbankrobotics.com/en/coolrobots/pepper, 2016.
[22] Aldebaran software 2.4.3.28 documentation. http://doc.aldebaran.com/2-4/, 2016.
[23] G. Claudio and F. Spindler. ViSP-naoqi bridge library. http://jokla.github.io/vispnaoqi/, 2016.
[24] J. Denavit and R. S. Hartenberg. A kinematic notation for lower-pair mechanisms based on matrices. Trans. of the ASME, Journal of Applied Mechanics, 22:215–221, 1955.
[25] Richard Scheunemann Hartenberg and Jacques Denavit. Kinematic synthesis of linkages. McGraw-Hill, 1964.
[26] Nikolaos Kofinas. Forward and inverse kinematics for the NAO humanoid robot. PhD thesis, Technical University of Crete, Greece, 2012.
[27] Nikos Kofinas, Emmanouil Orfanoudakis, and Michail G. Lagoudakis. Complete analytical inverse kinematics for NAO. In IEEE 13th International Conference on Autonomous Robot Systems (Robotica), pages 1–6, 2013.
[28] Dmitry Berenson, James Kuffner, and Howie Choset. An optimization approach to planning for mobile manipulation. In IEEE International Conference on Robotics and Automation (ICRA), pages 1187–1192, 2008.
[29] Corey Goldfeder, Matei Ciocarlie, Hao Dang, and Peter K. Allen. The Columbia grasp database. In IEEE International Conference on Robotics and Automation (ICRA), pages 1710–1716, 2009.
[30] Ben Kehoe, Akihiro Matsukawa, Sal Candido, James Kuffner, and Ken Goldberg. Cloud-based robot grasping with the Google object recognition engine. In IEEE International Conference on Robotics and Automation (ICRA), pages 4263–4270, 2013.
[31] Georgios Evangelidis. ARma library: Pattern tracking for Augmented Reality. http://xanthippi.ceid.upatras.gr/people/evangelidis/arma/, 2016.
[32] Don Joven Agravante, Jordi Pages, and François Chaumette. Visual servoing for the REEM humanoid robot's upper body. In IEEE International Conference on Robotics and Automation (ICRA), pages 5253–5258, 2013.
[33] Giovanni Claudio, Fabien Spindler, and François Chaumette. Grasping by Romeo with visual servoing. In Journées Nationales de la Robotique Humanoïde (JNRH), 2015. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49161 | - |
| dc.description.abstract | 視覺伺服(visual servo)是一種直接將視覺資訊回授到運動控制中的控制方法。這使得機器人的運動可以強健地和它的環境感知結合在一起。所以,將視覺伺服結合到控制移動式機械臂中去完成一個複雜的任務看起來是明智的做法,即,在家庭環境中移動和抓取物體。對於一個視覺伺服系統來說,在實際環境中,經常會出現時間延遲的問題,這可能是因爲有限的運算能力和有限的傳輸帶寬造成了影像的傳輸和處理的時間過長。對於我們特定的人型移動式機械臂來說,它的視覺伺服主要有兩個限制,一個是影像傳輸造成的嚴重的時間延遲,另一個是我們不能直接控制到每個關節的速度來作爲視覺伺服的控制輸入。
在這篇論文中,我們提出了一種新奇的視覺伺服系統來解決上面提到的問題,我們利用機器人的運動學模型和底座運動來估測和預測影像資訊,並結合速度時間的積分來補償時間延遲。我們也提出了一個基於全向輪機器人的視覺伺服系統框架來控制機器人接近並抓取桌子上的目標物件。對於物件辨識和定位,我們使用來自開源軟體套件(ViSP) 中基於物件模型的方法,並用彩色相機來實現。然而,由於物件定位性能的限制,當機器人距離物件很遠時,我們還利用深度影像結合彩色影像來得到物件的近似位置。當物件在機器人彩色相機的視野範圍內時,主要會有兩種控制指令,一種是基於影像視覺伺服控制的頭部控制,在接近的過程中保持物件一直在相機的視野範圍內,另一種是基於逆向運動學(inverse kinematics)的手臂控制來移動手臂到想要的位置和角度去完成抓取的任務。爲了解決運動學模型不確定性和獲得更高精度的抓取,我們故意在機械臂的手上貼了一個標誌(landmark)來實時地修正機構誤差造成的抓取誤差。我們在實際的人型移動式機器人上做了很多實驗來驗證我們提出的方法和框架。 | zh_TW |
| dc.description.abstract | Visual servoing is a form of control that directly feeds visual information back into motion control. This makes the robot’s actions couple robustly with its perception of the environment. As a result, it is advisable to incorporate visual servoing when commanding a mobile manipulator to perform a complex task, say, moving to and picking up objects in a household environment. A practical visual servo system usually suffers from time delay, likely caused by long image processing and data transmission times due to limited computation capability and tight communication bandwidth. For our specific humanoid mobile manipulator, the visual servo system is subject to two main limitations: one is the large time delay due to image transmission, and the other is the inability to directly command each joint velocity as the visual servo’s control input.
In this thesis, we propose a novel visual servo system to solve the problems mentioned above: we use the integral of velocity together with image estimation and prediction based on the kinematic model and the motion of the base to compensate for the time delay. We also propose a framework for the visual servo system on an omnidirectional wheeled robot to govern the movements of approaching and picking up a target object on a table. For object detection and localization, we employ a model-based approach from an open-source package, the ViSP library, together with an RGB camera. However, due to the limited performance of the object localization, we also fuse the depth image with the RGB image to acquire a rough position estimate when the robot is far from the object. When the object is within the viewing range of the robot’s RGB camera, two types of control commands are generated: one is for head control, using image-based visual servoing to keep the object within the camera’s field of view throughout the approaching phase; the other is for hand control, leveraging the solution of the inverse kinematics problem so that the hand is moved to the desired position and orientation to fulfill the grasping task. To address the uncertainty of the kinematics parameters and to achieve higher grasping accuracy, we purposely place a landmark on the manipulator’s hand to compensate online for grasping errors caused by uncalibrated mechanism errors. We evaluate the proposed approach and framework through several experiments on a real wheeled humanoid robot. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-15T11:17:50Z (GMT). No. of bitstreams: 1 ntu-105-R03921090-1.pdf: 9638607 bytes, checksum: 905b31f13d6556eafb300f5c3bf4e80d (MD5) Previous issue date: 2016 | en |
| dc.description.tableofcontents | 1 Introduction 1
1.1 Motivation 1
1.2 Related Works 2
1.2.1 Visual Servoing on Mobile Manipulator 2
1.2.2 Visual Servoing with Time-delay Compensation 4
1.3 Challenges 6
1.4 Objectives 7
1.5 Thesis Organization 8
2 Preliminaries 9
2.1 Visual Servoing 9
2.1.1 Image-based Visual Servoing Control 10
2.1.2 ViSP Library 10
2.2 Pepper Robot 12
2.2.1 Hardware 12
2.2.2 Software 13
2.3 Kinematics Model 14
2.3.1 Definition of Reference Frames of Pepper 15
2.3.2 Robot Jacobian Matrix 16
2.3.3 Forward Kinematics of Pepper’s Right Arm 16
2.3.4 Inverse Kinematics of Pepper’s Right Arm 19
2.4 System Overview 22
3 Methodology 23
3.1 Movement Strategy with Visual Servoing of Mobile Manipulator 23
3.1.1 Model-based Object Detection and Localization 23
3.1.2 Fusing Depth and Colour for Object Detection and Localization 25
3.1.3 IBVS Head Gaze Control for Visibility 28
3.1.4 Time-delay Compensation for Visual Servoing 29
3.1.5 Indirect Velocity Control for Visual Servoing 32
3.1.6 Implementation of the IBVS 35
3.2 Vision-based Grasping of Mobile Manipulator 36
3.2.1 Tuning Mobile Base for Grasping 36
3.2.2 Motion Control of the Lower Body for Grasping 38
3.2.3 Grasping Trajectory Generation 39
3.2.4 Landmark Calibration for Mechanism Errors 40
4 Experiments 43
4.1 Experiment Setting 43
4.2 Evaluation of IBVS Head Control with Time-delay Compensation 44
4.2.1 Simulation of IBVS Head Control 45
4.2.2 Evaluation in Actual Environment 47
4.3 Evaluation of Sensor Fusion 51
4.4 Evaluation of Grasping with Landmark 54
4.5 Overall Scenario Testing 56
5 Conclusions 57
5.1 Future work 58
A Kinematics Model 59
A.1 Forward Kinematics of Pepper’s Right Arm 59
A.2 Forward Kinematics from Neck to Camera 60
A.3 Robot Jacobian Matrices 60
Reference 62 | |
| dc.language.iso | en | |
| dc.subject | 間接速度控制 | zh_TW |
| dc.subject | 視覺伺服控制 | zh_TW |
| dc.subject | 移動式機械臂 | zh_TW |
| dc.subject | 時間延遲補償 | zh_TW |
| dc.subject | visual servo control | en |
| dc.subject | indirect velocity control | en |
| dc.subject | time-delay compensation | en |
| dc.subject | mobile manipulator | en |
| dc.title | 具時間延遲補償之視覺伺服於人型移動式機械臂 | zh_TW |
| dc.title | Visual Servoing with Time-delay Compensation for Humanoid Mobile Manipulator | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 104-2 | |
| dc.description.degree | Master’s | |
| dc.contributor.oralexamcommittee | 林沛群(Pei-Chun Lin),簡忠漢(Jong-Hann Jean),張文中(Wen-Chung Chang),范欽雄(Chin-Shyurng Fahn) | |
| dc.subject.keyword | 視覺伺服控制,移動式機械臂,時間延遲補償,間接速度控制 | zh_TW |
| dc.subject.keyword | visual servo control,mobile manipulator,time-delay compensation,indirect velocity control | en |
| dc.relation.page | 66 | |
| dc.identifier.doi | 10.6342/NTU201603284 | |
| dc.rights.note | Paid authorization | |
| dc.date.accepted | 2016-08-20 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| Appears in Collections: | Department of Electrical Engineering | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-105-1.pdf (Restricted Access) | 9.41 MB | Adobe PDF |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
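The abstract above describes compensating image-transmission delay by integrating the commanded velocity to predict the current image features before computing the control law. A minimal sketch of that idea for a single point feature, using the standard interaction matrix for image-based visual servoing from Chaumette and Hutchinson [18]; the gain, the constant-depth assumption, and the velocity-history bookkeeping here are illustrative assumptions, not the thesis’s actual implementation:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image-point feature (x, y) at depth Z
    (standard form for image-based visual servoing)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def predict_feature(s_delayed, Z, v_history, dt):
    """Propagate a delayed feature measurement forward by integrating the
    6-DoF camera velocity commands issued during the delay interval."""
    s = np.asarray(s_delayed, dtype=float)
    for v in v_history:  # one velocity command per control period inside the delay
        L = interaction_matrix(s[0], s[1], Z)
        s = s + dt * (L @ np.asarray(v, dtype=float))
    return s

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Classical IBVS law: v = -lambda * L^+ (s - s*)."""
    L = interaction_matrix(s[0], s[1], Z)
    return -lam * np.linalg.pinv(L) @ (np.asarray(s) - np.asarray(s_star))
```

Feeding the predicted feature, rather than the stale measurement, into the control law is what keeps the loop well-behaved under delay; in practice the depth Z would itself need to be estimated.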

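The kinematics chapters and appendix listed in the table of contents build on the Denavit–Hartenberg convention [24]. A minimal sketch of chaining standard DH link transforms into a base-to-end-effector pose; the link parameters used below are hypothetical placeholders, not Pepper’s actual arm values:

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one standard DH link (a, alpha, d, theta)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(links, joint_angles):
    """Chain the per-link DH transforms: links is a list of (a, alpha, d)
    tuples, joint_angles the corresponding theta values."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(links, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T
```

Solving the inverse problem, i.e. recovering the joint angles that place the hand at a desired pose, is what the thesis uses for arm control; analytical solutions for similar arms are given in [26, 27].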