Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/66483
Title: | 3D Object Recognition and Pose Estimation (三維物件辨識與姿態估測) |
Author: | Chia-Hung Chen (陳嘉宏) |
Advisor: | Han-Pang Huang (黃漢邦) |
Keywords: | Face Recognition, Pose Estimation, Autonomous Grasping, Path Planning, Object Localization |
Publication Year: | 2012 |
Degree: | Doctoral |
Abstract: | In recent years, robotic technologies, from industrial machines to commercial entertainment products, have become increasingly influential in our lives. Robots for domestic, medical, and industrial purposes are under continual development in corporate and university research labs. To make robots more intelligent and cognitive, they can obtain useful information from a machine vision system, including scene understanding, object recognition, and spatial relationships. The objective of this dissertation is to develop a face recognition method and a pose estimation algorithm that enable autonomous grasping. The proposed face recognition method uses an Active Appearance Model (AAM) to extract facial feature points and then applies shape descriptors of the face to recognize it. Assuming that the 3D geometric model of an object is known a priori, the proposed pose estimation algorithm accurately computes the object's pose from 2D points tracked on the object by SIFT and the 3D point cloud of the object obtained from stereo vision. Moreover, a visual guidance framework integrating object detection, object localization, pose estimation, path planning, and a real robot arm is established to guide the arm to the target. Finally, two grasping scenarios are demonstrated with a dexterous robot arm, ADAM, which must grasp any reachable object placed in front of it. This demonstration shows that the robot arm can robustly and autonomously grasp an arbitrarily rotated rigid object detected by SIFT in 3D space. |
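The abstract describes estimating an object's pose by aligning a known 3D model to 3D points recovered from stereo vision. The dissertation's exact algorithm is not given on this page; a common way to sketch the core step, assuming a set of model points and their matched observed 3D points, is the Kabsch algorithm (SVD of the cross-covariance), shown here as a minimal, hypothetical illustration:

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Estimate the rigid transform (R, t) mapping model points onto
    observed points, given one-to-one correspondences.

    model_pts, observed_pts: (N, 3) arrays of matched 3D points.
    Returns R (3x3 rotation) and t (3-vector translation) such that
    observed ≈ model @ R.T + t.
    """
    mu_m = model_pts.mean(axis=0)       # model centroid
    mu_o = observed_pts.mean(axis=0)    # observed centroid
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

In a full pipeline, the correspondences would come from SIFT matches between the current view and the model, with the 3D coordinates of the matched points triangulated by the stereo system; robust variants (e.g., wrapping this step in RANSAC) are typically needed to reject mismatches.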
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/66483 |
Full-text authorization: | Paid authorization |
Appears in Collections: | Department of Mechanical Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-101-1.pdf (currently not authorized for public access) | 4.73 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.