Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85427
Title: | Robot Grasping of Unknown Objects in Clutter Using Category-Agnostic Instance Segmentation (應用類別無關之實例切割於未知堆疊物件之夾取) |
Author: | Yu-Cheng Wang (王俞程) |
Advisor: | Jyh-Jone Lee (李志中) | jjlee@ntu.edu.tw |
Keywords: | Grasping of Unknown Objects in Clutter, Category-Agnostic Instance Segmentation, Curriculum Learning, Grasping Point Generation |
Publication Year: | 2022 |
Degree: | Master's |
Abstract: | In recent years, deep learning has been widely applied to object recognition and robot grasping, such as loading and unloading in automated factories. However, in today's low-volume, high-variety production environments, a deep learning model usually cannot be applied directly to new objects; data must be re-collected and the model retrained for each new object. To address this problem, this thesis proposes a two-step pipeline for grasping novel objects in cluttered scenes. In the first step, object masks are extracted from the cluttered scene by a category-agnostic instance segmentation model (Mask R-CNN). To train this model, a virtual environment is built in Blender to generate a synthetic cluttered-scene dataset; in addition, Curriculum Learning is applied by splitting the synthetic data into three datasets of increasing difficulty according to the density of objects in the scene. In the second step, the depth information within each object mask obtained in the first step is fed into the Generative Grasping Convolutional Neural Network (GG-CNN2) to obtain grasping points. Finally, the pipeline is validated with real-robot experiments: in cluttered scenes composed of five unknown novel objects, it achieves an average grasp success rate of 92.94%, demonstrating the feasibility of the proposed pipeline for grasping unknown objects in clutter. |
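The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration of the data flow only: `segment_instances` and `predict_grasp` are hypothetical toy stand-ins (a depth threshold and a nearest-pixel pick), not the thesis's actual Mask R-CNN or GG-CNN2 models. The point it shows is that the grasp network sees only the depth pixels inside one instance mask at a time, so grasp points are generated per object rather than per scene.

```python
import numpy as np

def segment_instances(depth_image):
    # Stand-in for the category-agnostic instance segmentation step
    # (Mask R-CNN in the thesis): return one boolean mask per object.
    # Toy placeholder: everything closer than the mean depth is "an object".
    mask = depth_image < depth_image.mean()
    return [mask]

def predict_grasp(depth_crop):
    # Stand-in for GG-CNN2: return (row, col, angle, width) of the best
    # grasp point. Toy placeholder: pick the closest valid pixel.
    r, c = np.unravel_index(np.argmin(depth_crop), depth_crop.shape)
    return (int(r), int(c), 0.0, 1.0)

def grasp_pipeline(depth_image):
    grasps = []
    for mask in segment_instances(depth_image):
        # Step 2 uses only the masked depth region; pixels outside the
        # mask are set to +inf so they can never be chosen as grasps.
        crop = np.where(mask, depth_image, np.inf)
        grasps.append(predict_grasp(crop))
    return grasps

# A flat 8x8 depth image with one object closer to the camera.
depth = np.full((8, 8), 1.0)
depth[2:5, 2:5] = 0.5
print(grasp_pipeline(depth))  # → [(2, 2, 0.0, 1.0)]
```

In the real system each stand-in would be a trained network, and the loop over masks would yield one candidate grasp per segmented instance in the clutter.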
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85427 |
DOI: | 10.6342/NTU202201399 |
Full-Text License: | Authorized (worldwide open access) |
Electronic Full-Text Release Date: | 2024-07-31 |
Appears in Collections: | Department of Mechanical Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-1107202215165700.pdf (publicly available after 2024-07-31) | 4.29 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.