Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88536
Title: Construction of an Automated Deep Learning Process for Random Bin Picking (全自動化深度學習隨機堆疊物件夾取流程之建構)
Author: Yu-Chen Chou (周昱辰)
Advisor: Jyh-Jone Lee (李志中)
Keywords: Grasping in Clutter, Synthetic Data, Style Transfer, Instance Segmentation, Pose Estimation, Augmented Autoencoder
Publication Year: 2023
Degree: Master
Abstract: In robotic grasping tasks, object recognition with deep learning networks typically requires photographing the actual task scene and then manually annotating the collected images. This process consumes considerable time and labor, raising the upfront cost of deploying a grasping application and making it difficult to switch production lines flexibly. This study therefore first synthesizes a large virtual training dataset from rendering software and CAD models of the target objects, generating the annotations automatically at the same time. Domain Randomization and a Style Transfer model from Domain Adaptation are then applied to reduce the Domain Gap between real and synthetic images, so that a deep learning network trained on the synthetic dataset can also recognize the target objects in real scenes.
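Domain randomization of the kind described above amounts to sampling a fresh set of rendering parameters for every synthetic image. The sketch below is a minimal illustration of that idea; the parameter names and ranges are assumptions for demonstration, not the thesis's actual renderer configuration.

```python
import random

def sample_render_params(rng=random):
    """Sample one domain-randomized rendering configuration.

    All parameter names and ranges here are illustrative assumptions,
    not the renderer settings used in the thesis.
    """
    return {
        # lighting intensity and a slight warm/cool color jitter
        "light_intensity": rng.uniform(0.3, 1.5),
        "light_color": [rng.uniform(0.8, 1.0) for _ in range(3)],
        # random RGB albedo for the background surface
        "background_rgb": [rng.random() for _ in range(3)],
        # small perturbation of the camera pose (metres / radians)
        "camera_jitter_xyz": [rng.uniform(-0.05, 0.05) for _ in range(3)],
        "camera_jitter_rot": rng.uniform(-0.1, 0.1),
        # number of target objects dropped into the virtual bin
        "num_objects": rng.randint(5, 20),
    }
```

Each synthetic scene would be rendered with one such sample, and the ground-truth masks and poses exported alongside it, which is what makes the annotation step fully automatic.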
This study also proposes an automated grasping pipeline for randomly stacked objects. The pipeline first captures color and depth images of the clutter with an RGB-D camera, then uses an instance segmentation model (Mask R-CNN) to separate the target objects from the background and classify them. Each segmented object is passed to an Augmented Autoencoder, trained with grasp boxes predicted by a single-object Generative Grasping Convolutional Neural Network (GG-CNN), to recover the object's pose and the corresponding grasp information. Finally, the depth image is used to check the grasp points for interference and select a collision-free grasp. To validate the system, grasping experiments were conducted in real stacked scenes: models for metal round tubes, T-shaped plastic pipes, L-shaped door handles, and mixed objects were each trained fully automatically, achieving success rates of 90.9%, 92.0%, 70.8%, and 87.0%, respectively.
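The final interference check in the pipeline can be pictured as scanning the depth map around each grasp candidate to see whether the gripper fingers would hit a neighboring object. The sketch below is a simplified, assumption-laden version of such a check: the finger offset, clearance margin, and candidate format are hypothetical, not the thesis's actual parameters.

```python
import numpy as np

def select_grasp(candidates, depth, finger_offset=8, clearance=0.005):
    """Return the best grasp candidate whose finger positions are collision-free.

    candidates    : list of (row, col, quality) grasp points.
    depth         : HxW depth map in metres (smaller = closer to camera).
    finger_offset : pixel distance from grasp center to each finger (assumed).
    clearance     : depth margin in metres below which a neighbor counts
                    as a collision (assumed).
    """
    h, w = depth.shape
    for r, c, _q in sorted(candidates, key=lambda t: -t[2]):
        d_grasp = depth[r, c]
        free = True
        # check the two finger landing zones left and right of the grasp point
        for cc in (c - finger_offset, c + finger_offset):
            if not (0 <= cc < w):
                free = False
                break
            # a finger collides if the surface there is closer to the
            # camera than the grasp depth minus the clearance margin
            if depth[r, cc] < d_grasp - clearance:
                free = False
                break
        if free:
            return (r, c)
    return None  # no collision-free grasp in this scene
```

In use, the candidates would come from the GG-CNN grasp predictions ranked by quality, and the first candidate that passes the depth check is sent to the robot.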
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88536 |
DOI: | 10.6342/NTU202301733 |
Full-Text Permission: Authorized (campus network access only)
Appears in Collections: Department of Mechanical Engineering
Files in This Item:
File | Size | Format
---|---|---
ntu-111-2.pdf (currently not publicly accessible) | 6.24 MB | Adobe PDF