NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92653
Title: 基於RGB-D影像的殘差密集點對點網絡之物體姿態估計
Residual-based Dense Point-wise Network for 6DoF Object Pose Estimation Based on RGB-D Images
Authors: 洪宗維
Zong-Wei Hong
Advisor: 陳祝嵩
Chu-Song Chen
Keywords: 6DoF object pose estimation, Deep Learning, RGB and Depth Modality Fusion, Dense correspondence, Robotic Manipulation
Publication Year: 2024
Degree: Master's
Abstract: In this work, we present a novel method for determining the 6DoF pose of an object, i.e., its rotation and translation, from a single RGB-D image. Existing methods either predict the object's pose directly with a network or predict sparse keypoints and recover the pose from them; our approach instead addresses this challenging task through dense correspondence, regressing the object coordinates for every visible pixel. It leverages readily available object detection methods to locate each target object and introduces a reprojection mechanism that adjusts the camera intrinsic matrix to account for cropping the object region out of the RGB-D image. Moreover, we transform the 3D object coordinates into a residual representation, which reduces the output space and yields better training. We conducted extensive experiments to validate the effectiveness of our approach for 6DoF pose estimation. Our approach outperforms most previous methods and demonstrates notable improvements over state-of-the-art methods, especially in occlusion scenarios. (Illustrative sketches of the intrinsics adjustment, the residual encoding, and pose recovery from dense correspondences follow the item record below.)
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92653
DOI: 10.6342/NTU202400663
Fulltext Rights: Authorized (campus access only)
Embargo lift date: 2029-03-29
Appears in Collections: Department of Computer Science and Information Engineering
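
The abstract mentions two implementation-level ideas: adjusting the camera intrinsic matrix when a detected object is cropped out of the RGB-D frame, and encoding 3D object coordinates as residuals to shrink the network's output range. The full text is under embargo, so the following Python sketch only illustrates one common way such steps can be realized; the function names, the use of a centroid-like reference point, and the uniform-resize assumption are illustrative choices, not details taken from the thesis.

# Hedged sketch, not the thesis code: intrinsics adjustment for a cropped
# (and optionally resized) RGB-D patch, plus a residual encoding of 3D
# object coordinates about a reference point.
import numpy as np

def crop_intrinsics(K, crop_x0, crop_y0, scale=1.0):
    """Adjust a 3x3 pinhole intrinsic matrix after cropping an image patch
    whose top-left corner is (crop_x0, crop_y0) and resizing it by `scale`.

    Cropping shifts the principal point; a uniform resize scales the focal
    lengths and the (already shifted) principal point.
    """
    K_new = K.astype(np.float64).copy()
    K_new[0, 2] -= crop_x0      # shift principal point cx
    K_new[1, 2] -= crop_y0      # shift principal point cy
    K_new[:2, :] *= scale       # fx, fy, cx, cy scale with the resize
    return K_new

def encode_residual_coords(obj_coords, reference):
    """Express per-pixel 3D object coordinates as residuals about a reference
    point (e.g., the object model's centroid), so the network regresses small
    offsets instead of absolute coordinates."""
    return obj_coords - reference

def decode_residual_coords(residuals, reference):
    """Invert the residual encoding to recover absolute object coordinates."""
    return residuals + reference

# Example: a 128x128 crop at (200, 150) from a 640x480 image, resized by 2x.
K = np.array([[572.4, 0.0, 325.3],
              [0.0, 573.6, 242.0],
              [0.0, 0.0, 1.0]])
K_crop = crop_intrinsics(K, crop_x0=200, crop_y0=150, scale=2.0)

Uniform resizing is assumed here; an anisotropic resize would scale fx/cx and fy/cy by separate factors.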

Files in This Item:
ntu-112-2.pdf (Restricted Access), 20.74 MB, Adobe PDF
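
The abstract describes recovering the 6DoF pose from dense per-pixel object coordinates. With an RGB-D input, one standard way to do this, not necessarily the solver used in the thesis, is to back-project the depth map into camera-space points and align them with the predicted object coordinates by closed-form least-squares rigid registration (Kabsch/Umeyama). The helper names below are illustrative assumptions.

# Hedged sketch: SVD-based rigid alignment of predicted object coordinates
# to back-projected depth points; a common 3D-3D pose solver, not claimed
# to be the exact method of the thesis.
import numpy as np

def backproject_depth(depth, K, mask):
    """Lift masked depth pixels into camera-space 3D points with intrinsics K."""
    v, u = np.nonzero(mask)                 # pixel rows (v) and columns (u)
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def rigid_transform_3d(obj_pts, cam_pts):
    """Least-squares R, t with cam_pts ~= obj_pts @ R.T + t (Kabsch/Umeyama).

    obj_pts, cam_pts: (N, 3) arrays of corresponding 3D points.
    """
    mu_obj = obj_pts.mean(axis=0)
    mu_cam = cam_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (obj_pts - mu_obj).T @ (cam_pts - mu_cam)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the optimal orthogonal matrix.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_cam - R @ mu_obj
    return R, t

In practice such a closed-form fit is usually combined with outlier handling (e.g., RANSAC or per-pixel confidence weighting), since a few bad correspondences can skew the estimate.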


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
