Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84600
Title: 使用時序注意力機制之基於影像序列的物體位姿估測
Object Pose Estimation Using Image Sequence via Temporal Attention
Authors: Chun-Yu Chen
陳竣宇
Advisor: 洪一平(Yi-Ping Hung)
Keyword: 物體位姿估測, 語義分割, 時序注意力, 深度學習
Object Pose Estimation, Semantic Segmentation, Temporal Attention, Deep Learning
Publication Year: 2022
Degree: Master's
Abstract: 物體位姿估測是一種用於偵測圖片中感興趣物體的技術。由單張RGB影像來做6D物體位姿估測的一個常見的挑戰就是物體在雜亂的場景中彼此互相遮擋。除了只使用輸入影像的空間資訊外,利用影片資料中連續影像之間的時間資訊可以進一步提升這項任務的表現。舉例來說,考慮到輸入影像中的物體在當前的相機視角下被其他物體遮擋的情況,結合鄰近影像的相機視角就有機會回復未看到的物體的位姿。在本論文中,我們對使用了深度學習的端到端單張影像位姿估測方法進行了充分的分析與實驗,並且提出了一種端到端方法,將單張影像位姿估測擴展到多張影像的版本。實驗結果顯示,我們的方法相較於基準模型提供了更準確的結果。
Object pose estimation is a technique used to detect objects of interest in images. A common challenge of 6D pose estimation from a single RGB image is the occlusion between objects in cluttered environments. In addition to using only the spatial information in the input frame, exploiting the temporal information between consecutive frames of video data may further improve performance on this task. For instance, when objects in the input frame are occluded by other objects from the current camera perspective, combining the camera perspectives of neighboring frames makes it possible to recover the poses of unseen objects. In this thesis, we thoroughly analyze and experiment with an end-to-end single-frame pose estimation method based on deep learning, and we propose an end-to-end approach that extends single-frame pose estimation to a multi-frame version. The experimental results show that our method provides more accurate results than the baseline model.
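As a rough illustration of the multi-frame idea described in the abstract, the following is a minimal sketch, not the thesis' actual architecture (its details are not given in this record), of how per-frame features from a single-frame pose estimator could be fused with temporal self-attention before pose regression. The module name, feature dimensions, and PyTorch usage below are assumptions for illustration only.

import torch
import torch.nn as nn

class TemporalAttentionFusion(nn.Module):
    """Hypothetical module: fuse per-frame features across a short image sequence."""
    def __init__(self, feat_dim=256, num_heads=4):
        super().__init__()
        # Self-attention over the time axis lets an occluded frame attend to
        # neighboring frames in which the object is visible.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim), one feature vector per frame
        # produced by a single-frame pose-estimation backbone.
        fused, _ = self.attn(frame_feats, frame_feats, frame_feats)
        # Residual connection keeps each frame's own spatial evidence.
        return self.norm(frame_feats + fused)

# Example: features for 5 consecutive frames of one video clip.
feats = torch.randn(1, 5, 256)
fused = TemporalAttentionFusion()(feats)  # shape (1, 5, 256), temporally fused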
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84600
DOI: 10.6342/NTU202203024
Fulltext Rights: Authorized (access limited to campus network)
Embargo Lift Date: 2022-09-26
Appears in Collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in This Item:
File: U0001-3108202215385700.pdf (access limited to NTU IP range)
Size: 7.92 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
