Title: Coarse-to-Fine Point Cloud Registration with SE(3)-Equivariant Representations (利用旋轉與位移等變的特徵表示進行漸進式點雲配準)
Authors: Cheng-Wei Lin (林承緯)
Advisor: Wen-Chin Chen (陳文進), Winston H. Hsu (徐宏民)
Keywords: Computer Vision, Machine Learning, Deep Learning, Point Cloud Registration, Equivariant Neural Network
Publication Year: 2022
Degree: Master
Abstract: Point cloud registration is a crucial problem in computer vision and robotics. Existing methods either rely on matching local geometric features, which are sensitive to pose differences, or leverage global shapes, which leads to inconsistency when facing distribution variances such as partial overlap. Combining the advantages of both types of methods, we adopt a coarse-to-fine pipeline that handles both issues. We first reduce the pose differences between the input point clouds by aligning global features; we then match local features to further refine the inaccurate alignments caused by distribution variances. Because global feature alignment requires features that preserve the poses of the input point clouds, while local feature matching expects features that are invariant to these poses, we propose an SE(3)-equivariant feature extractor that generates both types of features simultaneously. In this feature extractor, representations preserving the poses are first encoded by our novel SE(3)-equivariant network and then converted into pose-invariant ones by a pose-detaching module. Experiments demonstrate that our proposed method increases the recall rate by 20% compared to state-of-the-art methods when facing both pose differences and distribution variances.
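The coarse-to-fine flow described in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration assuming NumPy: the thesis' SE(3)-equivariant feature extractor and pose-detaching module are replaced by simple geometric stand-ins (centroid alignment for the coarse stage, nearest-neighbour matching plus a Kabsch solve for the fine stage), so it mirrors only the two-stage structure, not the proposed networks. The function names (kabsch, coarse_align, fine_refine) are illustrative, not from the thesis.

import numpy as np

def kabsch(src, dst):
    # Least-squares rigid transform (R, t) mapping src onto dst, via SVD.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

def coarse_align(src, dst):
    # Coarse stage: reduce the pose difference with a global, shape-level cue.
    # (Stand-in for aligning pose-preserving global features in the thesis.)
    return src - src.mean(axis=0) + dst.mean(axis=0)

def fine_refine(src, dst, iters=30):
    # Fine stage: iteratively match local structure and re-solve the rigid
    # transform. (Stand-in for matching pose-invariant local features.)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        corr = dst[d2.argmin(axis=1)]          # nearest-neighbour correspondences
        R, t = kabsch(cur, corr)
        cur = cur @ R.T + t
    return kabsch(src, cur)                    # net transform applied to src

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dst = rng.normal(size=(200, 3))            # toy target cloud
    a = np.pi / 12                             # 15-degree pose difference
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    src = dst @ R_true.T + np.array([0.5, -0.2, 0.1])
    coarse = coarse_align(src, dst)            # stage 1: global alignment
    R, t = fine_refine(coarse, dst)            # stage 2: local refinement
    print("mean residual:", np.linalg.norm(coarse @ R.T + t - dst, axis=1).mean())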
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85991
DOI: 10.6342/NTU202202508
Fulltext Rights: Authorized (open access worldwide)
Embargo Lift Date: 2024-09-01
Appears in Collections: Department of Computer Science and Information Engineering (資訊工程學系)

Files in This Item:
File: U0001-1708202215000300.pdf | Size: 5.52 MB | Format: Adobe PDF


