Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101605
Full metadata record
DC Field | Value | Language
dc.contributor.advisor吳沛遠zh_TW
dc.contributor.advisorPei-Yuan Wuen
dc.contributor.author黃竣楷zh_TW
dc.contributor.authorChun-Kai Huangen
dc.date.accessioned2026-02-11T16:42:50Z-
dc.date.available2026-02-12-
dc.date.copyright2026-02-11-
dc.date.issued2026-
dc.date.submitted2026-02-02-
dc.identifier.citation[1] S. Ao, Q. Hu, B. Yang, A. Markham, and Y. Guo. Spinnet: Learning a general surface descriptor for 3d point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11753–11762. IEEE, 2021.
[2] Y. Aoki, H. Goforth, R. A. Srivatsan, and S. Lucey. Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7163–7172, 2019.
[3] X. Bai, Z. Luo, L. Zhou, H. Fu, L. Quan, and C.-L. Tai. D3feat: Joint learning of dense detection and description of 3d local features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6359–6367, 2020.
[4] P. J. Besl and N. D. McKay. Method for registration of 3-d shapes. In Sensor Fusion IV: Control Paradigms and Data Structures, volume 1611, pages 586–606. SPIE, 1992.
[5] Y. Chen and G. Medioni. Object modeling by registration of multiple range images. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 2724–2729. IEEE, 1992.
[6] D. Chetverikov, D. Stepanov, and P. Krsek. Robust euclidean alignment of 3d point sets: The trimmed-icp algorithm. In Proceedings of the 16th International Conference on Pattern Recognition (ICPR), pages 545–548. IEEE, 2002.
[7] C. Choy, J. Park, and V. Koltun. Fully convolutional geometric features. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 8958–8966, 2019.
[8] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
[9] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox. Rgb-d mapping: Using kinect-style depth cameras for dense 3d modeling of indoor environments. The International Journal of Robotics Research, 31(5):647–663, 2012.
[10] S. Huang, Z. Gojcic, M. Usvyatsov, A. Wieser, and K. Schindler. Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12575–12584. IEEE, 2020.
[11] S. Huang, Z. Gojcic, M. Usvyatsov, A. Wieser, and K. Schindler. Predator: Registration of 3d point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4267–4276, 2021.
[12] A. E. Johnson. Spin-images: A representation for 3-d surface matching. PhD thesis, Carnegie Mellon University, 1997.
[13] K. K., K. A., et al. Intraoral scanner in dentistry: a comprehensive review. Journal of Advanced Medical and Dental Sciences Research, 13(1):57–61, 2025.
[14] S. Logozzo, E. M. Zanetti, G. Franceschini, A. Kilpelä, and A. Mäkynen. Recent advances in dental optics — part i: 3d intraoral scanners for restorative dentistry. Optics and Lasers in Engineering, 54:203–221, 2014.
[15] F. Mangano, A. Gandolfi, G. Luongo, and S. Logozzo. Intraoral scanners in dentistry: a review of the current literature. BMC Oral Health, 17(1):149, 2017.
[16] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon. Kinectfusion: Real-time dense surface mapping and tracking. In Proceedings of the 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages 127–136, 2011.
[17] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat. Comparing icp variants on real-world data sets. Autonomous Robots, 34(3):133–148, 2013.
[18] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 652–660. IEEE, 2017.
[19] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 2017.
[20] Z. Qin, H. Yu, C. Wang, Y. Guo, Y. Peng, and K. Xu. Geotransformer: A fast and robust transformer for point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11143–11152. IEEE, 2022.
[21] R. B. Rusu, N. Blodow, and M. Beetz. Point feature histograms (pfh) for 3d registration. In Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pages 236–243. IEEE, 2008.
[22] R. B. Rusu, N. Blodow, and M. Beetz. Fast point feature histograms (fpfh) for 3d registration. In 2009 IEEE International Conference on Robotics and Automation (ICRA), pages 3212–3217. IEEE, 2009.
[23] S. Salti, F. Tombari, and L. D. Stefano. Shot: Unique signatures of histograms for surface and texture description. Computer Vision and Image Understanding, 125:251–264, 2014.
[24] A. Segal, D. Haehnel, and S. Thrun. Generalized-icp. In Robotics: Science and Systems, volume 2, page 435, Seattle, WA, 2009.
[25] H. Thomas, C. R. Qi, J.-E. Deschaud, B. Marcotegui, F. Goulette, and L. J. Guibas. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 6411–6420, 2019.
[26] I. Vizzo, T. Guadagnino, B. Mersch, L. Wiesmann, J. Behley, and C. Stachniss. Kiss-icp: In defense of point-to-point icp—simple, accurate, and robust registration if done the right way. IEEE Robotics and Automation Letters, 8(2):1029–1036, 2023.
[27] Y. Wang and J. M. Solomon. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 3523–3532, 2019.
[28] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5):1–12, 2019.
[29] T. Whelan, M. Kaess, M. Fallon, H. Johannsson, J. J. Leonard, and J. McDonald. Elasticfusion: Dense slam without a pose graph. In Proceedings of Robotics: Science and Systems (RSS), 2015.
[30] H. Xu, S. Liu, G. Wang, G. Liu, and B. Zeng. Omnet: Learning overlapping mask for partial-to-partial point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 3132–3141, 2021.
[31] H. Yang, J. Shi, and L. Carlone. Teaser: Fast and certifiable point cloud registration. IEEE Transactions on Robotics, 37(2):314–333, 2021.
[32] J. Yang, H. Li, and Y. Jia. Go-icp: Solving 3d registration efficiently and globally optimally. In Proceedings of the IEEE International Conference on Computer Vision, pages 1457–1464, 2013.
[33] Z. J. Yew and G. H. Lee. Rpm-net: Robust point matching using learned features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11824–11833. IEEE, 2020.
[34] A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao, and T. Funkhouser. 3dmatch: Learning local geometric descriptors from rgb-d reconstructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1802–1811. IEEE, 2017.
[35] Q. Y. Zhou, J. Park, and V. Koltun. Fast global registration. In European Conference on Computer Vision (ECCV), pages 766–782. Springer, 2016.
-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101605-
dc.description.abstract本論文提出一種快速且具迭代性的深度學習式點雲配準流程,應用於手持式高速結構光口內掃描儀之全口重建。傳統之迭代最近點(Iterative Closest Point, ICP)配準方法對初始位姿高度敏感,於部分重疊或含雜訊之掃描資料中容易陷入區域最小值;而近年之深度學習特徵式配準方法多仰賴高計算成本之密集對應搜尋,限制其於臨床即時應用的可行性。此外,口內掃描資料常包含大面積平坦之軟組織區域(如牙齦或齒槽嵴),其局部幾何特徵不明顯,使得基於局部特徵之配準方法可靠度降低。
為克服上述問題,本文方法不進行任何顯式特徵描述子比對,而是整合三個階段之由粗至細配準架構:(一)以體素化為基礎之重疊區域萃取,以降低資料規模;(二)利用 Lucas–Kanade(LK)最佳化架構進行全域粗配準,以提升初始對齊之穩健性;以及(三)透過迭代最近點法進行細部幾何之精細配準。藉由先行體素化並萃取潛在重疊區域,本方法在執行 LK 配準前有效減少點雲數量,避免大規模 RANSAC 或特徵比對所帶來之高計算負擔,並實現快速且穩定之收斂。
實驗結果顯示,在高重疊條件下,所提出的方法於真實口內掃描資料及合成之 ModelNet40 資料集上,於配準精度與計算效率方面皆優於多種現有先進方法,顯示其具備應用於臨床即時全口重建之潛力。
zh_TW
dc.description.abstractWe present a fast, iterative deep learning (DL) based registration pipeline for full-mouth reconstruction using handheld high-speed structured-light intraoral scanners. Traditional Iterative Closest Point (ICP) registration methods require good initialization and can easily become trapped in local minima under partial overlap or noisy scans, while recent DL feature-based registration frameworks rely on a computationally expensive dense correspondence search, which limits their utility in clinical settings. Additionally, intraoral scans often contain large planar soft-tissue regions, such as the gingiva or alveolar ridge, that lack distinctive local geometric features, making local feature-based registration less reliable. To address this, our approach avoids explicit descriptor matching by combining (1) voxel-based extraction to isolate potentially overlapping regions, (2) Lucas–Kanade (LK) based global registration for robust coarse alignment, and (3) a final ICP refinement to recover fine dental geometry. By voxelizing the clouds and extracting candidate overlap regions first, we reduce the point count before invoking LK, achieving rapid convergence without large-scale RANSAC or feature matching. Under high-overlap conditions, our method outperforms state-of-the-art approaches on both real intraoral scans and the synthetic ModelNet40 dataset in both registration accuracy and runtime, demonstrating its suitability for chairside use.en
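Two of the three stages named in the abstract have well-known generic forms that can be sketched in a few dozen lines. The following is an illustrative NumPy reconstruction, not the thesis implementation: `voxel_downsample` stands in for the voxel-based data reduction of stage (1) (one centroid per occupied voxel), and `icp` is a textbook point-to-point ICP (Besl and McKay, reference [4]) using the closed-form SVD (Kabsch) update, corresponding to the refinement of stage (3). The LK coarse stage and the overlap extraction itself are omitted, and all function names here are hypothetical.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Stage (1) sketch: replace all points in each occupied voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)  # guard against NumPy versions returning a 2-D inverse
    counts = np.bincount(inv).astype(float)
    return np.stack(
        [np.bincount(inv, weights=points[:, d]) / counts for d in range(3)], axis=1
    )

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid alignment (Kabsch / SVD) for matched points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, max_iters=30, tol=1e-10):
    """Stage (3) sketch: point-to-point ICP with brute-force nearest neighbours."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(max_iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nn = d2.argmin(axis=1)                     # nearest target for each source point
        err = d2[np.arange(len(cur)), nn].mean()
        R, t = best_rigid_transform(cur, dst[nn])  # best fit for current matches
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t    # accumulate the total transform
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_tot, t_tot
```

Because point-to-point ICP is strictly local, a sketch like this presumes the clouds are already roughly aligned; in the pipeline described above, that role falls to the LK-based coarse registration stage.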
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2026-02-11T16:42:50Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2026-02-11T16:42:50Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontentsAcknowledgements i
摘要 iii
Abstract v
Contents vii
List of Figures ix
List of Tables xi
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Local Descriptors for Point Clouds 5
2.2 Iterative Closest Point (ICP) 6
2.3 Feature-Based Registration 7
2.4 Lucas–Kanade (LK)-based Registration 8
Chapter 3 Method 11
3.1 Problem Statement 11
3.2 FPFH Overlap Extraction 12
3.3 Global Feature Coarse Registration 13
3.4 ICP Fine Registration 17
3.5 Training 18
Chapter 4 Experiments 19
4.1 Baseline 19
4.2 Dataset: Structured-light 3D oral scanning point clouds 20
4.3 Dataset: ModelNet40 24
4.4 Computational Efficiency 26
4.5 Ablation Study 28
4.5.1 Effect of FPFH Overlap Extraction 28
4.5.2 Impact of LK Iteration Count 29
4.5.3 Backbone Comparison: EdgeConv + CNN vs. PointNet 30
4.5.4 Summary of Ablation Results 32
Chapter 5 Conclusion 33
References 35
-
dc.language.isoen-
dc.subject點雲配準-
dc.subject口內掃描-
dc.subject深度學習-
dc.subjectLucas–Kanade 最佳化-
dc.subject結構光掃描-
dc.subjectPoint cloud registration-
dc.subjectIntraoral scanning-
dc.subjectDeep learning-
dc.subjectLucas–Kanade optimization-
dc.subjectStructured light scanner-
dc.title一種快速且穩健的口內掃描點雲配準方法zh_TW
dc.titleA Fast and Robust Point Cloud Registration Method for Oral Scanneren
dc.typeThesis-
dc.date.schoolyear114-1-
dc.description.degree碩士-
dc.contributor.oralexamcommittee丁建均;于天立;林澤zh_TW
dc.contributor.oralexamcommitteeJian-Jiun Ding;Tian-Li Yu;Che Linen
dc.subject.keyword點雲配準,口內掃描,深度學習,Lucas–Kanade 最佳化,結構光掃描zh_TW
dc.subject.keywordPoint cloud registration,Intraoral scanning,Deep learning,Lucas–Kanade optimization,Structured light scanneren
dc.relation.page39-
dc.identifier.doi10.6342/NTU202600146-
dc.rights.note未授權-
dc.date.accepted2026-02-04-
dc.contributor.author-college電機資訊學院-
dc.contributor.author-dept電信工程學研究所-
dc.date.embargo-liftN/A-
Appears in Collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in This Item:
File | Size | Format
ntu-114-1.pdf (access restricted; not publicly available) | 3.77 MB | Adobe PDF