Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101196

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 吳沛遠 | zh_TW |
| dc.contributor.advisor | Pei-Yuan Wu | en |
| dc.contributor.author | 林仲偉 | zh_TW |
| dc.contributor.author | Zhong-Wei Lin | en |
| dc.date.accessioned | 2025-12-31T16:17:05Z | - |
| dc.date.available | 2026-01-01 | - |
| dc.date.copyright | 2025-12-31 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-12-01 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101196 | - |
| dc.description.abstract | 口腔內結構光(SL)成像因口腔環境中表面複雜且高反射性,常面臨中心線品質下降的挑戰。雖然完全監督式學習方法可以改善中心線檢測,但它們需要勞動密集型的標註工作,使其在大規模數據集上變得不切實際。為了解決這些挑戰,我們提出了一個合成條紋渲染器(Synthetic Stripe Renderer,SSR):以常規的口腔內圖像作為輸入,SSR 利用 Segment Anything 生成的區域掩模來合成與隨機條紋圖案配對的口腔內 SL 圖像。我們將 SSR 與無監督領域適配(UDA)訓練集成到 U-Net 模型中,實現無需人工標註或配對領域數據的從合成到真實的中心線提取。此外,我們提出了precision@d、recall@d 和 f1-score@d等指標,用於通過中心線坐標集的重疊率來評估多條紋段的相似性。我們的方法能夠提取高品質的中心線區域,其性能可與全監督式訓練相媲美。 | zh_TW |
| dc.description.abstract | Intraoral structured light (SL) imaging often suffers from degraded centerline quality due to the complex, highly reflective surfaces of the oral environment. While fully supervised learning approaches can improve centerline detection, they require labor-intensive annotation, making them impractical for large-scale datasets. To address these challenges, we propose a Synthetic Stripe Renderer (SSR): given regular intraoral images as input, SSR employs region masks generated by Segment Anything to synthesize intraoral SL images paired with random stripe patterns. We integrate SSR and unsupervised domain adaptation (UDA) training into a U-Net model, enabling synthetic-to-real centerline extraction without manual labels or paired domain data. Additionally, we introduce the precision@d, recall@d, and f1-score@d metrics, which measure the overlap rate of centerline coordinate sets to assess the similarity of multiple stripe segments. Our method extracts high-quality centerline regions, achieving performance comparable to fully supervised training. (Illustrative sketches of the stripe compositing and the @d metrics follow the metadata table below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-12-31T16:17:05Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-12-31T16:17:05Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
摘要 iii
Abstract v
Contents vii
List of Figures ix
List of Tables xiii
Chapter 1 Introduction 1
Chapter 2 Related Work 5
Chapter 3 Method 9
3.1 Synthetic Stripe Renderer (SSR) 9
3.2 SSR-UNet Training Framework 11
3.3 Inference Pipeline 13
Chapter 4 Metrics 15
Chapter 5 Experiments 19
5.1 Implementation Detail 19
5.2 Centerline Extraction Result 20
5.3 Structured Light ROI Segmentation 21
5.4 Self-annotation Progress 23
5.5 Generalization across Datasets 24
Chapter 6 Ablation Studies 27
6.1 SSR module Rendering Steps 27
6.2 Self-training Perturbation Choices 27
Conclusion 29
References 31
Appendix A — Detailed Formulation 39
A.1 Detailed Formulation of SSR 39
Appendix B — Hyper-parameters 43
B.1 SSR-UNet Training Hyper-parameters 43
Appendix C — Data Collection 45
C.1 Data Collection 45 | - |
| dc.language.iso | en | - |
| dc.subject | 口內結構光 | - |
| dc.subject | 中心線提取 | - |
| dc.subject | 語意分割 | - |
| dc.subject | 無監督領域自適應 | - |
| dc.subject | Intraoral Structured Light | - |
| dc.subject | Centerline Extraction | - |
| dc.subject | Semantic Segmentation | - |
| dc.subject | Unsupervised Domain Adaptation | - |
| dc.subject | Synthetic-to-real | - |
| dc.title | 基於合成條紋渲染器的無監督領域自適應之於口腔內圖像結構光中心線提取 | zh_TW |
| dc.title | Structured Light Centerline Extraction for Intraoral Images via Unsupervised Domain Adaptation with Synthetic Stripe Renderer | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 114-1 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 林澤;王靜慧 | zh_TW |
| dc.contributor.oralexamcommittee | Che Lin;Ching-Huey Wang | en |
| dc.subject.keyword | 口內結構光,中心線提取,語意分割,無監督領域自適應 | zh_TW |
| dc.subject.keyword | Intraoral Structured Light,Centerline Extraction,Semantic Segmentation,Unsupervised Domain Adaptation,Synthetic-to-real | en |
| dc.relation.page | 46 | - |
| dc.identifier.doi | 10.6342/NTU202501122 | - |
| dc.rights.note | Authorization granted (open access worldwide) | - |
| dc.date.accepted | 2025-12-01 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Graduate Institute of Communication Engineering | - |
| dc.date.embargo-lift | 2026-01-01 | - |
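The abstract is this record's only technical prose; the two hedged sketches below make its two mechanisms concrete. First, the abstract describes SSR as compositing random stripe patterns into Segment Anything region masks on regular intraoral images. The following is a toy illustration of that compositing idea only, not the thesis's SSR (which, per the table of contents, is formally specified in Appendix A); the function name `render_stripes`, the sinusoidal pattern, and every parameter are assumptions:

```python
import numpy as np

def render_stripes(image: np.ndarray, region_mask: np.ndarray,
                   period: float = 12.0, phase: float = 0.0,
                   gain: float = 0.6) -> np.ndarray:
    """Composite a sinusoidal stripe pattern into a masked region.

    image: (H, W) grayscale intraoral image with values in [0, 1].
    region_mask: (H, W) boolean mask, e.g. produced by Segment Anything.
    period, phase, gain: hypothetical knobs for randomizing the pattern.
    """
    h, w = image.shape
    # Vertical stripes: intensity varies with the column index only.
    cols = np.broadcast_to(np.arange(w), (h, w))
    stripes = 0.5 * (1.0 + np.sin(2.0 * np.pi * cols / period + phase))
    # Blend the pattern into the masked region; leave the rest untouched.
    out = image.copy()
    out[region_mask] = np.clip(
        (1.0 - gain) * image[region_mask] + gain * stripes[region_mask],
        0.0, 1.0)
    return out
```

Second, the abstract defines precision@d, recall@d, and f1-score@d as overlap rates of centerline coordinate sets. Below is a minimal sketch of one natural reading, assuming a point counts as matched when a point from the other set lies within Euclidean distance d; the function name and the cKDTree-based matching are illustrative choices, not the thesis's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def scores_at_d(pred_pts: np.ndarray, gt_pts: np.ndarray, d: float):
    """precision@d, recall@d, and f1-score@d for centerline point sets.

    pred_pts, gt_pts: (N, 2) arrays of (row, col) centerline coordinates.
    """
    if len(pred_pts) == 0 or len(gt_pts) == 0:
        return 0.0, 0.0, 0.0
    # precision@d: fraction of predictions within d of a ground-truth point.
    dist_to_gt, _ = cKDTree(gt_pts).query(pred_pts, k=1)
    precision = float(np.mean(dist_to_gt <= d))
    # recall@d: fraction of ground-truth points within d of a prediction.
    dist_to_pred, _ = cKDTree(pred_pts).query(gt_pts, k=1)
    recall = float(np.mean(dist_to_pred <= d))
    denom = precision + recall
    f1 = 0.0 if denom == 0.0 else 2.0 * precision * recall / denom
    return precision, recall, f1
```

Under this reading, d = 0 reduces to exact set overlap, while a small positive d tolerates sub-pixel localization offsets between predicted and ground-truth centerlines.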
| Appears in Collections: | Graduate Institute of Communication Engineering |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-114-1.pdf | 3.74 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
