Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98969

Full metadata record

| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 蔡欣穆 | zh_TW |
| dc.contributor.advisor | Hsin-Mu Tsai | en |
| dc.contributor.author | 陳以峰 | zh_TW |
| dc.contributor.author | Yi-Feng Chen | en |
| dc.date.accessioned | 2025-08-20T16:28:44Z | - |
| dc.date.available | 2025-08-21 | - |
| dc.date.copyright | 2025-08-20 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-08-14 | - |
| dc.identifier.citation | [1] H. G. Barrow and J. M. Tenenbaum. Recovering intrinsic scene characteristics from images. In Computer Vision Systems, pages 3–26. Academic Press, 1978.
[2] S. Bell, K. Bala, and N. Snavely. Intrinsic images in the wild. In ACM Transactions on Graphics (TOG), volume 33, page 159. ACM, 2014.
[3] W. P. Beneducci, M. L. Teixeira, and H. Pedrini. Dental shade matching assisted by computer vision techniques. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 11(4):1378–1396, 2023.
[4] M. Bertalmio, A. L. Bertozzi, and G. Sapiro. Navier-Stokes, fluid dynamics, and image and video inpainting. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), volume 1, pages I–I. IEEE, 2001.
[5] H. Chen, J. Huang, X. Dong, J. Qian, J. He, X. Qu, and E. Lu. A systematic review of visual and instrumental measurements for tooth shade matching. Quintessence International, 43(8), 2012.
[6] Z. Chi, X. Wu, X. Shu, and J. Gu. Single image reflection removal using deep encoder-decoder network. arXiv preprint arXiv:1802.00094, 2018.
[7] J. Guo, Z. Zhou, and L. Wang. Single image highlight removal with a sparse and low-rank reflection model. In Proceedings of the European Conference on Computer Vision (ECCV), pages 268–283, 2018.
[8] X. Guo, X. Chen, S. Luo, S. Wang, and C.-M. Pun. Dual-hybrid attention network for specular highlight removal. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 10173–10181, 2024.
[9] K. Hu, Z. Huang, and X. Wang. Highlight removal network based on an improved dichromatic reflection model. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2645–2649. IEEE, 2024.
[10] J.-J. Hwang, S. Azernikov, A. A. Efros, and S. X. Yu. Learning beyond human expertise with generative models for dental restorations. arXiv preprint arXiv:1804.00064, 2018.
[11] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125–1134, 2017.
[12] Z. Jafri, N. Ahmad, M. Sawai, N. Sultan, and A. Bhardwaj. Digital smile design: an innovative tool in aesthetic dentistry. Journal of Oral Biology and Craniofacial Research, 10(2):194–198, 2020.
[13] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026, 2023.
[14] D. Knezović, D. Zlatarić, I. Ž. Illeš, M. Alajbeg, et al. In vivo and in vitro evaluations of repeatability and accuracy of Vita Easyshade® Advance 4.0 dental shade-matching device. Acta Stomatologica Croatica, 49(2):112, 2015.
[15] K. Kokomoto, R. Okawa, K. Nakano, and K. Nozaki. Tooth development prediction using a generative machine learning approach. IEEE Access, 2024.
[16] C. Kose Jr, D. Oliveira, P. N. Pereira, and M. G. Rocha. Using artificial intelligence to predict the final color of leucite-reinforced ceramic restorations. Journal of Esthetic and Restorative Dentistry, 35(1):105–115, 2023.
[17] E. H. Land and J. J. McCann. Lightness and retinex theory. Journal of the Optical Society of America, 61(1):1–11, 1971.
[18] R. Li, J. Pan, Y. Si, B. Yan, Y. Hu, and H. Qin. Specular reflections removal for endoscopic image sequences with adaptive-RPCA decomposition. IEEE Transactions on Medical Imaging, 39(2):328–340, 2019.
[19] C.-T. Liu, P.-L. Lai, P.-S. Fu, H.-Y. Wu, T.-H. Lan, T.-K. Huang, E. H.-H. Lai, and C.-C. Hung. Total solution of a smart shade matching. Journal of Dental Sciences, 18(3):1323–1329, 2023.
[20] M. Mori, K. F. MacDorman, and N. Kageki. The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2):98–100, 2012.
[21] S. Nalbandian and B. Millar. The effect of veneers on cosmetic improvement. British Dental Journal, 207(2):E3–E3, 2009.
[22] S. K. Nayar, X.-S. Fang, and T. Boult. Separation of reflection components using color and polarization. International Journal of Computer Vision, 21(3):163–186, 1997.
[23] S. R. Okubo, A. Kanawati, M. W. Richards, and S. Childress. Evaluation of visual and instrument shade matching. The Journal of Prosthetic Dentistry, 80(6):642–648, 1998.
[24] Point Grey Research, Inc. Flea3 USB 3.0 camera FL3-U3-32S2C-CS datasheet. Technical Datasheet FL3-U3-32S2C-CS, Point Grey Research (now FLIR Integrated Imaging Solutions), Richmond, British Columbia, Canada, Sept. 2014. Revised 9/9/2014. Ultra-compact 3.2 MP USB 3.0 color camera.
[25] Schneider-Kreuznach. B+W Master HT KSM CPL MRC Nano filter, 72 mm, 2023. Accessed: 2025-07-18.
[26] J. Shen, X. Yang, and J. Jia. Intrinsic image decomposition using optimization. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3481–3487. IEEE, 2011.
[27] Y. Shen, P. Luo, J. Yan, X. Wang, and X. Tang. FaceID-GAN: Learning a symmetry three-player GAN for identity-preserving face synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 821–830, 2018.
[28] S. Shetty, S. Gali, D. Augustine, and S. Sv. Artificial intelligence systems in dental shade-matching: A systematic review. Journal of Prosthodontics, 33(6):519–532, 2024.
[29] X. Shi, Y. Zhou, C. Xiao, F. Zhu, J. Li, Z. Li, and X. Li. AHA: Adaptive highlight-aware network for specular highlight removal. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4912–4921, 2020.
[30] Sony Corporation. Sony Alpha 6400 APS-C mirrorless camera, 2023. Accessed: 2025-07-18.
[31] W.-K. Tam and H.-J. Lee. Accurate shade image matching by using a smartphone camera. Journal of Prosthodontic Research, 61(2):168–176, 2017.
[32] Tamron Co., Ltd. 17-70mm F/2.8 Di III-A VC RXD (Model B070), 2020. Accessed: 2025-07-18.
[33] R. T. Tan and K. Ikeuchi. Separating reflection components of textured surfaces using a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2):178–193, 2005.
[34] R. T. Tan, K. Nishino, and K. Ikeuchi. Separating reflection components based on chromaticity and noise analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(10):1373–1379, 2004.
[35] S. Tchoulack, J. P. Langlois, and F. Cheriet. A video stream processor for real-time detection and correction of specular reflections in endoscopic images. In 2008 Joint 6th International IEEE Northeast Workshop on Circuits and Systems and TAISA Conference, pages 49–52. IEEE, 2008.
[36] A. Telea. An image inpainting technique based on the fast marching method. Journal of Graphics Tools, 9(1):23–34, 2004.
[37] P. A. Thomas, D. Krishnamoorthi, J. Mohan, R. Raju, S. Rajajayam, and S. Venkatesan. Digital smile design. Journal of Pharmacy and Bioallied Sciences, 14(Suppl 1):S43–S49, 2022.
[38] M. Tian, W. W. Lu, K. W. C. Foong, and E. Loh. Generative adversarial networks for dental patient identity protection in orthodontic educational imaging. arXiv preprint arXiv:2307.02019, 2023.
[39] F. Umer and N. Adnan. Generative artificial intelligence: synthetic datasets in dentistry. BDJ Open, 10:13, 2024.
[40] S. Wen, Y. Zheng, and F. Lu. Polarization guided specular reflection separation. IEEE Transactions on Image Processing, 30:7280–7291, 2021.
[41] L. B. Wolff and T. E. Boult. Constraining object features using a polarization reflectance model. Physics-Based Vision: Principles and Practice, Radiometry, 1:167, 1993.
[42] H. Wu, S. Zheng, J. Zhang, and K. Huang. GP-GAN: Towards realistic high-resolution image blending. In Proceedings of the 27th ACM International Conference on Multimedia, pages 2487–2495, 2019.
[43] J. Yang, L. Liu, and S. Li. Separating specular and diffuse reflection components in the HSI color space. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 891–898, 2013.
[44] Q. Yang, K.-H. Tan, and N. Ahuja. Specular highlight removal for real-world images. International Journal of Computer Vision, 110:60–74, 2015.
[45] C. Ye, L. Qiu, X. Gu, Q. Zuo, Y. Wu, Z. Dong, L. Bo, Y. Xiu, and X. Han. StableNormal: Reducing diffusion variance for stable and sharp normal. ACM Transactions on Graphics (TOG), 43(6):1–18, 2024.
[46] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.
[47] Y. Zhu, X. Fu, P.-T. Jiang, H. Zhang, Q. Sun, J. Chen, Z.-J. Zha, and B. Li. Revisiting single image reflection removal in the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 25468–25478, 2024. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98969 | - |
| dc.description.abstract | 臨床牙科比色一直是牙醫師與技師所面臨的挑戰。傳統比色方法依賴人工肉眼判斷,正確率不高,常導致牙貼重作,造成時間與成本損失。市面上的比色輔助設備價格高昂且需專用儀器,此外,牙齒表面的反光也會影響比色準確性。
本研究提出一套深度學習牙齒比色系統,整合牙貼預測與反光去除兩大功能。透過暗箱與偏光攝影建立資料集,並以生成對抗網路訓練兩組模型:一組預測牙貼套用效果,另一組去除鏡面反光。牙貼預測模型為本研究首次提出,能根據支台齒外觀生成擬真牙貼影像,具高度應用潛力,並透過資料擴增提升其在不同角度、色階與厚度下的穩定性。實驗顯示,本系統的反光處理效能可與現有通用型深度學習模型相當,並優於傳統影像修復技術。系統操作簡便,僅需相機與預訓練模型即可應用於臨床,協助快速直觀預測牙貼效果,提升溝通效率並降低成本。 | zh_TW |
| dc.description.abstract | Shade matching in dentistry is a long-standing challenge for dentists and technicians. Traditional visual methods are often inaccurate, leading to veneer remakes, wasted time, and higher costs. Existing tools are expensive and require specialized hardware. Reflections on tooth surfaces also interfere with accurate shade assessment.
This study presents a deep learning–based system that combines veneer prediction and reflection removal. Using a darkroom and polarized photography, we built two datasets and trained two Generative Adversarial Network models. One predicts realistic veneer appearances based on abutment teeth, while the other removes specular reflections. The veneer model is newly proposed and enhanced with data augmentation for better performance under various conditions. Results show that our reflection removal model performs comparably to general deep learning models and outperforms traditional image-based techniques. The system is easy to use, requiring only a camera and pretrained models, and supports fast, intuitive veneer simulations to improve communication and reduce clinical costs. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-20T16:28:44Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-08-20T16:28:44Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 口試委員會審定書 i
致謝 ii
摘要 iii
Abstract iv
Contents vi
List of Figures viii
List of Tables xi
Chapter 1 Introduction 1
Chapter 2 Related Work 7
2.1 Dental Shade Matching 7
2.2 Reflection Removal 9
2.3 Generative Artificial Intelligence 11
Chapter 3 Preliminary 13
3.1 Generative Adversarial Network 13
3.2 Polarized Camera 16
3.3 Segment Anything Model 18
Chapter 4 System Design 20
4.1 Overview 20
4.2 Model Architecture 21
4.3 Reflection Removal Model 23
4.3.1 Reflection Removal Dataset Collection 23
4.3.2 Reflection Removal Data Preprocessing 25
4.3.3 Applying Grayscale Trained Models to Color Photographs 28
4.4 Segment Anything Model 30
4.5 Laminate Veneer Model 31
4.5.1 Laminate Dataset Collection 31
4.5.2 Laminate Data Preprocessing 34
Chapter 5 Evaluation 38
5.1 Overview 38
5.2 Evaluation Metric 39
5.3 Reflection Removal Model 41
5.4 Data Augmentation for Laminated Veneer Model 53
5.5 Laminated Veneer Model 57
5.6 Limitation 65
Chapter 6 Conclusion 67
References 69 | - |
| dc.language.iso | en | - |
| dc.subject | 牙科比色 | zh_TW |
| dc.subject | 生成式人工智慧 | zh_TW |
| dc.subject | 反光去除 | zh_TW |
| dc.subject | 偏振相機 | zh_TW |
| dc.subject | generative artificial intelligence | en |
| dc.subject | reflection removal | en |
| dc.subject | shade matching | en |
| dc.subject | polarized camera | en |
| dc.title | 使用生成式對抗網路之牙齒色階與形狀預測系統 | zh_TW |
| dc.title | A Tooth Shade and Shape Prediction System Based on Generative Adversarial Networks | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 李明穗;柯宗瑋;姜昱至 | zh_TW |
| dc.contributor.oralexamcommittee | Ming-Sui Lee;Tsung-Wei Ke;Yu-Chih Chiang | en |
| dc.subject.keyword | 牙科比色,生成式人工智慧,反光去除,偏振相機 | zh_TW |
| dc.subject.keyword | shade matching, generative artificial intelligence, reflection removal, polarized camera | en |
| dc.relation.page | 75 | - |
| dc.identifier.doi | 10.6342/NTU202504348 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2025-08-15 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | - |
| dc.date.embargo-lift | N/A | - |
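
The English abstract above describes training two conditional image-to-image GAN models on paired photographs: a reflective versus polarization-filtered pair for the reflection removal model, and an abutment-tooth versus veneered pair for the veneer prediction model, in the spirit of pix2pix [11]. The fragment below is only a minimal sketch of that kind of paired GAN training step, using a pix2pix-style adversarial plus L1 objective; the tiny PyTorch networks, tensor shapes, and random placeholder batches are illustrative assumptions, not the architecture or data used in the thesis.

```python
# Minimal sketch of a pix2pix-style conditional GAN training step on a paired
# batch (e.g. reflective photo -> reflection-free photo, or abutment photo ->
# veneered photo). Networks and data here are placeholders for illustration.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a U-Net-style generator: input image -> predicted image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Stand-in for a PatchGAN discriminator on (condition, image) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),  # patch-wise real/fake logits
        )
    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # weight from the original pix2pix recipe

# One paired batch (placeholders): `src` is the input photo, `tgt` the target.
src, tgt = torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128)

# --- Discriminator step: real pairs vs. generated pairs ---
fake = G(src).detach()
d_real, d_fake = D(src, tgt), D(src, fake)
loss_d = 0.5 * (adv_loss(d_real, torch.ones_like(d_real)) +
                adv_loss(d_fake, torch.zeros_like(d_fake)))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# --- Generator step: fool D and stay close to the paired target (L1 term) ---
fake = G(src)
d_fake = D(src, fake)
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(fake, tgt)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

In this kind of recipe the L1 term keeps the output close to the paired target while the discriminator pushes for realistic texture; how the thesis balances or modifies these terms is not stated in this record.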
Appears in Collections: 資訊網路與多媒體研究所

Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (access restricted) | 37.93 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
