Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95889

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 周呈霙 | zh_TW |
| dc.contributor.advisor | Cheng-Ying Chou | en |
| dc.contributor.author | 廖俊凱 | zh_TW |
| dc.contributor.author | Jun-Kai Liao | en |
| dc.date.accessioned | 2024-09-19T16:13:10Z | - |
| dc.date.available | 2024-09-20 | - |
| dc.date.copyright | 2024-09-19 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-13 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95889 | - |
| dc.description.abstract | 口腔癌是一種常見的癌症。在全球統計上位列第 13 常見的癌症。在台灣,因其特有的檳榔文化,每年新增逾八千例口腔癌患者,口腔癌的死亡率位列第六名。由於缺乏病識感及延誤就醫,導致大多數的案例被診斷時已處於口腔癌的晚期。早期篩檢能有效地提高患者的生存率。傳統上,口腔癌的診斷主要透過醫師肉眼觀察、觸診或使用組織切片進行診斷。然而這些方法往往耗時且依賴專業知識。故本研究旨在開發行動應用程式,用於辨識潛在的口腔癌病灶。本研究透過卷積神經網路作為核心技術,並在模型骨幹加入注意力機制來提升模型表現。口腔影像之評估分為兩階段。第一階段由口腔病灶檢測模型標示出病灶位置及嚴重程度。第二階段由口腔病灶分類模型將對標示之病灶類別進行修正並評估影像之整體嚴重程度並給予對應的醫療建議。實驗結果顯示口腔病灶檢測模型之平均精度達 74.7%,F1-score 為 71%。口腔病灶分類模型之準確率為 77%。評估單張影像整體嚴重性之準確率達 89.3%。本研究開發之行動應用程式具備簡潔且易懂的介面,可引導使用者針對口腔內八個部位進行拍攝。透過部署在後台的模型分析後,分析結果約在 10 秒內回傳。此應用程式之便利性及高準確率不僅有利於民眾進行自我篩檢,在臨床上亦可提供醫師客觀的輔助診斷。在未來,期望此篇研究能有效降低口腔癌之發生率及死亡率。 | zh_TW |
| dc.description.abstract | Oral cancer ranks as the thirteenth most common cancer worldwide. In Taiwan, owing to the distinctive culture of betel nut chewing, over 8,000 new cases of oral cancer are reported annually, and its mortality rate ranks sixth among cancers. Because of a lack of disease awareness and delays in seeking medical treatment, most cases are diagnosed at a late stage. Early detection can significantly increase the survival rate of patients. Traditional diagnostic approaches such as visual inspection, palpation, and biopsy are time-consuming and require specialized expertise. This study therefore aims to develop a mobile application for identifying potential oral cancer lesions. It uses convolutional neural networks as the core technology and inserts attention mechanisms into the model backbone to enhance performance. The assessment of an oral image is divided into two stages. In the first stage, the lesion detection model marks each lesion's location and severity. In the second stage, the classes of the marked lesions are verified by the lesion classifier; the overall severity and the associated medical suggestions are then provided. Experimental results show that the lesion detection model achieves a mean average precision (mAP@0.5) of 74.7% and an F1-score of 71%, the lesion classifier achieves an overall accuracy of 77%, and the accuracy of evaluating the overall severity of an image reaches 89.3%. The developed mobile applications feature a simple, user-friendly interface that guides users to capture images of eight specific areas of the oral cavity. After analysis by the models deployed in the backend, the results are returned within about ten seconds. The convenience and high accuracy of the developed applications benefit the public in self-screening and can also provide clinicians with objective diagnostic support. In the future, the developed applications are expected to help reduce the incidence and mortality of oral cancer. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-09-19T16:13:10Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-09-19T16:13:10Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgements iii
摘要 v
Abstract vii
Contents ix
List of Figures xiii
List of Tables xv
Denotation xvii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Research Purpose 3
1.3 Organization 3
Chapter 2 Literature Review 5
2.1 Traditional Approaches of Oral Cancer Detection 5
2.2 Machine Learning Approaches for Oral Cancer Detection 6
2.3 Deep Learning Approaches for Oral Cancer Detection 8
2.3.1 Lesion classification 8
2.3.2 Lesion detection 9
Chapter 3 Materials and Methods 11
3.1 Overview of the Oral Cancer Detection 11
3.2 Image Collection and Annotation 13
3.3 Dataset Split 15
3.4 Data Augmentation 16
3.5 Anomaly Detection Model 17
3.6 Lesion Detection Model 18
3.6.1 YOLOv7 18
3.6.2 Attention mechanism 20
3.6.3 Training strategy and environment 24
3.7 Lesion Classifier 26
3.8 Visual Explanation 28
3.8.1 Grad-CAM 28
3.8.2 t-SNE 28
Chapter 4 Results and Discussion 31
4.1 Evaluation Metrics 31
4.2 Performance of Anomaly Detection Model 34
4.3 Lesion Detection Model 36
4.3.1 Training loss and mAP 36
4.3.2 Performance of lesion detection model 37
4.3.3 Cross-validation 38
4.4 Performance of Lesion Classifier 40
4.5 Performance of Early Detection 43
4.6 LesionEdit Tool 45
4.7 Mobile Application 47
4.7.1 Line chatbot 47
4.7.2 Oral Screening App 49
4.8 Challenges in Lesion Detection Model 52
Chapter 5 Conclusion 55
References 57 | - |
| dc.language.iso | en | - |
| dc.title | 應用深度學習於開發口腔病灶檢測與分類之行動應用程式 | zh_TW |
| dc.title | Application of Deep Learning for Developing Mobile APP for Oral Lesion Detection and Classification | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 王偉仲;陳世杰;李正喆 | zh_TW |
| dc.contributor.oralexamcommittee | Wei-Chung Wang;Shyh-Jye Chen;Jang-Jaer Lee | en |
| dc.subject.keyword | 口腔癌篩檢, 物件檢測, 深度學習 | zh_TW |
| dc.subject.keyword | Oral cancer screening, Object detection, Deep learning | en |
| dc.relation.page | 63 | - |
| dc.identifier.doi | 10.6342/NTU202401147 | - |
| dc.rights.note | Authorized for release (campus access only) | - |
| dc.date.accepted | 2024-08-14 | - |
| dc.contributor.author-college | College of Bioresources and Agriculture | - |
| dc.contributor.author-dept | Department of Biomechatronics Engineering | - |
| dc.date.embargo-lift | 2027-08-05 | - |
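As context for this record: the abstract describes a two-stage assessment in which a detection model first marks lesion locations and severities, and a classifier then verifies each marked lesion before an overall severity and medical suggestion are produced. The following minimal Python sketch illustrates how such a pipeline could be wired together; the `Lesion` type, the severity labels, the `assess_image` function, and the worst-lesion aggregation rule are hypothetical illustrations, not the thesis's actual code.

```python
# A minimal sketch (assumed, not the authors' code) of the two-stage
# assessment described in the abstract: a detector marks lesions, a
# classifier verifies each cropped lesion, and an overall severity is
# aggregated. All names, labels, and the worst-lesion rule are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np

SEVERITY_ORDER = ["healthy", "benign", "premalignant", "malignant"]

@dataclass
class Lesion:
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixels
    severity: str                   # one of SEVERITY_ORDER

def assess_image(
    image: np.ndarray,
    detect: Callable[[np.ndarray], List[Lesion]],
    classify_crop: Callable[[np.ndarray], str],
) -> Tuple[List[Lesion], str]:
    """Stage 1: locate lesions; stage 2: verify each class; then aggregate."""
    lesions = detect(image)                       # stage 1: detection model
    for lesion in lesions:
        x1, y1, x2, y2 = lesion.box
        lesion.severity = classify_crop(image[y1:y2, x1:x2])  # stage 2
    # Hypothetical aggregation: overall severity = worst verified lesion.
    overall = max((l.severity for l in lesions),
                  key=SEVERITY_ORDER.index, default="healthy")
    return lesions, overall

# Hypothetical usage with stub models standing in for the trained networks:
if __name__ == "__main__":
    img = np.zeros((480, 640, 3), dtype=np.uint8)
    _, overall = assess_image(
        img,
        detect=lambda im: [Lesion((10, 10, 60, 60), "premalignant")],
        classify_crop=lambda crop: "malignant",
    )
    print(overall)  # -> "malignant"
```

Passing `detect` and `classify_crop` in as callables keeps the sketch independent of any particular detector (e.g., a YOLOv7 wrapper) or classifier backbone, which is all the abstract specifies.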
Appears in Collections: Department of Biomechatronics Engineering
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf (Restricted access) | 26.8 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
