Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95889
| Title: | 應用深度學習於開發口腔病灶檢測與分類之行動應用程式 Application of Deep Learning for Developing Mobile APP for Oral Lesion Detection and Classification |
| Author: | 廖俊凱 Jun-Kai Liao |
| Advisor: | 周呈霙 Cheng-Ying Chou |
| Keywords: | Oral cancer screening, Object detection, Deep learning |
| Publication Year: | 2024 |
| Degree: | Master's |
| Abstract: | Oral cancer is a common malignancy, ranking as the thirteenth most common cancer worldwide. In Taiwan, due to the unique culture of betel nut chewing, over 8,000 new cases of oral cancer are reported annually, and its mortality rate ranks sixth. Because of a lack of awareness and delays in seeking medical treatment, most cases are diagnosed at an advanced stage. Early detection can significantly increase the survival rate. Traditional diagnostic approaches such as visual inspection, palpation, and biopsy are time-consuming and require specialized expertise. This study therefore aims to develop a mobile application for identifying potential oral cancer lesions. It uses convolutional neural networks as the core technology and inserts an attention mechanism into the model backbone to enhance performance. The assessment of an oral image is divided into two stages: in the first stage, the lesion detection model marks lesion locations and severities; in the second stage, the lesion classifier verifies the classes of the marked lesions, and the overall severity of the image together with the corresponding medical suggestion is then provided (see the illustrative pipeline sketch after the metadata record below). Experimental results show that the lesion detection model achieves a mean average precision (mAP@0.5) of 74.7% and an F1-score of 71%, the lesion classifier achieves an overall accuracy of 77%, and the accuracy of evaluating the overall severity of an image reaches 89.3%. The developed mobile application features a simple, user-friendly interface that guides users to capture images of eight specific areas of the oral cavity. After analysis by the models deployed in the backend, the results are returned within about ten seconds. The convenience and high accuracy of the application benefit the public in self-screening and can also provide physicians with objective diagnostic support in clinical settings. In the future, this work is expected to help reduce the incidence and mortality of oral cancer. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95889 |
| DOI: | 10.6342/NTU202401147 |
| Full-Text Authorization: | Authorized (campus access only) |
| Electronic Full-Text Release Date: | 2027-08-05 |
| Appears in Collections: | Department of Biomechatronics Engineering |
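
The abstract describes a two-stage assessment: a detection model proposes lesion locations with per-lesion severities, a classifier verifies the marked lesion classes, and the overall severity of the image plus a medical suggestion are then derived. The Python sketch below is only an illustration of how such a pipeline could be orchestrated; the function names, class labels, severity scale, aggregation rule, and suggestion texts are assumptions and are not taken from the thesis, and the actual detector/classifier architectures (including the attention-augmented backbone) are not shown.

```python
# Hypothetical sketch of the two-stage oral-image assessment pipeline.
# All names, labels, and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels


@dataclass
class Lesion:
    box: Box
    label: str     # e.g. "ulcer", "leukoplakia" (illustrative class names)
    severity: int  # 0 = benign, 1 = suspicious, 2 = high risk (assumed scale)


def assess_image(
    image,  # decoded image from one of the eight oral-cavity views
    detect: Callable[[object], List[Lesion]],            # stage 1: lesion detector
    classify: Callable[[object, Box], Tuple[str, int]],  # stage 2: lesion classifier
) -> dict:
    """Run the two-stage assessment on a single oral image."""
    # Stage 1: the detection model marks lesion locations and severities.
    lesions = detect(image)

    # Stage 2: the classifier verifies (and may correct) each marked lesion.
    for lesion in lesions:
        lesion.label, lesion.severity = classify(image, lesion.box)

    # Overall severity taken as the worst per-lesion severity
    # (an assumption; the aggregation rule is not given in the abstract).
    overall = max((l.severity for l in lesions), default=0)

    # Map overall severity to a medical suggestion (illustrative wording).
    suggestions = {
        0: "No suspicious lesion found; continue routine self-checks.",
        1: "Suspicious lesion found; consider a dental or ENT consultation.",
        2: "High-risk lesion found; seek clinical examination promptly.",
    }
    return {"lesions": lesions, "overall_severity": overall,
            "suggestion": suggestions[overall]}
```

In the deployed app, `detect` and `classify` would presumably wrap the trained CNN models served in the backend, with each of the eight captured views assessed in this way and the results returned to the client within about ten seconds.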
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (not authorized for public access) | 26.8 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
