NTU Theses and Dissertations Repository › 公共衛生學院 › 健康數據拓析統計研究所
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99927
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 盧子彬 | zh_TW
dc.contributor.advisor | Tzu-Pin Lu | en
dc.contributor.author | 邱柏鈞 | zh_TW
dc.contributor.author | Po-Chun Chiu | en
dc.date.accessioned | 2025-09-19T16:19:22Z | -
dc.date.available | 2025-09-20 | -
dc.date.copyright | 2025-09-19 | -
dc.date.issued | 2025 | -
dc.date.submitted | 2025-07-25 | -
dc.identifier.citation | Reference
1. Chiang, Y.-C., et al., Trends in incidence and survival outcome of epithelial ovarian cancer: 30-year national population-based registry in Taiwan. Journal of Gynecologic Oncology, 2013. 24(4): p. 342.
2. 李家儀, 免疫因子與上皮性卵巢癌預後之關聯性, in 臨床醫學研究所. 2022, 國立臺灣大學.
3. Buys, S.S., et al., Effect of screening on ovarian cancer mortality: the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Randomized Controlled Trial. JAMA, 2011. 305(22): p. 2295-303.
4. Pignata, S., et al., Chemotherapy in epithelial ovarian cancer. Cancer Lett, 2011. 303(2): p. 73-83.
5. Wakabayashi, M.T., P.S. Lin, and A.A. Hakim, The Role of Cytoreductive/Debulking Surgery in Ovarian Cancer. Journal of the National Comprehensive Cancer Network, 2008. 6(8): p. 803-811.
6. Manegold-Brauer, G., D. Timmerman, and M. Hoopmann, Evaluation of Adnexal Masses: The IOTA Concept. Ultraschall Med, 2022. 43(6): p. 550-569.
7. Hack, K. and P. Glanc, The Abnormal Ovary: Evolving Concepts in Diagnosis and Management. Obstetrics and Gynecology Clinics of North America, 2019. 46(4): p. 607-624.
8. Van Calster, B., et al., Evaluating the risk of ovarian cancer before surgery using the ADNEX model to differentiate between benign, borderline, early and advanced stage invasive, and secondary metastatic tumours: prospective multicentre diagnostic study. BMJ, 2014. 349: p. g5920.
9. Andreotti, R.F., et al., Ovarian-Adnexal Reporting Lexicon for Ultrasound: A White Paper of the ACR Ovarian-Adnexal Reporting and Data System Committee. J Am Coll Radiol, 2018. 15(10): p. 1415-1429.
10. Xu, H.L., et al., Artificial intelligence performance in image-based ovarian cancer identification: A systematic review and meta-analysis. EClinicalMedicine, 2022. 53: p. 101662.
11. Sone, K., et al., Application of artificial intelligence in gynecologic malignancies: A review. J Obstet Gynaecol Res, 2021. 47(8): p. 2577-2585.
12. Liu, P., et al., Pattern Classification for Ovarian Tumors by Integration of Radiomics and Deep Learning Features. Curr Med Imaging, 2022. 18(14): p. 1486-1502.
13. Mathur, M. and V. Jindal, A Convolutional Neural Network Approach for Detecting Malignancy of Ovarian Cancer. Advances in Intelligent Systems and Computing, 2021.
14. Gao, Y., et al., Deep learning-enabled pelvic ultrasound images for accurate diagnosis of ovarian cancer in China: a retrospective, multicentre, diagnostic study. The Lancet Digital Health, 2022. 4(3): p. e179-e187.
15. Christiansen, F., et al., Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors: comparison with expert subjective assessment. Ultrasound Obstet Gynecol, 2021. 57(1): p. 155-163.
16. Wang, Y. and Q. Zeng, Ovarian Tumor Texture Classification Based on Sparse Auto-Encoder Network Combined with Multi-Feature Fusion and Random Forest in Ultrasound Image. J. Medical Imaging Health Informatics, 2021. 11: p. 424-431.
17. Wang, H., et al., Application of Deep Convolutional Neural Networks for Discriminating Benign, Borderline, and Malignant Serous Ovarian Tumors From Ultrasound Images. Frontiers in Oncology, 2021. 11.
18. Huang, G., et al., Densely Connected Convolutional Networks. arXiv:1608.06993, 2018.
19. Liu, Z., et al., Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv:2103.14030, 2021.
20. Alsentzer, E., et al., Publicly Available Clinical BERT Embeddings. arXiv:1904.03323, 2019.
-
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99927 | -
dc.description.abstract | 背景與目的:卵巢癌是台灣女性生殖器官惡性腫瘤的主要死因之一,早期診斷對改善患者預後至關重要。良性與惡性卵巢腫瘤的治療策略差異極大,術前診斷的準確性直接影響治療決策。傳統超音波診斷高度依賴操作者經驗,存在主觀性與變異性問題。本研究目的為開發一種結合超音波影像與臨床文字報告的多模態深度學習模型,以提升卵巢腫瘤良惡性診斷的精準度。
方法:本論文採用回溯性單中心設計,收集2011年至2021年間台大醫院系統中1,062名接受卵巢腫瘤手術患者的1,342張超音波影像及其相應的超音波文字報告。研究對象依據病理結果分類為良性組(n=612)及惡性組(n=450)後,建構多模態深度學習架構,同時整合DenseNet-121和Swin Transformer用於影像特徵提取,最終採用Bio-Clinical BERT處理臨床文字報告。模型採用受試者層級分層分割,並使用五倍交叉驗證,利用獨立測試集進行預測模型效果評估。
結果:本論文建構之多模態模型在受試者層級分類中達到81.68%準確率、79.38%敏感度、83.81%特異度及0.89的曲線下面積(AUC)。在影像層級分類中表現更佳,準確率為85.15%、敏感度86.73%、特異度83.65%、AUC為0.91。相較於單純影像模型,加入臨床文字資訊能顯著提升診斷效能。在變數重要性上,後向選擇分析顯示在文字報告上的子宮相關描述與卵巢腫瘤特徵描述對最終診斷皆具重要貢獻價值。
結論:本論文成功開發出一套創新整合超音波影像與臨床文字報告的多模態深度學習模型,其診斷效能優於傳統單一模態方法及現有臨床指引。此模型展現了人工智慧輔助診斷在卵巢腫瘤診斷上的應用潛力,有望成為臨床醫師重要的輔助工具,提升術前診斷精準度以改善患者照護品質。
zh_TW
dc.description.abstract | Background and Objectives: In Taiwan, ovarian cancer is a leading cause of mortality from gynecological malignancies among women, making early detection essential for improving patient prognosis. Because treatment strategies for benign and malignant ovarian tumors differ substantially, accurate preoperative diagnosis is critical for appropriate clinical decision-making. Traditional ultrasound diagnosis is highly operator-dependent, introducing subjectivity and variability. To improve diagnostic precision in ovarian tumor classification, this research develops a multimodal deep learning model that combines ultrasound images with their corresponding clinical text reports.
Methods: This retrospective single-center study analyzed 1,342 ultrasound images and their corresponding text reports from 1,062 patients who underwent surgery for ovarian tumors in the National Taiwan University Hospital system between 2011 and 2021. Patients were classified into benign (including endometrioma, n=612) and malignant (including borderline, n=450) groups based on pathological confirmation. A multimodal deep learning architecture was developed, incorporating DenseNet-121 and Swin Transformer for image feature extraction and Bio-Clinical BERT for processing clinical text reports. The dataset was split with subject-level stratification, using 5-fold cross-validation and a 15% independent test set.
Results: The multimodal model achieved superior performance at the subject level with 81.68% accuracy, 79.38% sensitivity, 83.81% specificity, and an area under the curve (AUC) of 0.89. Image-level classification demonstrated even better performance with 85.15% accuracy, 86.73% sensitivity, 83.65% specificity, and an AUC of 0.91. The integration of clinical text information significantly improved diagnostic performance compared to image-only models. Backward selection analysis revealed that both uterine findings and ovarian tumor descriptions contributed synergistically to the final diagnosis.
Conclusions: This study successfully developed a multimodal deep learning model that integrates ultrasound images with clinical text reports, demonstrating superior diagnostic performance compared to traditional unimodal approaches and existing clinical guidelines. The model shows promising potential as an AI-assisted diagnostic tool for ovarian tumor classification, offering clinicians a valuable adjunctive tool to improve preoperative diagnostic accuracy and enhance patient care quality.
en
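The subject-level stratified splitting described in the Methods (section 3.2 of the thesis) can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual code: the function and variable names are hypothetical, and it assumes one pathology-confirmed label per patient.

```python
import random
from collections import defaultdict

def subject_level_split(image_labels, test_frac=0.15, n_folds=5, seed=42):
    """Split images into a held-out test set and CV folds at the subject
    level, so that all images from one patient stay in the same partition.

    image_labels: list of (subject_id, image_id, label) tuples.
    Returns (test_subjects, folds), where folds is a list of subject sets.
    """
    rng = random.Random(seed)

    # Collapse to one label per subject (all of a patient's images share
    # the pathology-confirmed benign/malignant label).
    subject_label = {}
    for subject, _image, label in image_labels:
        subject_label[subject] = label

    # Stratify: handle each class's subjects separately so the benign /
    # malignant ratio is preserved in the test set and in every fold.
    by_class = defaultdict(list)
    for subject, label in subject_label.items():
        by_class[label].append(subject)

    test_subjects, folds = set(), [set() for _ in range(n_folds)]
    for label in sorted(by_class):
        subjects = sorted(by_class[label])
        rng.shuffle(subjects)
        n_test = round(len(subjects) * test_frac)
        test_subjects.update(subjects[:n_test])
        # Distribute the remaining subjects round-robin across CV folds.
        for i, subject in enumerate(subjects[n_test:]):
            folds[i % n_folds].add(subject)
    return test_subjects, folds
```

Splitting by subject rather than by image prevents leakage: with roughly 1.3 images per patient, an image-level split would let near-duplicate views of the same tumor appear in both training and test data.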
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-09-19T16:19:22Z. No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2025-09-19T16:19:22Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents | 中文摘要 i
Abstract iii
Introduction 1
Materials and Methods 5
1 Data Source 5
1.1 Image Data Source 5
1.2 Text Data Source 6
2 Study Participants 7
3 Method 8
3.1 Analysis Environment 8
3.2 Subject Level Dataset Splitting Methodology 9
3.3 Data Augmentation 11
3.4 CNN Model – DenseNet-121 12
3.5 Transformer Model – Swin Transformer 13
3.6 BERT Model – Bio-Clinical BERT 15
3.7 Transfer Learning 16
3.8 Multi-Modal across Images and Text 16
3.9 Performance metrics 20
Results 22
1. Subject-Image Relationship Analysis 22
2. Model Performance Comparison 23
2.1 Subject Level Classification 23
2.2 Image Level Classification 24
2.3 Contribution of Textual Features 25
Conclusion 26
Discussion 27
1. Cross-Modal Attention 27
2. Comparison with Existing Diagnostic Approaches 28
3. Technical Innovations and Model Architecture Decisions 29
4. Heterogeneity of Data Sources 31
Limitations 32
1. Single-Center Design and Generalizability 32
2. Class Imbalance 33
3. Operator Dependency and Image Quality 33
4. Text Report Variability 33
5. Limited Clinical Data 34
Reference 35
Appendix - A 37
Appendix - B 38
-
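The performance metrics reported in the abstract (accuracy, sensitivity, specificity, AUC; section 3.9 of the thesis) can be computed from per-case predictions as in the sketch below. This is a generic illustration using the standard definitions, not code from the thesis; label 1 is taken to mean malignant.

```python
def confusion_counts(y_true, y_pred):
    """Count TP/TN/FP/FN, treating label 1 (malignant) as positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_scores, threshold=0.5):
    """Accuracy, sensitivity, and specificity at a fixed score threshold."""
    y_pred = [1 if s >= threshold else 0 for s in y_scores]
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # recall on malignant cases
    specificity = tn / (tn + fp)   # recall on benign cases
    return accuracy, sensitivity, specificity

def auc(y_true, y_scores):
    """Threshold-free AUC via the Mann-Whitney U statistic: the probability
    that a randomly chosen malignant case scores above a benign one
    (ties count as half)."""
    pos = [s for t, s in zip(y_true, y_scores) if t == 1]
    neg = [s for t, s in zip(y_true, y_scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike accuracy, the AUC does not depend on the 0.5 threshold, which is why the abstract reports it alongside the thresholded metrics.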
dc.language.iso | en | -
dc.subject | 卵巢腫瘤 | zh_TW
dc.subject | 多模態深度學習 | zh_TW
dc.subject | 超音波影像 | zh_TW
dc.subject | 人工智慧診斷 | zh_TW
dc.subject | multimodal deep learning | en
dc.subject | ultrasound imaging | en
dc.subject | ovarian tumors | en
dc.subject | artificial intelligence diagnosis | en
dc.title | 開發一創新多模態深度學習方法提升卵巢腫瘤診斷精準度 | zh_TW
dc.title | Development of a Novel Multi-Modal Deep Learning Approach to Improve Diagnostic Precision in Ovarian Cancer | en
dc.type | Thesis | -
dc.date.schoolyear | 113-2 | -
dc.description.degree | 碩士 | -
dc.contributor.oralexamcommittee | 馮嬿臻;蕭自宏;江盈澄 | zh_TW
dc.contributor.oralexamcommittee | Yen-Chen Feng;Tzu-Hung Hsiao;Ying-Cheng Chiang | en
dc.subject.keyword | 卵巢腫瘤,多模態深度學習,超音波影像,人工智慧診斷 | zh_TW
dc.subject.keyword | ovarian tumors,multimodal deep learning,ultrasound imaging,artificial intelligence diagnosis | en
dc.relation.page | 38 | -
dc.identifier.doi | 10.6342/NTU202502500 | -
dc.rights.note | 同意授權(限校園內公開) | -
dc.date.accepted | 2025-07-25 | -
dc.contributor.author-college | 公共衛生學院 | -
dc.contributor.author-dept | 健康數據拓析統計研究所 | -
dc.date.embargo-lift | 2028-07-24 | -
Appears in collections: 健康數據拓析統計研究所

Files in this item:
File | Size | Format
ntu-113-2.pdf (restricted; not publicly available) | 1.44 MB | Adobe PDF


Except where otherwise noted, all items in the repository are protected by copyright, with all rights reserved.
