Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97153
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 莊曜宇 | zh_TW |
dc.contributor.advisor | Eric Y. Chuang | en |
dc.contributor.author | 田庚昀 | zh_TW |
dc.contributor.author | Geng-Yun Tien | en |
dc.date.accessioned | 2025-02-27T16:26:14Z | - |
dc.date.available | 2025-02-28 | - |
dc.date.copyright | 2025-02-27 | - |
dc.date.issued | 2025 | - |
dc.date.submitted | 2025-02-07 | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97153 | - |
dc.description.abstract | 大腸直腸癌是全球最致命的癌症之一。辨識生物標記狀態對於標靶治療和免疫治療至關重要,但現行的篩檢方法通常成本高昂且耗時費力,特別是在低收入地區。組織病理全切片影像是診斷中經常使用的工具,它能提供有關細胞異質性的關鍵資訊。在本研究中,我們提出一個整合的深度學習框架,用於預測大腸直腸癌中的重要生物標記,包括BRAF V600E突變、KRAS突變和MSI-H。
我們從TCGA-COAD和CPTAC-COAD資料庫中收集帶有生物標記標註的組織病理全切片影像。TCGA-COAD用於模型訓練和交叉驗證,CPTAC-COAD則用於外部測試。這些切片影像被分割成補丁,以滿足深度學習模型的輸入要求。首先,我們訓練一個腫瘤偵測模型,用來準確區分腫瘤補丁和非腫瘤補丁。接著使用TCGA-COAD的腫瘤補丁來微調基於自監督學習的特徵提取器,其將來自同一切片影像的腫瘤補丁轉換成特徵矩陣。最後,我們基於多實例學習配合注意力機制、變換器、圖神經網路三種技術,分別訓練了Att-MIL、Tran-MIL 和 GNN-MIL三個生物標記預測模型。這些模型在特徵矩陣上進行完整的訓練,並使用集成方法將三個模型的預測輸出整合為最終結果。本研究提出的腫瘤偵測模型有著優異的表現,在測試階段分辨腫瘤補丁時達到至少90%的精確度和召回率。特徵提取器微調過程中訓練損失曲線的穩定收斂,進一步驗證該模型能夠從腫瘤補丁中捕捉最具代表性的特徵。在生物標記預測模型中,基於TCGA-COAD的交叉驗證結果顯示,MSI-H預測的表現最佳,三個模型的集成預測之AUC達到90%;其次是BRAF V600E突變預測,其AUC達到87%;而KRAS突變預測之AUC則為64%,三項生物標記的預測效能均優於過往的其他研究。總結來說,所提出的深度學習框架提供一種有效的方式來判斷生物標記狀態,並展示了從臨床影像中發現潛在分子異常的能力。 | zh_TW |
dc.description.abstract | Colorectal cancer (CRC) is one of the deadliest cancers globally. Identifying biomarker status for targeted therapy and immunotherapy is essential, but current screening methods are often expensive and labor-intensive, particularly in low-income regions. Histopathological whole slide images (WSIs), routinely used for diagnostics, can provide valuable insights into cellular heterogeneity. In this study, we propose an integrated deep learning (DL) framework to predict important CRC biomarkers, including BRAF V600E mutation, KRAS mutation, and MSI-H.
WSIs with biomarker labels were collected from the TCGA-COAD and CPTAC-COAD cohorts, using TCGA-COAD for model training and cross-validation while CPTAC-COAD served as the external testing set. These WSIs were segmented into patches to meet the input requirements of the DL models. First, we trained a tumor detection model to accurately distinguish tumor patches from non-tumor patches. Then, the TCGA-COAD tumor patches were used to fine-tune a self-supervised learning-based feature extractor, which converts tumor patches from the same WSI into a feature matrix. Finally, we trained three biomarker prediction models—Att-MIL, Tran-MIL, and GNN-MIL—based on multiple instance learning with attention mechanisms, transformers, and graph neural networks, respectively. These models were fully trained on the feature matrices, and the predictions from all three models were integrated using an ensemble method to generate the final results. The proposed tumor detection model exhibited excellent performance, achieving at least 90% precision and recall in identifying tumor patches during testing. The stable convergence of the training loss curve during fine-tuning of the feature extractor further validated the model's ability to capture the most representative features from tumor patches. Among the biomarker prediction models, cross-validation results based on TCGA-COAD indicated that MSI-H prediction achieved the best performance, with the ensemble prediction reaching an AUC of 90%; BRAF V600E mutation prediction followed with an AUC of 87%, and KRAS mutation prediction achieved an AUC of 64%. The prediction performance for all three biomarkers surpassed that of previous studies. In conclusion, the proposed DL framework offers an efficient approach to determining biomarker status and demonstrates a capability to uncover potential molecular aberrations from clinical images. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-27T16:26:14Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2025-02-27T16:26:14Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | 致謝 i
摘要 iii Abstract v Content vii List of Abbreviations xi List of Figures xiii List of Tables xv Chapter 1 Introduction 1 1.1 Colorectal Cancer 1 1.2 Generalizable Biomarker for Colorectal Cancer 2 1.3 Histopathological Image 7 1.4 Computational Pathology 8 1.5 Machine Learning Paradigms 10 1.5.1 Supervised Learning 10 1.5.2 Self-Supervised Learning 11 1.5.3 Contrastive Learning 12 1.5.4 Multiple-Instance Learning 13 1.6 Deep Learning 14 1.6.1 Deep Neural Network 15 1.6.2 Convolutional Neural Network 18 1.6.3 Loss Function 20 1.6.4 Optimization 21 1.6.5 MIL Attention Mechanism 22 1.6.6 Transformer 23 1.6.7 Graph and Graph Neural Network 24 1.7 Previous Works in Biomarker Prediction 26 1.8 Motivation 29 1.9 Aims 30 Chapter 2 Materials & Methods 33 2.1 Overview of the Proposed Pipeline 33 2.2 Materials 35 2.2.1 Imaging Data Acquisition 35 2.2.2 Genetic Biomarker Profiles Acquisition 37 2.2.3 Data Inclusion Criteria and Mapping 39 2.2.4 Tissue Type Dataset in Patch-Level 41 2.3 Histopathological Images Preprocessing 42 2.4 Tumor Detection Model 44 2.4.1 Model Development 46 2.4.1.1 Data Augmentation 47 2.4.1.2 Five-Fold Cross-Validation 48 2.4.1.3 Hyperparameter Setting 48 2.4.2 Performance Metrics 49 2.4.3 Model Interpretability 51 2.4.3.1 Grad-CAM Analysis 51 2.4.3.2 Tissue Category Map Visualization 52 2.4.4 Tumor Patches Quality Control 52 2.4.4.1 Color Normalization 52 2.4.4.2 Blurry Patches Elimination 54 2.4.5 Tumor Patches Feature Extraction 55 2.5 Biomarker Prediction Models 59 2.5.1 Attention-Based Multiple Instance Learning Model 59 2.5.2 Transformer-Based Multiple Instance Learning Model 61 2.5.3 GNN-Based Multiple Instance Learning Model 64 2.5.4 Majority Voting 66 2.5.5 Model Development 67 2.5.5.1 Up-Sampling 67 2.5.5.2 Weighted Loss Function 68 2.5.5.3 Hyperparameter Setting 68 2.5.6 Performance Metrics 69 2.5.7 Attention Weights Visualization 72 Chapter 3 Results 75 3.1 Tumor Detection Model 75 3.1.1 Model Interpretability 77 3.1.1.1 Grad-CAM Analysis 77 
3.1.1.2 Tissue Category Map 80 3.1.2 Tumor Patches Prediction on Cohorts 83 3.1.3 SimCLR ResNet50 Feature Extractor Training 84 3.2 Biomarker Prediction Model 85 3.2.1 BRAF V600E Mutation Prediction 85 3.2.2 KRAS Mutation Prediction 92 3.2.3 MSI-H Prediction 98 3.2.4 Feature Extractor Weights Comparison 104 3.2.5 Attention Weight Heatmaps Visualization 106 3.3 Comparison of Related Works 107 Chapter 4 Discussion 109 4.1 Tumor Detection Model 109 4.2 SimCLR Training 110 4.3 Biomarker Prediction 111 4.3.1 BRAF V600E Mutation Prediction 112 4.3.2 KRAS Mutation Prediction 113 4.3.3 MSI-H Prediction 115 4.4 Attention Weights Heatmaps Visualization 117 4.5 Comparison of Related Works 118 4.6 Study Limitations 120 4.7 Future Works 121 Chapter 5 Conclusion 123 References 125 | - |
dc.language.iso | en | - |
dc.title | 基於整合式深度學習框架於大腸直腸癌病理切片影像預測生物標記 | zh_TW |
dc.title | An Integrated Deep Learning Framework for Biomarker Prediction from Histopathological Images in Colorectal Cancer | en |
dc.type | Thesis | - |
dc.date.schoolyear | 113-1 | - |
dc.description.degree | 碩士 | - |
dc.contributor.coadvisor | 陳翔瀚 | zh_TW |
dc.contributor.coadvisor | Hsiang-Han Chen | en |
dc.contributor.oralexamcommittee | 賴亮全;李建樂;周呈霙 | zh_TW |
dc.contributor.oralexamcommittee | Liang-Chuan Lai;Chien-Yueh Lee;Cheng-Ying Chou | en |
dc.subject.keyword | 大腸直腸癌,組織病理切片影像,生物標記預測,深度學習, | zh_TW |
dc.subject.keyword | colorectal cancer, histopathological image, biomarker prediction, deep learning | en |
dc.relation.page | 132 | - |
dc.identifier.doi | 10.6342/NTU202500414 | - |
dc.rights.note | 同意授權(限校園內公開) | - |
dc.date.accepted | 2025-02-07 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 生醫電子與資訊學研究所 | - |
dc.date.embargo-lift | 2030-02-07 | - |
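The abstract's Att-MIL model aggregates patch features into one slide-level representation with an attention mechanism. As a rough illustration only (not the thesis's implementation), attention pooling in the style of Ilse et al. can be sketched in NumPy; the projection matrix `V` and score vector `w` stand in for learnable parameters and are assumptions, not values from the thesis:

```python
import numpy as np

def attention_mil_pool(features, V, w):
    """Collapse a (n_patches, d) feature matrix into a single slide-level
    embedding using softmax attention scores a_i ∝ exp(w · tanh(V h_i))."""
    scores = np.tanh(features @ V.T) @ w      # one raw score per patch
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ features, weights        # (d,) embedding, (n,) weights
```

In the full model such parameters would be learned jointly with a classifier on the pooled embedding, and the per-patch softmax weights are exactly the kind of quantity rendered in the thesis's attention-weight heatmaps.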
Appears in Collections: | 生醫電子與資訊學研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-113-1.pdf (currently not authorized for public access) | 9.5 MB | Adobe PDF | View/Open |
Except where otherwise noted, all items in the repository are protected by copyright, with all rights reserved.
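The ensemble step named in the table of contents (Section 2.5.4, "Majority Voting") combines the three MIL models' slide-level outputs. A minimal sketch, assuming each model emits a probability for the positive biomarker class and the class backed by at least two of the three binarized votes wins:

```python
def majority_vote(probs, threshold=0.5):
    """Binarize each model's probability at `threshold` and return the
    class (0 or 1) that at least two of the three models agree on."""
    votes = [p >= threshold for p in probs]
    return int(sum(votes) >= 2)
```

With three voters and binary classes a tie is impossible, which is one reason an odd-sized ensemble is a convenient design choice.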