NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83108
Full metadata record
dc.contributor.advisor (zh_TW): 張瑞峰
dc.contributor.advisor (en): Ruey-Feng Chang
dc.contributor.author (zh_TW): 張詠辰
dc.contributor.author (en): Yung-Chen Chang
dc.date.accessioned: 2023-01-08T17:07:34Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-01-06
dc.date.issued: 2022
dc.date.submitted: 2022-11-25
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/83108
dc.description.abstract (zh_TW): 肺腺癌是一種常見的組織學亞型肺癌,而它的成因又跟表皮生長因子受器(epidermal growth factor receptor, EGFR)的突變有關。透過早期的診斷跟標靶治療,可以有效地提升EGFR突變腺癌患者的疾病無惡化存活期(progression-free survival, PFS)。因此,儘早確認EGFR的突變情形對腺癌患者非常關鍵。過去的研究提出了電腦斷層掃描(computed tomography, CT)特徵以及影像組學(radiomics)特徵來對EGFR突變進行非侵入式的預測。然而,這兩種方法存在缺點:電腦斷層掃描特徵需要人力去定義及量測,而影像組學是基於固定的公式來提取特徵,提取後還需經過特徵選取(feature selection)來篩選出重要的特徵。近年來,深度學習(deep learning)在醫學影像任務上發揮了很大的影響力,可以在不需人為介入下自動提取多樣且獨特的特徵。因此,本研究提出一個基於卷積神經網路(convolutional neural networks, CNN)模型和Transformer模型的電腦輔助診斷系統(computer-aided diagnosis, CADx)來提供快速且非侵入式的診斷。
本研究提出的系統包含影像前處理、肺結節切割和EGFR分類。在影像前處理中,肺結節和周圍組織會從電腦斷層影像中被提取出來,再進行影像大小的調整以及正規化處理。接著,結合UNETR模型和雙頭(dual head)架構的3-D DHeadUNETR切割模型會從處理過的影像中提取肺結節的遮罩。最後,處理後的影像以及對應的結節遮罩會合併匯入3-D RGA-SANet分類模型來進行EGFR突變狀況預測。3-D RGA-SANet使用分組注意力(split-attention, SA)區塊來整合多分支架構的特徵,以及全局關係感知注意力(relation-aware global attention, RGA)來捕捉長距離的相依關係。實驗結果指出,本系統在肺結節切割上的Dice係數和IoU分別可達0.8265和0.7105,在EGFR分類上可達75.00%的正確率、75.70%的靈敏度、74.31%的特異度和0.7731的ROC曲線下面積,證實本系統可以有效輔助EGFR的診斷。
dc.description.abstract (en): Lung adenocarcinoma is a common histological subtype of lung cancer related to epidermal growth factor receptor (EGFR) mutation. Through early diagnosis and targeted therapy, progression-free survival (PFS) can be extended effectively in EGFR-positive adenocarcinoma patients. Thus, it is essential to identify the EGFR mutation status early for adenocarcinoma patients. Non-invasive methods such as chest computed tomography (CT) features and radiomic features have been used in previous studies to predict EGFR mutation status. However, both methods have drawbacks: chest CT features require physician effort to define and measure, while radiomic features are extracted by fixed formulas, which are inflexible and require a feature-selection step. Recently, convolutional neural networks (CNNs) have shown great impact on medical imaging tasks, as they can automatically capture diverse and distinctive features without human intervention. Therefore, a computer-aided diagnosis (CADx) system based on CNN and Transformer models was proposed to provide a fast, non-invasive diagnosis of EGFR status.
The proposed CADx system consists of image preprocessing, nodule segmentation, and EGFR classification. First, in image preprocessing, nodules and surrounding tissue are extracted into volumes of interest (VOIs), which are then resized and normalized. Next, the proposed segmentation model, 3-D DHeadUNETR, which combines the UNETR model with a dual-head structure, produces nodule masks from the preprocessed VOIs. Lastly, the preprocessed VOIs and the corresponding nodule masks are concatenated and fed into the proposed classification model, 3-D RGA-SANet, for EGFR prediction. The 3-D RGA-SANet uses split-attention (SA) blocks to integrate the feature maps of its multi-branch structure and relation-aware global attention (RGA) blocks to capture long-range dependencies across spatial and channel information. In the experiments, the proposed CADx system achieved a Dice coefficient of 0.8265 and an Intersection over Union (IoU) of 0.7105 in nodule segmentation, and reached an accuracy of 75.00%, a sensitivity of 75.70%, a specificity of 74.31%, and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.7731 in EGFR classification. These results indicate that the proposed system can assist radiologists in diagnosing EGFR status.
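The abstract above reports Dice and IoU for segmentation and accuracy, sensitivity, specificity, and AUC for classification. As a reading aid only (this is not the thesis code), the following minimal NumPy sketch shows how those metrics are defined for binary masks and binary labels; AUC is omitted because it requires ranked probability outputs (e.g. scikit-learn's `roc_auc_score`), and all array names here are illustrative.

```python
import numpy as np

def dice_iou(pred, target):
    """Dice coefficient and Intersection over Union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    return dice, inter / union

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (TPR), and specificity (TNR) from binary labels."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn)   # recall on mutation-positive cases
    specificity = tn / (tn + fp)   # recall on wild-type cases
    return accuracy, sensitivity, specificity

# Toy 2x2x2 volumes standing in for a predicted and a ground-truth nodule mask.
pred_mask = np.array([[[1, 1], [0, 0]], [[1, 0], [0, 0]]])
true_mask = np.array([[[1, 1], [1, 0]], [[0, 0], [0, 0]]])
dice, iou = dice_iou(pred_mask, true_mask)   # dice = 2*2/(3+3) ≈ 0.667, iou = 2/4 = 0.5
acc, sens, spec = classification_metrics([1, 1, 0, 0], [1, 0, 0, 1])
```

On real data, `pred_mask` would come from the segmentation model and the label vectors from per-patient EGFR predictions; the definitions themselves are standard.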
dc.description.provenance (en): Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-01-08T17:07:34Z. No. of bitstreams: 0
dc.description.provenance (en): Made available in DSpace on 2023-01-08T17:07:34Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
口試委員會審定書 I
致謝 II
摘要 III
Abstract V
Table of Contents VII
List of Figures IX
List of Tables X
Chapter 1. Introduction 1
Chapter 2. Material 7
Chapter 3. Methods 10
3.1. Image preprocessing 11
3.2. Nodule segmentation 13
3.2.1. 3-D DHeadUNETR 13
3.2.1.1. Transformer block 15
3.2.1.2. Dual head structure 17
3.2.2. Loss function for segmentation 19
3.3. EGFR classification 20
3.3.1. 3-D RGA-SANet Architecture 20
3.3.2. 3-D RGA-SA Block 22
3.3.2.1. 3-D SA Block 23
3.3.2.2. 3-D RGA Block 25
3.3.3. Loss function for classification 29
Chapter 4. Experiment Results and Discussions 31
4.1. Experiment environment 31
4.2. Evaluation 31
4.3. Result 33
4.3.1. Comparison of Different Models for Segmentation 33
4.3.2. Comparison of Different Models and Input for Classification 34
4.3.3. Ablation Study 38
4.3.4. Comparison of Different Feature Types for Classification 40
4.4. Discussion 43
Chapter 5. Conclusion 51
Reference 53
dc.language.iso: en
dc.subject (zh_TW): 表皮生長因子受體
dc.subject (zh_TW): 電腦斷層掃描
dc.subject (zh_TW): 肺癌
dc.subject (zh_TW): 卷積神經網路
dc.subject (zh_TW): 注意力機制
dc.subject (zh_TW): 電腦輔助診斷系統
dc.subject (en): attention mechanism
dc.subject (en): lung cancer
dc.subject (en): computed tomography
dc.subject (en): epidermal growth factor receptor
dc.subject (en): computer-aided diagnosis
dc.subject (en): convolutional neural networks
dc.title (zh_TW): 以 U 型 Transformer 模型及分組注意力網路作為電腦輔助 EGFR 突變診斷系統於肺部電腦斷層影像
dc.title (en): EGFR Mutation Diagnosis System in Lung CT Images Based on U-shape Transformer and Split-attention Network
dc.title.alternative: EGFR Mutation Diagnosis System in Lung CT Images Based on U-shape Transformer and Split-attention Network
dc.type: Thesis
dc.date.schoolyear: 111-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee (zh_TW): 羅崇銘;陳啓禎
dc.contributor.oralexamcommittee (en): Chung-Ming Lo;Chii-Jen Chen
dc.subject.keyword (zh_TW): 肺癌,電腦斷層掃描,表皮生長因子受體,電腦輔助診斷系統,卷積神經網路,注意力機制
dc.subject.keyword (en): lung cancer,computed tomography,epidermal growth factor receptor,computer-aided diagnosis,convolutional neural networks,attention mechanism
dc.relation.page: 59
dc.identifier.doi: 10.6342/NTU202210076
dc.rights.note: 同意授權(限校園內公開) (authorized for campus-only access)
dc.date.accepted: 2022-11-28
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊工程學系 (Department of Computer Science and Information Engineering)
Appears in Collections: Department of Computer Science and Information Engineering (資訊工程學系)

Files in This Item:
File: U0001-0355221124572148.pdf (access restricted to NTU campus IPs; use the library VPN service from off campus)
Size: 1.31 MB
Format: Adobe PDF

