NTU Theses and Dissertations Repository

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90784
Full metadata record

DC Field | Value | Language
dc.contributor.advisor | 張瑞峰 | zh_TW
dc.contributor.advisor | Ruey-Feng Chang | en
dc.contributor.author | 林奕辰 | zh_TW
dc.contributor.author | Yi-Chen Lin | en
dc.date.accessioned | 2023-10-03T17:36:21Z | -
dc.date.available | 2023-11-09 | -
dc.date.copyright | 2023-10-03 | -
dc.date.issued | 2023 | -
dc.date.submitted | 2023-05-07 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90784 | -
dc.description.abstract (en):
Breast cancer is one of the top ten causes of cancer death and has the highest incidence among cancers in women worldwide. It can be divided into multiple subtypes with different biological behaviors, therapy responses, and treatment outcomes according to the expression of biomarkers, including the estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2), and the proliferation marker Ki67. Biomarker expression is currently assessed by immunohistochemistry (IHC) after biopsy, which may not accurately represent target protein levels across the whole tissue or capture the spatiotemporal evolution of the tumor. Researchers have therefore sought correlations between biomarker expression and breast ultrasound (US) findings, which can provide preliminary, dynamic monitoring of biological behavior to assist clinicians in diagnosis, treatment assessment, and prognosis. This study proposes a computer-aided diagnosis (CAD) system that predicts the expression of multiple biomarkers, helping clinicians prescribe personalized treatment and reducing the burden on medical staff.
The proposed system (BioCAN) comprises image preprocessing, tumor segmentation, and biomarker prediction. First, the region of interest (ROI) containing the tumor is extracted and resized to a consistent size. A Swin-Unet segmentation model then predicts the tumor mask from the ROI. Next, a fused image is produced by concatenating the ROI, the tumor mask, and the tumor image, emphasizing the texture and morphological features of the tumor that correlate with biomarker expression. The fused image serves as input to the proposed convolutional neural network (CNN), which is equipped with two attention mechanisms: a Coordinate Attention module that captures channel dependencies together with direction-aware positional information, and an ACmix module that introduces long-range features by mixing convolution and self-attention within a single block. Finally, a weighted average (WA) ensemble of the three most accurate classifiers produces a stable and accurate prediction, as sketched below.
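To make the fusion and ensembling steps concrete, the following is a minimal PyTorch-style sketch. The tensor shapes, variable names, and accuracy-based weights are illustrative assumptions, not the implementation used in the thesis.

```python
import torch

# Fusion step (assumed 224x224 grayscale ROI): concatenate the ROI, the
# Swin-Unet tumor mask, and the masked tumor image along the channel axis.
roi = torch.rand(1, 1, 224, 224)                   # grayscale US ROI
mask = (torch.rand(1, 1, 224, 224) > 0.5).float()  # predicted tumor mask
tumor = roi * mask                                 # tumor-only image
fused = torch.cat([roi, mask, tumor], dim=1)       # 3-channel fused input

# Weighted-average (WA) ensemble over the top-3 classifiers: combine each
# model's class probabilities using normalized weights (here hypothetically
# set to the models' validation accuracies).
def weighted_average(prob_list, weights):
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()                                 # normalize to sum to 1
    stacked = torch.stack(prob_list)                # (n_models, batch, classes)
    return (w.view(-1, 1, 1) * stacked).sum(dim=0)

probs = [torch.softmax(torch.rand(1, 2), dim=1) for _ in range(3)]
ensemble_prob = weighted_average(probs, [0.84, 0.83, 0.82])  # hypothetical weights
```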
In this study, we evaluated the BioCAN system on 507 patients, comprising 414 ER-positive and 93 ER-negative tumors, 324 PR-positive and 183 PR-negative tumors, 83 HER2-positive and 424 HER2-negative tumors, and 351 Ki67-positive and 156 Ki67-negative tumors. In the segmentation stage, the Dice similarity coefficient (DSC), Intersection over Union (IoU), 95th-percentile Hausdorff distance (HD95), and average symmetric surface distance (ASSD) were 0.9010, 0.8225, 17.5166, and 6.4369, respectively. For biomarker prediction, the accuracy, sensitivity, specificity, and area under the ROC curve (AUC) reached 83.23%, 83.57%, 81.72%, and 0.9060 for ER status; 77.71%, 78.70%, 75.96%, and 0.8526 for PR status; 83.23%, 80.72%, 83.73%, and 0.9109 for HER2 status; and 84.02%, 83.76%, 84.62%, and 0.9111 for Ki67 status. These results indicate that the proposed method can reduce the burden on medical staff and assist radiologists in predicting the expression of multiple biomarkers.
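For reference, the overlap metrics quoted above follow their standard definitions. The NumPy sketch below computes Dice and IoU on binary masks; the mask contents are illustrative assumptions, and HD95/ASSD are omitted because they require surface-distance computations (available in packages such as MedPy).

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|P ∩ G| / (|P| + |G|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union: |P ∩ G| / |P ∪ G|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy binary masks standing in for a predicted and a ground-truth tumor region.
pred = np.zeros((224, 224), dtype=bool)
pred[50:150, 50:150] = True
gt = np.zeros((224, 224), dtype=bool)
gt[60:160, 60:160] = True
print(f"Dice = {dice(pred, gt):.4f}, IoU = {iou(pred, gt):.4f}")
```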

Keywords: Breast cancer, Breast ultrasound, Computer-aided diagnosis, Estrogen receptor, Progesterone receptor, Human epidermal growth factor receptor 2, Ki67, Convolutional neural network, Attention mechanism, Self-attention
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-10-03T17:36:21Z. No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2023-10-03T17:36:21Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents:
Committee Approval Certificate i
Acknowledgements ii
Chinese Abstract iii
Abstract v
Table of Contents viii
List of Figures x
List of Tables xiii
Chapter 1 Introduction 1
Chapter 2 Materials 10
Chapter 3 Methods 13
3.1. Image Preprocessing 16
3.2. Tumor Segmentation 17
3.2.1. U-shaped Shifted Windows Transformer (Swin-Unet) 17
3.2.2. Image Fusion 21
3.3. Biomarker Prediction 22
3.3.1. ConvNeXt Backbone 22
3.3.2. Attention Modules 23
3.3.3. The Proposed Model 28
3.3.4. Ensemble Learning 31
3.3.4.1. Selection of Base Learners 31
3.3.4.2. Ensemble Strategies 33
Chapter 4 Experimental Results 35
4.1. Experimental Setting and Evaluation 35
4.1.1. Training Environment 35
4.1.2. Experimental Setting of Tumor Segmentation 35
4.1.3. Experimental Setting of Biomarker Prediction 38
4.2. Comparisons of Different Models for Segmentation 41
4.3. Comparisons of Different Image Representations 42
4.4. Comparisons of Different Classifiers 45
4.5. Comparisons of Different Ensemble Strategies 48
4.6. Comparisons of Different Studies 52
Chapter 5 Discussion and Conclusion 55
References 64
dc.language.iso | zh_TW | -
dc.subject | Self-attention | en
dc.subject | Breast cancer | en
dc.subject | Breast ultrasound | en
dc.subject | Computer-aided diagnosis | en
dc.subject | Estrogen receptor | en
dc.subject | Progesterone receptor | en
dc.subject | Human epidermal growth factor receptor 2 | en
dc.subject | Ki67 | en
dc.subject | Convolutional neural network | en
dc.subject | Attention mechanism | en
dc.title | 以注意力強化 ConvNeXt 網絡於乳房超音波預測生物標誌 | zh_TW
dc.title | BioCAN: Prediction of Molecular Biomarkers Using Breast Ultrasound Images Based on Attention-augmented ConvNeXt Network | en
dc.type | Thesis | -
dc.date.schoolyear | 111-2 | -
dc.description.degree | Master's | -
dc.contributor.oralexamcommittee | 羅崇銘;陳啓禎 | zh_TW
dc.contributor.oralexamcommittee | Chung-Ming Lo;Chii-Jen Chen | en
dc.subject.keyword | Breast cancer, Breast ultrasound, Computer-aided diagnosis, Estrogen receptor, Progesterone receptor, Human epidermal growth factor receptor 2, Ki67, Convolutional neural network, Attention mechanism, Self-attention | en
dc.relation.page | 69 | -
dc.identifier.doi | 10.6342/NTU202300239 | -
dc.rights.note | Authorization granted (access restricted to campus) | -
dc.date.accepted | 2023-05-08 | -
dc.contributor.author-college | College of Electrical Engineering and Computer Science | -
dc.contributor.author-dept | Department of Computer Science and Information Engineering | -
dc.date.embargo-lift | 2025-05-08 | -
Appears in Collections: Department of Computer Science and Information Engineering

Files in This Item:
File | Size | Format
ntu-111-2.pdf (access restricted to NTU campus IPs; use the VPN service from off campus) | 3.3 MB | Adobe PDF