Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84668
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 張瑞峰(Ruey-Feng Chang) | |
dc.contributor.author | Ya-Hui Chien | en |
dc.contributor.author | 簡雅慧 | zh_TW |
dc.date.accessioned | 2023-03-19T22:19:48Z | - |
dc.date.copyright | 2022-10-05 | |
dc.date.issued | 2022 | |
dc.date.submitted | 2022-09-13 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84668 | - |
dc.description.abstract | 乳癌是主要造成死亡的癌症之一,及早診斷與治療可以大幅提升乳癌病患的生存率。自動乳房超音波(automated breast ultrasound, ABUS)是常見的乳房檢測技術,提供乳房組織的三維資訊。然而大量二維切片構成的乳房超音波影像讓醫生花費冗長的時間審閱,並且延誤診斷惡性腫瘤。近年來開發了基於深度學習的電腦輔助診斷(computer-aided diagnosis, CADx)系統來自動提取特徵並加速診斷流程。因此本研究提出一個基於Transformer和卷積神經網路(Convolutional Neural Network, CNN)的電腦輔助診斷系統以診斷乳房超音波影像。提出的系統利用自注意力機制來獲得全局影像關係,且加入反向殘差卷積架構來提取局部影像資訊。本系統包含影像前處理、腫瘤切割和腫瘤分類。在影像前處理,會提取腫瘤區域並調整至固定影像大小,然後利用直方圖均衡化增強影像對比度。接著,調整大小後的影像會透過腫瘤分割模型產生腫瘤遮罩。最後,調整大小後的腫瘤影像、增強後的腫瘤影像和腫瘤遮罩會作為所提出的三維漩渦Transformer腫瘤分類模型的輸入並判斷腫瘤良惡性。而提出的模型是由移動窗口自注意力機制和反向殘差互換器所建構來結合全局和局部特徵。根據實驗結果,本研究提出的電腦輔助診斷系統可達到89.9%的正確率、89.8%的靈敏性、89.9%的特異性和0.9401的曲線下面積。實驗結果顯示提出的系統可以有效輔助醫生更準確的診斷乳癌。 | zh_TW |
dc.description.abstract | Breast cancer is one of the leading causes of cancer death. Early diagnosis and treatment can significantly improve the survival rate of patients with breast cancer. Automated breast ultrasound (ABUS) is a common breast screening technique that provides three-dimensional (3-D) spatial information on breast tissue. However, ABUS images consist of a large number of 2-D slices, so radiologists spend considerable time reviewing them, which can delay the diagnosis of malignant tumors. Recently, deep learning-based computer-aided diagnosis (CADx) systems have been developed to extract features automatically and speed up the diagnostic process. Thus, this study proposed a CADx system based on the Transformer and the convolutional neural network (CNN) for tumor diagnosis on ABUS images. The proposed tumor classification model employed the self-attention mechanism to capture global relationships in the image and added an inverted residual convolution block to extract local information. The CADx system consisted of data preprocessing, tumor segmentation, and tumor classification. In the data preprocessing, the tumor region was extracted as a volume of interest (VOI) and resized to a fixed image size. Afterward, histogram equalization was applied to increase the image contrast. Next, the resized VOI was used to generate a tumor mask through the tumor segmentation model. Finally, the resized VOI, the enhanced VOI, and the corresponding tumor mask served as the inputs of the proposed 3-D SWIRL Transformer tumor classification model to determine tumor malignancy. The proposed model was constructed with a shifted window-based self-attention mechanism and an inverted residual switch to aggregate global and local features. According to the experimental results, the proposed CADx system achieved 89.9% accuracy, 89.8% sensitivity, 89.9% specificity, and an AUC of 0.9401. These results demonstrated that the proposed CADx system could effectively help radiologists diagnose breast cancer more accurately. | en |
dc.description.provenance | Made available in DSpace on 2023-03-19T22:19:48Z (GMT). No. of bitstreams: 1 U0001-1109202209063300.pdf: 1886191 bytes, checksum: 789da6f34d400bc236a5e49b2114c104 (MD5) Previous issue date: 2022 | en |
dc.description.tableofcontents | 口試委員會審定書 I 致謝 II 摘要 III Abstract IV Table of Contents VI List of Figures VIII List of Tables X Chapter 1 Introduction 1 Chapter 2 Materials 6 Chapter 3 Methods 9 3.1 Data Preprocessing 10 3.2 Tumor Segmentation 11 3.2.1 3-D U-Net++ 12 3.2.2 Post-processing 13 3.3 Tumor Classification 15 3.3.1 3-D SWIRL Transformer 15 3.3.2 3-D SWIRL Transformer block 18 3.3.3 Inverted Residual Switch 20 3.3.4 Data Augmentation 22 Chapter 4 Experiment Results and Discussion 25 4.1 Experiment Environment 25 4.2 Evaluation 25 4.3 Experiment Results 26 4.3.1 Ablation Study 27 4.3.2 Model Comparison 30 4.4 Discussion 41 Chapter 5 Conclusion 47 References 49 | |
dc.language.iso | en | |
dc.title | 基於移動窗口和反向殘差互換器之3-D漩渦Transformer於自動乳房超音波腫瘤診斷 | zh_TW |
dc.title | A 3-D SWIRL Transformer based on Swin and Inverted Residual Switch for Tumor Diagnosis in Automated Breast Ultrasound Image | en |
dc.type | Thesis | |
dc.date.schoolyear | 110-2 | |
dc.description.degree | Master's | |
dc.contributor.oralexamcommittee | 陳啟禎,羅崇銘 | |
dc.subject.keyword | 乳癌,自動乳房超音波,電腦輔助診斷系統,卷積神經網路,自注意力機制 | zh_TW |
dc.subject.keyword | Breast cancer, Automated breast ultrasound (ABUS), Computer-aided diagnosis (CADx), Convolutional neural network (CNN), Self-attention mechanism | en |
dc.relation.page | 52 | |
dc.identifier.doi | 10.6342/NTU202203279 | |
dc.rights.note | Authorized for release (access limited to campus) | |
dc.date.accepted | 2022-09-13 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 生醫電子與資訊學研究所 | zh_TW |
dc.date.embargo-lift | 2024-09-30 | - |
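The contrast-enhancement step named in the abstract (histogram equalization of the resized tumor VOI) can be sketched as below. This is a minimal illustrative NumPy sketch, not the thesis's actual implementation; the function name `equalize_histogram` and the 8-bit intensity range are assumptions.

```python
import numpy as np

def equalize_histogram(volume: np.ndarray) -> np.ndarray:
    """Histogram equalization for an 8-bit 3-D ultrasound volume.

    Builds a lookup table from the cumulative intensity distribution so
    that the remapped intensities spread over the full 0-255 range,
    increasing the image contrast as described in the abstract.
    """
    # One bin per 8-bit intensity value, so bin i counts voxels equal to i.
    hist, _ = np.histogram(volume, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    nonzero = np.ma.masked_equal(cdf, 0)  # ignore empty leading bins
    scaled = (nonzero - nonzero.min()) * 255.0 / (nonzero.max() - nonzero.min())
    lut = np.ma.filled(scaled, 0).astype(np.uint8)  # intensity lookup table
    return lut[volume]  # remap every voxel through the table
```

After equalization, the darkest intensity present in the volume maps to 0 and the brightest to 255, stretching a low-contrast scan across the full dynamic range.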
Appears in Collections: | Graduate Institute of Biomedical Electronics and Bioinformatics
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-1109202209063300.pdf (access restricted to NTU campus IPs; use the VPN service for off-campus access) | 1.84 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.