Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/59439
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 張瑞峰(Ruey-Feng Chang) | |
dc.contributor.author | Jun Zhang | en |
dc.contributor.author | 張鈞 | zh_TW |
dc.date.accessioned | 2021-06-16T09:23:47Z | - |
dc.date.available | 2023-09-11 | |
dc.date.copyright | 2020-09-22 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-08-17 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/59439 | - |
dc.description.abstract | 自動乳房超音波(Automated Breast Ultrasound, ABUS)是一種廣泛應用於乳癌檢測與診斷的超音波造影技術,可以一次掃描整個乳房,提供完整的三維乳房影像資訊。然而大量的影像需要放射科醫生花費較多的時間來診斷,也存在著初步誤判的風險。為了減少誤判的情況,電腦輔助診斷系統(Computer-Aided Diagnosis, CADx)應運而生,近年來卷積神經網路(Convolutional Neural Network, CNN)能夠自動提取特徵,在醫學影像上發展迅速,基於卷積神經網路的電腦輔助診斷系統在診斷上也有出色的表現。因此,本研究提出了一個基於三維卷積神經網路的電腦輔助診斷系統,系統內包含一個增強型三維U形網路與一個三維注意力卷積神經網路,其中注意力卷積神經網路由Inception網路、ResNeXt網路及SE (Squeeze-and-Excitation)網路組成。首先,從ABUS影像中擷取腫瘤區域並縮放至固定大小,同時進行影像增強的前處理;接著,利用增強型三維U形網路對腫瘤區域進行切割,取得腫瘤遮罩;最後,將腫瘤區域原始影像、腫瘤區域增強影像及腫瘤遮罩匯入三維注意力卷積神經網路進行腫瘤的分類進行初步診斷腫瘤的良惡性。本論文所提出的系統可達到89.2%的準確率、90.3%的靈敏性、88.1%的特異性以及0.9255的曲線下面積的結果,實驗結果顯示提出的系統利用ABUS影像能超越3年經驗醫師的判斷結果。 | zh_TW |
dc.description.abstract | Automated breast ultrasound (ABUS) is widely used in the detection and diagnosis of breast cancer because it scans the whole breast and provides a complete three-dimensional (3-D) volume of the breast. However, reviewing an ABUS volume is a time-consuming task for a radiologist, and there is a risk of misdiagnosis. To reduce this risk, computer-aided diagnosis (CADx) systems have been proposed to assist physicians. In recent years, convolutional neural networks (CNNs), which extract features automatically, have developed rapidly in the field of medical imaging, and CNN-based CADx systems have achieved outstanding performance. Hence, in this study, a CADx system based on a 3-D CNN is proposed for ABUS tumor classification. The system is composed of tumor volume-of-interest (VOI) extraction, tumor segmentation, and tumor classification. First, the tumor VOI extracted from the ABUS image is resized to a fixed size and enhanced by histogram equalization. Second, in the tumor segmentation stage, a 3-D U-Net++ is applied to the resized VOI to generate the tumor mask. Finally, for tumor classification, the VOI, the enhanced VOI, and the corresponding tumor mask are fed into a 3-D SE-Inception-ResNeXt network, composed of the Inception model, the ResNeXt model, and the Squeeze-and-Excitation (SE) module, to classify the tumor as benign or malignant. In our experiments, the accuracy, sensitivity, and specificity reached 89.2%, 90.3%, and 88.1%, respectively, and the area under the receiver operating characteristic curve (AUC) was 0.9255, demonstrating that the proposed CADx system for ABUS images is comparable to a reader with three years of experience in tumor diagnosis tasks. | en |
dc.description.provenance | Made available in DSpace on 2021-06-16T09:23:47Z (GMT). No. of bitstreams: 1 U0001-1408202010061700.pdf: 2254472 bytes, checksum: ca068e5ecbb4b2450b58cdbe986f0231 (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents | 口試委員審定書 i 致謝 ii 摘要 iii Abstract iv Table of Contents vi List of Figures viii List of Tables ix Chapter 1. Introduction 1 Chapter 2. Materials 6 Chapter 3. Methods 9 3.1 VOI Extraction 10 3.2 Tumor Segmentation 10 3.2.1 3-D U-Net++ 11 3.2.2 Post-processing 13 3.3 Tumor Classification 15 3.3.1 SE Module 15 3.3.2 3-D SE-Inception Modules 16 3.3.3 3-D SE-ResNeXt Module 19 3.3.4 Loss Function 23 Chapter 4. Experiment Results and Discussions 25 4.1 Environment 25 4.2 Evaluation 25 4.2.1 Comparison in All Tumors 26 4.2.2 Comparison in Mass Tumors 30 4.2.3 Comparison in Non-mass Tumors 34 4.2.4 Comparison with Connection Method of SE Module 37 4.3 Discussion 38 Chapter 5. Conclusions and Future Works 43 Reference 44 | |
dc.language.iso | en | |
dc.title | 基於深度注意力卷積神經網路之乳房自動超音波腫瘤診斷 | zh_TW |
dc.title | Tumor Diagnosis on Automated Breast Ultrasound Using Attention Inception Neural Network | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 羅崇銘(Chung-Ming Lo),陳鴻豪(Hung-Hao Chen) | |
dc.subject.keyword | 乳癌,自動乳房超音波,電腦輔助診斷系統,卷積神經網路,分組卷積,注意力機制, | zh_TW |
dc.subject.keyword | breast cancer, ABUS, CADx, CNN, group convolution, attention mechanism | en |
dc.relation.page | 46 | |
dc.identifier.doi | 10.6342/NTU202003377 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2020-08-18 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 生醫電子與資訊學研究所 | zh_TW |
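The abstract above describes channel attention via a Squeeze-and-Excitation (SE) module applied to 3-D feature maps. As an illustration only, the following is a minimal NumPy sketch of the standard SE operation (global average pooling, a bottleneck of two fully connected layers with ReLU and sigmoid, then channel-wise rescaling); the array shapes, reduction ratio, and weights are assumptions for demonstration, not the thesis implementation.

```python
import numpy as np

def se_block_3d(feature_map, w_reduce, w_expand):
    """Apply Squeeze-and-Excitation to a 3-D feature map.

    feature_map: array of shape (C, D, H, W) -- channels first.
    w_reduce:    (C // r, C) weights of the reduction FC layer.
    w_expand:    (C, C // r) weights of the expansion FC layer.
    """
    # Squeeze: global average pooling over the spatial (D, H, W) axes.
    z = feature_map.mean(axis=(1, 2, 3))                 # shape (C,)
    # Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel weights in (0, 1).
    s = np.maximum(w_reduce @ z, 0.0)                    # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w_expand @ s)))            # shape (C,)
    # Scale: reweight each channel of the original feature map.
    return feature_map * s[:, None, None, None]

# Hypothetical usage with 4 channels and reduction ratio r = 2.
rng = np.random.default_rng(0)
x = rng.random((4, 2, 2, 2))
y = se_block_3d(x, rng.standard_normal((2, 4)), rng.standard_normal((4, 2)))
```

Because the sigmoid gate lies in (0, 1), the block can only attenuate channels, never amplify them; the network learns which channels to preserve.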
Appears in Collections: | 生醫電子與資訊學研究所
Files in this item:
File | Size | Format | |
---|---|---|---|
U0001-1408202010061700.pdf (currently not authorized for public access) | 2.2 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.