Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70920
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 張瑞峰 | |
dc.contributor.author | Hao-Hsiang Ke | en |
dc.contributor.author | 柯皓翔 | zh_TW |
dc.date.accessioned | 2021-06-17T04:43:52Z | - |
dc.date.available | 2021-08-08 | |
dc.date.copyright | 2018-08-08 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-08-03 | |
dc.identifier.citation | [1] L. A. Torre, F. Bray, R. L. Siegel, J. Ferlay, J. Lortet-Tieulent, and A. Jemal, "Global cancer statistics, 2012," CA: A Cancer Journal for Clinicians, vol. 65, no. 2, pp. 87-108, 2015.
[2] R. L. Siegel, K. D. Miller, and A. Jemal, "Cancer statistics, 2016," CA: A Cancer Journal for Clinicians, vol. 66, no. 1, pp. 7-30, 2016.
[3] A. Takemura, A. Shimizu, and K. Hamamoto, "Discrimination of breast tumors in ultrasonic images using an ensemble classifier based on the AdaBoost algorithm with feature selection," IEEE Transactions on Medical Imaging, vol. 29, no. 3, pp. 598-609, 2010.
[4] S. K. Moore, "Better breast cancer detection," IEEE Spectrum, vol. 38, no. 5, pp. 50-54, 2001.
[5] L. Ma, E. Fishell, B. Wright, W. Hanna, S. Allan, and N. Boyd, "Case-control study of factors associated with failure to detect breast cancer by mammography," JNCI: Journal of the National Cancer Institute, vol. 84, no. 10, pp. 781-785, 1992.
[6] T. M. Kolb, J. Lichy, and J. H. Newhouse, "Occult cancer in women with dense breasts: detection with screening US--diagnostic yield and tumor characteristics," Radiology, vol. 207, no. 1, pp. 191-199, 1998.
[7] T. Tan, B. Platel, H. Huisman, C. I. Sánchez, R. Mus, and N. Karssemeijer, "Computer-aided lesion diagnosis in automated 3-D breast ultrasound using coronal spiculation," IEEE Transactions on Medical Imaging, vol. 31, no. 5, pp. 1034-1042, 2012.
[8] H.-D. Cheng, J. Shan, W. Ju, Y. Guo, and L. Zhang, "Automated breast cancer detection and classification using ultrasound images: A survey," Pattern Recognition, vol. 43, no. 1, pp. 299-317, 2010.
[9] M. Samulski, R. Hupse, C. Boetes, R. D. Mus, G. J. den Heeten, and N. Karssemeijer, "Using computer-aided detection in mammography as a decision support," European Radiology, vol. 20, no. 10, pp. 2323-2330, 2010.
[10] L. A. Meinel, A. H. Stolpen, K. S. Berbaum, L. L. Fajardo, and J. M. Reinhardt, "Breast MRI lesion classification: Improved performance of human readers with a backpropagation neural network computer-aided diagnosis (CAD) system," Journal of Magnetic Resonance Imaging, vol. 25, no. 1, pp. 89-95, 2007.
[11] B. Sahiner et al., "Malignant and benign breast masses on 3D US volumetric images: effect of computer-aided diagnosis on radiologist accuracy," Radiology, vol. 242, no. 3, pp. 716-724, 2007.
[12] H.-P. Chan et al., "Improvement of radiologists' characterization of mammographic masses by using computer-aided diagnosis: an ROC study," Radiology, vol. 212, no. 3, pp. 817-827, 1999.
[13] J. Shan, S. K. Alam, B. Garra, Y. Zhang, and T. Ahmed, "Computer-aided diagnosis for breast ultrasound using computerized BI-RADS features and machine learning methods," Ultrasound in Medicine & Biology, vol. 42, no. 4, pp. 980-988, 2016.
[14] D.-R. Chen et al., "Classification of breast ultrasound images using fractal feature," Clinical Imaging, vol. 29, no. 4, pp. 235-245, 2005.
[15] H.-W. Lee, B.-D. Liu, K.-C. Hung, S.-F. Lei, P.-C. Wang, and T.-L. Yang, "Breast tumor classification of ultrasound images using wavelet-based channel energy and ImageJ," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 1, pp. 81-93, 2009.
[16] P.-H. Tsui, Y.-Y. Liao, C.-C. Chang, W.-H. Kuo, K.-J. Chang, and C.-K. Yeh, "Classification of benign and malignant breast tumors by 2-D analysis based on contour description and scatterer characterization," IEEE Transactions on Medical Imaging, vol. 29, no. 2, pp. 513-522, 2010.
[17] H.-W. Lee et al., "Breast tumor classification of ultrasound images using a reversible round-off nonrecursive 1-D discrete periodic wavelet transform," IEEE Transactions on Biomedical Engineering, vol. 56, no. 3, pp. 880-884, 2009.
[18] F. Hu, G.-S. Xia, J. Hu, and L. Zhang, "Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery," Remote Sensing, vol. 7, no. 11, pp. 14680-14707, 2015.
[19] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Region-based convolutional networks for accurate object detection and segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 1, pp. 142-158, 2016.
[20] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580-587.
[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[22] W. Zhang et al., "Deep convolutional neural networks for multi-modality isointense infant brain image segmentation," NeuroImage, vol. 108, pp. 214-224, 2015.
[23] W. K. Moon et al., "Computer-aided prediction of axillary lymph node status in breast cancer using tumor surrounding tissue features in ultrasound images," Computer Methods and Programs in Biomedicine, vol. 146, pp. 143-150, 2017.
[24] H. R. Roth et al., "Improving computer-aided detection using convolutional neural networks and random view aggregation," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1170-1181, 2016.
[25] H. Demirel, G. Anbarjafari, and M. N. S. Jahromi, "Image equalization based on singular value decomposition," in 23rd International Symposium on Computer and Information Sciences (ISCIS'08), 2008, pp. 1-5.
[26] T. Kim and J. Paik, "Adaptive contrast enhancement using gain-controllable clipped histogram equalization," IEEE Transactions on Consumer Electronics, vol. 54, no. 4, 2008.
[27] W.-J. Wu and W. K. Moon, "Ultrasound breast tumor image computer-aided diagnosis with texture and morphological features," Academic Radiology, vol. 15, no. 7, pp. 873-880, 2008.
[28] Y. L. Huang, D. R. Chen, Y. R. Jiang, S. J. Kuo, H. K. Wu, and W. Moon, "Computer-aided diagnosis using morphological features for classifying breast lesions on ultrasound," Ultrasound in Obstetrics & Gynecology, vol. 32, no. 4, pp. 565-572, 2008.
[29] R. M. Rangayyan, N. M. El-Faramawy, J. L. Desautels, and O. A. Alim, "Measures of acutance and shape for classification of breast tumors," IEEE Transactions on Medical Imaging, vol. 16, no. 6, pp. 799-810, 1997.
[30] A. T. Stavros, D. Thickman, C. L. Rapp, M. A. Dennis, S. H. Parker, and G. A. Sisney, "Solid breast nodules: use of sonography to distinguish between benign and malignant lesions," Radiology, vol. 196, no. 1, pp. 123-134, 1995.
[31] C. Sohn, J.-U. Blohmer, and U. Hamper, Breast Ultrasound: A Systematic Approach to Technique and Image Interpretation. New York, NY: Thieme, 1999.
[32] S. Liu and Z. Liu, "Multi-channel CNN-based object detection for enhanced situation awareness," arXiv preprint arXiv:1712.00075, 2017.
[33] M. A. Smeelen, P. B. Schwering, A. Toet, and M. Loog, "Semi-hidden target recognition in gated viewer images fused with thermal IR images," Information Fusion, vol. 18, pp. 131-147, 2014.
[34] J. Han and B. Bhanu, "Fusion of color and infrared video for moving human detection," Pattern Recognition, vol. 40, no. 6, pp. 1771-1784, 2007.
[35] Y. Niu, S. Xu, L. Wu, and W. Hu, "Airborne infrared and visible image fusion for target perception based on target region segmentation and discrete wavelet transform," Mathematical Problems in Engineering, vol. 2012, 2012.
[36] G. Bhatnagar and Z. Liu, "A novel image fusion framework for night-vision navigation and surveillance," Signal, Image and Video Processing, vol. 9, no. 1, pp. 165-175, 2015.
[37] L. Yu, H. Chen, Q. Dou, J. Qin, and P.-A. Heng, "Automated melanoma recognition in dermoscopy images via very deep residual networks," IEEE Transactions on Medical Imaging, vol. 36, no. 4, pp. 994-1004, 2017.
[38] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, vol. 1, no. 2, p. 3.
[39] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[40] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026-1034.
[41] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431-3440.
[42] Y. Bengio, P. Simard, and P. Frasconi, "Learning long-term dependencies with gradient descent is difficult," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.
[43] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.
[44] S. Hochreiter, "The vanishing gradient problem during learning recurrent neural nets and problem solutions," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 6, no. 2, pp. 107-116, 1998.
[45] H.-C. Shin et al., "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning," IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285-1298, 2016.
[46] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[47] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," in AAAI, 2017, vol. 4, p. 12.
[48] J. Dai, Y. Li, K. He, and J. Sun, "R-FCN: Object detection via region-based fully convolutional networks," in Advances in Neural Information Processing Systems, 2016, pp. 379-387.
[49] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834-848, 2018.
[50] Y. Liu and X. Yao, "Ensemble learning via negative correlation," Neural Networks, vol. 12, no. 10, pp. 1399-1404, 1999.
[51] Y. Ganjisaffar, R. Caruana, and C. V. Lopes, "Bagging gradient-boosted trees for high precision, low variance ranking models," in Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2011, pp. 85-94.
[52] Z.-H. Zhou, J. Wu, and W. Tang, "Ensembling neural networks: many could be better than all," Artificial Intelligence, vol. 137, no. 1-2, pp. 239-263, 2002.
[53] C. Ju, A. Bibaut, and M. van der Laan, "The relative performance of ensemble methods with deep convolutional neural networks for image classification," Journal of Applied Statistics, pp. 1-19, 2018.
[54] T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters, vol. 27, no. 8, pp. 861-874, 2006.
[55] C. Van Rijsbergen, "Information retrieval," Dept. of Computer Science, University of Glasgow, URL: citeseer.ist.psu.edu/vanrijsbergen79information.html, vol. 14, 1979.
[56] M. F. Akay, "Support vector machines combined with feature selection for breast cancer diagnosis," Expert Systems with Applications, vol. 36, no. 2, pp. 3240-3247, 2009. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70920 | - |
dc.description.abstract | 乳癌是女性癌症中很常見的一種,而近年來,乳癌的死亡率已大大下降,因為及早發現及早治療能有效的提高存活率。而在臨床上,乳房超音波影像常用來判斷腫瘤的良惡性,並且搭配電腦輔助診斷系統以協助醫生偵測及診斷,也能夠降低不同醫生對於相同腫瘤診斷的變異性。本研究主要的目的是使用卷積神經網路自動提取腫瘤特徵,並且搭配集成學習結合多個網路以增進效能。此篇研究我們提出的方法是利用四種不同架構的卷積神經網路分別學習不同腫瘤特徵,並且我們還提出利用全卷積神經網路自動切割出腫瘤遮罩影像的方法提供腫瘤形狀的特徵達成更精確的診斷。本篇研究使用了1687筆腫瘤資料,其中有953顆良性腫瘤,有734顆惡性腫瘤,研究顯示,使用集成學習結合學習到不同特徵的卷積神經網路的分類結果確實比單一卷積神經網路的分類結果還要好,可達到準確率91.10%,靈敏性 85.14%,特異性 95.77%,ROC曲線面積0.9697,比所有單一網路的分類結果還要高,因此,使用集合方法可以減少分類的偏差,並且使用代表性特徵可以提高診斷效果。 | zh_TW |
dc.description.abstract | Breast cancer is the most common malignancy among women in the United States. However, early diagnosis leads to early treatment and reduces mortality. In clinical practice, breast ultrasound is commonly used to classify tumors as benign or malignant, and computer-aided diagnosis (CAD) systems assist physicians in detecting and classifying tumors while reducing inter-observer variation in diagnosis. In this study, we used convolutional neural networks (CNNs) for automatic feature extraction and an ensemble method that combines multiple CNN models for better diagnostic performance. The CNN architectures used in this study were VGG-Like, VGG-16, ResNet, and DenseNet. We also proposed a fully convolutional network (FCN) to segment tumors and extract tumor shape features automatically. A total of 1687 tumors were used in this study, comprising 953 benign and 734 malignant tumors. The accuracy, sensitivity, and specificity of the proposed method were 91.10%, 85.14%, and 95.77%, respectively, and the area under the ROC curve was 0.9697. In conclusion, the ensemble method improves performance by combining multiple CNN models, and the tumor shape feature improves the diagnostic effect. | en |
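The abstract describes combining the outputs of several CNN classifiers and reporting accuracy, sensitivity, and specificity. As a minimal illustrative sketch of that evaluation pipeline (not the thesis code; the soft-voting combiner, all function names, and the toy numbers below are hypothetical assumptions), the ensemble fusion and metrics can be computed like this:

```python
# Hypothetical sketch: soft-voting over per-model malignancy probabilities,
# then the metrics named in the abstract. Not the thesis implementation.

def soft_vote(prob_lists):
    """Average each tumor's malignancy probability across models."""
    n = len(prob_lists[0])
    return [sum(p[i] for p in prob_lists) / len(prob_lists) for i in range(n)]

def metrics(probs, labels, threshold=0.5):
    """labels: 1 = malignant, 0 = benign. Returns (accuracy, sensitivity, specificity)."""
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    accuracy = (tp + tn) / len(labels)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # true-positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0   # true-negative rate
    return accuracy, sensitivity, specificity

# Toy example: four "models" (standing in for the four CNNs) scoring five tumors.
model_probs = [
    [0.9, 0.2, 0.8, 0.4, 0.1],
    [0.7, 0.3, 0.9, 0.6, 0.2],
    [0.8, 0.1, 0.7, 0.3, 0.3],
    [0.6, 0.4, 0.8, 0.5, 0.2],
]
labels = [1, 0, 1, 1, 0]

fused = soft_vote(model_probs)
acc, sens, spec = metrics(fused, labels)
```

Averaging probabilities (soft voting) rather than majority-voting hard labels keeps each model's confidence in the fused score, which is one common way ensembles of CNNs are combined.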
dc.description.provenance | Made available in DSpace on 2021-06-17T04:43:52Z (GMT). No. of bitstreams: 1 ntu-107-R05922129-1.pdf: 1512136 bytes, checksum: c0d005096c5a3c3fcd04d9c22dd89391 (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | 口試委員會審定書 (Thesis Certification) i
致謝 (Acknowledgements) ii
摘要 (Chinese Abstract) iii
Abstract iv
Table of Contents v
List of Figures vii
List of Tables x
Chapter 1 Introduction 1
Chapter 2 Material 4
2.1 Data Acquisition 4
2.2 Data Characteristics 5
Chapter 3 Method 7
3.1 Images 8
3.1.1 ROI Image 9
3.1.2 Tumor Image 10
3.1.3 Tumor Shape Image (TSI) 13
3.1.4 Fused Image 14
3.2 CNN Architectures 15
3.2.1 VGG-16 and VGG-Like 17
3.2.2 ResNet 19
3.2.3 DenseNet 21
3.3 Ensemble Method 24
3.3.1 Base Machine Selection 24
3.3.2 Combining Strategy 25
Chapter 4 Experiment Results 28
4.1 Comparison of Different Image Datasets 29
4.1.1 Dataset 1 – The original tumor image 31
4.1.2 Dataset 2 – The segmented tumor image 33
4.1.3 Dataset 3 – The tumor mask image (TSI) 36
4.1.4 Dataset 4 – The 3-channel image 39
4.2 Comparison of Different Ensemble Strategies 42
4.2.1 Results of Base Machine Selection 43
4.2.2 Combining Strategy 44
4.2.3 Performance of the Proposed Method 47
Chapter 5 Conclusions and Discussion 49
References 54 | |
dc.language.iso | en | |
dc.title | 卷積神經網絡的集成學習之乳房超音波電腦輔助診斷 | zh_TW |
dc.title | Computer-Aided Diagnosis of Breast Ultrasound Images Using Ensemble Learning from Convolutional Neural Networks | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | Master's | |
dc.contributor.oralexamcommittee | 羅崇銘,陳啟禎 | |
dc.subject.keyword | 乳癌,乳房超音波影像,電腦輔助診斷,深度學習,卷積神經網絡,集成學習, | zh_TW |
dc.subject.keyword | Breast cancer,Breast ultrasound,Computer-aided diagnosis,Deep learning,Convolutional neural network,Ensemble learning, | en |
dc.relation.page | 58 | |
dc.identifier.doi | 10.6342/NTU201802461 | |
dc.rights.note | Paid-use authorization | |
dc.date.accepted | 2018-08-03 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
Appears in Collections: | Department of Computer Science and Information Engineering
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-107-1.pdf (currently not authorized for public access) | 1.48 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their license terms.