Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/22153

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳中明(Chung-Ming Chen) | |
| dc.contributor.author | Bo-Wei Chen | en |
| dc.contributor.author | 陳柏瑋 | zh_TW |
| dc.date.accessioned | 2021-06-08T04:05:23Z | - |
| dc.date.copyright | 2020-12-25 | |
| dc.date.issued | 2020 | |
| dc.date.submitted | 2020-11-27 | |
| dc.identifier.citation | [1] Momenimovahed, Zohre, and Hamid Salehiniya. 'Epidemiological characteristics of and risk factors for breast cancer in the world.' Breast Cancer: Targets and Therapy 11 (2019): 151
[2] 108年國人死因統計結果分析-衛生福利部統計處。民國109年6月16日。檢自https://dep.mohw.gov.tw/DOS/cp-4927-54466-113.html [3] Kalli, Sirishma, et al. 'American joint committee on cancer’s staging system for breast cancer: what the radiologist needs to know.' Radiographics 38.7 (2018): 1921-1933. [4] 統計國人罹患乳癌五年存活率(2006-2011)。檢自https://www1.cgmh.org.tw/intr/intr5/c6210/breast%20cancer%20stage.html [5] 衛福部統計2019年國人罹患乳癌人數,2019。檢自https://www.hpa.gov.tw/Pages/Detail.aspx?nodeid=591&pid=980 [6] Morrell, S., Taylor, R., Roder, D., Robson, B., Gregory, M., Craig, K. (2017). Mammography service screening and breast cancer mortality in New Zealand: a National Cohort Study 1999–2011. British journal of cancer, 116(6), 828. [7] Lee, Rebecca Sawyer, et al. 'A curated mammography data set for use in computer-aided detection and diagnosis research.' Scientific data 4 (2017): 170177. [8] Wang, Hongyu, et al. 'Breast mass classification via deeply integrating the contextual information from multi-view data.' Pattern Recognition 80 (2018): 42-52. [9] American College of Radiology (ACR): Illustrated Breast Imaging Reporting and Data System (BI-RADS). ACR (1998) [10] Li, Zhongyu, et al. 'Large-scale retrieval for medical image analytics: A comprehensive review.' Medical image analysis 43 (2018): 66-84. [11] Jalalian, Afsaneh, et al. 'Computer-aided detection/diagnosis of breast cancer in mammography and ultrasound: a review.' Clinical imaging 37.3 (2013): 420-426. [12] Giger ML, Karssemeijer N. Computer-aided diagnosis in medical imaging. IEEE Trans Med Imaging 2001;20(12):1205-8. [13] Giger ML. Computer-aided diagnosis of breast lesions in medical images. Comput Sci Eng 2000;2(5):39-45. [14] Doi K, et al. Computer-aided diagnosis in radiology: potential and pitfalls. Eur J Radiol 1999;31(2):97–109. [15] Vyborny CJ, Giger ML, Nishikawa RM. Computer-aided detection and diagnosis of breast cancer. Radiol Clin North Am 2000;38(4):725-40. [16] Doi K. 
Computer-aided diagnosis in medical imaging: achievements and challenges. Congress on Medical Physics and Biomedical Engineering, 2009. p. 96. [17] Getty DJ, et al. Enhanced interpretation of diagnostic images. Invest Radiol 1988;23(4):240 [18] Horsch K, et al. Classification of breast lesions with multimodality computer-aided diagnosis: observer study results on an independent clinical data set. Radiology 2006;240(2):357-68. [19] Huo Z, et al. Effectiveness of computer-aided diagnosis—observer study with independent database of mammograms. Radiology 2002;224:560-8 [20] Computer-aided detection/diagnosis of breast cancer in mammography and ultrasound: a review/Marker-Controlled Watershed for Lesion Segmentation in Mammograms [21] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 'Imagenet classification with deep convolutional neural networks.' Advances in neural information processing systems. 2012. [22] Wang, Hongyu, et al. 'Breast mass classification via deeply integrating the contextual information from multi-view data.' Pattern Recognition 80 (2018): 42-52. [23] Dhungel, Neeraj, Gustavo Carneiro, and Andrew P. Bradley. 'A deep learning approach for the analysis of masses in mammograms with minimal user intervention.' Medical image analysis 37 (2017): 114-128 [24] Carneiro, Gustavo, Jacinto Nascimento, and Andrew P. Bradley. 'Automated analysis of unregistered multi-view mammograms with deep learning.' IEEE transactions on medical imaging 36.11 (2017): 2355-2365 [25] Khan, Hasan Nasir, et al. 'Multi-View Feature Fusion Based Four Views Model for Mammogram Classification Using Convolutional Neural Network.' IEEE Access 7 (2019): 165724-165733. [26] Zeiler, Matthew D., and Rob Fergus. 'Visualizing and understanding convolutional networks.' European conference on computer vision. Springer, Cham, 2014. [27] Geirhos, Robert, et al. 'ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.' 
arXiv preprint arXiv:1811.12231 (2018). [28] Kubilius, Jonas, Stefania Bracci, and Hans P. Op de Beeck. 'Deep neural networks as a computational model for human shape sensitivity.' PLoS computational biology 12.4 (2016): e1004896. [29] Ritter, Samuel, et al. 'Cognitive psychology for deep neural networks: A shape bias case study.' Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017 [30] Kruger, Norbert, et al. 'Deep hierarchies in the primate visual cortex: What can we learn for computer vision?.' IEEE transactions on pattern analysis and machine intelligence 35.8 (2012): 1847-1871. [31] Lee, Tai Sing. 'Image representation using 2D Gabor wavelets.' IEEE Transactions on pattern analysis and machine intelligence 18.10 (1996): 959-971. [32] G. Kylberg. The Kylberg Texture Dataset v. 1.0, Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University, External report (Blue series) No. 35 [33] Mallikarjuna, P., et al. 'The kth-tips2 database.' KTH Royal Institute of Technology (2006). [34] Simonyan, Karen, and Andrew Zisserman. 'Very deep convolutional networks for large-scale image recognition.' arXiv preprint arXiv:1409.1556 (2014) [35] Szegedy, Christian, et al. 'Rethinking the inception architecture for computer vision.' Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. [36] Lee, Geewon, et al. 'Radiomics and its emerging role in lung cancer research, imaging biomarkers and clinical management: state of the art.' European journal of radiology 86 (2017): 297-307. [37] Lu, Kaiyue, Shaodi You, and Nick Barnes. 'Deep Texture and Structure Aware Filtering Network for Image Smoothing.' Proceedings of the European Conference on Computer Vision (ECCV). 2018. [38] Szegedy, Christian, et al. 'Going deeper with convolutions.' Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. [39] Huang, Gao, et al. 
'Densely connected convolutional networks.' Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. [40] Liu, Li, et al. 'From BoW to CNN: Two decades of texture representation for texture classification.' International Journal of Computer Vision 127.1 (2019): 74-109 [41] Dana, Kristin J., et al. 'Reflectance and texture of real-world surfaces.' ACM Transactions On Graphics (TOG) 18.1 (1999): 1-34 [42] Lazebnik, Svetlana, Cordelia Schmid, and Jean Ponce. 'A sparse texture representation using local affine regions.' IEEE Transactions on Pattern Analysis and Machine Intelligence 27.8 (2005): 1265-1278 [43] Cimpoi, Mircea, et al. 'Describing textures in the wild.' Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014. [44] Cimpoi, Mircea, et al. 'Deep filter banks for texture recognition, description, and segmentation.' International Journal of Computer Vision 118.1 (2016): 65-94. [45] Bell, Sean, et al. 'Opensurfaces: A richly annotated catalog of surface appearance.' ACM Transactions on graphics (TOG) 32.4 (2013): 111. [46] Russakovsky, Olga, et al. 'Imagenet large scale visual recognition challenge.' International journal of computer vision 115.3 (2015): 211-252. [47] Zhou, Bolei, et al. 'Learning deep features for scene recognition using places database.' Advances in neural information processing systems. 2014 [48] Luan, Shangzhen, et al. 'Gabor convolutional networks.' IEEE Transactions on Image Processing 27.9 (2018): 4357-4366. [49] Bruna, Joan, and Stéphane Mallat. 'Invariant scattering convolution networks.' IEEE transactions on pattern analysis and machine intelligence 35.8 (2013): 1872-1886 [50] Andrearczyk, Vincent, and Paul F. Whelan. 'Using filter banks in convolutional neural networks for texture classification.' Pattern Recognition Letters 84 (2016): 63-69. [51] Li, Yanfeng, et al. 'Mass classification in mammograms based on two-concentric masks and discriminating texton.' 
Pattern Recognition 60 (2016): 648-656. [52] Liu, Xiaoming, and Jinshan Tang. 'Mass classification in mammograms using selected geometry and texture features, and a new SVM-based feature selection method.' IEEE Systems Journal 8.3 (2013): 910-920. [53] Verma, Brijesh, Peter McLeod, and Alan Klevansky. 'A novel soft cluster neural network for the classification of suspicious areas in digital mammograms.' Pattern Recognition 42.9 (2009): 1845-1852. [54] Petersen, Kersten, et al. 'Breast tissue segmentation and mammographic risk scoring using deep learning.' International workshop on digital mammography. Springer, Cham, 2014. [55] Arevalo, John, et al. 'Representation learning for mammography mass lesion classification with convolutional neural networks.' Computer methods and programs in biomedicine 127 (2016): 248-257. [56] Tsochatzidis, Lazaros, Lena Costaridou, and Ioannis Pratikakis. 'Deep Learning for Breast Cancer Diagnosis from Mammograms—A Comparative Study.' Journal of Imaging 5.3 (2019) [57] Abdelhafiz, Dina, et al. 'Deep convolutional neural networks for mammography: advances, challenges and applications.' BMC bioinformatics 20.11 (2019): 281. [58] Falconí, Lenin G., María Pérez, and Wilbert G. Aguilar. 'Transfer Learning in Breast Mammogram Abnormalities Classification With Mobilenet and Nasnet.' 2019 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2019 [59] Abdelhafiz, Dina, et al. 'Deep convolutional neural networks for mammography: advances, challenges and applications.' BMC bioinformatics 20.11 (2019): 281 [60] Wang, Hongyu, et al. 'Breast mass classification via deeply integrating the contextual information from multi-view data.' Pattern Recognition 80 (2018): 42-52 [61] Carneiro, Gustavo, Jacinto Nascimento, and Andrew P. Bradley. 'Automated analysis of unregistered multi-view mammograms with deep learning.' IEEE transactions on medical imaging 36.11 (2017): 2355-2365. [62] Khan, Hasan Nasir, et al. 
'Multi-View Feature Fusion Based Four Views Model for Mammogram Classification Using Convolutional Neural Network.' IEEE Access 7 (2019): 165724-165733. [63] Al-antari, Mugahed A., et al. 'A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification.' International journal of medical informatics 117 (2018): 44-54 [64] Redmon, Joseph, et al. 'You only look once: Unified, real-time object detection.' Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. [65] Moreira, Inês C., et al. 'Inbreast: toward a full-field digital mammographic database.' Academic radiology 19.2 (2012): 236-248. [66] Lee, Rebecca Sawyer, et al. 'A curated mammography data set for use in computer-aided detection and diagnosis research.' Scientific data 4 (2017): 170177. [67] Albiol, Alberto, Alberto Corbi, and Francisco Albiol. 'Automatic intensity windowing of mammographic images based on a perceptual metric.' Medical physics 44.4 (2017): 1369-1378. [68] Goodale, Melvyn A., and A. David Milner. 'Separate visual pathways for perception and action.' (1992): 20-25. [69] Daugman, John G. 'Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters.' JOSA A 2.7 (1985): 1160-1169. [70] Chen, Lianping, Guojun Lu, and Dengsheng Zhang. 'Effects of different Gabor filters parameters on image retrieval by texture.' 10th International Multimedia Modelling Conference, 2004. Proceedings. IEEE, 2004. [71] Lin, Min, Qiang Chen, and Shuicheng Yan. 'Network in network.' arXiv preprint arXiv:1312.4400 (2013) [72] Hu, Jie, Li Shen, and Gang Sun. 'Squeeze-and-excitation networks.' Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. [73] Christodoulidis, Stergios, et al. 'Multisource transfer learning with convolutional neural networks for lung pattern analysis.' 
IEEE journal of biomedical and health informatics 21.1 (2016): 76-84 [74] Hafemann, Luiz G., et al. 'Transfer learning between texture classification tasks using convolutional neural networks.' 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. [75] Pan, Sinno Jialin, and Qiang Yang. 'A survey on transfer learning.' IEEE Transactions on knowledge and data engineering 22.10 (2009): 1345-1359. [76] Dietterich, Thomas G. 'Ensemble methods in machine learning.' International workshop on multiple classifier systems. Springer, Berlin, Heidelberg, 2000 [77] Szegedy, Christian, et al. 'Going deeper with convolutions.' Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. [78] Hansen, Lars Kai, and Peter Salamon. 'Neural network ensembles.' IEEE transactions on pattern analysis and machine intelligence 12.10 (1990): 993-1001. [79] Wolpert, David H. 'Stacked generalization.' Neural networks 5.2 (1992): 241-259. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/22153 | - |
| dc.description.abstract | 乳癌為造成全世界女性死亡的主要癌症之一,透過國內衛生福利部資料顯示:國內女性發生率最高的癌症也同為乳癌,有鑒於乳癌的盛行率,政府利用乳房X光攝影作為乳癌篩檢的初步診斷工具。乳房X光攝影提供了乳房的鈣化點以及腫塊資訊,使無病症的零期乳癌得以被發現,大幅提高患者的存活率。 透過醫學實證,乳癌篩檢確實降低了乳癌的致死率,但龐大的乳房X光攝影影像同時加重了放射科醫師的負擔。且因為乳房組織的複雜性,以及乳房腫塊之間的變異性,促使放射科醫師之間依據經驗的不同,對於乳房腫塊惡性程度的主觀判定存在著觀察者間的差異性(Inter-observer variability)。為了減輕放射科醫師的負擔以及降低觀察者之間的差異性,發展電腦輔助診斷(Computer-aided diagnosis, CAD),提供醫師客觀乳房腫塊良惡性分類結果就顯得相當重要。 自從Alex Krizhevsky利用卷積神經網路的深度學習模型(Convolutional Neural Network, CNN),在數量高達120萬的影像辨識競賽(ILSVRC)中以15.3%錯誤率取得冠軍後,促使近年來針對影像辨識的課題,大多朝向深度學習的方向進行開發。而在乳房攝影腫塊良惡性分類的議題上,以深度學習為導向的CAD開發上,也取得良好的成效。 雖然深度學習達到良好的影像辨識能力,但訓練模型所花費的成本相當可觀,其中樣本數量的多寡更是直接影響模型影像辨識的準確性。而在乳房X光攝影中,由於乳房腫塊的多樣性,訓練樣本不足的問題更是惡化了分類的準確性。為克服樣本不足的問題,本研究開發一學習紋理資訊為導向的深度學習模型。盼望在有限的影像數量下,利用其他深度學習模型所學習的特徵與紋理資訊相互融合,提高乳房腫塊良惡性分類的準確率。 開發學習紋理為導向模型的過程中,Gabor filter bank具有近似於人類初級視覺皮質層(Primary visual cortex)擷取紋理資訊的能力,因此本研究發展學習紋理資訊為導向的深度學習模型是以Gabor filter bank為基底。透過第一層卷積層濾波器為Gabor filter bank的設計,可以分別在紋理影像數據集(Kylberg Texture Dataset、Kth-Tips2-b Dataset)得到0.997±0.002、0.993±0.002辨識率。且在這兩個紋理數據集的樣本數量減少至原本的25%時,相較於單一的深度學習模型,合併AlexNet與本研究開發的深度學習模型可提升紋理影像的辨識率(Kylberg Texture Dataset提升1.6%;Kth-Tips2-b Dataset提升4%)。 相較於紋理數據集的資料量,乳房X光攝影數據集的樣本數量明顯不足。因此為降低模型過擬合(Overfitting)現象的產生,在訓練樣本(Training data)方面採取傳統資料擴增(Data augmentation)的方法,提高乳房腫塊樣本的數量以及多樣性。在模型開發的研究上,則是以GAP(Global average pooling layer)取代傳統全連接層(Fully-connected layer)的設計,大幅降低訓練參數。此外,為使模型學習多樣化的紋理特徵,本研究將先前所使用的紋理影像數據集(Kylberg Texture Dataset、Kth-Tips2-b Dataset)遷移學習(Transfer learning)至本研究所開發深度學習模型,並利用Squeeze and Excitation Net強化模型學習紋理資訊的能力。最終透過集成式學習(Ensemble learning)的方法結合VGG16、Inception-V3以及本研究開發的深度學習模型,得到乳房腫塊良惡性分類的整體準確率0.80±0.03(Specificity: 0.83±0.02,Sensitivity: 0.76±0.06)。 為輔助放射科醫師診斷乳房腫塊良惡性分類的議題上,本研究開發一學習紋理資訊為導向的深度學習模型,並藉由提供紋理資訊與其他深度學習模型相互融合後,提高模型判斷乳房腫塊良惡性的準確率,克服現有乳房腫塊影像有限的瓶頸。 | zh_TW |
| dc.description.abstract | Breast cancer is one of the leading causes of cancer death among women worldwide, and statistics from the Ministry of Health and Welfare show that it is also the most commonly diagnosed cancer among women in Taiwan. Given its prevalence, the government promotes mammography as the first-line screening tool for breast cancer. Mammograms reveal micro-calcifications and breast masses, so asymptomatic carcinoma in situ can be detected, substantially raising survival rates. Evidence-based medicine has confirmed that mammographic screening reduces breast cancer mortality; however, the enormous volume of images also increases radiologists' workload. Moreover, because of differing levels of experience, the complexity of breast tissue, and the variability among mass types, radiologists exhibit inter-observer variability when judging the malignancy of a breast lesion. To address these challenges, developing computer-aided diagnosis (CAD) systems is an important issue: CAD can provide automatic, objective benign/malignant classifications that reduce both workload and inter-observer variability. Since Alex Krizhevsky's deep convolutional neural network achieved a 15.3% error rate and won the ILSVRC challenge, image-recognition research has largely shifted toward deep learning, and deep-learning-based CADx systems have likewise achieved impressive results on benign/malignant mass classification. Despite these results, training deep models is costly, and the number of training samples directly affects recognition accuracy; in mammography, the variability of breast masses makes the insufficient-sample problem even worse. To overcome this challenge, we propose a deep learning model oriented toward learning texture features: with a limited number of medical images, combining texture information with features learned by other deep models improves the accuracy of benign/malignant mass classification. In developing this texture-oriented model, we note that a Gabor filter bank approximates the texture-extraction behavior of the human primary visual cortex; the proposed model therefore uses a Gabor filter bank as the kernels of its first convolutional layer. With this design, the model achieves accuracies of 0.997±0.002 and 0.993±0.002 on two texture image datasets (Kylberg Texture Dataset and Kth-Tips2-b Dataset), respectively. To probe the network's limits, we progressively reduced the texture image samples to 25% of the original; compared with a single-model design, combining features from AlexNet and our model improves texture recognition accuracy (by 1.6% on the Kylberg Texture Dataset and 4% on the Kth-Tips2-b Dataset). Mammography samples are far scarcer than those in the texture datasets. To avoid overfitting, we apply conventional data augmentation to increase the number and diversity of mass samples, and in the model design we replace the fully connected (FC) layers with a global average pooling (GAP) layer to reduce the number of trainable parameters. In addition, we transfer the model pre-trained on the texture datasets to the mammography task and use a Squeeze-and-Excitation network to strengthen its ability to capture texture features. Finally, an ensemble learning algorithm integrates VGG16, Inception-V3, and the proposed network, achieving an overall mass classification accuracy of 0.80±0.03 (specificity: 0.83±0.02, sensitivity: 0.76±0.06). To assist radiologists in classifying breast masses as benign or malignant, we propose a network capable of learning texture features; by combining its features with those of other networks, it improves classification accuracy and mitigates the problem of insufficient medical images. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-08T04:05:23Z (GMT). No. of bitstreams: 1 U0001-2511202017414800.pdf: 4497961 bytes, checksum: ebb263be78f75d8fdd25232eed4ab4a5 (MD5) Previous issue date: 2020 | en |
| dc.description.tableofcontents | 審定書 II 致謝 III 摘要 IV Abstract IV 目錄 VII 圖目錄 IX 表目錄 IV 第1章 緒論 1 1.1 研究背景 1 1.2 研究動機 7 第2章 文獻回顧 9 形態學和紋理特徵 9 2.1 深度學習: 紋理影像辨認 9 2.2 深度學習: 乳房影像腫塊良惡性辨認 11 第3章 研究材料與方法 13 3.1 研究材料 13 3.1.1 Kylberg Texture Dataset 13 3.1.2 Kth-Tips2-b Dataset 14 3.1.3 INbreast 15 3.1.4 CBIS-DDSM 16 3.2 研究方法 18 3.2.1 紋理影像數據集:建立學習紋理特徵為導向之深度學習模型 19 3.2.2 乳房腫塊良惡性分類 27 3.2.2.1 Global Average Pooling 28 3.2.2.2 Multi-scale Gabor filter bank 29 3.2.2.3 Squeeze and excitation Net 30 3.2.2.4 Transfer learning 31 3.2.2.5 一般卷積網路模型 33 3.2.2.6 融合不同深度學習模型所學習的特徵 34 3.2.2.7 Ensemble learning 36 3.3 Performance metrics 39 第4章 研究結果與討論 41 4.1 紋理影像數據集辨識結果與討論 41 4.2 乳房X光攝影數據集(CBIS-DDSM)腫塊良惡性分類結果 45 4.3 討論遷徙學習後的模型對CBIS-DDSM乳房腫塊良惡性分類情形 49 4.4 討論融合不同模型學習的特徵對CBIS-DDSM乳房腫塊良惡性分類情形 50 4.5 討論模型融合型態學特徵對CBIS-DDSM乳房腫塊良惡性分類情形 51 第5章 結論 54 第6章 參考文獻 56 | |
| dc.language.iso | zh-TW | |
| dc.subject | Gabor filter bank | zh_TW |
| dc.subject | 卷積神經網路 | zh_TW |
| dc.subject | 紋理特徵 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 乳房腫塊良惡性分類 | zh_TW |
| dc.subject | 乳房X光攝影 | zh_TW |
| dc.subject | 集成式學習 | zh_TW |
| dc.subject | Classification of benign and malignant mass | en |
| dc.subject | Deep learning | en |
| dc.subject | CNN | en |
| dc.subject | Mammogram | en |
| dc.subject | Ensemble learning | en |
| dc.subject | Gabor filter bank | en |
| dc.subject | Texture features | en |
| dc.title | 融合紋理資訊之深度學習模型判斷乳房X光攝影腫塊良惡性 | zh_TW |
| dc.title | A Deep Learning Model Integrating Texture Features for Differential Diagnosis of Benign and Malignant Mass in Mammogram | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 109-1 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 張允中(Yeun-Chung Chang),李佳燕(Chia-Yen Lee) | |
| dc.subject.keyword | 乳房X光攝影,乳房腫塊良惡性分類,深度學習,卷積神經網路,Gabor filter bank,紋理特徵,集成式學習 | zh_TW |
| dc.subject.keyword | Mammogram,Classification of benign and malignant mass,Deep learning,CNN,Gabor filter bank,Texture features,Ensemble learning | en |
| dc.relation.page | 60 | |
| dc.identifier.doi | 10.6342/NTU202004356 | |
| dc.rights.note | 未授權 | |
| dc.date.accepted | 2020-11-27 | |
| dc.contributor.author-college | 工學院 | zh_TW |
| dc.contributor.author-dept | 醫學工程學研究所 | zh_TW |
| Appears in Collections: | 醫學工程學研究所 | |
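The abstracts above describe initializing the first convolutional layer of the proposed model with a Gabor filter bank, since such filters approximate how the primary visual cortex extracts texture. The thesis text is not part of this record, so the following is only a minimal NumPy sketch of how such a fixed filter bank could be constructed; the kernel size, orientations, and frequencies are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def gabor_kernel(size, theta, frequency, sigma):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope
    modulating a cosine carrier at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate frame to the carrier orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * frequency * x_t)
    return envelope * carrier

def gabor_filter_bank(size=7, n_orientations=4, frequencies=(0.2, 0.4)):
    """Stack kernels over orientations x frequencies, mimicking a
    fixed first-layer convolutional filter bank."""
    thetas = [i * np.pi / n_orientations for i in range(n_orientations)]
    return np.stack([gabor_kernel(size, t, f, sigma=size / 4)
                     for t in thetas for f in frequencies])

bank = gabor_filter_bank()
print(bank.shape)  # (8, 7, 7): 4 orientations x 2 frequencies
```

In a CNN framework, an array like this would typically be loaded as the weights of the first convolutional layer (frozen or fine-tuned), one kernel per output channel.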
Files in This Item:
| File | Size | Format |
|---|---|---|
| U0001-2511202017414800.pdf (Restricted Access) | 4.39 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
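The English abstract states that VGG16, Inception-V3, and the proposed network are combined by an ensemble learning algorithm to reach the final 0.80±0.03 accuracy, but the record does not spell out the fusion rule. The sketch below shows one common choice, soft voting over per-model class probabilities; the probability values and model names in the variables are hypothetical, purely for illustration.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Fuse class-probability matrices from several models by a
    (optionally weighted) average, then pick the argmax class."""
    probs = np.stack(prob_list)            # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()               # normalize model weights
    fused = np.tensordot(weights, probs, axes=1)  # weighted mean over models
    return fused.argmax(axis=-1)           # predicted class per sample

# Toy benign(0)/malignant(1) probabilities from three hypothetical models
p_vgg = np.array([[0.6, 0.4], [0.3, 0.7]])
p_inc = np.array([[0.7, 0.3], [0.4, 0.6]])
p_ours = np.array([[0.4, 0.6], [0.1, 0.9]])
print(soft_vote([p_vgg, p_inc, p_ours]))  # [0 1]
```

Weighting the proposed texture-oriented network more heavily than the two ImageNet-pretrained models is one plausible variant, e.g. `soft_vote([...], weights=[1, 1, 2])`.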
