NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73801
Full metadata record:
dc.contributor.advisor: 張瑞峰
dc.contributor.author: Chu-Hsuan Lee [en]
dc.contributor.author: 李竺軒 [zh_TW]
dc.date.accessioned: 2021-06-17T08:10:34Z
dc.date.available: 2021-08-19
dc.date.copyright: 2019-08-19
dc.date.issued: 2019
dc.date.submitted: 2019-08-15
dc.identifier.citation[1] R. L. Siegel, K. D. Miller, and A. Jemal, 'Cancer statistics, 2019,' CA: a cancer journal for clinicians, vol. 69, no. 1, pp. 7-34, Jan 2019.
[2] R. J. Hooley, L. M. Scoutt, and L. E. Philpotts, 'Breast ultrasonography: state of the art,' Radiology, vol. 268, no. 3, pp. 642-659, Sep 2013.
[3] D.-R. Chen, R.-F. Chang, W.-J. Kuo, M.-C. Chen, and Y.-L. Huang, 'Diagnosis of breast tumors with sonographic texture analysis using wavelet transform and neural networks,' Ultrasound in medicine & biology, vol. 28, no. 10, pp. 1301-1310, Oct 2002.
[4] M.-C. Yang et al., 'Robust texture analysis using multi-resolution gray-scale invariant features for breast sonographic tumor diagnosis,' IEEE Transactions on Medical Imaging, vol. 32, no. 12, pp. 2262-2273, Dec 2013.
[5] J.-Z. Cheng et al., 'Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans,' Scientific reports, vol. 6, p. 24454, Apr 2016.
[6] W. K. Moon, Y.-W. Shen, C.-S. Huang, L.-R. Chiang, and R.-F. Chang, 'Computer-aided diagnosis for the classification of breast masses in automated whole breast ultrasound images,' Ultrasound in medicine & biology, vol. 37, no. 4, pp. 539-548, Apr 2011.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, 'Imagenet classification with deep convolutional neural networks,' in Advances in neural information processing systems, 2012, pp. 1097-1105.
[8] K. He, X. Zhang, S. Ren, and J. Sun, 'Deep residual learning for image recognition,' in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
[9] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, 'You only look once: Unified, real-time object detection,' in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779-788.
[10] J. Long, E. Shelhamer, and T. Darrell, 'Fully convolutional networks for semantic segmentation,' in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 3431-3440.
[11] G. Litjens et al., 'A survey on deep learning in medical image analysis,' Medical image analysis, vol. 42, pp. 60-88, Dec 2017.
[12] M. Anthimopoulos, S. Christodoulidis, L. Ebner, A. Christe, and S. Mougiakakou, 'Lung pattern classification for interstitial lung diseases using a deep convolutional neural network,' IEEE transactions on medical imaging, vol. 35, no. 5, pp. 1207-1216, Feb 2016.
[13] D. Lévy and A. Jain, 'Breast mass classification from mammograms using deep convolutional neural networks,' arXiv preprint arXiv:1612.00542, 2016.
[14] O. Ronneberger, P. Fischer, and T. Brox, 'U-net: Convolutional networks for biomedical image segmentation,' in International Conference on Medical image computing and computer-assisted intervention, 2015, pp. 234-241.
[15] K. He, G. Gkioxari, P. Dollár, and R. Girshick, 'Mask r-cnn,' in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961-2969.
[16] B. S. Lin, K. Michael, S. Kalra, and H. R. Tizhoosh, 'Skin lesion segmentation: U-nets versus clustering,' in 2017 IEEE Symposium Series on Computational Intelligence (SSCI), 2017, pp. 1-7.
[17] M. U. Dalmış et al., 'Using deep learning to segment breast and fibroglandular tissue in MRI volumes,' Medical physics, vol. 44, no. 2, pp. 533-546, Dec 2017.
[18] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, '3D U-Net: learning dense volumetric segmentation from sparse annotation,' in International conference on medical image computing and computer-assisted intervention, 2016, pp. 424-432.
[19] S. Sabour, N. Frosst, and G. E. Hinton, 'Dynamic routing between capsules,' in Advances in neural information processing systems, 2017, pp. 3856-3866.
[20] G. E. Hinton, A. Krizhevsky, and S. D. Wang, 'Transforming auto-encoders,' in International Conference on Artificial Neural Networks, 2011, pp. 44-51.
[21] Y. LeCun, Y. Bengio, and G. Hinton, 'Deep learning,' nature, vol. 521, no. 7553, p. 436, May 2015.
[22] K. He, X. Zhang, S. Ren, and J. Sun, 'Identity mappings in deep residual networks,' in European conference on computer vision, 2016, pp. 630-645.
[23] Y. Wu and K. He, 'Group normalization,' in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3-19.
[24] Y.-S. Huang, E. Takada, S. Konno, C.-S. Huang, M.-H. Kuo, and R.-F. Chang, 'Computer-Aided tumor diagnosis in 3-D breast elastography,' Computer methods and programs in biomedicine, vol. 153, pp. 201-209, Jan 2018.
[25] Y. Bengio, P. Simard, and P. Frasconi, 'Learning long-term dependencies with gradient descent is difficult,' IEEE transactions on neural networks, vol. 5, no. 2, pp. 157-166, Mar 1994.
[26] S. Ioffe and C. Szegedy, 'Batch normalization: Accelerating deep network training by reducing internal covariate shift,' arXiv preprint arXiv:1502.03167, 2015.
[27] V. Nair and G. E. Hinton, 'Rectified linear units improve restricted boltzmann machines,' in Proceedings of the 27th international conference on machine learning (ICML-10), 2010, pp. 807-814.
[28] X. Glorot, A. Bordes, and Y. Bengio, 'Deep sparse rectifier neural networks,' in Proceedings of the fourteenth international conference on artificial intelligence and statistics, 2011, pp. 315-323.
[29] R. Kohavi, 'A study of cross-validation and bootstrap for accuracy estimation and model selection,' in Ijcai, 1995, vol. 14, no. 2, pp. 1137-1145.
[30] J. A. Hanley and B. J. McNeil, 'The meaning and use of the area under a receiver operating characteristic (ROC) curve,' Radiology, vol. 143, no. 1, pp. 29-36, Apr 1982.
[31] E. Xi, S. Bing, and Y. Jin, 'Capsule network performance on complex data,' arXiv preprint arXiv:1712.03480, 2017.
[32] A. Jaiswal, W. AbdAlmageed, Y. Wu, and P. Natarajan, 'Capsulegan: Generative adversarial capsule network,' in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 0-0.
[33] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, 'Fitnets: Hints for thin deep nets,' arXiv preprint arXiv:1412.6550, 2014.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73801
dc.description.abstract: Breast cancer is one of the most common cancers in women, and early detection, diagnosis, and treatment are the best ways to reduce mortality. Automated breast ultrasound (ABUS) is widely used for breast tumor detection because it provides a three-dimensional (3-D) volume of the breast for a physician to review. However, it is time-consuming for a physician to review the whole ABUS volume, locate suspicious lesions, and determine whether each lesion is benign or malignant. Computer-aided diagnosis (CADx) systems based on convolutional neural networks (CNNs) have been proposed and have shown that a CNN can automatically learn texture and shape features that help the physician make a diagnosis. However, CNNs handle object rotation and the spatial relationships between features poorly. In 2017, the capsule network (CapsNet), a shallow network that represents features as vectors, was proposed to overcome these problems, but its shallow architecture in turn makes it hard to learn complex features from an image. Therefore, this study proposes a CADx system for tumor diagnosis in ABUS that consists of a 3-D U-net and a modified 3-D CapsNet. First, the volume of interest (VOI) is cropped from the ABUS image. Then, the VOI is fed into the 3-D U-net model to generate a tumor mask. Finally, the VOI and the mask are delivered to the modified CapsNet, which classifies the tumor as malignant or benign. To overcome the drawback of the original CapsNet, a 3-D residual block is introduced into the CapsNet to learn high-level features from the tumor image. Furthermore, group normalization is substituted for batch normalization because the memory demands of 3-D inputs and capsules limit the batch size during training. The experiments used 446 breast tumor images generated by an automated breast ultrasound system, including 229 malignant and 217 benign tumors. The proposed method achieved an accuracy of 85.20%, a sensitivity of 87.34%, a specificity of 82.95%, and an area under the ROC curve (AUC) of 0.9134, outperforming other CNN models. [en]
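The two modifications named in the abstract, a 3-D residual block for deeper feature learning and group normalization in place of batch normalization, can be illustrated with a minimal sketch. The following PyTorch snippet is an assumption-laden illustration, not the thesis's implementation: the channel counts, group count, and 32-voxel VOI size are hypothetical choices made here for the example.

import torch
import torch.nn as nn


class Residual3DBlock(nn.Module):
    """3-D residual block: two Conv3d layers with GroupNorm and ReLU,
    plus an identity (or 1x1x1 projection) shortcut."""

    def __init__(self, in_channels: int, out_channels: int, num_groups: int = 8):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1)
        self.norm1 = nn.GroupNorm(num_groups, out_channels)  # statistics independent of batch size
        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1)
        self.norm2 = nn.GroupNorm(num_groups, out_channels)
        self.relu = nn.ReLU(inplace=True)
        # Project the shortcut only when channel counts differ.
        self.shortcut = (
            nn.Identity()
            if in_channels == out_channels
            else nn.Conv3d(in_channels, out_channels, kernel_size=1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.relu(out + self.shortcut(x))


if __name__ == "__main__":
    # Two-channel input: the VOI and its U-net tumor mask, stacked.
    voi_and_mask = torch.randn(1, 2, 32, 32, 32)  # batch of 1 still works with GroupNorm
    block = Residual3DBlock(in_channels=2, out_channels=16)
    print(block(voi_and_mask).shape)  # torch.Size([1, 16, 32, 32, 32])

Because GroupNorm normalizes within channel groups of each sample rather than across the batch, its statistics do not depend on batch size, which is why it stays stable when memory-hungry 3-D capsule models force batches as small as one.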
dc.description.provenance: Made available in DSpace on 2021-06-17T08:10:34Z (GMT). No. of bitstreams: 1. ntu-108-R05922158-1.pdf: 1686275 bytes, checksum: c668c4ec3d2c1fa662f7bd08d4524871 (MD5). Previous issue date: 2019 [en]
dc.description.tableofcontents:
Oral Examination Committee Certification i
Acknowledgements ii
Abstract (Chinese) iii
Abstract (English) v
Table of Contents vii
List of Figures viii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Material 5
Chapter 3 Methods 7
3.1 VOI Extraction 8
3.2 Tumor Segmentation 8
3.2.1 3-D U-net 9
3.2.2 Post-processing 10
3.3 Tumor Classification 11
3.3.1 The Modified 3-D CapsNet 12
Chapter 4 Experiment Result and Discussion 15
4.1 Result 15
4.2 Discussion 24
Chapter 5 Conclusion 28
Reference 30
dc.language.iso: en
dc.subject: Breast cancer [en]
dc.subject: group normalization [en]
dc.subject: automated breast ultrasound [en]
dc.subject: computer-aided diagnosis [en]
dc.subject: 3-D residual convolution neural network [en]
dc.subject: capsule network [en]
dc.title: 3D膠囊神經網路之乳房自動超音波電腦輔助腫瘤診斷 [zh_TW]
dc.title: 3D Capsule Neural Network on Automated Breast Ultrasound Tumor Diagnosis [en]
dc.type: Thesis
dc.date.schoolyear: 107-2
dc.description.degree: Master's (碩士)
dc.contributor.oralexamcommittee: 羅崇銘, 陳鴻豪
dc.subject.keyword: Breast cancer, automated breast ultrasound, 3-D residual convolution neural network, capsule network, group normalization, computer-aided diagnosis [en]
dc.relation.page: 32
dc.identifier.doi: 10.6342/NTU201902942
dc.rights.note: Paid authorization (有償授權)
dc.date.accepted: 2019-08-16
dc.contributor.author-college: College of Electrical Engineering and Computer Science (電機資訊學院)
dc.contributor.author-dept: Graduate Institute of Computer Science and Information Engineering (資訊工程學研究所)
Appears in collections: Department of Computer Science and Information Engineering (資訊工程學系)

Files in this item:
ntu-108-1.pdf, 1.65 MB, Adobe PDF (not authorized for public access)


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
