Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73801
Title: | 3D Capsule Neural Network on Automated Breast Ultrasound Tumor Diagnosis |
Authors: | Chu-Hsuan Lee 李竺軒 |
Advisor: | 張瑞峰 |
Keyword: | Breast cancer, automated breast ultrasound, 3-D residual convolutional neural network, capsule network, group normalization, computer-aided diagnosis |
Publication Year: | 2019 |
Degree: | Master's |
Abstract: | Breast cancer is one of the most common cancers in women, so early detection, diagnosis, and treatment are the best way to reduce mortality. Automated breast ultrasound (ABUS) has been widely used for breast tumor detection because it provides a physician with a complete three-dimensional (3-D) volume of the breast. However, it is time-consuming for a physician to review the whole ABUS image, locate suspicious lesions, and determine whether each lesion is benign or malignant. Computer-aided diagnosis (CADx) systems based on the convolutional neural network (CNN) have shown that a CNN can automatically learn texture and shape features from images and improve a physician's diagnostic accuracy. However, CNNs are poor at handling object rotation and the spatial relationships between features. In 2017, the capsule network (CapsNet), a shallow network that represents features as vectors, was proposed to overcome these problems, but its shallow architecture also makes it hard to learn complex features from images. Therefore, this study proposes a CADx system consisting of a 3-D U-net and a modified 3-D CapsNet for tumor diagnosis in ABUS.
First, the volume of interest (VOI) is cropped from the ABUS image. Then, the VOI is fed into the 3-D U-net model to generate a tumor mask. Finally, the VOI and mask are delivered to the modified CapsNet, which uses both texture and shape features to classify the tumor as malignant or benign. To overcome the drawback of the original CapsNet, a 3-D residual block is introduced into the CapsNet to learn high-level features from the tumor image, improving diagnostic accuracy. Furthermore, because the 3-D input and the capsule layers are memory-demanding, the batch size during training is limited; group normalization is therefore substituted for batch normalization to address this limitation and further improve accuracy. The experiments used 446 breast tumor images generated by an ABUS system, including 229 malignant tumors and 217 benign tumors. The proposed method achieved an accuracy of 85.20%, a sensitivity of 87.34%, a specificity of 82.95%, and an area under the ROC curve (AUC) of 0.9134, outperforming the other CNN models and showing that the system is capable of tumor diagnosis on ABUS images. |
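The abstract's rationale for group normalization is that its statistics are computed per sample over channel groups, so they stay well-defined even at batch size 1. A minimal NumPy sketch of that computation (an illustration of the standard technique, not code from the thesis; shapes and group count are assumptions):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Group normalization over a 5-D volume x of shape (N, C, D, H, W).

    Each sample's C channels are split into num_groups groups, and mean/variance
    are computed within each group, independently of the batch size N.
    """
    n, c, *spatial = x.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    xg = x.reshape(n, num_groups, c // num_groups, *spatial)
    reduce_axes = tuple(range(2, xg.ndim))  # channel-within-group + spatial dims
    mean = xg.mean(axis=reduce_axes, keepdims=True)
    var = xg.var(axis=reduce_axes, keepdims=True)
    xg = (xg - mean) / np.sqrt(var + eps)
    return xg.reshape(n, c, *spatial)

# Even a batch of one 3-D feature volume yields well-defined statistics.
out = group_norm(np.random.randn(1, 8, 4, 4, 4), num_groups=4)
print(out.shape)  # (1, 8, 4, 4, 4)
```

Batch normalization would instead average over the N axis, which is exactly what degrades when memory limits force tiny batches of 3-D volumes.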
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73801 |
DOI: | 10.6342/NTU201902942 |
Fulltext Rights: | Paid authorization |
Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:
File | Size | Format
---|---|---
ntu-108-1.pdf (Restricted Access) | 1.65 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.