Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2366
Full metadata record (DC field: value [language]):
dc.contributor.advisor: 張瑞峰
dc.contributor.author: Wei-Ren Lan [en]
dc.contributor.author: 藍偉任 [zh_TW]
dc.date.accessioned: 2021-05-13T06:39:29Z
dc.date.available: 2017-08-20
dc.date.available: 2021-05-13T06:39:29Z
dc.date.copyright: 2017-08-20
dc.date.issued: 2017
dc.date.submitted: 2017-08-11
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2366
dc.description.abstract: Lung cancer is the cancer with the highest number of deaths in the United States, and early treatment can effectively improve the survival rate. Endobronchial ultrasound (EBUS) imaging is real-time, involves low radiation exposure, and offers better detection capability, and it can be combined with needle aspiration; it is therefore commonly used to examine pulmonary disease and to diagnose whether lung lesions are benign or malignant, and in recent years it has become one of the important diagnostic tools for lung cancer. At present, however, EBUS images of lesions are interpreted mainly through physicians' subjective integration of image features. Computer-aided diagnosis has used grayscale image features for classification, but it still requires a physician to select a sample region of the image for analysis first, so it provides only semi-automated assistance. The main purpose of this study is therefore to achieve fully automated assistance with a convolutional neural network. First, each EBUS image is resized to the input size required by the network, and the training data are then augmented by rotating and flipping the images. The CaffeNet used here transfers model parameters pretrained on ImageNet, and the network parameters are then optimized with the training data. Features of 4096 dimensions are extracted from the seventh layer (a fully connected layer, fc7), and an SVM classifier differentiates benign from malignant lesions. The study uses 164 cases, comprising 56 benign and 108 malignant lesions. The results show that classification with features from the transfer-learned convolutional neural network is more discriminative than classification with GLCM (gray-level co-occurrence matrix) features, reaching an accuracy of 85.4% (140/164), sensitivity of 87.0% (94/108), specificity of 82.1% (46/56), and an area under the ROC curve of 0.8705. These results indicate that convolutional neural networks have strong potential for benign/malignant classification of endobronchial ultrasound images. [zh_TW]
dc.description.abstract: In the United States, lung cancer is the leading cause of cancer death, and early detection can increase the survival rate. In recent years, endobronchial ultrasonography (EBUS) has been used to differentiate benign from malignant lesions and to guide transbronchial needle aspiration, because it is real-time, radiation-free, and performs well. The diagnosis, however, depends on the subjective judgement of physicians. A previous study classified lung lesions using grayscale texture features of EBUS images, but it was a semi-automated system that still required experts to select part of the lesion first. The main purpose of this study was therefore to achieve fully automated assistance using a convolutional neural network (CNN). First, each EBUS image was resized to the CNN input size, and the training data were augmented by rotation and flipping. The parameters of a model previously trained on ImageNet were transferred to the CaffeNet used to classify the lung lesions, and the CaffeNet parameters were then fine-tuned with the EBUS training data. Features of 4096 dimensions were extracted from the seventh layer (the second fully connected layer, fc7), and a support vector machine (SVM) was used to differentiate benign from malignant lesions. The method was validated on 164 cases, including 56 benign and 108 malignant lesions. According to the experimental results, classification with features from the CNN with transfer learning outperformed the conventional method using gray-level co-occurrence matrix (GLCM) features: the accuracy, sensitivity, specificity, and area under the ROC curve reached 85.4% (140/164), 87.0% (94/108), 82.1% (46/56), and 0.8705, respectively. These results suggest that CNN-based diagnosis of EBUS images is promising. (Minimal code sketches of this pipeline and of the GLCM baseline follow the metadata record below.) [en]
dc.description.provenance: Made available in DSpace on 2021-05-13T06:39:29Z (GMT). No. of bitstreams: 1; ntu-106-R02943137-1.pdf: 1379380 bytes, checksum: a45a3fe1064845c74c2e06eb08867df8 (MD5); Previous issue date: 2017 [en]
dc.description.tableofcontents:
Oral Examination Committee Certification i
Acknowledgements ii
Abstract (Chinese) iii
Abstract iv
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Material 5
Chapter 3 EBUS Images Diagnosis System Using Convolutional Neural Network 6
3.1. Data Augmentation 7
3.2. Feature Extraction based on Fine-tuned CNN 8
3.2.1. Convolutional Neural Network 9
3.2.2. Fine-tuning the CNN 13
3.2.3. Feature Extraction 13
3.3. Classification 14
3.3.1. SVM 14
Chapter 4 Experiment Results and Discussion 16
4.1. Experiment Environment 16
4.2. Results 17
4.3. Discussion 25
Chapter 5 Conclusion and Future Work 27
References 28
dc.language.iso: en
dc.subject: 遷移學習 (transfer learning) [zh_TW]
dc.subject: 卷積神經網絡 (convolutional neural network) [zh_TW]
dc.subject: 支氣管超音波 (endobronchial ultrasound) [zh_TW]
dc.subject: 肺癌 (lung cancer) [zh_TW]
dc.subject: convolutional neural network [en]
dc.subject: EBUS [en]
dc.subject: lung cancer [en]
dc.subject: transfer learning [en]
dc.title: 應用卷積神經網絡於支氣管超音波影像診斷 (Endobronchial Ultrasound Images Diagnosis Using Convolutional Neural Network) [zh_TW]
dc.title: Endobronchial Ultrasound Images Diagnosis Using Convolutional Neural Network [en]
dc.type: Thesis
dc.date.schoolyear: 105-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 李百祺, 羅崇銘
dc.subject.keyword: 肺癌, 支氣管超音波, 卷積神經網絡, 遷移學習 (lung cancer, endobronchial ultrasound, convolutional neural network, transfer learning) [zh_TW]
dc.subject.keyword: lung cancer, EBUS, convolutional neural network, transfer learning [en]
dc.relation.page: 31
dc.identifier.doi: 10.6342/NTU201702994
dc.rights.note: 同意授權(全球公開) (authorization granted; open access worldwide)
dc.date.accepted: 2017-08-11
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 生醫電子與資訊學研究所 (Graduate Institute of Biomedical Electronics and Bioinformatics) [zh_TW]
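
The abstract describes a pipeline of resizing and augmenting the EBUS images, transferring ImageNet-pretrained CaffeNet weights, fine-tuning on the EBUS training set, extracting 4096-dimensional fc7 features, and classifying them with an SVM. The sketch below is a minimal illustration of that pipeline, not the thesis code: it stands in torchvision's AlexNet (a close relative of CaffeNet) and scikit-learn's SVC for the original Caffe implementation, omits the fine-tuning loop, and treats the data arrays (X_train, y_train, X_test, y_test) as hypothetical placeholders.

# Sketch of the abstract's pipeline (assumptions: torchvision >= 0.13 and
# scikit-learn installed; AlexNet stands in for CaffeNet).
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_auc_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Augmentation as described in the abstract: resize to the network input
# size, then rotate/flip the training images (applied in the omitted
# fine-tuning loop).
train_tf = transforms.Compose([
    transforms.Resize((227, 227)),        # CaffeNet-style input size
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=90),
    transforms.ToTensor(),
])

# Transfer learning: start from ImageNet weights and replace the final
# 1000-class layer with a 2-class (benign/malignant) head for fine-tuning.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)
model = model.to(device).eval()          # fine-tuning loop omitted here

# "fc7" features: everything up to the second 4096-d fully connected layer
# and its ReLU, i.e. classifier[:6] in torchvision's AlexNet.
fc7_head = nn.Sequential(model.features, model.avgpool, nn.Flatten(),
                         *list(model.classifier.children())[:6])

def extract_fc7(images: torch.Tensor) -> np.ndarray:
    """Return 4096-d feature vectors for a batch of (N, 3, 227, 227) images."""
    with torch.no_grad():
        return fc7_head(images.to(device)).cpu().numpy()

# Hypothetical usage on one train/test split of the 164 cases (in practice
# this would sit inside a cross-validation loop):
# svm = SVC(kernel="rbf", probability=True).fit(extract_fc7(X_train), y_train)
# prob = svm.predict_proba(extract_fc7(X_test))[:, 1]    # P(malignant)
# pred = (prob >= 0.5).astype(int)
# tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
# print("accuracy    =", (tp + tn) / len(y_test))      # 140/164 = 85.4% reported
# print("sensitivity =", tp / (tp + fn))               # 94/108  = 87.0% reported
# print("specificity =", tn / (tn + fp))               # 46/56   = 82.1% reported
# print("AUC         =", roc_auc_score(y_test, prob))  # 0.8705 reported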
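
The GLCM baseline that the abstract compares against can be sketched in the same spirit. The snippet below is an illustrative gray-level co-occurrence matrix feature extractor using scikit-image; the chosen distances, angles, and texture properties are assumptions rather than the thesis's exact configuration, and the resulting vectors would simply replace the 4096-d CNN features in the SVM step of the previous sketch.

# Illustrative GLCM (gray-level co-occurrence matrix) texture features,
# assuming scikit-image >= 0.19 (which uses the "gray*" spelling).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8: np.ndarray) -> np.ndarray:
    """Texture feature vector for an 8-bit grayscale EBUS image (or ROI)."""
    glcm = graycomatrix(
        gray_u8,
        distances=[1, 2, 4],                              # pixel offsets (assumed)
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],  # 0/45/90/135 degrees
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ("contrast", "dissimilarity", "homogeneity",
             "energy", "correlation", "ASM")
    # One value per (distance, angle, property): 3 * 4 * 6 = 72 features.
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])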
Appears in collections: 生醫電子與資訊學研究所 (Graduate Institute of Biomedical Electronics and Bioinformatics)

Files in this item:
ntu-106-1.pdf (1.35 MB, Adobe PDF)


