Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72653
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳中平(Chung-Ping Chen) | |
dc.contributor.author | Pu-Hsien Fong | en |
dc.contributor.author | 馮溥賢 | zh_TW |
dc.date.accessioned | 2021-06-17T07:02:50Z | - |
dc.date.available | 2024-08-05 | |
dc.date.copyright | 2019-08-05 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-07-30 | |
dc.identifier.citation | [1] R. L. Siegel, K. D. Miller, and A. Jemal, "Cancer statistics, 2019," CA Cancer J. Clin., vol. 69, no. 1, pp. 7-30, 2019.
[2] J. S. Lin, M. A. Piper, L. A. Perdue et al., "Screening for colorectal cancer: Updated evidence report and systematic review for the US Preventive Services Task Force," JAMA, vol. 315, no. 23, pp. 2576-2594, 2016.
[3] A. M. Leufkens, M. G. H. van Oijen, F. P. Vleggaar et al., "Factors influencing the miss rate of polyps in a back-to-back colonoscopy study," Endoscopy, vol. 44, no. 5, pp. 470-475, 2012.
[4] S. Hwang, J. Oh, W. Tavanapong et al., "Polyp detection in colonoscopy video using elliptical shape feature," in Proc. IEEE Int. Conf. Image Process., vol. 2, pp. II-465-II-468, 2007.
[5] S. Ameling, S. Wirth, D. Paulus et al., "Texture-based polyp detection in colonoscopy," in Bildverarbeitung für die Medizin. Berlin, Germany: Springer, pp. 346-350, 2009.
[6] J. Bernal, J. Sánchez, and F. Vilariño, "Impact of image preprocessing methods on polyp localization in colonoscopy frames," in Proc. 35th Annu. Int. Conf. IEEE EMBC, pp. 7350-7354, 2013.
[7] J. Bernal, J. Sánchez, and F. Vilariño, "Towards automatic polyp detection with a polyp appearance model," Pattern Recognit., vol. 45, no. 9, pp. 3166-3182, 2012.
[8] H. R. Roth, L. Lu, J. Liu et al., "Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation," IEEE Transactions on Medical Imaging, vol. 35, no. 5, 2016.
[9] N. Tajbakhsh, S. R. Gurudu, and J. Liang, "Automatic polyp detection in colonoscopy videos using an ensemble of convolutional neural networks," in IEEE 12th Int. Symp. Biomedical Imaging, 2015.
[10] K. B. Olofson, A. P. Miraflor, C. M. Nicka et al., "Deep learning for classification of colorectal polyps on whole-slide images," J Pathol Inform, vol. 8, p. 30, 2017.
[11] E. Ribeiro, A. Uhl, G. Wimmer et al., "Exploring Deep Learning and Transfer Learning for Colonic Polyp Classification," Computational and Mathematical Methods in Medicine, vol. 2016, Article ID 6584725, 16 pages, 2016.
[12] R. Zhang, Y. Zheng, T. W. C. Mak et al., "Automatic Detection and Classification of Colorectal Polyps by Transferring Low-Level CNN Features From Nonmedical Domain," IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, 2017.
[13] S. Park and D. Sargent, "Colonoscopic polyp detection using convolutional neural networks," Proc. SPIE, vol. 9785, p. 978528, 2016.
[14] N. Tajbakhsh, J. Y. Shin, S. Gurudu et al., "Convolutional neural networks for medical image analysis: Full training or fine tuning?" IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1299-1312, 2016.
[15] D. K. Rex, P. S. Schoenfeld, J. Cohen et al., "Quality indicators for colonoscopy," Gastrointestinal Endoscopy, vol. 81, no. 1, pp. 31-53, 2015.
[16] N. N. Baxter, R. Sutradhar, S. S. Forbes et al., "Analysis of administrative data finds endoscopist quality measures associated with post-colonoscopy colorectal cancer," Gastroenterology, vol. 140, no. 1, pp. 65-72, 2011.
[17] G. C. Harewood, V. K. Sharma, and P. de Garmo, "Impact of colonoscopy preparation quality on detection of suspected colonic neoplasia," Gastrointestinal Endoscopy, vol. 58, no. 1, pp. 76-79, 2003.
[18] D. K. Rex, "Colonoscopic withdrawal technique is associated with adenoma miss rates," Gastrointestinal Endoscopy, vol. 51, no. 1, pp. 33-36, 2000.
[19] 歐吉性, 余方榮, 許文鴻 et al., "Screening and surveillance of colorectal cancer and its current practice in Taiwan" (in Chinese), 台灣醫界 (Taiwan Medical Journal), vol. 55, no. 3, 2012.
[20] P. Y. Lu, "Cecum Classification of Colonoscopy Images using Deep Learning Algorithm," Master's thesis, National Taiwan University, Jun. 2017.
[21] V. Sze, Y. H. Chen, T. J. Yang et al., "Efficient processing of deep neural networks: A tutorial and survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295-2329, 2017.
[22] K. Fukushima, "Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position," Biological Cybernetics, vol. 36, no. 4, pp. 193-202, 1980.
[23] Y. LeCun, L. Bottou, Y. Bengio et al., "Handwritten digit recognition: Applications of neural network chips and automatic learning," IEEE Commun. Mag., vol. 27, no. 11, pp. 41-46, 1989.
[24] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. NIPS, vol. 1, pp. 1097-1105, 2012.
[25] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei, ILSVRC-2012, 2012. URL: http://www.image-net.org/challenges/LSVRC/2012/
[26] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.
[27] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proceedings of the 32nd International Conference on Machine Learning, pp. 448-456, 2015.
[28] C. Szegedy, V. Vanhoucke, S. Ioffe et al., "Rethinking the inception architecture for computer vision," arXiv preprint arXiv:1512.00567, 2015.
[29] C. Szegedy, S. Ioffe, and V. Vanhoucke, "Inception-v4, Inception-ResNet and the impact of residual connections on learning," arXiv preprint arXiv:1602.07261, 2016.
[30] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
[31] F. Chollet, "Xception: Deep Learning with Depthwise Separable Convolutions," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1800-1807, 2017.
[32] Y. Liu, K. Gadepalli, M. Norouzi et al., "Detecting Cancer Metastases on Gigapixel Pathology Images," arXiv preprint arXiv:1703.02442, Mar. 2017.
[33] Y. Shin, H. A. Qadir, L. Aabakken et al., "Automatic Colon Polyp Detection Using Region Based Deep CNN and Post Learning Approaches," IEEE Access, vol. 6, pp. 40950-40962, 2018.
[34] J. Wang and L. Perez, "The Effectiveness of Data Augmentation in Image Classification using Deep Learning," arXiv preprint arXiv:1712.04621, 2017.
[35] T. Tieleman and G. Hinton, "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude," COURSERA: Neural Networks for Machine Learning, 4(2):26-31, 2012.
[36] J. Duchi, E. Hazan, and Y. Singer, "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization," Journal of Machine Learning Research, 12:2121-2159, 2011.
[37] F. Chollet, Keras, https://github.com/fchollet/keras, 2015.
[38] M. Abadi, A. Agarwal, P. Barham et al., "TensorFlow: Large-scale machine learning on heterogeneous systems," 2015. Software available from tensorflow.org.
[39] J. Bernal, F. J. Sánchez, G. Fernández-Esparrach et al., "WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians," Computerized Medical Imaging and Graphics, vol. 43, pp. 99-111, 2015.
[40] K. Pogorelov, K. R. Randel, T. Lange et al., "Nerthus: A Bowel Preparation Quality Video Dataset," in MMSys'17 Proceedings of the 8th ACM on Multimedia Systems Conference, Taipei, Taiwan, pp. 170-174, 2017.
[41] K. Pogorelov, K. R. Randel, C. Griwodz et al., "Kvasir: A Multi-Class Image Dataset for Computer Aided Gastrointestinal Disease Detection," in MMSys'17 Proceedings of the 8th ACM on Multimedia Systems Conference (MMSYS), Taipei, Taiwan, pp. 164-169, 2017.
[42] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in ICLR, 2015.
[43] M. Tan and Q. V. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," arXiv preprint arXiv:1905.11946, 2019. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/72653 | - |
dc.description.abstract | In colonoscopy, there are several indicators of procedural quality, such as Cecal Intubation Rate, Bowel Preparation, Adenoma Detection Rate, and Withdrawal Time. Studies have found that patients of physicians with a high cecal intubation rate have a relatively lower risk of colorectal cancer. At present, however, verifying cecal intubation requires manually inspecting every photo in the procedure report to determine whether it shows the cecum, which is costly in both labor and time. In this thesis, we therefore propose a new cecum detection method targeting the cecal intubation rate: using deep learning and convolutional neural networks, cecum images are identified automatically, reducing physicians' workload while improving the quality of colonoscopy. Experimental results show that, compared with the previous method, our model reduces the number of parameters by 84% while improving accuracy by more than 6%, sensitivity by 24%, and specificity by 4.7%. In addition, we tested the trained classifier on three open colonoscopy datasets and achieved over 99% accuracy, demonstrating that the classifier does not suffer from overfitting. (Translated from the Chinese abstract.) | zh_TW |
dc.description.abstract | In colonoscopy examination, there are several quality assessment indicators, such as Cecal Intubation Rate, Bowel Preparation, Adenoma Detection Rate, and Withdrawal Time. It has been shown that examiners with a higher Cecal Intubation Rate are less likely to miss polyps that might develop into colorectal cancer. However, checking whether a colonoscopy examination reached the cecum is time-consuming and labor-intensive for reviewers. Therefore, in this paper, we propose a novel methodology that adopts deep learning and convolutional neural network (CNN) techniques to detect cecum images. The proposed methodology automatically distinguishes cecum images from other colonoscopy images, thereby reducing the workload of doctors and increasing the quality of colonoscopy. Compared with our previous work, the proposed methodology reduces the number of CNN parameters by 84% while improving detection accuracy by over 6%, sensitivity by 24%, and specificity by 4.7%. In addition, we test our model on three different open-source datasets and achieve over 99% accuracy, indicating that the overfitting problem has been alleviated. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T07:02:50Z (GMT). No. of bitstreams: 1 ntu-108-R05943100-1.pdf: 2996735 bytes, checksum: 9b017288940b8e1f42ff80a451fe4764 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | Acknowledgements i
Abstract (in Chinese) ii
Abstract iii
Table of Contents iv
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 3
1.3 Contribution 4
1.4 Organization 5
Chapter 2 Prior Works 6
2.1 Deep Convolutional Neural Networks 6
2.2 Colonoscopy Object Detection 14
Chapter 3 Datasets and Evaluation 16
3.1 Datasets Descriptions 16
3.2 Evaluation Metrics 20
Chapter 4 Proposed Methods 26
4.1 Image Pre-processing 27
4.2 Image Augmentation 28
4.3 Training Strategies 31
4.3.1 Loss Function 31
4.3.2 Optimizer 32
4.3.3 Learning Rate Scheduling 33
4.3.4 Model Architecture 34
Chapter 5 Experimental Results 37
5.1 Training Infrastructure 38
5.2 Classification Performance 38
5.2.1 Image Augmentation Validation Results 40
5.2.2 Test Results 41
5.2.3 Open Source Datasets Test Results 45
Chapter 6 Conclusion and Future Work 47
Reference 49 | |
dc.language.iso | en | |
dc.title | A Deep Learning-Based Computer-Aided Cecum Detection Method for the Cecal Intubation Rate (translated from Chinese) | zh_TW |
dc.title | Computer-aided Cecum Detection Methodology for Cecal Intubation Rate Using Deep Learning | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | Master's | |
dc.contributor.oralexamcommittee | 李建模(Chien-Mo Li),邱瀚模(Han-Mo Chiu),陳文翔(Wen-Shiang Chen) | |
dc.subject.keyword | cecum, cecal intubation rate, colonoscopy image, image recognition, classifier, deep learning, convolutional neural network | zh_TW |
dc.subject.keyword | cecum, cecal intubation rate, colonoscopy, image recognition, classifier, deep learning, convolutional neural network | en |
dc.relation.page | 55 | |
dc.identifier.doi | 10.6342/NTU201902050 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2019-07-31 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Electronics Engineering | zh_TW |
Appears in Collections: | Graduate Institute of Electronics Engineering
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 2.93 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated in their licensing terms.