Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78744
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 宋孔彬 | zh_TW
dc.contributor.advisor | Kung-Bin Sung | en
dc.contributor.author | 張勵揚 | zh_TW
dc.contributor.author | Li-Yang Chang | en
dc.date.accessioned | 2021-07-11T15:16:16Z | -
dc.date.available | 2024-08-15 | -
dc.date.copyright | 2019-08-19 | -
dc.date.issued | 2019 | -
dc.date.submitted | 2002-01-01 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78744 | -
dc.description.abstract | 大腸癌是近年來世界上的一個重要問題,受苦的人數和醫療費用的成本仍在增加。為了預防大腸癌,定期檢查和早期治療是最有效的方法,因此標準且高品質的大腸鏡檢查非常重要。在本論文中,我們專注於大腸鏡檢查的其中一項重要指標:盲腸到達率。
鑑於台灣目前的情況,大腸鏡檢查是否到達盲腸仍然是基於醫生的聲明,沒有客觀的評價方法。目前的內視鏡醫師難以有額外的人力來檢查每張大腸鏡檢查影像是否到達盲腸,以及影像品質是否符合標準。為了更有效地監測盲腸到達率和照片品質,本論文提出了一種自動識別盲腸的系統。醫生將大腸鏡檢查影像上傳到該系統後,系統可以自動將影像區分為盲腸或非盲腸,以實現盲腸到達率的自動計算。希望這種自動化模式可以節省人力,讓醫務人員可以花更多時間在病人身上。
該系統基於我們實驗室先前研究的影像分析方法:首先評估大腸鏡檢查的腸道準備,區分腸道是否乾淨,只有乾淨的影像才會進行前處理和資料擴增;接著使用卷積神經網絡演算法,如 GoogLeNet、VGGNet 等著名的神經網絡架構,最終選擇表現最好的網絡作為我們的模型。除了來自台大醫院的大量大腸鏡檢查影像之外,我們另外使用 GPU 來加速訓練,讓計算機自動學習盲腸與非盲腸影像的差異。另外,為了知道機器學習到了甚麼,我們讓機器以視覺上可理解的方式呈現其識別特徵,再識別影像是否已到達盲腸;最後統計各項數據,將病人相關資料以及辨識結果寫入 MS SQL 資料庫,並以網頁平台呈現相關統計數據。
此外,為了促進這種管理系統遍布全國各大醫院,我們還與醫生合作,收集了台大醫院以外其他 10 家醫院的大腸鏡檢查照片。這可以增加照片的多樣性,增強系統的穩健性:有了大量照片,便可應用遷移式學習來突破準確性的瓶頸,並最終透過接收者操作特性曲線選擇最佳閾值,獲得優化後的模型。在未來,該系統有望幫助醫生確認他們在大腸鏡檢查期間是否真的進入盲腸,作為第三方公正評估大腸鏡檢查的品質,同時減輕醫生以肉眼檢視照片的負擔。
zh_TW
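The pre-processing step described in the abstract enlarges the training set with label-preserving transforms of each clean image. A minimal sketch in plain Python, using toy nested lists in place of real images and an assumed flip/rotate augmentation set (the abstract does not list the exact transforms used):

```python
def hflip(img):
    """Mirror an image left-right by reversing each row."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the original image plus flipped/rotated variants.

    Every variant keeps the cecum / non-cecum label unchanged,
    so each labeled image yields four training examples.
    """
    return [img, hflip(img), rot90(img), hflip(rot90(img))]

# toy 2x2 "image"; real inputs would be full-resolution color frames
variants = augment([[1, 2],
                    [3, 4]])
```

In a real pipeline the same idea applies per color channel, and the augmented copies are generated on the fly during training rather than stored.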
dc.description.abstract | Colorectal cancer (CRC) is an important issue worldwide. The number of people affected and the cost of medical care are still increasing. To prevent colorectal cancer, regular examination and early treatment are the most effective measures, so standard, high-quality colonoscopy screening is necessary. In this thesis, we focus on one indicator of colonoscopy quality: the cecal intubation rate (CIR).
In view of the current situation in Taiwan, whether a colonoscopy reached the cecum is still based on the physician's declaration, and there is no objective evaluation method. Current endoscopists can hardly spare the extra manpower to examine whether each colonoscopy image reaches the cecum and whether its quality meets the standard. To monitor the cecal intubation rate and image quality more efficiently, a system for automatically identifying the cecum is proposed. After the doctor uploads colonoscopy images to the system, it classifies each image as cecum or non-cecum, enabling automatic calculation of the cecal intubation rate. It is hoped that this automated mode can save manpower and let medical staff spend more time on patients.
The system builds on the image analysis methods previously studied in our lab. It first evaluates the bowel preparation of the colonoscopy and judges whether the bowel is clean; image pre-processing and data augmentation are applied only to the clean images. Convolutional neural network architectures such as GoogLeNet, VGGNet, Residual Network, and other well-known networks are then compared, and the best one is selected as our model. In addition, a large number of colonoscopy images from NTU Hospital are needed. We use a GPU to accelerate training and let the computer automatically learn the feature differences between cecum and non-cecum images. To understand what the machine learns, we present the learned features in a visually understandable way and then identify whether the photo reached the cecum. Finally, the patient-related data and the classification results are written into an MS SQL database, and a web platform presents the relevant statistics.
In addition, to promote the spread of this system to major hospitals across the country, we also collaborated with doctors to collect colonoscopy photos from 10 hospitals other than NTU Hospital. This increases the diversity of the photos and enhances the robustness of our system: with a large number of photos, transfer learning can be applied to break through the accuracy bottleneck, and the optimal threshold is finally selected from the receiver operating characteristic curve to obtain the optimized model. In the future, this system is expected to help doctors determine whether they actually entered the cecum during colonoscopy, acting as an impartial third party that assesses colonoscopy quality while reducing the burden of viewing photos by eye.
en
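The final optimization step the abstract describes — selecting an operating threshold from the receiver operating characteristic (ROC) curve — can be sketched in plain Python. This is an illustrative reconstruction, not the thesis code: the labels and scores below are invented, and Youden's J statistic (TPR − FPR) is assumed as the selection criterion, which the abstract does not specify.

```python
def roc_points(labels, scores):
    """Sweep each distinct score as a cutoff and return (FPR, TPR, threshold) triples."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        points.append((fp / neg, tp / pos, t))
    return points

def best_threshold(labels, scores):
    """Pick the cutoff maximizing Youden's J = TPR - FPR."""
    return max(roc_points(labels, scores), key=lambda p: p[1] - p[0])[2]

# toy data: 1 = cecum, 0 = non-cecum; scores = model confidence
labels = [1, 1, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.85, 0.3, 0.55]
t = best_threshold(labels, scores)  # cutoff applied to future predictions
```

Images scoring at or above the chosen cutoff would then be counted as cecum when computing the cecal intubation rate; other criteria (e.g. fixing a maximum false-positive rate) would pick a different point on the same curve.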
dc.description.provenance | Made available in DSpace on 2021-07-11T15:16:16Z (GMT). No. of bitstreams: 1
ntu-108-R06945002-1.pdf: 3561471 bytes, checksum: 53605bb31320a51fcf05c3dc8701f860 (MD5)
Previous issue date: 2019
en
dc.description.tableofcontents | Oral Examination Committee Certification #
Acknowledgements i
Chinese Abstract ii
ABSTRACT iii
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES xi
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation and Objective 3
Chapter 2 Overview of Related Research 5
2.1 Previous Cecum Recognition Algorithm 6
2.1.1 Bowel Preparation Evaluation 7
2.1.2 CNN Classifier 9
2.2 Deep Learning 14
2.2.1 Supervised Learning and Unsupervised Learning 15
2.2.2 Transfer Learning 16
2.2.3 Backpropagation 17
2.3 Convolutional Networks for Image Classification 18
2.3.1 Early Stopping 18
2.3.2 Data Augmentation 19
2.3.3 Dropout 20
2.3.4 Regularization 20
Chapter 3 Bio-Design 22
3.1 Identify 23
3.2 Invent 25
3.3 Implement 26
3.4 Hospital Information System 27
3.4.1 MS SQL Database 30
3.4.2 Python Connect to SQL Database 31
3.4.3 Web Platform 32
Chapter 4 Proposed Cecum Recognition System 34
4.1 Image Pre-processing 35
4.1.1 Image Normalization 35
4.2 Cecum Recognition Model 37
4.2.1 Inception Module Architecture Model 37
4.2.2 Training Setting 41
4.2.3 Preventing Overfitting in Cecum Recognition 42
4.2.4 Fine-tune and Optimization of Model 43
4.3 Cecum Recognition Prediction 47
4.3.1 High Confident Prediction 47
Chapter 5 Experiment Result 48
5.1 Performance of Deep Learning 48
5.1.1 Experiment Workflow 48
5.1.2 Training Environment 49
5.1.3 Training and Validation Result 49
5.1.4 Comparison with Previous Experiment Result 53
5.2 Cecum Recognition Model 54
5.2.1 ROC Curve Optimization 57
5.2.2 Visualization of Model 59
5.2.3 HMC 4282 Patient Cases 60
Chapter 6 Conclusion and Future Work 62
6.1 Conclusion 62
6.2 Future Work 62
REFERENCES 64
-
dc.language.iso | zh_TW | -
dc.subject | Inception架構 | zh_TW
dc.subject | 影像前處理 | zh_TW
dc.subject | 資料庫 | zh_TW
dc.subject | 網頁平台 | zh_TW
dc.subject | 盲腸到達率 | zh_TW
dc.subject | 特徵可視化 | zh_TW
dc.subject | 大腸鏡影像 | zh_TW
dc.subject | 神經網路 | zh_TW
dc.subject | 遷移式學習 | zh_TW
dc.subject | 接收者操作曲線 | zh_TW
dc.subject | 深度學習 | zh_TW
dc.subject | Inception | en
dc.subject | Deep learning | en
dc.subject | Transfer learning | en
dc.subject | CNN | en
dc.subject | Colonoscopy image | en
dc.subject | Feature visualization | en
dc.subject | Cecal-intubation rate | en
dc.subject | ROC | en
dc.subject | Image preprocessing | en
dc.subject | SQL database | en
dc.subject | Web platform | en
dc.title | 基於深度學習之盲腸影像辨識技術應用於醫療資訊系統 | zh_TW
dc.title | Novel Deep Learning-Based Cecum Recognition Technique Applied on Hospital Information System | en
dc.type | Thesis | -
dc.date.schoolyear | 107-2 | -
dc.description.degree | 碩士 | -
dc.contributor.coadvisor | 陳中平;邱瀚模 | zh_TW
dc.contributor.coadvisor | Chung-Ping Chen;Han-Mo Chiu | en
dc.contributor.oralexamcommittee | 魏安祺;陳文翔 | zh_TW
dc.contributor.oralexamcommittee | An-Chi Wei;Wen-Shiang Chen | en
dc.subject.keyword | 深度學習,遷移式學習,神經網路,大腸鏡影像,特徵可視化,盲腸到達率,接收者操作曲線,Inception架構,影像前處理,資料庫,網頁平台 | zh_TW
dc.subject.keyword | Deep learning,Transfer learning,CNN,Colonoscopy image,Feature visualization,Cecal-intubation rate,ROC,Inception,Image preprocessing,SQL database,Web platform | en
dc.relation.page | 68 | -
dc.identifier.doi | 10.6342/NTU201901902 | -
dc.rights.note | 未授權 | -
dc.date.accepted | 2019-07-25 | -
dc.contributor.author-college | 電機資訊學院 | -
dc.contributor.author-dept | 生醫電子與資訊學研究所 | -
dc.date.embargo-lift | 2024-08-19 | -
Appears in Collections: 生醫電子與資訊學研究所 (Graduate Institute of Biomedical Electronics and Bioinformatics)

Files in This Item:
File | Size | Format
ntu-107-2.pdf (restricted, not publicly available) | 3.48 MB | Adobe PDF