Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/20395
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳中平(Chung-Ping Chen) | |
dc.contributor.author | Yueh-Ying Song | en |
dc.contributor.author | 宋岳穎 | zh_TW |
dc.date.accessioned | 2021-06-08T02:47:19Z | - |
dc.date.copyright | 2017-08-24 | |
dc.date.issued | 2017 | |
dc.date.submitted | 2017-08-20 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/20395 | - |
dc.description.abstract | This thesis extends previous work on automatically recognizing cecum features in photographs to lighten physicians' workload, trying different machine learning algorithms to raise accuracy and revise the system.
According to the latest survey by the Health Promotion Administration, colorectal cancer has ranked first in incidence among the ten most common cancers in Taiwan for nine consecutive years. The disease has many causes and no obvious early symptoms, yet past statistics show that the five-year survival rate of stage 0 and stage I colorectal cancer after treatment can exceed 90%, so the best countermeasure is regular screening covered by National Health Insurance: a fecal occult blood test first, followed by a colonoscopy to confirm any positive result. Starting treatment early through regular colonoscopy greatly improves its effectiveness and lowers the risk of death from colorectal cancer. Colonoscopy quality determines whether colorectal cancer can really be caught early, so besides the public developing the habit of regular hospital screening, the examination itself must meet a certain standard. Research has found that when a physician reliably advances to the cecum in every examination (a high cecal intubation rate), the patient's risk of colorectal cancer is relatively low; in other words, the cecal intubation rate is an indispensable indicator of colonoscopy quality. At present, hospitals evaluate this manually by exchanging colonoscopy photos among physicians, and because the evaluation demands professional knowledge and experience that only physicians possess, its personnel cost is considerable. We improve the earlier automatic cecum recognition system with a new generation of machine learning algorithms, helping physicians review large numbers of colonoscopy photos quickly and easily while computing the cecal intubation rate. The system first applies an inpainting algorithm to remove the specular highlights that hinder recognition, then judges whether a processed photo contains cecum features such as the ileocecal valve (ICV), triradiate fold, or appendiceal orifice; it uses data augmentation algorithms to greatly enlarge the colonoscopy image data, and finally applies transfer learning and semi-supervised learning to extract these cecum features and decide whether a photo captures the cecum. We reach an average recognition accuracy of 84.7% and a best accuracy of 85.5%. In the future, this system can judge whether a physician actually entered the cecum during colonoscopy, serving as an impartial third party for evaluating colonoscopy quality, while reducing the physicians' burden of labeling and checking photos to build the system; the transfer learning approach can also help with similar medical image recognition tasks. | zh_TW |
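The abstract describes removing specular highlights with an inpainting algorithm before feature extraction; the thesis outline names the Telea and Navier-Stokes algorithms for this step. As a rough, hypothetical stand-in (not the thesis's actual implementation), a diffusion-style fill can be sketched in pure Python; the brightness threshold and iteration count here are assumptions:

```python
def specular_mask(img, thresh=240):
    # Flag near-saturated pixels as specular highlights (threshold is an assumption).
    return [[1 if v >= thresh else 0 for v in row] for row in img]

def diffuse_fill(img, mask, iters=100):
    # Jacobi-style diffusion: repeatedly set each masked pixel to the mean of
    # its in-bounds 4-neighbours while unmasked pixels stay fixed. This mimics
    # the smooth fill that Telea / Navier-Stokes inpainting produces.
    h, w = len(img), len(img[0])
    out = [list(map(float, row)) for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    nbrs = [out[y + dy][x + dx]
                            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= y + dy < h and 0 <= x + dx < w]
                    nxt[y][x] = sum(nbrs) / len(nbrs)
        out = nxt
    return out
```

In practice this step would run on full colour endoscopy frames (e.g. with OpenCV's `cv2.inpaint`); the toy grid version only illustrates how highlight pixels are replaced by values diffused in from their surroundings.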
dc.description.abstract | In this thesis, we continue to improve a system that automatically recognizes cecum images in colonoscopy photos despite the variability of the human intestine. The system assists doctors in checking colonoscopy photos and reduces their workload. According to research from the Health Promotion Administration, colorectal cancer has ranked first among cancers in Taiwan in both incidence rate and medical expenses for the last nine years. The disease has many causes, and its symptoms are hard to detect in the early stage. Fortunately, early treatment of colorectal cancer at stages Tis and T1 can effectively increase the patient's survival rate. Regular colonoscopy examination is therefore very important for detecting colorectal cancer early. Colonoscopy quality is closely related to the detection of early cancer; we focus on the Cecal Intubation Rate (CIR) and Bowel Preparation (BP). To evaluate the CIR, doctors must review a great number of colonoscopy photos, which demands concentration and specialized medical knowledge. We therefore propose a cecum recognition system that helps doctors evaluate the CIR automatically. The system removes all specular reflections from the training images, uses data augmentation to enlarge the dataset substantially, extracts cecum features from images with good BP by image processing, and applies transfer learning and semi-supervised learning algorithms to recognize cecum images. Our method achieves an average accuracy of 84.0% and a best accuracy of 87.1%. | en |
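The semi-supervised step mentioned in the abstract corresponds to the "Self learning" section of the thesis outline. The thesis itself fine-tunes an Inception network; purely as a generic illustration of self-training, here is a toy nearest-centroid classifier on 1-D features that pseudo-labels confident unlabeled points and retrains on them (the confidence margin is an assumption):

```python
def self_train(labeled, unlabeled, conf_margin=2.0, rounds=5):
    # Generic self-training loop: labeled is a list of (x, label) pairs,
    # unlabeled a list of feature values x.
    labeled = list(labeled)
    pool = list(unlabeled)
    cents = {}
    for _ in range(rounds):
        # "Retrain": recompute each class centroid from labeled + pseudo-labeled data.
        cents = {lab: sum(x for x, l in labeled if l == lab) /
                      sum(1 for _, l in labeled if l == lab)
                 for lab in {l for _, l in labeled}}
        confident, rest = [], []
        for x in pool:
            dists = sorted((abs(x - c), lab) for lab, c in cents.items())
            # Pseudo-label only when the nearest centroid wins by a clear margin.
            if len(dists) > 1 and dists[1][0] - dists[0][0] >= conf_margin:
                confident.append((x, dists[0][1]))
            else:
                rest.append(x)
        if not confident:
            break  # No confident predictions left; stop growing the training set.
        labeled += confident
        pool = rest
    return labeled, cents
```

The same loop shape applies when the base model is a neural network: predict on the unlabeled pool, keep only high-confidence predictions as pseudo-labels, and retrain.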
dc.description.provenance | Made available in DSpace on 2021-06-08T02:47:19Z (GMT). No. of bitstreams: 1 ntu-106-R97943145-1.pdf: 2663366 bytes, checksum: 0e1f0c175dc402dbd7300e958c619b5b (MD5) Previous issue date: 2017 | en |
dc.description.tableofcontents | Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract iii
Abstract iv
CONTENTS v
LIST OF FIGURES viii
LIST OF TABLES x
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation and Objective 5
Chapter 2 Overview of Related Knowledge 6
2.1 Inception Net 7
2.1.1 Convolutional Neural Networks 7
2.1.2 Inception Net 14
2.2 Transfer Learning 19
2.3 Semi-supervised Learning 22
2.3.1 Self Learning 24
Chapter 3 Proposed Technique 26
3.1 Specularity Removal 26
3.1.1 Telea Algorithm 27
3.1.2 Navier-Stokes Algorithm 28
3.1.3 Experiment 29
3.2 Data Augmentation 32
Chapter 4 Experiment Result 35
4.1 Performance of Transfer Learning 35
4.1.1 Training Environment 35
4.1.2 Random Validation Result 35
4.2 Performance of Self Learning 36
4.2.1 Training Environment 38
4.2.2 Random Validation Result 38
4.3 Performance of Previous Work 39
4.3.1 Training Environment 39
4.3.2 Random Validation Result 39
Chapter 5 Conclusion and Future Work 41
5.1 Conclusion 41
5.2 Future Work 41
REFERENCE 43 | |
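Section 3.2 of the outline covers data augmentation, which the abstract credits with substantially enlarging the colonoscopy dataset. A minimal sketch of the label-preserving geometric transforms such pipelines typically start from (real tooling, e.g. Keras's image preprocessing utilities, also applies random shifts, zooms, and shears):

```python
def augment(img):
    # Generate simple geometric variants of one image (a 2-D list of pixels):
    # horizontal flip, vertical flip, and a 90-degree clockwise rotation.
    # These transforms preserve the cecum/non-cecum label while multiplying
    # the effective training-set size.
    h_flip = [row[::-1] for row in img]          # mirror left-right
    v_flip = img[::-1]                           # mirror top-bottom
    rot90 = [list(r) for r in zip(*img[::-1])]   # rotate 90 degrees clockwise
    return [img, h_flip, v_flip, rot90]
```

Each original frame thus yields four training samples; chaining transforms or adding random photometric jitter grows the dataset further.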
dc.language.iso | en | |
dc.title | 盲腸辨識系統基於遷移學習和半監督學習 | zh_TW |
dc.title | Cecum Recognition System by Transfer Learning and Semi-Supervised Learning | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | Master's |
dc.contributor.oralexamcommittee | 江介宏(Jie-Hong Jiang),方劭云(Shao-Yun Fang) | |
dc.subject.keyword | Cecum,Ileocecal valve,Triradiate fold,Appendiceal orifice,Image processing,Machine learning,Semi-supervised learning,Transfer learning, | zh_TW |
dc.subject.keyword | Cecum,Feature,Image processing,Machine learning,Transfer learning,Specular free, | en |
dc.relation.page | 46 | |
dc.identifier.doi | 10.6342/NTU201702508 | |
dc.rights.note | Not authorized |
dc.date.accepted | 2017-08-21 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Electronics Engineering | zh_TW |
Appears in Collections: | Graduate Institute of Electronics Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-106-1.pdf (currently not authorized for public access) | 2.6 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.