Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/50572
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: 陳中平 (Chung-Ping Chen)
dc.contributor.author: En-Shuo Chang (en)
dc.contributor.author: 張恩碩 (zh_TW)
dc.date.accessioned: 2021-06-15T12:46:47Z
dc.date.available: 2021-08-03
dc.date.copyright: 2016-08-03
dc.date.issued: 2016
dc.date.submitted: 2016-07-23
dc.identifier.citation:
[1] 衛生福利部中央健康保險署 (National Health Insurance Administration, Ministry of Health and Welfare), "癌症登記報告" (Cancer Registry Report), http://www.nhi.gov.tw/.
[2] 衛生福利部中央健康保險署 (National Health Insurance Administration, Ministry of Health and Welfare), "103年各類癌症健保前10大醫療支出統計" (Top 10 NHI Medical Expenditures by Cancer Type, ROC Year 103), http://www.nhi.gov.tw/.
[3] Robert L. Barclay, Joseph J. Vicari, Andrea S. Doughty et al., "Colonoscopic withdrawal times and adenoma detection during screening colonoscopy," N Engl J Med, vol. 355, no. 24, pp. 2533-2541, Dec. 2006.
[4] Michal F. Kaminski, Jaroslaw Regula, Ewa Kraszewska et al., "Quality indicators for colonoscopy and the risk of interval cancer," N Engl J Med, vol. 362, no. 19, pp. 1795-1803, May 2010.
[5] Nancy N. Baxter, Rinku Sutradhar, Shawn S. Forbes et al., "Analysis of administrative data finds endoscopist quality measures associated with postcolonoscopy colorectal cancer," Gastroenterology, vol. 140, no. 1, pp. 65-72, Sep. 2011.
[6] Brenner H, Chang-Claude J, Seiler CM et al., "Interval cancers after negative colonoscopy: population-based case-control study," Gut, vol. 61, no. 11, pp. 1576-1582, Dec. 2012.
[7] Brenner H, Chang-Claude J, Jansen L et al., "Role of colonoscopy and polyp characteristics in colorectal cancer after colonoscopic polyp detection: a population-based case-control study," Ann Intern Med, vol. 157, no. 4, pp. 225-232, Aug. 2012.
[8] 邱瀚模 (Han-Mo Chiu), 李宜家, "如何提升大腸內視鏡品質-實證與指引" (How to Improve Colonoscopy Quality: Evidence and Guidelines), 2013, pp. 2, 66.
[9] Hsiao-Chuan Chen, "Auto-Recognition System of Cecum," thesis, National Taiwan University, Jun. 2015.
[10] Brand J. and Mason J., "A comparative assessment of three approaches to pixel-level human skin-detection," in Proc. of the International Conference on Pattern Recognition, vol. 1, pp. 1056-1069, 2000.
[11] Jones M. J. and Rehg J. M., "Statistical color models with application to skin detection," in Proc. of CVPR '99, vol. 1, pp. 274-280, Dec. 1998.
[12] Skarbek W. and Koschan A., "Colour image segmentation - a survey," Tech. Rep., Institute for Technical Informatics, Technical University of Berlin, Oct. 1994.
[13] Wikipedia: https://en.wikipedia.org/wiki/HSL_and_HSV
[14] Zarit B. D., Super B. J. and Quek F. K. H., "Comparison of five color models in skin pixel classification," in ICCV '99 Int'l Workshop on Recognition, Analysis and Tracking of Faces and Gestures in Real-Time Systems, pp. 58-63, 1999.
[15] Mckenna S., Gong S. and Raja Y., "Modeling facial colour and identity with gaussian mixtures," Pattern Recognition, vol. 31, no. 12, pp. 1883-1892, 1998.
[16] Sigal L., Sclaroff S. and Athitsos V., "Estimation and prediction of evolving color distributions for skin segmentation under varying illumination," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 2, pp. 152-159, 2000.
[17] Birchfield S., "Elliptical head tracking using intensity gradients and color histograms," in Proceedings of CVPR '98, pp. 232-237, 1998.
[18] Jordao L., Perrone M., Costeira J. et al., "Active face and feature tracking," in Proceedings of the 10th International Conference on Image Analysis and Processing, pp. 572-577, 1999.
[19] Terrillon J. C., Shirazi M. N., Fukamachi H. et al., "Comparative performance of different skin chrominance models and chrominance spaces for the automatic detection of human faces in color images," in Proc. of the International Conference on Face and Gesture Recognition, pp. 54-61, 2000.
[20] Phung S. L., Bouzerdoum A. and Chai D., "A novel skin color model in YCbCr color space and its application to human face detection," in IEEE International Conference on Image Processing (ICIP 2002), vol. 1, pp. 289-292, 2002.
[21] Menser B. and Wien M., "Segmentation and tracking of facial regions in color image sequences," in Proc. SPIE Visual Communications and Image Processing 2000, pp. 731-740, 2000.
[22] Hsu R. L., Abdel-Mottaleb M. and Jain A. K., "Face detection in color images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 696-706, 2002.
[23] Ahlberg J., "A system for face localization and facial feature extraction," Tech. Rep. LiTH-ISY-R-2172, Linkoping University, 1999.
[24] Wikipedia: https://en.wikipedia.org/wiki/Lab_color_space
[25] Peer P., Kovac J. and Solina F., "Human skin colour clustering for face detection," submitted to EUROCON 2003 - International Conference on Computer as a Tool, 2003.
[26] Gemoz G. and Morales E., "Automatic feature construction and a simple rule induction algorithm for skin detection," in Proc. of the ICML Workshop on Machine Learning in Computer Vision, pp. 31-38, 2002.
[27] Chen Q., Wu H. and Yachida M., "Face detection by fuzzy pattern matching," in Proc. of the Fifth International Conference on Computer Vision, pp. 591-597, 1995.
[28] Schumeyer R. and Barner K., "A color-based classifier for region identification in video," in Visual Communications and Image Processing 1998, SPIE, vol. 3309, pp. 189-200, 1998.
[29] Soriano M., Huovinen S., Martinkauppi B. and Laaksonen M., "Skin detection in video under changing illumination conditions," in Proc. 15th International Conference on Pattern Recognition, vol. 1, pp. 839-842, 2000.
[30] Brown D., Craw I. and Lewthwaite J., "A SOM based approach to skin detection with application in real time systems," in Proc. of the British Machine Vision Conference, 2001.
[31] Canny J., "A computational approach to edge detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, Nov. 1986.
[32] P. V. C. Hough, "Method and means for recognizing complex patterns," US Patent 3,069,654, Dec. 1962.
[33] Phil Simon, Too Big to Ignore: The Business Case for Big Data, p. 89, Mar. 2013.
[34] Breiman L., "Random forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[35] Breiman L., "Bagging predictors," Machine Learning, vol. 24, no. 2, pp. 123-140, Aug. 1996.
[36] Tin Kam Ho, "Random decision forests," in Proceedings of the Third International Conference on Document Analysis and Recognition, vol. 1, pp. 278-282, Aug. 1995.
[37] Yoav Freund and Robert E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119-139, Aug. 1997.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/50572
dc.description.abstract: In this thesis, we propose a system that automatically recognizes, from a wide variety of colonoscopy photos, whether a given photo captures the cecum, thereby reducing the burden on colonoscopists of reviewing large numbers of images.
In recent years, colorectal cancer has ranked near the top in Taiwan in both the number of cancer cases and medical expenditures. Fortunately, the five-year survival rate of stage 0 and stage 1 colorectal cancer exceeds 90% after treatment. However, early colorectal cancer has no obvious symptoms and must be detected early through regular colonoscopy, which effectively improves treatment outcomes and lowers the risk of death from colorectal cancer.
Colonoscopy quality determines whether early colorectal cancer can actually be detected, so beyond regular screening, each examination must meet a certain quality standard. Studies have found that when the endoscopist reliably reaches the cecum in every examination (a high cecal intubation rate), the patient's risk of colorectal cancer is relatively lower; in other words, the cecal intubation rate is an important indicator of colonoscopy quality.
Currently, hospitals evaluate colonoscopy quality manually by exchanging colonoscopy photos among different physicians. We therefore propose an automatic cecum recognition system that helps physicians review large numbers of photos and compute the cecal intubation rate. The system first uses the color difference between the intestinal wall and stool to assess whether inadequate bowel preparation in a photo compromises colonoscopy quality; for the photos with adequate bowel preparation, it then determines whether cecal landmarks such as the ileocecal valve (ICV), triradiate fold, or appendiceal orifice are present. We extract these cecal features with various image processing algorithms and then apply machine learning to recognize whether a photo captures the cecum. The system achieves an average recognition accuracy of 94.0% and a best accuracy of 96.9%. In the future, it can be used to verify whether the endoscopist actually reached the cecum during a colonoscopy, serving as an impartial third party for evaluating colonoscopy quality while reducing the burden of manual photo review. (zh_TW)
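The abstract above describes a first stage that flags frames with poor bowel preparation from the color difference between the intestinal wall and stool. The following is a minimal illustrative sketch, not the thesis implementation: the HSV range for stool/opaque liquid and the 20% coverage cutoff are assumptions chosen only to show the idea.

```python
# Sketch of a color-based bowel-preparation check (assumed thresholds).
import cv2
import numpy as np

def stool_coverage_ratio(bgr_image: np.ndarray) -> float:
    """Fraction of pixels whose color falls in an assumed stool/opaque-liquid HSV range."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Yellow-brown hues with moderate saturation; these bounds are illustrative, not from the thesis.
    mask = cv2.inRange(hsv, (15, 60, 40), (35, 255, 255))
    return float(np.count_nonzero(mask)) / mask.size

def bowel_prep_acceptable(bgr_image: np.ndarray, max_ratio: float = 0.2) -> bool:
    """Keep a frame for cecum-feature extraction only if stool coverage is low (assumed 20% cutoff)."""
    return stool_coverage_ratio(bgr_image) <= max_ratio

if __name__ == "__main__":
    frame = cv2.imread("colonoscopy_frame.jpg")  # hypothetical input file
    if frame is not None:
        print(bowel_prep_acceptable(frame))
```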
dc.description.abstract: In this thesis, we propose a system that automatically recognizes cecum images among colonoscopy photos despite the variability of the human intestine. The system assists doctors in checking colonoscopy photos and reduces their workload.
In recent years, colorectal cancer has ranked first in Taiwan in both incidence and medical expenses. Fortunately, early treatment of colorectal cancer at stages Tis and T1 effectively increases patient survival. However, early-stage colorectal cancer shows no symptoms, so regular colonoscopy examination is essential for detecting it.
Colonoscopy quality is closely related to the detection of early cancer. Several quality indicators exist for colonoscopy: cecal intubation rate (CIR), bowel preparation (BP), adenoma detection rate (ADR), and withdrawal time (WT). In this thesis, we focus on CIR and BP.
To evaluate CIR, doctors need to review a great number of colonoscopy photos. We therefore propose a cecum recognition system that evaluates CIR automatically. The system first assesses whether BP is so poor that no useful information or features can be obtained from an image. It then extracts cecum features from the images with good BP using image processing, and applies machine learning algorithms to recognize cecum images. Our method achieves an average accuracy of 94.0% and a best accuracy of 96.9%. (en)
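The record names AdaBoost with decision stumps and 10-fold cross-validation as the classification and validation stage. The sketch below illustrates that stage under stated assumptions: the eight-dimensional feature vectors and labels are random placeholders standing in for the lightness- and edge-based features listed in the table of contents, and scikit-learn is used here purely for illustration (the thesis does not specify a library).

```python
# Sketch of an AdaBoost-stump classifier with 10-fold cross-validation (placeholder data).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # 500 images x 8 extracted features (placeholder values)
y = rng.integers(0, 2, size=500)     # 1 = cecum photo, 0 = non-cecum (placeholder labels)

# Depth-1 decision trees (stumps) as weak learners, boosted with AdaBoost.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=200)

# 10-fold cross-validation, mirroring the validation scheme named in Chapter 4.
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.3f}")
```

With real feature vectors in place of the placeholders, the mean cross-validated accuracy is the quantity the abstract reports (94.0% average, 96.9% best).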
dc.description.provenance: Made available in DSpace on 2021-06-15T12:46:47Z (GMT). No. of bitstreams: 1
ntu-105-R03945002-1.pdf: 3972137 bytes, checksum: 338e417cccda41eb6004dc48661e6f3e (MD5)
Previous issue date: 2016 (en)
dc.description.tableofcontents:
口試委員會審定書 (Oral Examination Committee Certification)
誌謝 (Acknowledgements)
中文摘要 (Chinese Abstract)
ABSTRACT
CONTENTS
LIST OF FIGURES
LIST OF TABLES
Chapter 1 Introduction
  1.1 Background
  1.2 Motivation and Objective
Chapter 2 Overview of Related Knowledge
  2.1 Cecum Recognition System
    2.1.1 Area Ratio
    2.1.2 Parameters Optimization
    2.1.3 Line Structure Detection
  2.2 Skin Color Classification
    2.2.1 Color spaces used for skin classification
    2.2.2 Skin modeling
  2.3 Digital Image Processing
    2.3.1 Canny Edge Detector
    2.3.2 Line and Curve Detection
  2.4 Machine Learning
    2.4.1 Random Forest
    2.4.2 Adaptive Boosting (AdaBoost)
Chapter 3 Proposed Techniques
  3.1 Bowel Preparation Evaluation
    3.1.1 Histogram Analysis
    3.1.2 Stool and Opaque Liquid Segmentation
  3.2 Lightness-Based Feature Extraction
    3.2.1 Multiple Y' Thresholds for Largest Area Ratio
    3.2.2 Shape Analysis for Largest Black Area
  3.3 Edge-Based Feature Extraction
    3.3.1 Adaptive Threshold for Canny Edge Detection
    3.3.2 Curve Fitting
  3.4 Cecum Classifier
    3.4.1 Random Forest
    3.4.2 AdaBoost-Stump
Chapter 4 Experiment Result
  4.1 Performance of RF and AdaBoost-Stump
    4.1.1 Training Environment
    4.1.2 Validation Result
  4.2 Validation of AdaBoost-Stump
    4.2.1 Random Validation
    4.2.2 10-Fold Cross-Validation
Chapter 5 Conclusion and Future Work
  5.1 Conclusion
  5.2 Future Work
REFERENCE
dc.language.iso: en
dc.subject: 回盲瓣 (ileocecal valve) (zh_TW)
dc.subject: 盲腸 (cecum) (zh_TW)
dc.subject: 三叉紋路 (triradiate fold) (zh_TW)
dc.subject: 闌尾口 (appendiceal orifice) (zh_TW)
dc.subject: 影像處理 (image processing) (zh_TW)
dc.subject: 機器學習 (machine learning) (zh_TW)
dc.subject: AdaBoost (en)
dc.subject: Cecum (en)
dc.subject: Feature (en)
dc.subject: Image processing (en)
dc.subject: Machine learning (en)
dc.title: 符合波士頓清腸指標之盲腸辨識系統基於自適應增強算法 (zh_TW)
dc.title: AdaBoost-Based Cecum Recognition System in Accordance with Boston Bowel Preparation Scale (en)
dc.type: Thesis
dc.date.schoolyear: 104-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 邱瀚模 (Han-Mo Chiu), 傅楸善 (Chiou-Shann Fuh)
dc.subject.keyword: 盲腸 (cecum), 回盲瓣 (ileocecal valve), 三叉紋路 (triradiate fold), 闌尾口 (appendiceal orifice), 影像處理 (image processing), 機器學習 (machine learning) (zh_TW)
dc.subject.keyword: Cecum, Feature, Image processing, Machine learning, AdaBoost (en)
dc.relation.page: 62
dc.identifier.doi: 10.6342/NTU201601249
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2016-07-25
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) (zh_TW)
dc.contributor.author-dept: 生醫電子與資訊學研究所 (Graduate Institute of Biomedical Electronics and Bioinformatics) (zh_TW)
Appears in Collections: 生醫電子與資訊學研究所 (Graduate Institute of Biomedical Electronics and Bioinformatics)

Files in This Item:
ntu-105-1.pdf (3.88 MB, Adobe PDF; not authorized for public access)