Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86030
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 李百祺 | zh_TW |
dc.contributor.advisor | Pai-Chi Li | en |
dc.contributor.author | 張元嚴 | zh_TW |
dc.contributor.author | Yuan-Yen Chang | en |
dc.date.accessioned | 2023-03-19T23:33:36Z | - |
dc.date.available | 2023-12-26 | - |
dc.date.copyright | 2022-10-08 | - |
dc.date.issued | 2022 | - |
dc.date.submitted | 2002-01-01 | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86030 | - |
dc.description.abstract | 消化道內視鏡檢查過程中的照片記錄是檢查品質的指標之一,但是該指標在內視鏡檢查中心難以自動測量和審核,而新興的人工智慧技術可能可幫忙解決此問題。首先,我們將利用深度學習(Deep Learning, DL)依據歐洲胃腸內鏡學會指引對上消化道內視鏡影像進行分成八個特定解剖位置,然後再評估是否記錄了所有解剖位置的影像,以照片記錄率的完整性作為內視鏡檢查的自動化品質評估指標。同時,上消化道影像分類和品質指標系統也將擴展到下消化道內視鏡檢查。然而,一個好的 DL 模型需要大量的訓練數據來進行模型開發,為了減少醫師標記時間,我們開發了一種加速數據準備,在此提出的方法中,先使用較小的已標記數據集來訓練基礎模型,然後由基礎模型對另一個較大的尚未標記數據集進行分類,醫生將可以快速查看和修改由基礎模型分類的結果,隨後可以使用校正過的數據集重新訓練增強模型以提高性能,完成的基礎模型和增強模型的準確率分別達到 96.29% 和 96.64%。在開發了好的分類模型後,我們將利用 12 位內視鏡醫師進行的 472 次內視鏡檢查進行品質評估指標實驗,可發現腺瘤檢出率較高的內視鏡醫師從咽部到十二指腸(60.0% vs 38.7%,p<0.0001)和從食道到十二指腸(83.0% vs 65.7%,p<0.0001)有較高的完整檢查率。而在下消化道內視鏡檢查品質指標實驗中共分析了 761 個真實世界的報告和大腸鏡檢查影像,電子報告盲腸檢出率為 99.34%,而所提品質指標系統的盲腸檢出率為 98.95%;使用電子報告和品質指標系統評估息肉切除率的一致率為0.87;使用品質指標系統計算的檢查時間與醫生輸入的檢查時間存在良好的相關性(r = 0.959,p < 0.0001)。由上述實驗結果可得知本研究建立的內視鏡影像品質自動評估系統應可提升內視鏡檢查品質並為病患提供更好的照顧。 | zh_TW |
dc.description.abstract | Photodocumentation is one of the endoscopy quality performance indicators; however, manually auditing this indicator is challenging in clinical practice. Artificial intelligence technology may help to solve this problem. In this study, upper gastrointestinal (GI) endoscopy images are classified into eight specific anatomical landmarks according to the European Society of Gastrointestinal Endoscopy (ESGE) guideline by the proposed deep learning (DL) system. This classification model can then be used to assess whether images of all anatomical locations are documented, and the completeness of the photodocumentation rate can serve as a quality indicator. The upper GI classification and quality indicator system is also extended to lower GI endoscopy. However, a good DL model requires a large amount of training data for model development. To reduce the labeling time, we develop an accelerated data preparation approach: a smaller labeled data set is first used to train a base model, which then classifies another, larger unlabeled data set; physicians quickly review and correct the base model's predictions, and the corrected data set is used to retrain an enhanced model. The base model and the enhanced model achieve total accuracies of 96.29% and 96.64%, respectively. With this classification model, the DL system assesses whether images of all anatomical locations are documented, and the photodocumentation completeness rate is used as a quality indicator of endoscopist performance. A total of 472 upper GI endoscopies performed by 12 endoscopists are enrolled. Endoscopists with a higher adenoma detection rate have a higher complete examination rate from the pharynx to the duodenum (60.0% vs. 38.7%, p < 0.0001) and from the esophagus to the duodenum (83.0% vs. 65.7%, p < 0.0001). For the proposed lower GI quality indicator system, 761 real-world reports and colonoscopy examinations are analyzed. The accuracy of the proposed algorithm for the cecal intubation rate is 98.95%, and the polypectomy agreement rate between the electronic reports and the DL algorithm is 0.87. A good correlation is found between the withdrawal time computed by the DL system and that entered by the physician (r = 0.959, p < 0.0001). These results suggest that the proposed DL endoscopy quality indicator system could help improve endoscopy performance and provide better patient care. | en |
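The photodocumentation-completeness indicator described in the abstract reduces to a set-coverage check: an examination counts as complete only when the classifier has assigned at least one image to every required anatomical landmark. The sketch below is illustrative only — the landmark names and function names are assumptions, not the thesis's actual code.

```python
# Hypothetical sketch of the photodocumentation-completeness quality
# indicator: an exam is "complete" only if every required landmark
# appears among the per-image labels the classifier predicted for it.
# Landmark names loosely follow the ESGE guideline; the DL classifier
# itself is out of scope here, so predicted labels are passed in directly.

# The eight upper-GI landmarks (assumed naming; the thesis may differ).
ESGE_LANDMARKS = {
    "pharynx", "esophagus", "squamocolumnar_junction", "cardia_fundus",
    "body", "angulus", "antrum", "duodenum",
}

def is_complete_exam(predicted_labels):
    """True if every required landmark appears among the per-image
    labels predicted for one endoscopy examination."""
    return ESGE_LANDMARKS <= set(predicted_labels)

def completeness_rate(exams):
    """Fraction of examinations with complete photodocumentation."""
    if not exams:
        return 0.0
    return sum(is_complete_exam(e) for e in exams) / len(exams)
```

Aggregating `completeness_rate` per endoscopist would yield the per-physician indicator compared against adenoma detection rate in the abstract.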
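The accelerated data-preparation approach in the abstract is a human-in-the-loop pseudo-labeling loop: train a base model on a small labeled set, let it pre-label a larger unlabeled set, have physicians correct the proposals, then retrain an enhanced model on the corrected data. The following minimal sketch mocks model training with a trivial majority-class "classifier" so the loop itself is runnable; every name here is illustrative, not the thesis's implementation.

```python
# Minimal sketch of the accelerated data-preparation workflow:
# base model -> pre-label unlabeled data -> physician review/correction
# -> retrain enhanced model. Training is mocked as "predict the majority
# label" purely so the control flow is executable without a DL framework.
from collections import Counter

def train(dataset):
    """Stand-in for DL training: remember the majority label."""
    majority = Counter(label for _, label in dataset).most_common(1)[0][0]
    return lambda image: majority

def prelabel(model, unlabeled_images):
    """Base model proposes a label for every unlabeled image."""
    return [(img, model(img)) for img in unlabeled_images]

def physician_review(proposals, corrections):
    """Physician keeps correct proposals and overrides wrong ones;
    `corrections` maps image id -> corrected label."""
    return [(img, corrections.get(img, label)) for img, label in proposals]

# Small labeled set -> base model.
labeled = [("img1", "antrum"), ("img2", "antrum"), ("img3", "body")]
base_model = train(labeled)

# Base model pre-labels the larger unlabeled set; physician corrects.
proposals = prelabel(base_model, ["img4", "img5", "img6"])
reviewed = physician_review(proposals, corrections={"img6": "duodenum"})

# Corrected data retrains the enhanced model.
enhanced_model = train(labeled + reviewed)
```

The speedup comes from physicians only verifying and correcting proposals instead of labeling every image from scratch.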
dc.description.provenance | Made available in DSpace on 2023-03-19T23:33:36Z (GMT). No. of bitstreams: 1 U0001-1409202216021300.pdf: 4370756 bytes, checksum: 0f0159707ac38ec213482322112b596c (MD5) Previous issue date: 2022 | en |
dc.description.tableofcontents | Certification by Oral Examination Committee I
Acknowledgements II
Chinese Abstract III
Abstract IV
Table of Contents V
List of Figures VIII
List of Tables XI
Chapter 1. Introduction 1
Chapter 2. Related Works 4
2.1. CNN Models 4
2.1.1 ResNet 4
2.1.2 ResNeXt 5
2.1.3 ResNeSt 6
2.2. Explainable AI and Performance Metrics 7
2.2.1 Explainable AI 7
2.2.2 Performance Measures 7
2.2.3 McNemar's Test 8
2.3. Gastrointestinal Endoscopy 9
2.3.1 Upper GI Endoscopy 9
2.3.2 Duodenal Papilla 10
2.3.3 White Light Endoscopy and Narrowband Endoscopy 10
2.3.4 Colonoscopy Quality Indicators 11
Chapter 3. Upper Gastrointestinal Endoscopic Anatomy Classification 12
3.1. Materials & Methods 12
3.1.1 Patients and Data Preparation 12
3.1.2 Preparation of Endoscopy Images for Deep Learning 13
3.2. Results 14
3.2.1 Deep Learning Base Model Training on 1st Dataset 14
3.2.2 Training the Deep Learning Enhanced Model on the 2nd Dataset 17
3.2.3 Model Performance Evaluation for the Internal Test Dataset 19
3.3. Discussion 24
Chapter 4. Upper Endoscopy Quality Assessment 26
4.1. Materials & Methods 26
4.1.1 Patients and Data Preparation 26
4.1.2 Deep Learning Model and Endoscopy Image Processing 26
4.1.3 Complete Photodocumentation Rate Assessment 27
4.1.4 Statistical Analyses 27
4.2. Results 28
4.2.1 Endoscopy Image Data Characteristics 28
4.2.2 Characteristics of Endoscopist Performance 29
4.2.3 Comparison of Endoscopist Colonoscopy Performance and Completeness of Upper Endoscopy Examination Photodocumentation 30
4.3. Discussion 32
Chapter 5. Colonoscopy Quality Assessment 36
5.1. Materials & Methods 36
5.1.1 Patients and Data Preparation 36
5.1.2 Preparation of Endoscopy Images and Model Training 37
5.1.3 Deep Learning Model Performance with External Testing Image Data 38
5.1.4 DL Model Performance with Real-world Colonoscopy Reports 38
5.1.5 Statistical Analysis 39
5.2. Results 39
5.2.1 Performance of Trained Model 39
5.2.2 Model Performances of External Test Dataset 42
5.2.3 DL Model in Assessing Real-world Colonoscopy Images and Reports 44
5.3. Discussion 46
Chapter 6. Future Works 49
Chapter 7. Conclusion 51
References 53 | - |
dc.language.iso | en | - |
dc.title | 應用深度學習於腸胃道內視鏡對解剖部位分類及自動品質評估 | zh_TW |
dc.title | Deep Learning Based Gastrointestinal Endoscopic Anatomy Classification and Automatic Quality Assessment | en |
dc.type | Thesis | - |
dc.date.schoolyear | 110-2 | - |
dc.description.degree | 博士 (Ph.D.) | - |
dc.contributor.oralexamcommittee | 張立群;曾宇鳳;楊東霖;羅崇銘 | zh_TW |
dc.contributor.oralexamcommittee | Li-Chun Chang;Yufeng Jane Tseng;T Tony Yang;Chung-Ming Lo | en |
dc.subject.keyword | 消化道內視鏡,內視鏡品質,深度學習 | zh_TW |
dc.subject.keyword | gastrointestinal endoscopy,quality in endoscopy,deep learning | en |
dc.relation.page | 64 | - |
dc.identifier.doi | 10.6342/NTU202203401 | - |
dc.rights.note | 同意授權 (authorized for worldwide open access) | - |
dc.date.accepted | 2022-09-19 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 生醫電子與資訊學研究所 | - |
dc.date.embargo-lift | 2024-09-30 | - |
Appears in Collections: | 生醫電子與資訊學研究所 (Graduate Institute of Biomedical Electronics and Bioinformatics) |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-110-2.pdf (available online after 2024-09-30) | 4.27 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.