Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84805

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 劉志文(Chih-Wen Liu) | |
| dc.contributor.author | Yu-Lun Lee | en |
| dc.contributor.author | 李育倫 | zh_TW |
| dc.date.accessioned | 2023-03-19T22:26:34Z | - |
| dc.date.copyright | 2022-09-02 | |
| dc.date.issued | 2022 | |
| dc.date.submitted | 2022-08-31 | |
| dc.identifier.citation | [1] N. Schoofs, J. Deviere, and A. Van Gossum, "PillCam colon capsule endoscopy compared with colonoscopy for colorectal tumor diagnosis: a prospective pilot study," Endoscopy, vol. 38, no. 10, pp. 971-977, 2006. [2] H. Gu, H. Zheng, X. Cui, Y. Huang, and B. Jiang, "Maneuverability and safety of a magnetic-controlled capsule endoscopy system to examine the human colon under real-time monitoring by colonoscopy: a pilot study (with video)," Gastrointestinal Endoscopy, vol. 85, no. 2, pp. 438-443, 2017. [3] L. Liu, S. Towfighian, and A. Hila, "A review of locomotion systems for capsule endoscopy," IEEE Reviews in Biomedical Engineering, vol. 8, pp. 138-151, 2015. [4] J. C. Van Rijn, J. B. Reitsma, J. Stoker, P. M. Bossuyt, S. J. Van Deventer, and E. Dekker, "Polyp miss rate determined by tandem colonoscopy: a systematic review," American Journal of Gastroenterology, vol. 101, no. 2, pp. 343-350, 2006. [5] A. Leufkens, M. Van Oijen, F. Vleggaar, and P. Siersema, "Factors influencing the miss rate of polyps in a back-to-back colonoscopy study," Endoscopy, vol. 44, no. 05, pp. 470-475, 2012. [6] M. F. Kaminski, P. Wieszczy, M. Rupinski, U. Wojciechowska, J. Didkowska, E. Kraszewska, J. Kobiela, R. Franczyk, M. Rupinska, and B. Kocot, "Increased rate of adenoma detection associates with reduced risk of colorectal cancer and death," Gastroenterology, vol. 153, no. 1, pp. 98-105, 2017. [7] A. Rau, B. Bhattarai, L. Agapito, and D. Stoyanov, "Bimodal camera pose prediction for endoscopy," arXiv preprint arXiv:2204.04968, 2022. [8] H. Itoh, H. R. Roth, L. Lu, M. Oda, M. Misawa, Y. Mori, S.-e. Kudo, and K. Mori, "Towards automated colonoscopy diagnosis: binary polyp size estimation via unsupervised depth learning," in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2018: Springer, pp. 611-619. [9] D. Freedman, Y. Blau, L. Katzir, A. Aides, I. Shimshoni, D. Veikherman, T. Golany, A. Gordon, G. Corrado, and Y. Matias, "Detecting deficient coverage in colonoscopies," IEEE Transactions on Medical Imaging, vol. 39, no. 11, pp. 3451-3462, 2020. [10] A. Rau, P. E. Edwards, O. F. Ahmad, P. Riordan, M. Janatka, L. B. Lovat, and D. Stoyanov, "Implicit domain adaptation with conditional generative adversarial networks for depth prediction in endoscopy," International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 7, pp. 1167-1176, 2019. [11] D. K. Rex, "Polyp detection at colonoscopy: Endoscopist and technical factors," Best Practice & Research Clinical Gastroenterology, vol. 31, no. 4, pp. 425-433, 2017. [12] 黃威銘, "Study on the Improvement of Intestinal Identification by Magnetic Controlled Capsule Endoscope Based on Data Augmentation and Deep Learning," master's thesis, Graduate Institute of Electrical Engineering, National Taiwan University, 2020. [13] H.-E. Huang, S.-Y. Yen, C.-F. Chu, F.-M. Suk, G.-S. Lien, and C.-W. Liu, "Autonomous navigation of a magnetic colonoscope using force sensing and a heuristic search algorithm," Scientific Reports, vol. 11, no. 1, pp. 1-15, 2021. [14] 褚家灃, "Control Strategy and Automatic Traction Technology for Magnetic Controlled Capsule Endoscopy," master's thesis, Graduate Institute of Electrical Engineering, National Taiwan University, 2020. [15] G.-S. Lien, M.-S. Wu, C.-N. Chen, C.-W. Liu, and F.-M. Suk, "Feasibility and safety of a novel magnetic-assisted capsule endoscope system in a preliminary examination for upper gastrointestinal tract," Surgical Endoscopy, vol. 32, no. 4, pp. 1937-1944, 2018. [16] J. W. Martin, B. Scaglioni, J. C. Norton, V. Subramanian, A. Arezzo, K. L. Obstein, and P. Valdastri, "Enabling the future of colonoscopy with intelligent and autonomous magnetic manipulation," Nature Machine Intelligence, vol. 2, no. 10, pp. 595-606, 2020. [17] L. Y. Korman, V. Egorov, S. Tsuryupa, B. Corbin, M. Anderson, N. Sarvazyan, and A. Sarvazyan, "Characterization of forces applied by endoscopists during colonoscopy by using a wireless colonoscopy force monitor," Gastrointestinal Endoscopy, vol. 71, no. 2, pp. 327-334, 2010. [18] A. M. Plooy, A. Hill, M. S. Horswill, A. S. G. Cresp, M. O. Watson, S.-Y. Ooi, S. Riek, G. M. Wallis, R. Burgess-Limerick, and D. G. Hewett, "Construct validation of a physical model colonoscopy simulator," Gastrointestinal Endoscopy, vol. 76, no. 1, pp. 144-150, 2012. [19] F. Rosenblatt, "The perceptron: a probabilistic model for information storage and organization in the brain," Psychological Review, vol. 65, no. 6, p. 386, 1958. [20] D. C. Plaut, S. J. Nowlan, and G. E. Hinton, "Experiments on learning by back propagation," Technical Report CMU-CS-86-126, Carnegie Mellon University, Pittsburgh, PA, 1986. [21] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel, "Handwritten digit recognition with a back-propagation network," Advances in Neural Information Processing Systems, vol. 2, 1989. [22] Y. LeCun and Y. Bengio, "Convolutional networks for images, speech, and time series," The Handbook of Brain Theory and Neural Networks, vol. 3361, no. 10, 1995. [23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998. [24] A. F. Agarap, "An architecture combining convolutional neural network (CNN) and support vector machine (SVM) for image classification," arXiv preprint arXiv:1712.03541, 2017. [25] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778. [26] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015: Springer, pp. 234-241. [27] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431-3440. [28] H. Noh, S. Hong, and B. Han, "Learning deconvolution network for semantic segmentation," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1520-1528. [29] X. Xiao, S. Lian, Z. Luo, and S. Li, "Weighted Res-UNet for high-quality retina vessel segmentation," in 2018 9th International Conference on Information Technology in Medicine and Education (ITME), 2018: IEEE, pp. 327-331. [30] D. Jha, P. H. Smedsrud, M. A. Riegler, D. Johansen, T. De Lange, P. Halvorsen, and H. D. Johansen, "ResUNet++: An advanced architecture for medical image segmentation," in 2019 IEEE International Symposium on Multimedia (ISM), 2019: IEEE, pp. 225-2255. [31] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132-7141. [32] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, "Rethinking atrous convolution for semantic image segmentation," arXiv preprint arXiv:1706.05587, 2017. [33] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125-1134. [34] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, "Context encoders: Feature learning by inpainting," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2536-2544. [35] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther, "Autoencoding beyond pixels using a learned similarity metric," in International Conference on Machine Learning, 2016: PMLR, pp. 1558-1566. [36] T. Ganokratanaa, S. Aramvith, and N. Sebe, "Unsupervised anomaly detection and localization based on deep spatiotemporal translation network," IEEE Access, vol. 8, pp. 50312-50329, 2020. [37] B. Curless and M. Levoy, "A volumetric method for building complex models from range images," in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 1996, pp. 303-312. [38] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, and A. Davison, "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera," in Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, 2011, pp. 559-568. [39] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohi, J. Shotton, S. Hodges, and A. Fitzgibbon, "KinectFusion: Real-time dense surface mapping and tracking," in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, 2011: IEEE, pp. 127-136. [40] T. Whelan, H. Johannsson, M. Kaess, J. J. Leonard, and J. McDonald, "Robust real-time visual odometry for dense RGB-D mapping," in 2013 IEEE International Conference on Robotics and Automation, 2013: IEEE, pp. 5724-5731. [41] T. Whelan, M. Kaess, M. Fallon, H. Johannsson, J. Leonard, and J. McDonald, "Kintinuous: Spatially extended KinectFusion," in 3rd RSS Workshop on RGB-D: Advanced Reasoning with Depth Cameras, Sydney, Australia, 2012. [42] D. Werner, A. Al-Hamadi, and P. Werner, "Truncated signed distance function: experiments on voxel size," in International Conference on Image Analysis and Recognition, 2014: Springer, pp. 357-364. [43] H. J. Hemmat and E. Bondarev, "Exploring distance-aware weighting strategies for accurate reconstruction of voxel-based 3D synthetic models," in International Conference on Multimedia Modeling, 2014: Springer, pp. 412-423. [44] F. Mahmood and N. J. Durr, "Deep learning-based depth estimation from a synthetic endoscopy image training set," in Medical Imaging 2018: Image Processing, 2018, vol. 10574: International Society for Optics and Photonics, p. 1057421. [45] F. Mahmood and N. J. Durr, "Deep learning and conditional random fields-based depth estimation and topographical reconstruction from conventional endoscopy," Medical Image Analysis, vol. 48, pp. 230-243, 2018. [46] I. Loshchilov and F. Hutter, "Fixing weight decay regularization in Adam," arXiv preprint arXiv:1711.05101, 2017. [47] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, "Domain-adversarial training of neural networks," The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096-2030, 2016. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84805 | - |
| dc.description.abstract | 近年來,隨著運算資源的不斷進步,圖形處理器(Graphics Processing Unit,GPU)、張量處理單元(Tensor Processing Unit,TPU)等專為深度神經網路所研發的AI運算引擎出現,加上物聯網裝置爆炸性的增加,大數據的取得已不是件難事。因為這種種原因,各行各業陸陸續續導入AI技術,在醫學影像領域中也不例外,例如:X光攝影、超音波影像、電腦斷層掃描(Computed Tomography,CT)、核磁共振造影(Magnetic Resonance Imaging,MRI)的病徵辨識。 我們知道內視鏡檢查品質指標包含:盲腸到達率(Cecal Intubation Rate,CIR)、腺瘤偵測率(Adenoma Detection Rate,ADR)等。如果內視鏡檢查過程中可以獲取其深度資訊,不但可以幫助內視鏡醫生提升上述之內視鏡檢查品質指標,還可以透過RGB-D圖片將腸道做三維重建,進一步使醫生得知檢查過程中的覆蓋率是否足夠。 在本篇論文中,我將使用多種不同的深度學習網路並應用在大腸內視鏡之深度估計任務當中,以取得腸道之RGB-D圖像,並比較各模型之優缺點以及做定量評估。此外,我還將高品質的深度資訊透過CNN-1D模型,訓練出能推測檢查過程覆蓋率的深度模型,最後使用Truncated Signed Distance Function(TSDF)演算法,實現單幀腸道圖片的三維影像重建。 | zh_TW |
| dc.description.abstract | With the continuous improvement of computing resources in recent years, AI computing engines designed specifically for deep neural networks, such as the Graphics Processing Unit (GPU) and the Tensor Processing Unit (TPU), have emerged. Together with the explosive growth of IoT devices, obtaining big data is no longer difficult. For these reasons, industries across the board have gradually adopted AI technology, and the field of medical imaging is no exception; examples include lesion identification in radiography, ultrasound imaging, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). Quality indicators of colonoscopy include the Cecal Intubation Rate (CIR) and the Adenoma Detection Rate (ADR). If depth information can be obtained during endoscopy, it can not only help endoscopists improve these quality indicators but also enable 3D reconstruction of the colon from RGB-D images, further informing endoscopists whether the coverage achieved during the examination is sufficient. In this thesis, I apply a variety of deep learning networks to the depth estimation task in colonoscopy to obtain RGB-D images of the colon, compare the strengths and weaknesses of each model, and evaluate them quantitatively. In addition, I use the resulting high-quality depth information to train a CNN-1D model that infers the coverage of an examination. Finally, I use the Truncated Signed Distance Function (TSDF) algorithm to achieve 3D reconstruction from a single-frame colon image. (Illustrative sketches of a coverage regressor and of TSDF integration appear after the metadata table below.) | en |
| dc.description.provenance | Made available in DSpace on 2023-03-19T22:26:34Z (GMT). No. of bitstreams: 1 U0001-3008202216435600.pdf: 10538788 bytes, checksum: 850c7a661de323eaaf77c824298da13f (MD5) Previous issue date: 2022 | en |
| dc.description.tableofcontents | Oral Defense Committee Certification i; Acknowledgements ii; Abstract (Chinese) iii; Abstract iv; Table of Contents v; List of Figures viii; List of Tables xvi; Chapter 1 Introduction 1; 1.1 Research Background 1; 1.2 Research Motivation and Objectives 3; 1.3 Literature Review 5; 1.3.1 The Google Research Team 5; 1.3.2 The University College London Team 14; 1.4 Chapter Overview 18; Chapter 2 The Magnetic Traction Platform and the Magnetically Controlled Endoscope 19; 2.1 Overview and Evolution of the MFN 19; 2.1.1 The First-Generation MFN Platform 19; 2.1.2 The Second-Generation MFN Platform 21; 2.2 The Wired Magnetically Controlled Colonoscope 30; 2.2.1 MACC 2.0 Hardware Structure 31; 2.2.2 MACC 2.0 System Architecture 33; 2.3 Automatic Traction Technology of the Traction Platform 35; 2.3.1 The Colon Model Used in Automatic Traction Experiments 35; 2.3.2 Overview of Our Team's Automatic Traction Research 36; Chapter 3 Deep Learning Networks for Endoscopic Depth Estimation 42; 3.1 The Multilayer Perceptron 42; 3.1.1 Forward Propagation 43; 3.1.2 Backpropagation 45; 3.1.3 Drawbacks of Multilayer Perceptrons for Image Tasks 49; 3.2 Convolutional Neural Networks 50; 3.2.1 Convolutional Layers 51; 3.2.2 Pooling Layers 55; 3.2.3 Fully Connected Layers 57; 3.3 Residual Networks 58; 3.4 The ResUnet Network 64; 3.5 The CGAN-Pix2pix Network 73; Chapter 4 3D Colon Reconstruction Methods 83; 4.1 Introduction to TSDF 83; 4.2 TSDF Reconstruction Steps and Principles 84; 4.3 Effects of TSDF Parameter Choices 88; 4.4 Discussion of Experimental Results in TSDF-Related Work 89; Chapter 5 Training Datasets and Discussion of Experimental Results 93; 5.1 Hardware and Architecture 93; 5.2 Datasets Used 93; 5.2.1 UCL (University College London) Team - Synthetic Endoscopic Depth Maps 94; 5.2.2 Google Research Team - Endoscopy Coverage Dataset 95; 5.3 Experimental Results and Discussion 97; 5.3.1 Quantitative Evaluation Metrics 97; 5.3.2 Depth Estimation Network Results 100; 5.3.3 Coverage Inference for Endoscopy Video Segments 133; 5.3.4 3D Reconstruction of Endoscopic Images 138; Chapter 6 Conclusions and Future Work 142; 6.1 Conclusions 142; 6.2 Future Work 144; References 145 | |
| dc.language.iso | zh-TW | |
| dc.subject | 磁控內視鏡檢查 | zh_TW |
| dc.subject | 腸道深度資訊估計 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 內視鏡檢查覆蓋率 | zh_TW |
| dc.subject | 內視鏡腸道重建 | zh_TW |
| dc.subject | Magnetic endoscopy | en |
| dc.subject | Coverage in endoscopy | en |
| dc.subject | Depth estimation in endoscopy | en |
| dc.subject | Deep learning | en |
| dc.subject | Endoscopic reconstruction | en |
| dc.title | 基於深度學習在內視鏡檢查中估計覆蓋率之研究 | zh_TW |
| dc.title | A Study of the Coverage Estimation for Colonoscopy based on Deep Learning | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 110-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 連吉時 (Gi-Shih Lien), 粟發滿 (Fat-Moon Suk) | |
| dc.subject.keyword | 內視鏡檢查覆蓋率, 腸道深度資訊估計, 深度學習, 內視鏡腸道重建, 磁控內視鏡檢查 | zh_TW |
| dc.subject.keyword | Coverage in endoscopy, Depth estimation in endoscopy, Deep learning, Endoscopic reconstruction, Magnetic endoscopy | en |
| dc.relation.page | 149 | |
| dc.identifier.doi | 10.6342/NTU202202979 | |
| dc.rights.note | Authorized (access restricted to campus) | |
| dc.date.accepted | 2022-08-31 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 電機工程學研究所 | zh_TW |
| dc.date.embargo-lift | 2022-09-02 | - |
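
As a minimal illustration of the coverage-inference step described in the abstract above, the following PyTorch sketch shows one plausible CNN-1D regressor that maps a fixed-length sequence of per-frame depth features to a coverage fraction. The class name, feature dimension, sequence length, and layer sizes are illustrative assumptions, not the architecture used in the thesis.

```python
# A minimal sketch of a 1-D CNN coverage regressor, assuming each colonoscopy
# segment is summarized as a fixed-length sequence of per-frame depth features.
# All names and layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class CoverageCNN1D(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(feat_dim, 128, kernel_size=5, padding=2),  # temporal conv over frames
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(128, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the whole segment
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # coverage is a fraction in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim, seq_len) -- one depth-feature vector per frame
        return self.head(self.encoder(x))

model = CoverageCNN1D()
dummy = torch.randn(2, 64, 100)   # two segments of 100 frames each
coverage = model(dummy)           # (2, 1) predicted coverage fractions
```

The Sigmoid output keeps the prediction in [0, 1], matching the interpretation of coverage as the fraction of colon surface seen during a segment; such a regressor only needs segment-level coverage labels, which is why the pipeline in the abstract first estimates depth and then infers coverage from it.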
| Appears in Collections: | Department of Electrical Engineering | |
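
The abstract also names the Truncated Signed Distance Function (TSDF) algorithm for single-frame 3D reconstruction. The NumPy sketch below fuses one depth frame into a fresh voxel volume under a pinhole camera model; the function name, grid extent, voxel size, and truncation distance are illustrative assumptions, not the settings used in the thesis.

```python
# A minimal single-frame TSDF integration sketch, assuming a pinhole camera
# with intrinsics K and a depth map in metres. Grid placement, voxel size,
# and truncation distance are illustrative assumptions.
import numpy as np

def integrate_tsdf(depth, K, voxel_size=0.005, trunc=0.02, grid_dim=64,
                   origin=(-0.16, -0.16, 0.02)):
    """Fuse one depth frame into a fresh TSDF volume (camera at the origin, +z forward)."""
    # World coordinates of every voxel centre, shape (N, 3).
    idx = np.indices((grid_dim,) * 3).reshape(3, -1).T
    pts = idx * voxel_size + np.asarray(origin) + voxel_size / 2.0

    # Pinhole projection of each voxel centre into the image plane.
    z = pts[:, 2]
    u = np.round(K[0, 0] * pts[:, 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts[:, 1] / z + K[1, 2]).astype(int)

    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    tsdf = np.ones(grid_dim ** 3)             # +1 everywhere = "empty space"
    d = depth[v[valid], u[valid]]             # observed depth along each voxel's ray
    sdf = d - z[valid]                        # positive in front of the surface, negative behind
    vals = np.clip(sdf / trunc, -1.0, 1.0)    # truncate to [-1, 1]
    keep = (d > 0) & (sdf >= -trunc)          # drop voxels far behind the surface
    flat = np.flatnonzero(valid)
    tsdf[flat[keep]] = vals[keep]
    return tsdf.reshape((grid_dim,) * 3)      # the surface is the zero-crossing of this field

# Usage: a synthetic flat wall 10 cm in front of the camera.
K = np.array([[200.0, 0.0, 64.0], [0.0, 200.0, 64.0], [0.0, 0.0, 1.0]])
depth = np.full((128, 128), 0.10)
volume = integrate_tsdf(depth, K)
```

In a full KinectFusion-style pipeline (see [37]-[39] in the citation field above), per-frame TSDFs are accumulated with a weighted running average and the mesh is then extracted at the zero-crossing of the fused field.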
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-3008202216435600.pdf (access restricted to NTU campus IPs; from off campus, please use the VPN service) | 10.29 MB | Adobe PDF | |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.