Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74512
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 曾雪峰(Snow H. Tseng) | |
dc.contributor.author | Jen-Yu Huang | en |
dc.contributor.author | 黃任佑 | zh_TW |
dc.date.accessioned | 2021-06-17T08:39:59Z | - |
dc.date.available | 2020-08-15 | |
dc.date.copyright | 2019-08-15 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-08-07 | |
dc.identifier.citation | [1] B. K. Armstrong and A. Kricker, "The epidemiology of UV induced skin cancer," Journal of Photochemistry and Photobiology B: Biology, vol. 63, no. 1-3, pp. 8-18, 2001.
[2] G. Argenziano and H. P. Soyer, "Dermoscopy of pigmented skin lesions—a valuable tool for early diagnosis of malignant melanoma," The Lancet Oncology, vol. 2, no. 7, pp. 443-449, 2001.
[3] A. Masood and A. Ali Al-Jumaily, "Computer aided diagnostic support system for skin cancer: a review of techniques and algorithms," International Journal of Biomedical Imaging, vol. 2013, 2013.
[4] C.-C. Tsai et al., "Full-depth epidermis tomography using a Mirau-based full-field optical coherence tomography," Biomedical Optics Express, vol. 5, no. 9, pp. 3001-3010, 2014.
[5] A. Esteva et al., "Dermatologist-level classification of skin cancer with deep neural networks," Nature, vol. 542, no. 7639, p. 115, 2017.
[6] D. Mandache, E. Dalimier, J. Durkin, C. Boccara, J.-C. Olivo-Marin, and V. Meas-Yedid, "Basal cell carcinoma detection in full field OCT images using convolutional neural networks," in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 2018, pp. 784-787: IEEE.
[7] D. Huang et al., "Optical coherence tomography," Science, vol. 254, no. 5035, pp. 1178-1181, 1991.
[8] P. H. Tomlins and R. Wang, "Theory, developments and applications of optical coherence tomography," Journal of Physics D: Applied Physics, vol. 38, no. 15, p. 2519, 2005.
[9] S. Ogawa, T.-M. Lee, A. R. Kay, and D. W. Tank, "Brain magnetic resonance imaging with contrast dependent on blood oxygenation," Proceedings of the National Academy of Sciences, vol. 87, no. 24, pp. 9868-9872, 1990.
[10] E. Dalimier and D. Salomon, "Full-field optical coherence tomography: a new technology for 3D high-resolution skin imaging," Dermatology, vol. 224, no. 1, pp. 84-92, 2012.
[11] R. Leitgeb, C. Hitzenberger, and A. F. Fercher, "Performance of fourier domain vs. time domain optical coherence tomography," Optics Express, vol. 11, no. 8, pp. 889-894, 2003.
[12] 陳昱彤, "Characterization of full-field optical coherence tomography with animal eye models," Master's thesis, Graduate Institute of Photonics and Optoelectronics, National Taiwan University, Taipei, 2018.
[13] M. R. Hee et al., "Optical coherence tomography of the human retina," Archives of Ophthalmology, vol. 113, no. 3, pp. 325-332, 1995.
[14] K. Doi, "Computer-aided diagnosis in medical imaging: historical review, current status and future potential," Computerized Medical Imaging and Graphics, vol. 31, no. 4-5, pp. 198-211, 2007.
[15] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Malaysia: Pearson Education Limited, 2016.
[16] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
[17] T. Miyato, S.-i. Maeda, S. Ishii, and M. Koyama, "Virtual adversarial training: a regularization method for supervised and semi-supervised learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
[18] L. P. Kaelbling, M. L. Littman, and A. W. Moore, "Reinforcement learning: A survey," Journal of Artificial Intelligence Research, vol. 4, pp. 237-285, 1996.
[19] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
[20] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, p. 436, 2015.
[21] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85-117, 2015.
[22] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015, pp. 234-241: Springer.
[23] O. Russakovsky et al., "Imagenet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[24] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
[25] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 2012, pp. 1097-1105.
[27] M. Abadi et al., "Tensorflow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265-283.
[28] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[29] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in European Conference on Computer Vision, 2014, pp. 818-833: Springer.
[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[31] Y. LeCun et al., "Handwritten digit recognition with a back-propagation network," in Advances in Neural Information Processing Systems, 1990, pp. 396-404.
[32] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 249-256.
[33] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026-1034.
[34] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[35] S. Ruder, "An overview of gradient descent optimization algorithms," arXiv preprint arXiv:1609.04747, 2016.
[36] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-cam: Visual explanations from deep networks via gradient-based localization," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618-626. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/74512 | - |
dc.description.abstract | This thesis uses three-dimensional, micrometer-scale tissue images acquired with Full-Field Optical Coherence Tomography (FF-OCT) to classify mouse squamous cell carcinoma (SCC) images via supervised learning and a three-dimensional convolutional neural network (CNN). We adopted classic neural network architectures as reference baselines for the 3D model design, examined the model's generalization ability, and attempted model visualization. We further evaluated and compared the classification performance of 2D and 3D CNNs on OCT images of mouse skin lesions to investigate the feasibility of analyzing FF-OCT medical images with a 3D CNN. The results show that although the 3D model achieves good classification performance on the test set, its accuracy is unstable owing to the currently limited amount of data. | zh_TW |
dc.description.abstract | The purpose of this study is to classify mouse skin lesions in Full-Field Optical Coherence Tomography (FF-OCT) images using a three-dimensional (3D) convolutional neural network (CNN). A deep CNN and supervised learning were implemented to extract features and achieve multi-class classification. Classic neural network architectures trained on FF-OCT images served as reference baselines for designing the 3D model. In addition, we evaluated and compared the performance of two-dimensional (2D) and 3D CNNs applied to FF-OCT images to explore the feasibility of analyzing FF-OCT images with a 3D deep learning architecture. The findings show that the 3D CNN classifies mouse squamous cell carcinoma (SCC) effectively on the test set, but its accuracy is less stable owing to the influence of weight initialization and the amount of data. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T08:39:59Z (GMT). No. of bitstreams: 1 ntu-108-R06941097-1.pdf: 5061518 bytes, checksum: 81261e2b0e9e44e65118a1ddae03cbd5 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | Acknowledgements i
Chinese Abstract iii
ABSTRACT iv
Table of Contents v
List of Figures viii
List of Tables xi
Chapter 1 Introduction 1
1.1 Motivation and Objectives 1
1.1.1 Motivation 1
1.1.2 Objectives 2
1.2 Thesis Organization 3
Chapter 2 Background 5
2.1 Full-Field Optical Coherence Tomography 5
2.2 Computer-Aided Diagnosis 8
2.3 Deep Learning Algorithms 9
2.3.1 Artificial Intelligence and Deep Learning 9
2.3.2 Applications of Deep Learning 11
2.4 Convolutional Neural Networks 13
2.4.1 Convolutional Layers 13
2.4.2 Pooling Layers 16
2.4.3 Fully Connected Layers 17
2.4.4 Activation Functions 19
Chapter 3 Methods 24
3.1 Hardware, Software, and Computing Environment 24
3.2 Data Preprocessing 26
3.2.1 Experimental Data 26
3.2.2 Image Cropping and Zero Padding 31
3.2.3 Feature Standardization 33
3.2.4 Class Labeling 35
3.3 Neural Network Model Construction 36
3.3.1 Common 2D CNN Architectures 36
3.3.2 3D CNN Design 44
3.4 Hyperparameter Settings 46
3.4.1 Weight Initialization 46
3.4.2 Learning Rate 47
3.4.3 Epochs and Batch Size 48
3.4.4 Optimizers 48
3.5 Evaluation and Quantitative Metrics 54
3.5.1 Loss Functions 54
3.5.2 Overfitting and Underfitting 56
3.5.3 Evaluation Metrics 58
3.6 Training Procedure 61
Chapter 4 Results and Discussion 64
4.1 Performance Comparison of 3D Models 64
4.2 Evaluation of SCC Classification Results 66
4.2.1 Effect of Model Hyperparameters 66
4.2.2 K-Fold Cross-Validation 69
4.2.3 Grad-CAM Visualization Analysis 75
4.3 Comparison of 2D and 3D Architectures 80
4.3.1 Data Preprocessing Analysis 80
4.3.2 Model Parameters and Training Analysis 82
4.3.3 Prediction Accuracy Analysis 83
4.3.4 Visualization Analysis 87
Chapter 5 Conclusion and Future Work 89
5.1 Conclusion 89
5.2 Future Work 90
References 91 | |
dc.language.iso | zh-TW | |
dc.title | 三維深度卷積神經網路分類老鼠皮膚癌之光學同調斷層掃描影像 | zh_TW |
dc.title | Classification of Mice Skin Cancer Optical Coherence Tomography Image Using 3D Convolutional Neural Network | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 黃升龍(Sheng-Lung Huang),陳宏銘(Homer H. Chen) | |
dc.subject.keyword | 深度學習, 三維卷積神經網路, 全域式光學同調斷層掃描, 鱗狀細胞癌 | zh_TW |
dc.subject.keyword | Deep Learning, 3D Convolutional Neural Network, Full-Field Optical Coherence Tomography, Squamous Cell Carcinoma | en |
dc.relation.page | 93 | |
dc.identifier.doi | 10.6342/NTU201902799 | |
dc.rights.note | Authorized for a fee (有償授權) | |
dc.date.accepted | 2019-08-08 | |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
dc.contributor.author-dept | Graduate Institute of Photonics and Optoelectronics | zh_TW |
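The abstracts above describe classifying volumetric FF-OCT scans with a 3D CNN. The key difference from the 2D case is that the convolution kernel also slides along the depth axis of the volume, so each output value summarizes a small 3D neighborhood. A minimal NumPy sketch of that operation is shown below; the volume size and averaging kernel are illustrative assumptions only, not the thesis's actual data or model.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D convolution (cross-correlation, as used in CNN layers):
    the kernel slides along depth, height, and width of the input volume."""
    d, h, w = volume.shape
    kd, kh, kw = kernel.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Each output voxel is the elementwise product of the kernel
                # with one kd x kh x kw sub-volume, summed to a scalar.
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

# Hypothetical example: a 3x3x3 averaging kernel over a constant 8x8x8 volume.
# Every output voxel averages 27 ones, giving a constant 6x6x6 result of 1.0.
vol = np.ones((8, 8, 8))
ker = np.full((3, 3, 3), 1.0 / 27)
out = conv3d(vol, ker)
print(out.shape)  # (6, 6, 6)
```

In a real 3D CNN layer, frameworks apply many such kernels with learned weights and add channel dimensions, but the sliding-window computation per kernel is the one sketched here.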
Appears in Collections: | Graduate Institute of Photonics and Optoelectronics |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-108-1.pdf (currently not authorized for public access) | 4.94 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.