Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78516
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 黃升龍 | |
dc.contributor.author | Sheng-Ting Tsai | en |
dc.contributor.author | 蔡昇廷 | zh_TW |
dc.date.accessioned | 2021-07-11T15:01:28Z | - |
dc.date.available | 2021-08-20 | |
dc.date.copyright | 2019-08-20 | |
dc.date.issued | 2019 | |
dc.date.submitted | 2019-08-19 | |
dc.identifier.citation | [1] 陳昱彤, “全域式光學同調斷層掃描術用於動物眼睛模型之特性分析,” 國立台灣大學光電工程學研究所, 2018.
[2] J. K. C. Chan, “The Wonderful Colors of the Hematoxylin–Eosin Stain in Diagnostic Surgical Pathology,” International Journal of Surgical Pathology, vol. 22, issue 1, pp. 12-32, 2014.
[3] I. J. Goodfellow et al., “Generative adversarial nets,” NIPS (International Conference on Neural Information Processing Systems), vol. 2, pp. 2672-2680, 2014.
[4] Y. Rivenson et al., “Virtual histological staining of unlabelled tissue autofluorescence images via deep learning,” Nature Biomedical Engineering, vol. 3, pp. 466-477, 2019.
[5] X. Chapeleau et al., Measurements using Optic and RF Waves. New Jersey: John Wiley & Sons, Inc., 2013.
[6] 游鈐, “Mirau全域式光學同調斷層掃描術結合近紅外光拉曼光譜用於皮膚細胞之影像與頻譜特性分析,” 國立台灣大學光電工程學研究所, 2018.
[7] W. Drexler and J. G. Fujimoto, Optical Coherence Tomography – Technologies and Applications, 2nd ed. Springer, 2008.
[8] C. Burrus and J. Stone, “Single-crystal fiber optical devices: A Nd:YAG fiber laser,” Applied Physics Letters, vol. 26, pp. 318-320, 1975.
[9] “Optical constants of fused silica.” [Online]. Available: https://refractiveindex.info/?shelf=glass&book=fused_silica&page=Malitson
[10] 林彥宏, “Mirau 全域式光學同調斷層掃描於皮膚結構及動態特性分析,” 國立台灣大學光電工程學研究所, 2016.
[11] “The Rayleigh Criterion.” [Online]. Available: http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/Raylei.html
[12] “Camera Resolution.” [Online]. Available: https://www.photonics.com/a29926/Camera_Resolution_Combining_Detector_and_Optics
[13] “皮膚生理學.” [Online]. Available: https://www.e-champ.com.tw/skin.php
[14] “Skin structure.” [Online]. Available: https://lh3.googleusercontent.com/i6W5QmVFDaNFfRE9uyYXUucW34D4ErUWdi-YMFkMdMxo6RjHnPk560Pb22xalY7g4sHaGwYB5__aHDAQk767TIr8WSPeK8lGLeElKT3eloiNIKEu3AkHAYuSuiZMKMi6qbMvyPzS
[15] “皮膚的生理結構.” [Online]. Available: http://drtonywu.pixnet.net/blog/post/26010100-【醫師專欄-皮膚】-認識皮膚的生理結構-struct.
[16] “Layers of epidermis.” [Online]. Available: https://www.earthslab.com/physiology/cells-layers-epidermis/
[17] “石蠟切片.” [Online]. Available: https://www.itsfun.com.tw/%E7%9F%B3%E8%A0%9F%E5%88%87%E7%89%87/wiki-8805363
[18] “冰凍切片.” [Online]. Available: https://www.itsfun.com.tw/%E5%86%B0%E5%87%8D%E5%88%87%E7%89%87/wiki-3366017-7019886
[19] “黑色素細胞.” [Online]. Available: https://smallcollation.blogspot.com/2013/09/melanocyte.html#gsc.tab=0
[20] “Langerhans cells.” [Online]. Available: https://basicmedicalkey.com/skin-5/
[21] “Lentiginous nevus. Melanocytes (grey, not pigmented) vs basal keratinocytes (brown melanin).” [Online]. Available: https://twitter.com/JMGardnerMD/status/883502151393300483
[22] D. C. Ciresan, A. Giusti, L. M. Gambardella and J. Schmidhuber, “Deep neural networks segment neuronal membranes in electron microscopy images,” NIPS (International Conference on Neural Information Processing Systems), vol. 2, pp. 2843-2851, 2012.
[23] E. Shelhamer, J. Long and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, issue 4, pp. 640-651, 2017.
[24] O. Ronneberger, P. Fischer and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” MICCAI (Medical Image Computing and Computer-Assisted Intervention), vol. 9351, pp. 234-241, 2015.
[25] K. Y. T. Seet, T. A. Nieminen and A. Zyvagin, “Refractometry of melanocyte cell nuclei using optical scatter images recorded by digital Fourier microscopy,” Journal of Biomedical Optics, vol. 14, issue 4, 2009.
[26] Hung-yi Lee's Machine Learning courses, “Conditional GAN,” spring 2018. [Online]. Available: http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2018/Lecture/CGAN.pdf
[27] Hung-yi Lee's Machine Learning courses, “Unsupervised Learning: Deep Auto-encoder,” fall 2017. [Online]. Available: http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2017/Lecture/auto.pdf
[28] Hung-yi Lee's Machine Learning courses, “GAN Theory,” spring 2018. [Online]. Available: http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2018/Lecture/GANtheory%20(v2).pdf
[29] “熵(資訊理論).” [Online]. Available: https://zh.wikipedia.org/wiki/%E7%86%B5_(%E4%BF%A1%E6%81%AF%E8%AE%BA)
[30] S. Nowozin, B. Cseke and R. Tomioka, “f-GAN: Training generative neural samplers using variational divergence minimization,” NIPS (International Conference on Neural Information Processing Systems), pp. 271-279, 2016.
[31] J.-Y. Zhu, T. Park, P. Isola and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” ICCV (IEEE International Conference on Computer Vision), pp. 2242-2251, 2017.
[32] M.-Y. Liu, T. Breuel and J. Kautz, “Unsupervised image-to-image translation networks,” NIPS (International Conference on Neural Information Processing Systems), 2017.
[33] H.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh and M.-H. Yang, “Diverse image-to-image translation via disentangled representations,” ECCV (European Conference on Computer Vision), 2018.
[34] A. Radford, L. Metz and S. Chintala, “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks,” ICLR (International Conference on Learning Representations), 2016.
[35] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” 2014.
[36] “What should I do if the synthetic image in target is similar the real image in source?” [Online]. Available: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/618
[37] “Unet or Resnet, which one is better for pix2pix model?” [Online]. Available: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/117
[38] P. Isola, J.-Y. Zhu, T. Zhou and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks,” CVPR (IEEE Conference on Computer Vision and Pattern Recognition), pp. 5967-5976, 2017.
[39] J. Johnson, A. Alahi and F.-F. Li, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution,” ECCV (European Conference on Computer Vision), 2016.
[40] “Sigmoid function.” [Online]. Available: https://stackoverflow.com/questions/49977063/role-derivative-of-sigmoid-function-in-neural-networks
[41] X. Mao et al., “Least Squares Generative Adversarial Networks,” ICCV (IEEE International Conference on Computer Vision), pp. 2813-2821, 2017.
[42] “CycleGAN loss curve.” [Online]. Available: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/30
[43] S. Liu and W. Deng, “Very deep convolutional neural network based image classification using small training sample size,” ACPR (3rd IAPR Asian Conference on Pattern Recognition), pp. 730-734, 2015.
[44] “ImageNet.” [Online]. Available: http://www.image-net.org/
[45] “ImageNet: what is top-1 and top-5 error rate?” [Online]. Available: https://stats.stackexchange.com/questions/156471/imagenet-what-is-top-1-and-top-5-error-rate
[46] M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury and C. Pal, “The Importance of Skip Connections in Biomedical Image Segmentation,” DLMIA (Workshop on Deep Learning in Medical Image Analysis), 2016. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/78516 | - |
dc.description.abstract | 在臨床上,有關皮膚方面的疾病,病理學家在判斷上主要還是依靠染色切片作為黃金標準,其中又以蘇木精與伊紅染色(H&E staining)切片最為常見,但因為其製備過程的破壞性與費時,在手術上常常會造成延誤手術時間、多餘組織被破壞等缺點。而光學同調斷層掃瞄術(OCT)所展示的皮膚三維影像具有高解析度與非侵入式等特性,能夠在不破壞組織的情況下,提供皮膚的完整精細結構,然而因OCT影像對於病理學家來說並不熟悉,因此在臨床上儘管有了OCT影像,病理學家仍然會製備染色切片來做更精確的診斷,顯示出了染色影像的必要性。因此本研究為了使病理學家能夠更直觀的利用OCT影像做診斷,建立了能夠將OCT影像轉換為類H&E染色影像的模型,稱為OCT2HE模型。
基於使用活體人類皮膚的OCT影像轉換為類H&E染色影像之目的,在影像轉換的領域上,生成對抗網路(GAN)相較於其餘的網路架構具有能夠使用非成對影像(Unpaired data)作訓練的潛力,而這對於本研究之目的也是最關鍵的,因為無法從活體組織中取得染色影像,因此本研究所提出的影像轉換模型─OCT2HE模型,其建立在以GAN為核心的CycleGAN上,並針對活體人類皮膚的OCT影像與人類皮膚的H&E影像做非成對影像的訓練,透過改善了在訓練過程中所碰到的雜訊重建問題、皮膚角質層(SC)下邊界與表皮真皮交界(DEJ)轉換上與醫生所判定的標準答案不相符的問題,以及細胞核位置轉換前後對應不正確的問題,成功達到了使活體人類皮膚的OCT影像轉換為類H&E影像的目標,並且提供了量化數值方便比對。目前的成果為在測試影像上,SC下邊界於模型輸出與輸入影像的位置誤差為±1.74 μm,DEJ於模型輸出與輸入影像的位置誤差為±2.29 μm;針對SC下邊界至DEJ間的區域,此區域之IOU於模型輸出與輸入影像為87%±3%,以及此區域的皮爾森相關係數於模型輸出與輸入影像為87%±1%。 本論文利用OCT2HE模型可將活體人類皮膚的OCT影像轉換為近似的H&E染色影像,轉換後的影像不僅針對醫生所判定的OCT影像之SC下邊界與DEJ有滿高的一致性,就連比較細節資訊的細胞核也能大致正確的轉換。此OCT2HE模型利用OCT影像的非侵入式特性與影像轉換技術的即時轉換特性,有望能改善目前臨床上染色影像所帶來的費時與必需破壞活體組織等缺點。 | zh_TW |
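The abstract above builds the OCT2HE model on CycleGAN precisely because cycle consistency permits training on unpaired OCT and H&E images. As a rough illustration of the cycle-consistency constraint, here is a minimal numpy sketch; the "generators" `G` and `F` are toy invertible functions standing in for the thesis's convolutional networks, purely for demonstration:

```python
import numpy as np

# Toy "generators": G maps domain A (OCT-like) to domain B (H&E-like),
# F maps B back to A. They are simple affine inverses of each other so
# the cycle loss can reach ~0; real CycleGAN learns deep CNNs instead.
def G(x):  # A -> B
    return 2.0 * x + 1.0

def F(y):  # B -> A
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x_a, x_b):
    """L1 cycle loss: F(G(a)) should recover a, and G(F(b)) should recover b."""
    forward = np.mean(np.abs(F(G(x_a)) - x_a))   # A -> B -> A
    backward = np.mean(np.abs(G(F(x_b)) - x_b))  # B -> A -> B
    return forward + backward

rng = np.random.default_rng(0)
a = rng.random((4, 8, 8))  # stand-in OCT patches
b = rng.random((4, 8, 8))  # stand-in H&E patches
print(cycle_consistency_loss(a, b))  # ~0, since G and F are exact inverses here
```

In the real model this loss is added to the adversarial losses of both discriminators; it is what ties a translated image back to its source content even though no paired ground truth exists.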
dc.description.abstract | For pathologists, histopathological analysis such as hematoxylin and eosin (H&E) staining is considered the gold standard for skin diagnosis. However, the long staining processing time and the destruction of tissue during preparation are inconvenient for pathologists, especially while surgery is in progress. On the other hand, optical coherence tomography (OCT) enables non-invasive, high-resolution, three-dimensional imaging of skin tissue structure. Owing to the large difference between OCT images and H&E images, it is hard for pathologists to diagnose disease from OCT images alone, which shows the necessity of stained images for diagnosis.
In this research, we propose a semi-supervised image-to-image translation model, called the OCT2HE model. The model is based on CycleGAN but resolves several problems that arise when training on in vivo human skin OCT images and human skin H&E images. First, we solved the problem of noise reconstruction during training. Second, we corrected the mismatch between the translated stratum corneum (SC) lower boundary and dermal-epidermal junction (DEJ) and the ground truth. Third, we improved the correspondence of nucleus locations before and after translation. Quantitatively, on test images the positional error of the SC lower boundary between the input and output of the OCT2HE model is ±1.74 μm, and the error of the DEJ is ±2.29 μm. For the region between the SC lower boundary and the DEJ, the IoU between input and output is 0.87±0.03 and the Pearson correlation coefficient is 0.87±0.01. Our OCT2HE model provides real-time image-to-image translation from OCT images to H&E stain-like images. Combined with OCT technology, it could be very helpful to pathologists during surgery and could address the drawbacks of current H&E staining procedures. | en
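The quantitative metrics reported above (IoU and Pearson's correlation over the SC-to-DEJ region) can be computed as in the following sketch; the masks below are hypothetical stand-ins, not the thesis's data:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union of two boolean region masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def pearson(img_a, img_b):
    """Pearson correlation coefficient between two flattened image regions."""
    return np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]

# Hypothetical SC-to-DEJ band masks for the input vs the translated image:
m_in = np.zeros((10, 10), dtype=bool)
m_in[2:7, :] = True    # band covers rows 2-6 in the input
m_out = np.zeros((10, 10), dtype=bool)
m_out[3:8, :] = True   # band shifted down by one row in the output
print(iou(m_in, m_out))  # 40 overlapping pixels / 60 in the union = 0.666...
```

An IoU near 1 means the translated band almost perfectly overlaps the input band; the Pearson coefficient is instead computed on pixel intensities within the band and captures how well texture, such as nucleus positions, is preserved.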
dc.description.provenance | Made available in DSpace on 2021-07-11T15:01:28Z (GMT). No. of bitstreams: 1 ntu-108-R06941080-1.pdf: 8220250 bytes, checksum: 30dd077b6dbec255a4de69fb40e795d1 (MD5) Previous issue date: 2019 | en |
dc.description.tableofcontents | 致謝 I
摘要 II
Abstract IV
圖目錄 VIII
表目錄 XIII
第一章 緒論 1
第二章 Mirau-based全域式光學同調斷層掃描與皮膚樣本的結構及製備介紹 3
2.1 光學同調斷層掃描術理論 3
2.1.1 時域式光學同調斷層掃描儀 3
2.1.2 全域式時域光學同調斷層掃描術 8
2.2 Mirau-based 全域式光學同調斷層掃描系統 8
2.2.1 Mirau-based FF-OCT系統架構與特性 8
2.2.2 干涉訊號處理與系統之橫向與縱向解析度 15
2.2.3 用於影像轉換與影像分割之OCT影像取得與其預處理 20
2.3 人類皮膚構造與蘇木精-伊紅染色(H&E)石蠟切片 24
2.3.1 皮膚功能與其結構 24
2.3.2 組織病理切片的製備流程 27
2.3.3 用於影像轉換與影像分割之H&E影像與其預處理 33
第三章 卷積神經網路在影像分割與轉換之應用 36
3.1 卷積神經網路在影像分割之應用 36
3.1.1 應用於影像分割之卷積神經網路模型 37
3.1.2 影像分割模型之資料集建立 40
3.2 生成對抗網路應用於影像轉換 44
3.2.1 卷積神經網路應用於影像轉換的模型之選擇 44
3.2.2 生成對抗網路的介紹與目的 46
3.2.3 生成對抗網路的原理 47
3.2.4 所使用的生成對抗網路在影像轉換上的特性 54
3.2.5 影像轉換之訓練集與測試集影像建立 60
第四章 影像轉換與影像分割模型之結果分析 61
4.1 以CycleGAN做OCT轉類H&E之結果與問題 61
4.2 修正雜訊重建問題之方法與OCT2HE模型 64
4.3 應用影像分割模型判定SC下邊界與DEJ之結果 66
4.4 限制SC下邊界與DEJ於OCT2HE模型 71
4.5 限制細胞核位置於OCT2HE模型 73
第五章 結論與未來展望 80
5.1 結論 80
5.2 未來展望 81
參考文獻 82
附錄1 本論文中所使用之影像分割與影像轉換模型的超參數與訓練細節 87
CycleGAN影像轉換模型 87
OCT2HE影像轉換模型 88
OCT與H&E影像的影像分割模型 90
附錄2 程式碼介紹 91
國網中心(TWGC)的訓練環境(container)建立與使用方式 91
影像轉換模型 93
影像分割模型 95
附錄3 影像資料集的介紹與使用方式 97 | |
dc.language.iso | zh-TW | |
dc.title | 從生成對抗網路轉換活體皮膚斷層影像為類H&E染色影像 | zh_TW |
dc.title | Conversion between in vivo human skin tomographic images and H&E stained-like images via generative adversarial network | en |
dc.type | Thesis | |
dc.date.schoolyear | 107-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 陳宏銘,邱政偉 | |
dc.subject.keyword | 影像轉換,光學同調斷層掃描,蘇木精與伊紅染色,活體人類皮膚影像,生成對抗網路, | zh_TW |
dc.subject.keyword | image-to-image translation,optical coherence tomography (OCT),hematoxylin and eosin dye staining (H&E staining),in vivo human skin images,generative adversarial network (GAN), | en |
dc.relation.page | 98 | |
dc.identifier.doi | 10.6342/NTU201904041 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2019-08-19 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 光電工程學研究所 | zh_TW |
Appears in Collections: | 光電工程學研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-108-R06941080-1.pdf (currently not authorized for public access) | 8.03 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.