Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/77154

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 鄭振牟(Chen-Mou Cheng) | |
| dc.contributor.author | Shuo-Wen Chang | en |
| dc.contributor.author | 張碩文 | zh_TW |
| dc.date.accessioned | 2021-07-10T21:48:41Z | - |
| dc.date.available | 2021-07-10T21:48:41Z | - |
| dc.date.copyright | 2020-01-07 | |
| dc.date.issued | 2019 | |
| dc.date.submitted | 2020-01-04 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/77154 | - |
| dc.description.abstract | Because of the temporal behavior of living cells, microscopy image segmentation remains a very challenging problem, and the U-Net architecture has come to be regarded as a powerful approach for cell image segmentation. In general, human experts annotating cell data often rely on spatio-temporal coherence to separate adjacent cells accurately and to detect partially visible cells. However, a state-of-the-art U-Net that exploits temporal and spatial characteristics to segment time-series cell images has not yet been clearly proposed. This study replaces part of the original U-Net's 2D convolution operations with convolutional long short-term memory (CLSTM) operations, which handle time-ordered spatial features, together with dense connections, and achieves state-of-the-art results. We compared eight U-Net encoder network architectures and found that the VGG13 architecture ultimately obtains the best segmentation results on the testing (public) data set; we compared these models' performance using repeated experiments, learning curves, and loss curves, and found from these experiments that CLSTM helps improve accuracy in the testing phase. In the first experiment, we compared randomly initialized weights with pre-trained weights as initial weights; for the three U-Net networks built on ResNet18, VGG16, and VGG19, the pre-trained weights performed best in testing. In Experiment II, using the same experimental environment, the same training and validation data, and the same initial weights, we evaluated the accuracy of eight neural network models used as the encoder. Inspired by the conclusions of the CLSTM authors, Experiment III extends Experiment II by inserting CLSTM into encoders built from the different networks and examining how much the segmentation results improve. For each network we performed five rounds of training, validation, and testing and plotted them as a bundle of learning curves, showing how much CLSTM improves each network's convolution computation and learning progress; this experiment also yields an important conclusion, namely that dense connections are more suitable than residual connections as short skip connections. To prevent over-fitting, we used dropout and early stopping in every training run. To obtain more authoritative results, in Experiment IV we submitted all of the above models' results on the competition testing data, once again confirming that each model's accuracy is consistent with the earlier experiments; it also shows that our proposed method reaches the latest leader group board with fine-grained segmentation. In Experiment V we further improved the method with densely connected short skip connections and finally achieved one of the top rankings in the competition. | zh_TW |
| dc.description.abstract | Due to the dynamic movement and variable shape of living cells, microscopy image analysis remains a challenging task. U-Net, an FCN-based network, is commonly taken as the reference model for biomedical image segmentation. However, there is still no clear way for U-Net to handle the temporal and spatial characteristics of time-series cell images. This study uses weights pre-trained on a large-scale data set as initial weights, together with CLSTM layers that replace part of the pure convolution operations in the original U-Net, to process time-series spatial features and obtain better performance than many predecessor networks. In this study, we compared eight U-Net encoder network architectures and found that, among all encoder networks, the VGG13 architecture obtains the best image segmentation results on the testing data; we evaluated the performance of these models with repeated experiments, learning curves, and loss curves.
Next, we conducted a series of experiments and found that CLSTM helps improve segmentation accuracy. In the first experiment, we compared randomly initialized weights with pre-trained weights as the initial weights of the U-Net models; for the three networks (ResNet18-encoder, VGG16-encoder, and VGG19-encoder U-Net), the pre-trained weights all showed better performance in the testing phase. In the second experiment, we evaluated the eight encoder networks with the intersection over union (IoU) under the same experimental environment and the same training, validation, and testing data. Inspired by the idea of integrating CLSTM into the encoder, the third experiment extended the second and examined the improvement from using CLSTM to capture spatio-temporal information in the eight models. For each network we performed five rounds of training, validation, and testing, plotted the resulting learning curves, and found that the CLSTM operation significantly improves segmentation accuracy and shortens the learning procedure of the VGG-encoder U-Net. To prevent over-fitting, we used dropout and early stopping during each training run. To obtain further credible results, in the fourth experiment we submitted the testing segmentation results of the above models to the competition system; the error scores returned by the system confirmed once again that the models' accuracy is consistent with the earlier results, and showed that our new method, KUNet, reaches the latest top leader board with detailed segmentation. In the fifth experiment, we further improved KUNet with the densely connected method and finally achieved a top placing in the competition. (A minimal CLSTM encoder sketch and the IoU definition referred to here are given after the metadata table below.) | en |
| dc.description.provenance | Made available in DSpace on 2021-07-10T21:48:41Z (GMT). No. of bitstreams: 1 ntu-108-R06921008-1.pdf: 7597664 bytes, checksum: 714a9b3a16f9fb9e8f913b4ae1272a8a (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | Oral Examination Committee Certification
Acknowledgments (Chinese) i ACKNOWLEDGMENT ii Chinese Abstract iiii ABSTRACT v CONTENTS viii LIST OF FIGURES ix LIST OF TABLES xiii Chapter 1 Introduction 1 1.1 Motivation And Problem Definition 4 1.1.1 ImageNet 7 1.1.2 Convolutional LSTM (CLSTM) 8 1.1.3 Pre-training and Fine-tuning 10 1.1.4 2-D cell microscopy images, the history and competitiveness of the competition 13 1.2 Related Work of the Cell Image Segmentation 15 1.3 Our Approach 19 1.4 Thesis Overview 20 Chapter 2 Architecture Design 21 2.1 Improved UNet models 21 2.2 Network Architecture of KUNet 22 2.3 Network Architecture of DenseUNet 25 Chapter 3 Methodology 27 Chapter 4 Experiments and Result 29 4.1 Experiment Environment and Data 29 4.2 Evaluation Methods 31 4.2.1 Jaccard Index (IoU) and loss function definition 31 4.2.2 Error score after thinning 33 4.3 Result and Discussion 36 4.3.1 Experiment I 36 4.3.2 Experiment II 40 4.3.3 Experiment III 45 4.3.4 Experiment IV 55 4.3.5 Experiment V 56 Chapter 5 Conclusion 60 Chapter 6 Future work 62 Chapter 7 Appendix: The Output Segmentation Masks, Analysis and Submission 64 7.1 PhC-U373 data set 64 7.2 DIC-Hela data set 69 7.3 Fluo-MSC data set 74 7.4 Competition Submission 78 7.5 Trained and Validated models on 2018DataScienceBowl 91 REFERENCE 92 | |
| dc.language.iso | en | |
| dc.subject | Unet | zh_TW |
| dc.subject | 電腦視覺 | zh_TW |
| dc.subject | 深度學習 | zh_TW |
| dc.subject | 生醫影像分割 | zh_TW |
| dc.subject | 卷積類神經網路 | zh_TW |
| dc.subject | Biomedical Image Processing | en |
| dc.subject | Unet | en |
| dc.subject | Deep learning | en |
| dc.subject | Convolutional Neural Network | en |
| dc.subject | Computer Vision | en |
| dc.subject | Biomedical Image Segmentation | en |
| dc.title | KUNet: 基於改良式深層U-Net於顯微鏡影像分割之時空網路 | zh_TW |
| dc.title | Microscopy Image Segmentation with Deep U-Net Based Spatiotemporal Networks: KUNet | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 108-1 | |
| dc.description.degree | Master's | |
| dc.contributor.oralexamcommittee | 張瑞鋒(Ruey-Feng Chang),葉羅堯(Lo-Yao Yeh),廖世偉(Shih-Wei Liao) | |
| dc.subject.keyword | 電腦視覺,深度學習,生醫影像分割,卷積類神經網路,Unet, | zh_TW |
| dc.subject.keyword | Computer Vision,Biomedical Image Segmentation,Deep learning,Convolutional Neural Network,Biomedical Image Processing,Unet, | en |
| dc.relation.page | 99 | |
| dc.identifier.doi | 10.6342/NTU202000018 | |
| dc.rights.note | Not authorized | |
| dc.date.accepted | 2020-01-06 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Electrical Engineering | zh_TW |
| Appears in Collections: | Department of Electrical Engineering | |
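The abstracts above describe KUNet's central idea: in a U-Net whose encoder is initialized from a pre-trained classification network, part of the 2D convolution stages are followed by convolutional LSTM (CLSTM) layers so that the model can exploit the temporal coherence of a microscopy image sequence. The thesis code is not part of this record, so the following is only a minimal, illustrative PyTorch sketch of that idea; the class names (`ConvLSTMCell`, `CLSTMEncoderBlock`), the gate formulation, and all shapes and hyperparameters are assumptions, not the author's KUNet implementation. In the thesis's best-performing setting, `features_2d` would presumably be a convolutional stage of a pre-trained VGG13 feature extractor.

```python
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """One convolutional LSTM cell (after Shi et al., 2015): all four gates are
    produced by a single convolution over the concatenated input and hidden state."""

    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        self.gates = nn.Conv2d(in_channels + hidden_channels, 4 * hidden_channels,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class CLSTMEncoderBlock(nn.Module):
    """One encoder stage: a (possibly pre-trained) 2D feature extractor applied
    frame by frame, followed by a ConvLSTM over the time axis."""

    def __init__(self, features_2d: nn.Module, channels: int):
        super().__init__()
        self.features_2d = features_2d      # e.g. a slice of a pre-trained VGG13
        self.clstm = ConvLSTMCell(channels, channels)

    def forward(self, x):                   # x: (batch, time, C, H, W)
        state, outputs = None, []
        for t in range(x.shape[1]):
            f = self.features_2d(x[:, t])   # per-frame 2D features
            if state is None:
                state = (torch.zeros_like(f), torch.zeros_like(f))
            state = self.clstm(f, state)
            outputs.append(state[0])        # hidden state is the block output
        return torch.stack(outputs, dim=1)  # (batch, time, channels, H', W')


if __name__ == "__main__":
    # Smoke test with a tiny stand-in feature extractor (hypothetical shapes).
    block = CLSTMEncoderBlock(nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU()), 16)
    frames = torch.randn(2, 4, 1, 64, 64)   # two sequences of four grayscale frames
    print(block(frames).shape)              # torch.Size([2, 4, 16, 64, 64])
```

Feeding per-frame features through a ConvLSTM lets the hidden state carry spatial context across time, which is the mechanism the abstract credits for the accuracy gains over a purely 2D U-Net encoder.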
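The experiments are evaluated with the intersection over union (IoU), and the table of contents lists a "Jaccard Index (IoU) and loss function definition" section (4.2.1). For reference, the standard Jaccard index of a predicted mask $A$ and a ground-truth mask $B$ is

$$\mathrm{IoU}(A,B)=\frac{|A\cap B|}{|A\cup B|}=\frac{|A\cap B|}{|A|+|B|-|A\cap B|},$$

and a commonly used differentiable ("soft") Jaccard loss over pixel probabilities $p_i$ and binary labels $y_i$ is

$$\mathcal{L}_{\mathrm{Jaccard}}=1-\frac{\sum_i p_i\,y_i+\epsilon}{\sum_i p_i+\sum_i y_i-\sum_i p_i\,y_i+\epsilon},$$

where $\epsilon$ is a small smoothing constant. The soft-loss form is only an illustrative assumption; the thesis's exact loss definition is the one given in its Section 4.2.1.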
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-108-R06921008-1.pdf (not authorized for public access) | 7.42 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.