Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69305
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 魏安祺(An-Chi Wei) | |
dc.contributor.author | Chan-Min Hsu | en |
dc.contributor.author | 許展銘 | zh_TW |
dc.date.accessioned | 2021-06-17T03:12:31Z | - |
dc.date.available | 2020-08-24 | |
dc.date.copyright | 2020-08-24 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-08-18 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/69305 | - |
dc.description.abstract | 近年來顯微鏡技術的快速發展,讓科學家們能以更微觀的角度觀察細胞及細胞內的結構。在這之中,螢光顯微鏡因為能夠透過針對特定胞器進行螢光染色,藉此來觀察活細胞內的胞器活動,因而成為現今觀測活細胞時所主要使用的技術。然而透過雷射激發螢光的方式卻容易產生光漂白或是光毒性等問題,造成觀測上的困難。相比之下,單純的穿透光照射,雖然無法清晰看到細胞結構等細節,但便宜、不用染色的優點,讓它得以有不同的用途。本篇論文使用我們實驗室所拍攝的AC16心臟細胞穿透光及螢光影像,透過深度學習的方法並且採納Allen Institute for Cell Science發表的模型及方法來訓練,來實現三維及時間序列的螢光影像預測。相對於傳統的機器學習方法,卷積神經網路等深度學習方法近年來成功在影像辨識及切割上取得重大成果。因此,能夠透過訓練類似的神經網路模型,學習穿透光與螢光影像的相關性,最後成功從新的穿透光影像預測出其對應的螢光影像。在這裡,我們的實驗將著重在使用共軛焦顯微鏡及Airyscan激光共聚焦顯微鏡所拍攝的高解析粒線體影像及其DNA影像,進行不同條件(三維、時間序)的預測。總體來說,有了穿透光預測的螢光影像結果,將能有效縮短未來準備樣本的時間、增加細胞在螢光顯微鏡下可供觀測的時間,並讓研究者能更仔細的分析粒線體及粒線體DNA的形狀與動態。 | zh_TW |
dc.description.abstract | Advances in microscopic techniques allow insight into the world of cells and their subcellular structures. One such technique, fluorescence microscopy, enables analysis of the subcellular structures of living cells through specific labeling, but it carries the risk of photobleaching and phototoxicity. Transmitted light (TL) microscopy presents the opposite trade-off: it is low-cost and label-free, yet it cannot easily distinguish targeted subcellular objects. In this thesis, we adopted the label-free method developed by the Allen Institute for Cell Science, using our TL images of the cardiac myocyte-derived cell line AC16 to train models that predict 3D (z-stack) and time-series fluorescence images. Convolutional neural networks (CNNs) have achieved significant success in image recognition and segmentation compared with traditional machine learning methods. Based on a U-Net architecture, the model learns the relationships between live-cell TL and fluorescence images for different kinds of subcellular structures and can then predict fluorescence images from new TL input. We focused on building models for mitochondrial structures using this CNN approach and compared predictions derived from confocal, Airyscan, z-stack, and time-series images. With multi-model combined prediction, it is possible to generate integrated images from TL input alone, reduce the time required for sample preparation, extend the time scales over which cells can be visualized and measured, and better understand the morphology and dynamics of mitochondria and mtDNA. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T03:12:31Z (GMT). No. of bitstreams: 1 U0001-1808202015272300.pdf: 3405552 bytes, checksum: bc766203dfaf1ec9642e778c44e52657 (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents | Table of Contents Acknowledgments i 摘要 ii Abstract iii Table of Contents v List of Figures vii List of Tables ix Chapter Ⅰ: Introduction 1 Section 1-1: Background and Motivation 1 Section 1-2: Literature Review 5 Section 1-3: Specific Aims 9 Chapter Ⅱ: Methods and Materials 12 Section 2-1: Cell Culture and Labeling 12 Section 2-2: Cell Imaging 12 Section 2-3: Data Preprocessing for Training and Evaluation 17 Section 2-4: Model Architecture 19 Section 2-5: Model Performance Analysis 23 Chapter Ⅲ: Results 24 Section 3-1: Time-series Prediction 24 Section 3-2: Z-stack Prediction 28 Section 3-3: High-resolution Image Prediction (Without downscaling) 29 Section 3-4: Airyscan Prediction and Confocal Prediction 32 Section 3-5: Prediction from General Model 37 Chapter Ⅳ: Discussion 41 Section 4-1: Label-free Prediction on Mitochondria 41 Section 4-2: Result Analysis 42 Section 4-3: Experimental Difficulties 45 Section 4-4: Limitations 47 Chapter Ⅴ: Conclusion and Future Work 50 Reference 51 | |
dc.language.iso | en | |
dc.title | 免標註顯微鏡影像上以卷積神經網路預測粒線體結構 | zh_TW |
dc.title | Mitochondrial Structure Prediction in Label-free Microscopy Images Using Convolutional Neural Networks | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 張壯榮(Chuang-Rung Chang),何亦平(Yi-Ping Ho),劉彥良(Yen-Liang Liu) | |
dc.subject.keyword | 粒線體結構,卷積神經網路,U-Net,顯微鏡影像預測,三維螢光影像,時間序列螢光影像 | zh_TW |
dc.subject.keyword | mitochondrial structure,convolutional neural networks,U-Net,microscope image prediction,3D fluorescence images,time-series fluorescence images | en |
dc.relation.page | 54 | |
dc.identifier.doi | 10.6342/NTU202003983 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2020-08-19 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 生醫電子與資訊學研究所 | zh_TW |
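As a minimal illustration of the preprocessing step the abstract alludes to: the Allen Institute label-free pipeline normalizes each image stack before pairing TL input with its fluorescence target. The sketch below z-scores pixel intensities; the function name and plain-Python lists are illustrative assumptions, not code from the thesis itself.

```python
def zscore(pixels):
    """Per-stack z-score: shift intensities to zero mean and unit variance,
    as commonly done before feeding TL/fluorescence pairs to the network."""
    n = len(pixels)
    mean = sum(pixels) / n
    sd = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    return [(p - mean) / sd for p in pixels]
```

In practice the same normalization would be applied independently to the TL stack and the fluorescence stack, so the network sees inputs on a comparable scale regardless of acquisition settings.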
Appears in Collections: | Graduate Institute of Biomedical Electronics and Bioinformatics
Files in This Item:
File | Size | Format |
---|---|---|
U0001-1808202015272300.pdf (currently not authorized for public access) | 3.33 MB | Adobe PDF |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
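The model-performance analysis listed in the table of contents presumably compares predicted against ground-truth fluorescence images; the standard metric for this in the label-free literature is the pixel-wise Pearson correlation. A minimal pure-Python sketch (the function name and flattened-list inputs are illustrative assumptions):

```python
import math

def pearson_r(pred, target):
    """Pixel-wise Pearson correlation between a predicted fluorescence
    image and its ground-truth counterpart (both flattened to lists)."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(target) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, target))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in target))
    return cov / (sp * st)
```

A perfect prediction gives r = 1, while r near 0 indicates no linear relationship between predicted and true intensities.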