DSpace

The institutional repository system DSpace is dedicated to preserving digital materials of all kinds (e.g., text, images, PDFs) and making them easily accessible.

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/102304
Full metadata record
DC Field  Value  Language
dc.contributor.advisor  魏安祺  zh_TW
dc.contributor.advisor  An-Chi Wei  en
dc.contributor.author  吳鎧帆  zh_TW
dc.contributor.author  KAI-FAN WU  en
dc.date.accessioned  2026-04-30T16:24:12Z  -
dc.date.available  2026-05-01  -
dc.date.copyright  2026-04-30  -
dc.date.issued  2026  -
dc.date.submitted  2026-04-07  -
dc.identifier.citation[1] P. Friedl and D. Gilmour, ‘Collective cell migration in morphogenesis, regeneration and cancer’, Nat. Rev. Mol. Cell Biol., vol. 10, no. 7, pp. 445–457, Jul. 2009, doi: 10.1038/nrm2720.
[2] K. Y. Chan et al., ‘Skin cells undergo asynthetic fission to expand body surfaces in zebrafish’, Nature, vol. 605, no. 7908, pp. 119–125, May 2022, doi: 10.1038/s41586-022-04641-0.
[3] K. He, X. Zhang, S. Ren, and J. Sun, ‘Deep Residual Learning for Image Recognition’, Dec. 10, 2015, arXiv: arXiv:1512.03385. doi: 10.48550/arXiv.1512.03385.
[4] K. Simonyan and A. Zisserman, ‘Very Deep Convolutional Networks for Large-Scale Image Recognition’, Apr. 10, 2015, arXiv: arXiv:1409.1556. doi: 10.48550/arXiv.1409.1556.
[5] S. Mohammad, A. Roy, A. Karatzas, S. L. Sarver, I. Anagnostopoulos, and F. Chowdhury, ‘Deep Learning Powered Identification of Differentiated Early Mesoderm Cells from Pluripotent Stem Cells’, Cells, vol. 13, no. 6, p. 534, Mar. 2024, doi: 10.3390/cells13060534.
[6] J. Turley, I. V. Chenchiah, P. Martin, T. B. Liverpool, and H. Weavers, ‘Deep learning for rapid analysis of cell divisions in vivo during epithelial morphogenesis and repair’, eLife, vol. 12, Mar. 2024, doi: 10.7554/eLife.87949.2.
[7] A. Durrmeyer, J.-C. Palauqui, and P. Andrey, ‘Deep learning of geometrical cell division rules’, Jul. 30, 2025, arXiv: arXiv:2507.22587. doi: 10.48550/arXiv.2507.22587.
[8] D. Khatri, P. Negi, and C. A. Athale, ‘Classification of first embryonic division stages of multiple Caenorhabditis species by deep learning’, Npj Syst. Biol. Appl., vol. 11, no. 1, p. 97, Aug. 2025, doi: 10.1038/s41540-025-00566-2.
[9] S.-R. Yang, M. Liaw, A.-C. Wei, and C.-H. Chen, ‘Deep learning models link local cellular features with whole-animal growth dynamics in zebrafish’, Life Sci. Alliance, vol. 8, no. 8, Aug. 2025, doi: 10.26508/lsa.202503319.
[10] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, ‘Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization’, Int. J. Comput. Vis., vol. 128, no. 2, pp. 336–359, Feb. 2020, doi: 10.1007/s11263-019-01228-7.
[11] M. Pachitariu and C. Stringer, ‘Cellpose 2.0: how to train your own model’, Nat. Methods, vol. 19, no. 12, pp. 1634–1641, Dec. 2022, doi: 10.1038/s41592-022-01663-4.
[12] C. Stringer and M. Pachitariu, ‘Cellpose3: one-click image restoration for improved cellular segmentation’, Nat. Methods, vol. 22, no. 3, pp. 592–599, Mar. 2025, doi: 10.1038/s41592-025-02595-5.
[13] J. Bertels et al., ‘Optimizing the Dice Score and Jaccard Index for Medical Image Segmentation: Theory and Practice’, in Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, D. Shen, T. Liu, T. M. Peters, L. H. Staib, C. Essert, S. Zhou, P.-T. Yap, and A. Khan, Eds., Cham: Springer International Publishing, 2019, pp. 92–100. doi: 10.1007/978-3-030-32245-8_11.
[14] B. Ma et al., ‘Data augmentation in microscopic images for material data mining’, Npj Comput. Mater., vol. 6, no. 1, p. 125, Aug. 2020, doi: 10.1038/s41524-020-00392-6.
[15] A. Buslaev, A. Parinov, E. Khvedchenya, V. I. Iglovikov, and A. A. Kalinin, ‘Albumentations: fast and flexible image augmentations’, Information, vol. 11, no. 2, p. 125, Feb. 2020, doi: 10.3390/info11020125.
[16] S. You, N. Barnes, and J. Walker, ‘Perceptually Consistent Color-to-Gray Image Conversion’, May 06, 2016, arXiv: arXiv:1605.01843. doi: 10.48550/arXiv.1605.01843.
[17] J.-W. Seo and S. D. Kim, ‘Novel PCA-based color-to-gray image conversion’, in 2013 IEEE International Conference on Image Processing, Sep. 2013, pp. 2279–2283. doi: 10.1109/ICIP.2013.6738470.
[18] T. A. Weissman and Y. A. Pan, ‘Brainbow: New Resources and Emerging Biological Applications for Multicolor Genetic Labeling and Analysis’, Genetics, vol. 199, no. 2, pp. 293–306, Feb. 2015, doi: 10.1534/genetics.114.172510.
[19] J. Hou and H. Yuan, ‘Efficient and Accurate Hypergraph Matching’, in 2021 IEEE International Conference on Multimedia and Expo (ICME), Jul. 2021, pp. 1–6. doi: 10.1109/ICME51207.2021.9428156.
[20] L. Ståhle and S. Wold, ‘Analysis of variance (ANOVA)’, Chemom. Intell. Lab. Syst., vol. 6, no. 4, pp. 259–272, Nov. 1989, doi: 10.1016/0169-7439(89)80095-4.
[21] G. Sunandini, R. Sivanpillai, V. Sowmya, and V. V. Sajith Variyar, ‘Significance of Atrous Spatial Pyramid Pooling (ASPP) in Deeplabv3+ for Water Body Segmentation’, in 2023 10th International Conference on Signal Processing and Integrated Networks (SPIN), Mar. 2023, pp. 744–749. doi: 10.1109/SPIN57001.2023.10116882.
[22] Y. Deng, X. Lin, R. Li, and R. Ji, ‘Multi-scale Gem Pooling with N-Pair Center Loss for Fine-Grained Image Search’, in 2019 IEEE International Conference on Multimedia and Expo (ICME), Jul. 2019, pp. 1000–1005. doi: 10.1109/ICME.2019.00176.
[23] Y. Yuan and Y. Cheng, ‘Medical image segmentation with UNet-based multi-scale context fusion’, Sci. Rep., vol. 14, no. 1, p. 15687, Oct. 2024, doi: 10.1038/s41598-024-66585-x.
[24] D. Morales-Brotons, T. Vogels, and H. Hendrikx, ‘Exponential Moving Average of Weights in Deep Learning: Dynamics and Benefits’, Nov. 27, 2024, arXiv: arXiv:2411.18704. doi: 10.48550/arXiv.2411.18704.
[25] J. Howard and S. Ruder, ‘Universal Language Model Fine-tuning for Text Classification’, May 23, 2018, arXiv: arXiv:1801.06146. doi: 10.48550/arXiv.1801.06146.
[26] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, ‘Focal Loss for Dense Object Detection’, IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318–327, Feb. 2020, doi: 10.1109/TPAMI.2018.2858826.
[27] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, ‘Densely Connected Convolutional Networks’, Jan. 28, 2018, arXiv: arXiv:1608.06993. doi: 10.48550/arXiv.1608.06993.
[28] H. Liang, W. Fu, and F. Yi, ‘A Survey of Recent Advances in Transfer Learning’, in 2019 IEEE 19th International Conference on Communication Technology (ICCT), Oct. 2019, pp. 1516–1523. doi: 10.1109/ICCT46805.2019.8947072.
[29] M. I. Alfatih and S. A. Wibowo, ‘Star Classifier Head on Deformable Attention Vision Transformer for Small Datasets’, IEEE Access, vol. 13, pp. 145680–145689, 2025, doi: 10.1109/ACCESS.2025.3599111.
[30] L. Wang, C. Wang, Z. Sun, S. Cheng, and L. Guo, ‘Class Balanced Loss for Image Classification’, IEEE Access, vol. 8, pp. 81142–81153, 2020, doi: 10.1109/ACCESS.2020.2991237.
[31] R. Shimizu, K. Asako, H. Ojima, S. Morinaga, M. Hamada, and T. Kuroda, ‘Balanced Mini-Batch Training for Imbalanced Image Data Classification with Neural Network’, in 2018 First International Conference on Artificial Intelligence for Industries (AI4I), Sep. 2018, pp. 27–30. doi: 10.1109/AI4I.2018.8665709.
[32] P. Nabila and E. B. Setiawan, ‘Adam and AdamW Optimization Algorithm Application on BERT Model for Hate Speech Detection on Twitter’, in 2024 International Conference on Data Science and Its Applications (ICoDSA), Jul. 2024, pp. 346–351. doi: 10.1109/ICoDSA62899.2024.10651619.
[33] M. Kimura, ‘Understanding Test-Time Augmentation’, Feb. 10, 2024, arXiv: arXiv:2402.06892. doi: 10.48550/arXiv.2402.06892.
[34] S. Ioffe and C. Szegedy, ‘Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift’, Mar. 02, 2015, arXiv: arXiv:1502.03167. doi: 10.48550/arXiv.1502.03167.
[35] S. Hamzehei, J. Bai, G. Raimondi, R. Tripp, L. Ostroff, and S. Nabavi, ‘Advanced Feature Extraction and Outlier Detection for 3D Biological/Biomedical Image Registration’, IEEE Trans. Comput. Biol. Bioinforma., vol. 22, no. 4, pp. 1335–1346, Jul. 2025, doi: 10.1109/TCBBIO.2024.3517596.
-
dc.identifier.uri  http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/102304  -
dc.description.abstract  斑馬魚表皮細胞(superficial epidermal cells, SECs)為已分化且不再進入典型細胞週期的上皮細胞,但仍可透過非合成性分裂(asynthetic fission)進行增殖。結合 Palmskin/Brainbow 譜系追蹤系統所提供的穩定顏色遺傳特性,可藉由影像分析細胞群組大小與排列,以推測近期的細胞分裂活動。然而,從高解析度且具高噪訊與色彩變異的影像中自動化擷取此類資訊仍具挑戰。

本研究建立一套完整的斑馬魚 SEC 影像分析流程,包含:(1)單細胞分割模型、(2)細胞群組合併演算法,以及(3)灰階影像分割與機器學習/深度學習預測。首先以 736 張人工標註影像微調 Cellpose cyto3 模型,使平均 Dice score 由 0.7456 提升至 0.9171;經適度資料增強後進一步提升至 0.9369。此外,為探討低對比條件下的分割能力,提出感知亮度與 PCA 投影兩種灰階化方法,以僅依賴形態資訊進行模型訓練與評估。

在細胞群組重建方面,本研究整合鄰近性、CIE Lab 色彩距離以及 SEC 分裂次數的生物限制,設計細胞合併演算法,在 70 張測試影像上達到 0.078 的低錯誤率。

在細胞分裂辨識方面,比較 U-Net、DenseNet 與 Vision Transformer(ViT)等模型。結果顯示,在三分類任務中整體 F1 score 約為 0.49–0.53,而在二分類(是否發生分裂)任務中可達約 0.75,顯示細胞形態確實包含與分裂相關的可辨識特徵。整體而言,DenseNet 在效能與穩定性之間取得最佳平衡。

本研究結合生物先驗知識與深度學習技術,建立了一套可擴展的 SEC 自動化影像分析流程,能有效重建細胞族群結構並量化分裂行為,為未來整合時間序列、三維影像與分子標記之研究奠定基礎。
zh_TW
dc.description.abstract  Zebrafish superficial epidermal cells (SECs) are differentiated epithelial cells that no longer enter the canonical cell cycle. However, they can still proliferate through asynthetic fission. When combined with the stable color inheritance provided by the Palmskin/Brainbow lineage-tracing system, image analysis of cell group size and spatial arrangement can be used to infer recent cell division events. Nevertheless, automatically extracting such information from high-resolution images with substantial noise and color variation remains challenging.

In this study, we developed a complete SEC image analysis pipeline. The pipeline includes (1) a single-cell segmentation model, (2) a cell-group merging algorithm, and (3) grayscale-based segmentation with machine learning and deep learning prediction. First, the Cellpose cyto3 model was fine-tuned using 736 manually annotated images. This step improved the average Dice score from 0.7456 to 0.9171. After applying appropriate data augmentation, the Dice score further increased to 0.9369. To evaluate segmentation performance under low-contrast conditions, two grayscale conversion methods were introduced: perceived luminance transformation and PCA projection. These methods allow the model to be trained and evaluated using morphological information only.
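As a rough illustration of the evaluation metric and the two grayscale conversions described above (the exact formulas used in the thesis are not reproduced here, so the Rec. 709 luminance weights and per-image PCA below are assumptions):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def luminance_gray(rgb: np.ndarray) -> np.ndarray:
    """Perceived-luminance grayscale (Rec. 709 weights, an assumption)."""
    return rgb[..., :3] @ np.array([0.2126, 0.7152, 0.0722])

def pca_gray(rgb: np.ndarray) -> np.ndarray:
    """Project every RGB pixel onto the image's first principal component."""
    px = rgb.reshape(-1, 3).astype(float)
    px -= px.mean(axis=0)                       # center the colour cloud
    _, _, vt = np.linalg.svd(px, full_matrices=False)
    return (px @ vt[0]).reshape(rgb.shape[:2])  # single-channel projection
```

Predicted masks would be compared against the manual annotations with `dice_score`; unlike fixed luminance weights, the PCA projection adapts to each image's own colour variance, which is intended to retain contrast between Brainbow hues.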

For cell-group reconstruction, this study integrated spatial proximity, CIE Lab color distance, and biological constraints on SEC division events to design a cell-merging algorithm. The algorithm achieved a low error rate of 0.078 on 70 test images.
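The merge rule can be sketched as a predicate over spatial, colour, and biological constraints; `should_merge`, its thresholds, and the choice of the CIE76 colour difference are illustrative assumptions, not the thesis's actual rules or values:

```python
import numpy as np

def delta_e76(lab_a, lab_b) -> float:
    """CIE76 colour difference: Euclidean distance in CIE Lab space."""
    return float(np.linalg.norm(np.asarray(lab_a, float) - np.asarray(lab_b, float)))

def should_merge(centroid_a, centroid_b, lab_a, lab_b,
                 group_size_a: int, group_size_b: int,
                 max_dist: float = 50.0, max_delta_e: float = 12.0,
                 max_group_size: int = 4) -> bool:
    """Merge two cell masks into one lineage group only if they are spatially
    close, similar in colour, and the merged group stays within a plausible
    number of divisions. All thresholds here are hypothetical placeholders."""
    dist = float(np.linalg.norm(np.asarray(centroid_a, float)
                                - np.asarray(centroid_b, float)))
    if dist > max_dist:                 # spatial-proximity constraint
        return False
    if delta_e76(lab_a, lab_b) > max_delta_e:  # colour-inheritance constraint
        return False
    # biological constraint: cap the merged group's size (division count)
    return group_size_a + group_size_b <= max_group_size
```

Working in CIE Lab rather than RGB makes the colour threshold roughly perceptually uniform, which suits the stable hue inheritance of Brainbow labeling.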

For cell division identification, several models were compared, including U-Net, DenseNet, and Vision Transformer (ViT). In the three-class classification task, the overall F1 score ranged from 0.49 to 0.53. In the binary classification task (division vs. non-division), the F1 score reached approximately 0.75. These results indicate that cell morphology contains identifiable features related to division behavior. Overall, DenseNet achieved the best balance between performance and stability.
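For reference, the per-class F1 scores behind the numbers above can be computed as follows (a generic sketch, not the thesis's evaluation code):

```python
import numpy as np

def f1_scores(y_true, y_pred, n_classes: int) -> list:
    """Per-class F1 = 2*TP / (2*TP + FP + FN); macro F1 is their mean."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for c in range(n_classes):
        tp = int(np.sum((y_pred == c) & (y_true == c)))
        fp = int(np.sum((y_pred == c) & (y_true != c)))
        fn = int(np.sum((y_pred != c) & (y_true == c)))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores
```

The same function covers both tasks: `n_classes=3` for the division-count classes and `n_classes=2` for the division vs. non-division setting.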

This study integrates biological prior knowledge with deep learning techniques to establish a scalable SEC automated image analysis pipeline. The framework can effectively reconstruct cell population structures and quantify division behavior. It also provides a foundation for future studies integrating time-series data, three-dimensional imaging, and molecular markers.
en
dc.description.provenance  Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2026-04-30T16:24:12Z. No. of bitstreams: 0  en
dc.description.provenance  Made available in DSpace on 2026-04-30T16:24:12Z (GMT). No. of bitstreams: 0  en
dc.description.tableofcontents  Acknowledgment i
摘要 (Chinese Abstract) ii
ABSTRACT iv
目次 (Table of Contents) vi
List of Figures viii
List of Tables xi
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.1.1 Background 1
1.1.2 Motivation 5
1.2 Literature Review 6
1.2.1 Deep Learning in Biomedical Image Analysis and Cell Dynamics 6
1.2.2 Prediction of Division Stages and Geometrical Rules 7
1.2.3 Deep Learning in Zebrafish Epidermal Cell Analysis 9
1.3 Significance and Contribution 12
Chapter 2 Method 16
2.1 Single-Cell Segmentation of Zebrafish SEC Images 16
2.1.1 Overview of the Single-Cell Segmentation Pipeline 16
2.1.2 Cellpose-Based SEC Segmentation Model 17
2.1.3 Segmentation Evaluation Using the Dice Coefficient 19
2.1.4 Data Augmentation Strategies for SEC Segmentation 20
2.2 Grayscale Simulation of SEC Imaging Conditions 22
2.2.1 Overview of the Grayscale Simulation of SEC Imaging 22
2.2.2 Color-Variance–Preserving Perceptual Luminance Grayscale Transformation 23
2.2.3 PCA-Based Grayscale Projection 24
2.2.4 Grayscale in Single Cell Segmentation 27
2.3 Merging Algorithm for Parent–Daughter Cell Associations 29
2.3.1 Overview of the Merging Algorithm 29
2.3.2 Identification of Merge Areas Based on Spatial, Color, and Biological Constraints 30
2.3.3 Validation of the Accuracy of the Merge Code 39
2.3.4 Exclusion Criteria for Ambiguous Merge Regions 41
2.4 Cell Division Count Recognition Model 42
2.4.1 Statistical Analysis of Morphological Differences Across Cell Division Groups 42
2.4.2 U-Net-Based Model for Cell Division Count Prediction 43
2.4.3 Vision Transformer Model for Cell Division Count Prediction 48
2.4.4 DenseNet Model for Cell Division Count Prediction 62
2.4.5 Advanced Training and Evaluation Strategies 77
2.5 Hardware/Environment and Software Version 80
Chapter 3 Results 81
3.1 Comparison of Single-Cell Segmentation Performance under Different Training Strategies 81
3.2 Effect of Different Grayscale Conversion Methods on Single-Cell Segmentation Performance 91
3.3 Validation of a High-Accuracy Single-Cell Mask Merging Algorithm 96
3.4 Statistical Analysis of Morphological Features across Different Cell Division States 106
3.5 Comparison of Deep Learning Architectures for Cell Division Recognition 111
Chapter 4 Discussion 132
4.1 Interpretation of the SEC Segmentation Performance 132
4.2 Discussion of the Cell-Cluster Merging Strategy 132
4.3 Implications of Grayscale and PCA-Based Image Representations 134
4.4 Discussion of Cell Division Count Prediction 135
4.5 Future Directions 137
Chapter 5 Conclusion 138
REFERENCES 140
APPENDIX 144
-
dc.language.iso  en  -
dc.subject  斑馬魚  -
dc.subject  表層表皮細胞  -
dc.subject  細胞分裂分類  -
dc.subject  影像分割  -
dc.subject  合併演算法  -
dc.subject  深度學習  -
dc.subject  視覺化類別激活圖  -
dc.subject  形態特徵  -
dc.subject  Zebrafish  -
dc.subject  Superficial Epidermal Cells  -
dc.subject  Cell Division Classification  -
dc.subject  Image Segmentation  -
dc.subject  Merge Algorithm  -
dc.subject  Deep Learning  -
dc.subject  Grad-CAM  -
dc.subject  Morphological Features  -
dc.title  斑馬魚胚胎表皮細胞影像辨識與細胞分裂分析  zh_TW
dc.title  Image Recognition and Cell Division Analysis of Zebrafish Embryonic Epidermal Cells  en
dc.type  Thesis  -
dc.date.schoolyear  114-2  -
dc.description.degree  碩士 (Master's)  -
dc.contributor.oralexamcommittee  周佳靓;陳振輝  zh_TW
dc.contributor.oralexamcommittee  Chia-Ching Chou;CHEN-HUI CHEN  en
dc.subject.keyword  斑馬魚,表層表皮細胞,細胞分裂分類,影像分割,合併演算法,深度學習,視覺化類別激活圖,形態特徵  zh_TW
dc.subject.keyword  Zebrafish,Superficial Epidermal Cells,Cell Division Classification,Image Segmentation,Merge Algorithm,Deep Learning,Grad-CAM,Morphological Features  en
dc.relation.page  145  -
dc.identifier.doi  10.6342/NTU202600910  -
dc.rights.note  Authorized (publicly available worldwide)  -
dc.date.accepted  2026-04-07  -
dc.contributor.author-college  電機資訊學院 (College of Electrical Engineering and Computer Science)  -
dc.contributor.author-dept  生醫電子與資訊學研究所 (Graduate Institute of Biomedical Electronics and Bioinformatics)  -
dc.date.embargo-lift  2031-04-06  -
Appears in collections: Graduate Institute of Biomedical Electronics and Bioinformatics

Files in this item:
File  Size  Format
ntu-114-2.pdf (publicly available online after 2031-04-06)  4.81 MB  Adobe PDF