Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/67383
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 徐宏民(Winston H. Hsu) | |
dc.contributor.author | Min-Sheng Wu | en |
dc.contributor.author | 吳旻昇 | zh_TW |
dc.date.accessioned | 2021-06-17T01:30:03Z | - |
dc.date.available | 2020-08-24 | |
dc.date.copyright | 2020-08-24 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-08-17 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/67383 | - |
dc.description.abstract | 精準地分割出單一細胞或細胞核對於數位病理影像分析是至關重要的環節。近年來歸功於深度神經網路的進步,數位病理影像實例分割(instance segmentation)已取得大幅度的進展。然而,傳統的深度學習網路無法在從沒見過的新染色型態影像上順利地辨認物體。再者,對於醫療或生物科學領域,蒐集巨量的標註資料往往耗時且費工。為了解決這些挑戰,我們提出一個基於類別無關(class-agnostic)小樣本學習(few-shot learning)的方法來執行數位病理影像的實例分割任務:給定一個屬於任意一種染色型態的目標(target)影像,只需額外提供一個或少量的參考(reference)影像作為引導,這些參考影像具有與目標影像同樣染色型態的細胞或細胞核,我們的模型就能夠透過類別無關(class-agnostic)的方式單獨地分割出目標影像中相對應的細胞或細胞核。我們提出的模型利用多尺度(multi-scale)的共注意(co-attention)機制以及 class-agnostic relation R-CNN 來取得目標影像與參考影像的多尺度相關特徵。另外,我們也使用可變形卷積(deformable convolution)以捕捉細胞排列的幾何結構。我們更廣泛地蒐羅 11 個公開的數位病理影像資料集以建構出新的小樣本實例分割資料集。實驗結果顯示我們的小樣本學習方法勝過先前的方法而達到當前最優的結果。此外,我們的方法在性能表現上也拉近只需部分資料訓練的小樣本學習模型與使用全部資料訓練的傳統深度學習模型之間的差距。這些令人信服的結果揭示了我們的方法的優勢:只需提供少量標註的樣本便能有效地對數位病理影像進行實例分割,大幅減輕人為標註的負擔。 | zh_TW |
dc.description.abstract | Accurately segmenting individual cells or nuclei is a crucial step in digital pathological image analysis. In recent years, substantial progress has been made thanks to advances in deep neural networks. However, conventional deep networks cannot reliably recognize objects in staining modalities they have never seen, and in the medical and bio-scientific domains, collecting large amounts of annotated data is time-consuming and labor-intensive. To address these challenges, we present a class-agnostic few-shot learning approach for instance segmentation of digital pathological images: given a target image from any staining modality and one or a few reference images of the same staining modality as guidance, our model segments the corresponding cells or nuclei in the target image in a class-agnostic manner. We design a multi-scale co-attention mechanism and a class-agnostic relation R-CNN to exploit correlated multi-scale features between the target and reference images, and use deformable convolution to capture the geometric structure of cell arrangements (a minimal illustrative co-attention sketch follows the metadata table below). Moreover, we establish a new few-shot instance segmentation dataset by gathering 11 public digital pathological image datasets. Experimental results demonstrate that our method outperforms prior approaches and achieves state-of-the-art performance. Furthermore, our approach narrows the gap between the few-shot model trained on partial data and the oracle model trained on all fully labelled data. These convincing results reveal the advantage of our approach, which performs cell instance segmentation with very few annotations and thereby mitigates the burden of manual annotation. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T01:30:03Z (GMT). No. of bitstreams: 1 U0001-1608202000145200.pdf: 13855006 bytes, checksum: 2cba2dcb10538b2ebc779f3d8dfbb999 (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents | 誌謝 iii 摘要 iv Abstract v 1 Introduction 1 2 Related Works 5 2.1 Cell Instance Segmentation 5 2.2 Few-shot Learning 6 3 Method 7 3.1 Problem Definition 7 3.2 Overall Architecture 8 4 Experiments 14 4.1 Dataset 14 4.2 Dataset Splitting 15 4.3 Target-reference Pairs Generation 15 4.4 Network Settings 16 5 Results 17 6 Discussion 21 7 Conclusion 25 Bibliography 26 | |
dc.language.iso | en | |
dc.title | 基於類別無關小樣本學習之數位病理影像實例分割 | zh_TW |
dc.title | Class-agnostic Few-shot Instance Segmentation of Digital Pathological Images | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-2 | |
dc.description.degree | 碩士 | |
dc.contributor.oralexamcommittee | 陳文進(WC Chen),葉梅珍(Mei-Chen Yeh),蘇東弘(Tung-Hung Su),李志國(Chih-Kuo Lee) | |
dc.subject.keyword | 小樣本學習, 實例分割, 數位病理影像 | zh_TW |
dc.subject.keyword | Few-shot Learning, Instance Segmentation, Pathological Images | en |
dc.relation.page | 29 | |
dc.identifier.doi | 10.6342/NTU202003549 | |
dc.rights.note | 有償授權 | |
dc.date.accepted | 2020-08-19 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
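The abstract above describes a multi-scale co-attention mechanism that correlates target-image and reference-image features before a class-agnostic relation R-CNN produces the instance masks. Below is a minimal, hedged sketch of what a single-scale co-attention block of this kind could look like in PyTorch; the module name `CoAttention`, the tensor shapes, the scaled dot-product formulation, and the residual fusion are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn


class CoAttention(nn.Module):
    """Let target-image features attend to reference-image features (illustrative sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, target_feat: torch.Tensor, reference_feat: torch.Tensor) -> torch.Tensor:
        # target_feat: (B, C, Ht, Wt); reference_feat: (B, C, Hr, Wr)
        b, c, ht, wt = target_feat.shape
        q = self.query(target_feat).flatten(2).transpose(1, 2)      # (B, Ht*Wt, C)
        k = self.key(reference_feat).flatten(2)                     # (B, C, Hr*Wr)
        v = self.value(reference_feat).flatten(2).transpose(1, 2)   # (B, Hr*Wr, C)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)              # (B, Ht*Wt, Hr*Wr)
        context = (attn @ v).transpose(1, 2).reshape(b, c, ht, wt)  # (B, C, Ht, Wt)
        # Residual fusion: inject reference-conditioned context into the target features.
        return target_feat + context


# Example usage at a single feature-pyramid level (shapes assumed for illustration).
coatt = CoAttention(channels=256)
target = torch.randn(1, 256, 64, 64)      # target-image FPN features
reference = torch.randn(1, 256, 32, 32)   # reference-image FPN features
fused = coatt(target, reference)          # (1, 256, 64, 64), fed to the detection head
```

In a multi-scale setting, a block like this would typically be applied at each feature-pyramid level before the region proposal and relation heads, so that detection is conditioned on the staining modality shown in the reference image.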
Appears in Collections: | 資訊網路與多媒體研究所
Files in this item:
File | Size | Format |
---|---|---|
U0001-1608202000145200.pdf (currently not authorized for public access) | 13.53 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.