Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94457

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 陳祝嵩 | zh_TW |
| dc.contributor.advisor | Chu-Song Chen | en |
| dc.contributor.author | 蔡順先 | zh_TW |
| dc.contributor.author | Shun-Xian Cai | en |
| dc.date.accessioned | 2024-08-16T16:10:03Z | - |
| dc.date.available | 2024-08-17 | - |
| dc.date.copyright | 2024-08-16 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-13 | - |
| dc.identifier.citation | M. Amgad, H. Elfandy, H. Hussein, L. A. Atteya, M. A. Elsebaie, L. S. Abo Elnasr, R. A. Sakr, H. S. Salem, A. F. Ismail, A. M. Saad, et al. Structured crowdsourcing enables convolutional segmentation of histology images. Bioinformatics, 35(18):3461–3467, 2019.<br>Y. B. Can, K. Chaitanya, B. Mustafa, L. M. Koch, E. Konukoglu, and C. F. Baumgartner. Learning to segment medical images with scribble-supervision alone. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: DLMIA 2018 and ML-CDS 2018, held in conjunction with MICCAI 2018, pages 236–244. Springer, 2018.<br>J. Chen, Y. Lu, Q. Yu, X. Luo, E. Adeli, Y. Wang, L. Lu, A. L. Yuille, and Y. Zhou. TransUNet: Transformers make strong encoders for medical image segmentation, 2021.<br>Y.-J. Chen, X. Hu, Y. Shi, and T.-Y. Ho. AME-CAM: Attentive multiple-exit CAM for weakly supervised segmentation on MRI brain tumor. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 173–182. Springer, 2023.<br>J. Cheng, J. Ye, Z. Deng, J. Chen, T. Li, H. Wang, Y. Su, Z. Huang, J. Chen, L. Jiang, H. Sun, J. He, S. Zhang, M. Zhu, and Y. Qiao. SAM-Med2D, 2023.<br>J. Fu, T. Lu, S. Zhang, and G. Wang. UM-CAM: Uncertainty-weighted multi-resolution class activation maps for weakly-supervised fetal brain segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 315–324. Springer, 2023.<br>J. Gamper, N. A. Koohbanani, K. Benes, S. Graham, M. Jahanifar, S. A. Khurram, A. Azam, K. Hewitt, and N. Rajpoot. PanNuke dataset extension, insights and baselines. arXiv preprint arXiv:2003.10778, 2020.<br>L. Grady. Random walks for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(11):1768–1783, 2006.<br>S. Graham, Q. D. Vu, S. E. A. Raza, A. Azam, Y. W. Tsang, J. T. Kwak, and N. Rajpoot. HoVer-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Medical Image Analysis, 58:101563, 2019.<br>P. He, A. Qu, S. Xiao, and M. Ding. DeTisSeg: A dual-encoder network for tissue semantic segmentation of histopathology image. Biomedical Signal Processing and Control, 87:105544, 2024.<br>F. Hörst, M. Rempe, L. Heine, C. Seibold, J. Keyl, G. Baldini, S. Ugurel, J. Siveke, B. Grünwald, J. Egger, et al. CellViT: Vision transformers for precise cell segmentation and classification. Medical Image Analysis, 94:103143, 2024.<br>H. Huang, L. Lin, R. Tong, H. Hu, Q. Zhang, Y. Iwamoto, X. Han, Y. Chen, and J. Wu. UNet 3+: A full-scale connected UNet for medical image segmentation. In ICASSP 2020 – IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1055–1059, 2020.<br>F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, and K. Maier-Hein. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18:203–211, 2020.<br>H. Kervadec, J. Dolz, M. Tang, E. Granger, Y. Boykov, and I. B. Ayed. Constrained-CNN losses for weakly supervised segmentation. Medical Image Analysis, 54:88–99, 2019.<br>A. R. Khan and A. Khan. MaxViT-UNet: Multi-axis attention for medical image segmentation. arXiv preprint arXiv:2305.08396, 2023.<br>A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár. Panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9404–9413, 2019.<br>H. Lee and W.-K. Jeong. Scribble2Label: Scribble-supervised cell segmentation via self-generating pseudo-labels with consistency. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, Part I, pages 14–23. Springer, 2020.<br>D. Lin, J. Dai, J. Jia, K. He, and J. Sun. ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3159–3167, 2016.<br>X. Luo, M. Hu, W. Liao, S. Zhai, T. Song, G. Wang, and S. Zhang. Scribble-supervised medical image segmentation via dual-branch network and dynamically mixed pseudo labels supervision. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2022, pages 528–538, Cham, 2022. Springer Nature Switzerland.<br>K. Nishimura and R. Bise. Weakly supervised cell-instance segmentation with two types of weak labels by single instance pasting. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3185–3194, 2023.<br>K. Nishimura, C. Wang, K. Watanabe, R. Bise, et al. Weakly supervised cell instance segmentation under various conditions. Medical Image Analysis, 73:102182, 2021.<br>H. Qu, P. Wu, Q. Huang, J. Yi, G. M. Riedlinger, S. De, and D. N. Metaxas. Weakly supervised deep nuclei segmentation using points annotation in histopathology images. In Proceedings of the 2nd International Conference on Medical Imaging with Deep Learning, volume 102 of Proceedings of Machine Learning Research, pages 390–400. PMLR, 2019.<br>H. Qu, P. Wu, Q. Huang, J. Yi, Z. Yan, K. Li, G. M. Riedlinger, S. De, S. Zhang, and D. N. Metaxas. Weakly supervised deep nuclei segmentation using partial points annotation in histopathology images. IEEE Transactions on Medical Imaging, 39(11):3655–3666, 2020.<br>O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. Springer International Publishing.<br>C. Rother, V. Kolmogorov, and A. Blake. "GrabCut": Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG), 23(3):309–314, 2004.<br>K. Sofiiuk, I. A. Petrov, and A. Konushin. Reviving iterative training with mask guidance for interactive segmentation. In 2022 IEEE International Conference on Image Processing (ICIP), pages 3141–3145. IEEE, 2022.<br>R. Strudel, R. Garcia, I. Laptev, and C. Schmid. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7262–7272, 2021.<br>M. Tang, F. Perazzi, A. Djelouah, I. Ben Ayed, C. Schroers, and Y. Boykov. On regularized losses for weakly-supervised CNN segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 507–522, 2018.<br>H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, pages 10347–10357. PMLR, 2021.<br>G. Valvano, A. Leo, and S. A. Tsaftaris. Learning to segment from scribbles using multi-scale adversarial attention gates. IEEE Transactions on Medical Imaging, 40(8):1990–2001, 2021.<br>R. Verma, N. Kumar, A. Patil, N. C. Kurian, S. Rane, S. Graham, Q. D. Vu, M. Zwager, S. E. A. Raza, N. Rajpoot, et al. MoNuSAC2020: A multi-organ nuclei segmentation and classification challenge. IEEE Transactions on Medical Imaging, 40(12):3413–3423, 2021.<br>V. Vezhnevets and V. Konouchine. GrowCut: Interactive multi-label N-D image segmentation by cellular automata. In Proc. of Graphicon, volume 1, pages 150–156. Citeseer, 2005.<br>G. Wang, M. A. Zuluaga, R. Pratt, M. Aertsen, T. Doel, M. Klusmann, A. L. David, J. Deprest, T. Vercauteren, and S. Ourselin. Slic-Seg: A minimally interactive segmentation of the placenta from sparse and motion-corrupted fetal MRI in multiple views. Medical Image Analysis, 34:137–147, 2016.<br>J. Wei, Y. Hu, S. Cui, S. K. Zhou, and Z. Li. WeakPolyp: You only look bounding box for polyp segmentation. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2023, pages 757–766, Cham, 2023. Springer Nature Switzerland.<br>H. E. Wong, M. Rakic, J. Guttag, and A. V. Dalca. ScribblePrompt: Fast and flexible interactive segmentation for any medical image. arXiv preprint arXiv:2312.07381, 2023.<br>Y. Xu, M. Zhou, Y. Feng, X. Xu, H. Fu, R. S. M. Goh, and Y. Liu. Minimal-supervised medical image segmentation via vector quantization memory. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 625–636. Springer, 2023.<br>I. Yoo, D. Yoo, and K. Paeng. PseudoEdgeNet: Nuclei segmentation only with point annotations. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, Part I, pages 731–739. Springer, 2019.<br>H. Zhang, L. Burrows, Y. Meng, D. Sculthorpe, A. Mukherjee, S. E. Coupland, K. Chen, and Y. Zheng. Weakly supervised segmentation with point annotations for histopathology images via contrast-based variational model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15630–15640, 2023.<br>T. Y. Zhang and C. Y. Suen. A fast parallel algorithm for thinning digital patterns. Communications of the ACM, 27(3):236–239, 1984.<br>M. Zhou, Z. Xu, K. Zhou, and R. K.-y. Tong. Weakly supervised medical image segmentation via superpixel-guided scribble walking and class-wise contrastive regularization. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 137–147. Springer, 2023.<br>Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang. UNet++: A nested U-Net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3–11, Cham, 2018. Springer International Publishing. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94457 | - |
| dc.description.abstract | 在影像切割任務中,通常都需要人工去花費很大的心力標注出像素級別的標籤,來執行全監督式的學習訓練。在醫學影像中,由於這樣的標注任務通常需要專業人員來對影像進行標注,因此標注的成本也隨之提升。為了減緩在醫學影像標注的成本,在這篇論文中我們提出了基於點監督或塗鴉監督訓練的影像切割模型。在第一階段中我們藉助了在大量醫學影像上預訓練過的segment anything model(SAM) 來協助從點或塗鴉的標籤產生出像素級別的偽標籤,並在其中加入了VQ memory bank來儲存特徵與類別的關係,協助產生更高品質的偽標籤。在第二階段為了擺脫SAM在使用時需要輸入點或塗鴉標籤的依賴,另外訓練一個切割模型來基於偽標籤以及點或塗鴉標籤做學習。此外,為了因應訓練資料會隨時間增加,我們也加入了持續性學習的架構。這樣就能夠隨著低成本的標籤數量增加,持續更新切割模型。在BCSS乳癌切割資料集的實驗結果以及在PanNuke, MoNuSAC細胞切割資料集上的實驗結果都證明了我們提出的方法的效果,與其他基於塗鴉標籤以及點標籤的方法相比,都有所提升。 | zh_TW |
| dc.description.abstract | Image segmentation tasks typically require substantial manual effort to produce pixel-level annotations for fully supervised learning. In medical imaging, such annotation usually must be done by trained professionals, which further raises the labeling cost. To alleviate this annotation burden, we introduce an image segmentation method based on point-supervised or scribble-supervised learning. In the first stage, the proposed approach uses a Segment Anything Model (SAM) pre-trained on a large amount of medical imaging data to help generate pixel-level pseudo labels from point or scribble annotations. In addition, we incorporate a vector-quantization memory bank that stores the relationship between features and classes, aiding the generation of higher-quality pseudo labels. In the second stage, to remove SAM's dependency on point or scribble inputs at inference time, we train a separate segmentation model on the pseudo labels together with the point or scribble annotations. Furthermore, to accommodate training data that grows over time, we incorporate a continual learning framework, allowing the segmentation model to be continuously updated as more low-cost labels become available. Experimental results on the BCSS breast cancer segmentation dataset and on the PanNuke and MoNuSAC nuclei segmentation datasets demonstrate the effectiveness of our method, showing improvements over other scribble-based and point-based approaches. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-16T16:10:03Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-16T16:10:03Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 摘要 i<br>Abstract ii<br>Contents iv<br>List of Figures vi<br>List of Tables vii<br>Chapter 1 Introduction 1<br>Chapter 2 Related Work 4<br>2.1 Pointly-supervised Segmentation 4<br>2.2 Scribble-supervised Segmentation 5<br>Chapter 3 Method 7<br>3.1 Overview 7<br>3.2 Stage 1: Generate Pseudo Label 7<br>3.2.1 SAM-Med2D [5] 8<br>3.2.2 VQ Memory Bank [36] 9<br>3.3 Stage 2: Train Segmentation Model 12<br>3.4 Continual Learning Framework 14<br>Chapter 4 Experiment 16<br>4.1 Datasets, Data Preparation 16<br>4.2 Performance Comparisons 17<br>4.3 Ablation Study 22<br>Chapter 5 Conclusion 24<br>References 25 | - |
| dc.language.iso | en | - |
| dc.subject | 塗鴉監督 | zh_TW |
| dc.subject | 影像切割 | zh_TW |
| dc.subject | 醫學影像 | zh_TW |
| dc.subject | 持續性 | zh_TW |
| dc.subject | 弱監督 | zh_TW |
| dc.subject | 點監督 | zh_TW |
| dc.subject | Scribble-supervised | en |
| dc.subject | Weakly-supervised | en |
| dc.subject | Continual Learning | en |
| dc.subject | Medical Image | en |
| dc.subject | Image Segmentation | en |
| dc.subject | Point-supervised | en |
| dc.title | 醫學影像分割的弱監督持續學習 | zh_TW |
| dc.title | Weakly-supervised Continual Learning for Medical Image Segmentation | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 楊惠芳;劉耿豪 | zh_TW |
| dc.contributor.oralexamcommittee | Huei-Fang Yang;Keng-Hao Liu | en |
| dc.subject.keyword | 弱監督,持續性,醫學影像,影像切割,點監督,塗鴉監督, | zh_TW |
| dc.subject.keyword | Weakly-supervised,Continual Learning,Medical Image,Image Segmentation,Point-supervised,Scribble-supervised, | en |
| dc.relation.page | 31 | - |
| dc.identifier.doi | 10.6342/NTU202402387 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2024-08-14 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資訊工程學系 | - |
| dc.date.embargo-lift | 2025-01-01 | - |
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf | 2.72 MB | Adobe PDF |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
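The abstract describes a vector-quantization memory bank that stores feature-to-class relationships to refine pseudo labels. The thesis's actual design is not given in this record, so the following is only a minimal sketch of the general idea (class and method names are hypothetical): annotated features are written into a memory, and an unlabeled feature is assigned the class of its nearest stored entry.

```python
import numpy as np

class VQMemoryBank:
    """Hypothetical sketch of a feature-class memory bank:
    feature vectors observed at annotated pixels are stored with
    their class labels, and a query feature is assigned the class
    of its nearest stored vector (Euclidean distance)."""

    def __init__(self):
        self.keys = []    # stored feature vectors
        self.labels = []  # class label for each stored vector

    def write(self, feature, label):
        # Record a (feature, class) pair from a point/scribble annotation.
        self.keys.append(np.asarray(feature, dtype=float))
        self.labels.append(label)

    def read(self, feature):
        # Nearest-neighbour lookup over the stored keys.
        feature = np.asarray(feature, dtype=float)
        dists = [np.linalg.norm(feature - k) for k in self.keys]
        return self.labels[int(np.argmin(dists))]

# Toy usage with 2-D "features" (illustrative class names only):
bank = VQMemoryBank()
bank.write([0.9, 0.1], "tumor")
bank.write([0.1, 0.8], "stroma")
print(bank.read([0.85, 0.2]))  # nearest stored entry is the "tumor" one
```

A real implementation would quantize features into a fixed-size codebook and update entries online rather than appending indefinitely; this sketch only illustrates the lookup step.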
