NTU Theses and Dissertations Repository

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89915
Full metadata record
dc.contributor.advisor: 陳祝嵩 (zh_TW)
dc.contributor.advisor: Chu-Song Chen (en)
dc.contributor.author: 魏廷芸 (zh_TW)
dc.contributor.author: Ting-Yun Wei (en)
dc.date.accessioned: 2023-09-22T16:39:45Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-09-22
dc.date.issued: 2023
dc.date.submitted: 2023-08-09
dc.identifier.citation: [1] R. Aljundi, P. Chakravarty, and T. Tuytelaars. Expert gate: Lifelong learning with a network of experts. In CVPR, pages 3366–3375, 2017.
[2] R. Aljundi, K. Kelchtermans, and T. Tuytelaars. Task-free continual learning. In CVPR, pages 11254–11263, 2019.
[3] J. Bang, H. Kim, Y. Yoo, J.-W. Ha, and J. Choi. Rainbow memory: Continual learning with a memory of diverse samples. In CVPR, pages 8218–8227, 2021.
[4] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. A. Raffel. MixMatch: A holistic approach to semi-supervised learning. NeurIPS, 32, 2019.
[5] F. Cermelli, A. Geraci, D. Fontanel, and B. Caputo. Modeling missing annotations for incremental learning in object detection. In CVPRW, pages 3700–3710, 2022.
[6] A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny. Efficient lifelong learning with A-GEM. In ICLR, 2019.
[7] B. Chen, W. Chen, S. Yang, Y. Xuan, J. Song, D. Xie, S. Pu, M. Song, and Y. Zhuang. Label matching semi-supervised object detection. In CVPR, pages 14381–14390, 2022.
[8] T.-T. Chuang, T.-Y. Wei, Y.-H. Hsieh, C.-S. Chen, and H.-F. Yang. Continual cell instance segmentation of microscopy images. In ICASSP, pages 1–5. IEEE, 2023.
[9] M. De Lange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. IEEE TPAMI, 44(7):3366–3385, 2021.
[10] J. Dong, L. Wang, Z. Fang, G. Sun, S. Xu, X. Wang, and Q. Zhu. Federated class-incremental learning. In CVPR, pages 10164–10173, June 2022.
[11] N. Dvornik, J. Mairal, and C. Schmid. Modeling visual context is key to augmenting object detection datasets. In ECCV, pages 364–380, 2018.
[12] D. Dwibedi, I. Misra, and M. Hebert. Cut, paste and learn: Surprisingly easy synthesis for instance detection. In ICCV, pages 1301–1310, 2017.
[13] G. Ghiasi, Y. Cui, A. Srinivas, R. Qian, T.-Y. Lin, E. D. Cubuk, Q. V. Le, and B. Zoph. Simple copy-paste is a strong data augmentation method for instance segmentation. In CVPR, pages 2918–2928, 2021.
[14] Y. Gu, C. Deng, and K. Wei. Class-incremental instance segmentation via multi-teacher networks. In AAAI, volume 35, pages 1478–1486, 2021.
[15] A. Gupta, R. Gupta, S. Gehlot, and S. Goswami. SegPC-2021: Segmentation of multiple myeloma plasma cells in microscopic images, 2021.
[16] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, pages 2961–2969, 2017.
[17] M. Kang, J. Park, and B. Han. Class-incremental learning by knowledge distillation with adaptive feature consolidation. In CVPR, 2022.
[18] B. Kim, J. Jeong, D. Han, and S. J. Hwang. The devil is in the points: Weakly semi-supervised instance segmentation via point-guided mask representation. In CVPR, pages 11360–11370, 2023.
[19] S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. ICLR, 2016.
[20] Z. Li and D. Hoiem. Learning without forgetting. IEEE TPAMI, 40(12):2935–2947, 2017.
[21] Y. Liu, Y. Su, A.-A. Liu, B. Schiele, and Q. Sun. Mnemonics training: Multi-class incremental learning without forgetting. In CVPR, pages 12245–12254, 2020.
[22] Y.-C. Liu, C.-Y. Ma, Z. He, C.-W. Kuo, K. Chen, P. Zhang, B. Wu, Z. Kira, and P. Vajda. Unbiased teacher for semi-supervised object detection. ICLR, 2021.
[23] Z. Luo, Y. Liu, B. Schiele, and Q. Sun. Class-incremental exemplar compression for class-incremental learning. In CVPR, pages 11371–11380, June 2023.
[24] A. Mallya, D. Davis, and S. Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In ECCV, pages 67–82, 2018.
[25] A. Mallya and S. Lazebnik. PackNet: Adding multiple tasks to a single network by iterative pruning. In CVPR, pages 7765–7773, 2018.
[26] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental classifier and representation learning. In CVPR, pages 2001–2010, 2017.
[27] K. Saito, P. Hu, T. Darrell, and K. Saenko. Learning to detect every thing in an open world. In ECCV, pages 268–284. Springer, 2022.
[28] K. Sohn, D. Berthelot, N. Carlini, Z. Zhang, H. Zhang, C. A. Raffel, E. D. Cubuk, A. Kurakin, and C.-L. Li. FixMatch: Simplifying semi-supervised learning with consistency and confidence. NeurIPS, 33:596–608, 2020.
[29] W. Sun, Q. Li, J. Zhang, W. Wang, and Y.-a. Geng. Decoupling learning and remembering: A bilevel memory framework with knowledge projection for task-incremental learning. In CVPR, June 2023.
[30] A. Tarvainen and H. Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. NeurIPS, 30, 2017.
[31] Z. Wang, Y. Li, and S. Wang. Noisy boundaries: Lemon or lemonade for semi-supervised instance segmentation? In CVPR, pages 16826–16835, 2022.
[32] T.-Y. Wu, G. Swaminathan, Z. Li, A. Ravichandran, N. Vasconcelos, R. Bhotika, and S. Soatto. Class-incremental learning with strong pre-trained models. In CVPR, pages 9601–9610, 2022.
[33] Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
[34] Q. Xie, Z. Dai, E. Hovy, T. Luong, and Q. Le. Unsupervised data augmentation for consistency training. NeurIPS, 33:6256–6268, 2020.
[35] S. Yan, J. Xie, and X. He. DER: Dynamically expandable representation for class incremental learning. In CVPR, 2021.
[36] Y. Zhou, X. Wang, J. Jiao, T. Darrell, and F. Yu. Learning saliency propagation for semi-supervised instance segmentation. In CVPR, pages 10307–10316, 2020.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89915
dc.description.abstract: 由於病理圖像的標註需要高度專業化,標註成本非常高,而且全標註病理數據集的規模通常遠小於自然圖像數據集。在本文中,我們提出了一個基於Mask R-CNN的實例分割器。為了減少標註的工作量,我們考慮了一種情境,即數據集只有部分標註,並提供了一些額外的未標註數據。在這種情況下,我們發現通過複製黏貼合成數據和BackErase增強,部分標記的數據可以更適當且有效地在訓練階段中被利用。此外,我們將半監督和持續性學習的蒸餾技術結合到一個框架中,使得我們的模型可以同時使用(部分)標記和未標記的數據來持續學習新的數據。我們在SegPC-2021病理學數據集上進行了實驗,結果顯示我們的方法能充分利用所有可用的數據,從而提高了性能。 (zh_TW)
dc.description.abstract: Annotating pathological images requires a high degree of expertise, so annotation is costly and fully annotated pathology datasets are usually much smaller than natural-image datasets. In this study, we introduce an instance segmentation model built on Mask R-CNN. To reduce annotation effort, we consider a scenario in which a dataset is only partially labeled and some extra unlabeled data are provided. Under this scenario, we find that copy-paste synthetic data and BackErase augmentation allow the partially labeled data to be used more effectively during training. In addition, we combine distillation techniques from semi-supervised and incremental learning into one framework, so that our model can continually learn from new data using (partially) labeled and unlabeled data simultaneously. Experiments on the SegPC-2021 pathology dataset demonstrate that our approach makes full use of all available data and improves performance. (en)
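The two data-utilization ideas named in the abstract can be sketched concretely. The following Python/NumPy snippet is a minimal illustration of copy-paste synthesis (pasting annotated instances into another image) and a BackErase-style transform (erasing everything outside the known instance masks so that unannotated objects in a partially labeled image do not act as false negatives); the function names, the boolean-mask representation, the pasting probability, and the background fill value are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def copy_paste(src_img, src_masks, dst_img, dst_masks, paste_prob=0.5, rng=None):
    """Paste annotated instances from a source image onto a destination image.

    src_masks / dst_masks: lists of boolean (H, W) arrays, one per instance.
    Returns the composited image and the merged list of instance masks.
    """
    rng = rng if rng is not None else np.random.default_rng()
    out_img = dst_img.copy()
    out_masks = [m.copy() for m in dst_masks]
    for mask in src_masks:
        if rng.random() > paste_prob:               # paste each instance independently
            continue
        out_img[mask] = src_img[mask]               # copy the instance's pixels
        out_masks = [m & ~mask for m in out_masks]  # occlude covered destination regions
        out_masks.append(mask.copy())
    return out_img, [m for m in out_masks if m.any()]

def back_erase(img, masks, fill_value=0):
    """BackErase-style augmentation: keep only the pixels covered by the known
    instance masks and erase the unannotated background."""
    keep = np.zeros(img.shape[:2], dtype=bool)
    for m in masks:
        keep |= m
    out = np.full_like(img, fill_value)
    out[keep] = img[keep]
    return out
```

In a Mask R-CNN training pipeline (for example Detectron2, which appears in the reference list), transforms like these would be applied while assembling each training batch; the distillation and pseudo-labeling components described in the abstract then provide supervision for the unlabeled or erased regions.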
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-22T16:39:45Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2023-09-22T16:39:45Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents: Verification Letter from the Oral Examination Committee i
Acknowledgements ii
摘要 iii
Abstract iv
Contents vi
List of Figures viii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Related Work 4
2.1 Instance Segmentation 4
2.2 Incremental Learning 4
2.3 Incremental Learning for Instance Segmentation 6
2.4 Semi-Supervised Learning 7
2.5 Semi-Supervised Instance Segmentation 9
2.6 Augmentation for Instance Segmentation 9
Chapter 3 Method 12
3.1 Problem Formulation 12
3.2 Semi-Supervised Incremental Learning Framework 13
3.2.1 Output-level Distillation 16
3.2.2 Feature-level Distillation 20
3.2.3 Pseudo Labeling 20
3.2.4 Overall Learning Objective 21
3.3 Copy-Paste Synthetic Data 22
3.4 BackErase Augmentation 23
Chapter 4 Result and Discussion 26
4.1 SegPC-2021 Dataset 26
4.2 Implementation Details 27
4.3 Results on the SegPC-2021 Dataset 27
4.4 Ablation Study 31
Chapter 5 Conclusion 32
References 33
dc.language.iso: en
dc.subject: 醫學影像 (zh_TW)
dc.subject: 持續性學習 (zh_TW)
dc.subject: 半監督學習 (zh_TW)
dc.subject: 部分標註資料集 (zh_TW)
dc.subject: 實例分割 (zh_TW)
dc.subject: Semi-supervised learning (en)
dc.subject: Incremental learning (en)
dc.subject: Medical image (en)
dc.subject: Instance segmentation (en)
dc.subject: Partially annotated dataset (en)
dc.title: 應用於部分標註顯微鏡影像之半監督持續性學習實例分割器 (zh_TW)
dc.title: Semi-Supervised Incremental Partially-Annotated Instance Segmentation For Microscopy Images (en)
dc.type: Thesis
dc.date.schoolyear: 111-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 楊惠芳;黃文良 (zh_TW)
dc.contributor.oralexamcommittee: Huei-Fang Yang; Wen-Liang Hwang (en)
dc.subject.keyword: 持續性學習, 半監督學習, 部分標註資料集, 實例分割, 醫學影像 (zh_TW)
dc.subject.keyword: Incremental learning, Semi-supervised learning, Partially annotated dataset, Instance segmentation, Medical image (en)
dc.relation.page: 36
dc.identifier.doi: 10.6342/NTU202302031
dc.rights.note: 同意授權(限校園內公開) (authorization granted; access limited to campus)
dc.date.accepted: 2023-08-11
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊工程學系 (Department of Computer Science and Information Engineering)
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File: ntu-111-2.pdf (9.12 MB, Adobe PDF)
Access is limited to NTU campus IP addresses (off-campus users: please connect through the NTU VPN service).