Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88669
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 林軒田 | zh_TW |
dc.contributor.advisor | Hsuan-Tien Lin | en |
dc.contributor.author | 林瑋毅 | zh_TW |
dc.contributor.author | Wei-I Lin | en |
dc.date.accessioned | 2023-08-15T17:18:01Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-08-15 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-02 | - |
dc.identifier.citation | [1] Y.-T. Chou, G. Niu, H.-T. Lin, and M. Sugiyama. Unbiased risk estimators can mislead: A case study of learning with complementary labels. In International Conference on Machine Learning, pages 1929–1938. PMLR, 2020.
[2] Y. Gao and M.-L. Zhang. Discriminative complementary-label learning with weighted loss. In International Conference on Machine Learning, pages 3587–3597. PMLR, 2021.
[3] T. Ishida, G. Niu, W. Hu, and M. Sugiyama. Learning from complementary labels. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 5644–5654, 2017.
[4] T. Ishida, G. Niu, A. Menon, and M. Sugiyama. Complementary-label learning for arbitrary losses and models. In International Conference on Machine Learning, pages 2971–2980. PMLR, 2019.
[5] M. Kull and P. Flach. Novel decompositions of proper scoring rules for classification: Score adjustment as precursor to calibration. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 68–85. Springer, 2015.
[6] X. Li, T. Liu, B. Han, G. Niu, and M. Sugiyama. Provably end-to-end label-noise learning without anchor points. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 6403–6413. PMLR, 18–24 Jul 2021.
[7] J. Liu, H. Hang, B. Wang, B. Li, H. Wang, Y. Tian, and Y. Shi. GAN-CL: Generative adversarial networks for learning from complementary labels. IEEE Transactions on Cybernetics, 2021.
[8] D.-B. Wang, L. Feng, and M.-L. Zhang. Learning from complementary labels via partial-output consistency regularization. In IJCAI, pages 3075–3081, 2021.
[9] H.-H. Wang, W.-I. Lin, and H.-T. Lin. CLCIFAR: CIFAR-derived benchmark datasets with human annotated complementary labels. arXiv preprint arXiv:2305.08295, 2023.
[10] J. Wei, Z. Zhu, H. Cheng, T. Liu, G. Niu, and Y. Liu. Learning with noisy labels revisited: A study using real-world human annotations. In International Conference on Learning Representations, 2021.
[11] R. C. Williamson, E. Vernet, and M. D. Reid. Composite multiclass losses. Journal of Machine Learning Research, 17(222):1–52, 2016.
[12] Y. Xu, M. Gong, J. Chen, T. Liu, K. Zhang, and K. Batmanghelich. Generative-discriminative complementary learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6526–6533, 2020.
[13] X. Yu, T. Liu, M. Gong, and D. Tao. Learning with biased complementary labels. In Proceedings of the European Conference on Computer Vision (ECCV), pages 68–83, 2018.
[14] Y. Zhang, F. Liu, Z. Fang, B. Yuan, G. Zhang, and J. Lu. Learning from a complementary-label source domain: Theory and algorithms. IEEE Transactions on Neural Networks and Learning Systems, 2021.
[15] Z.-H. Zhou. A brief introduction to weakly supervised learning. National Science Review, 5(1):44–53, 2018. | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88669 | - |
dc.description.abstract | 互補標籤學習 (Complementary-Label Learning, CLL) 是一個弱監督學習問題,其目標在於僅從互補標籤 (Complementary Labels) 訓練出一個分類器,其中互補標籤指的是某筆資料「不」屬於的類別。已知方法的主要思想是將此問題化約成一般的分類問題,並設計特殊的轉換以及代理損失函數使互補標籤可以與一般的分類問題連結,但這類的方法卻有一些缺點,例如容易過度擬合。在此論文中,我們設計一個新的框架「化約成互補標籤的分布估計」以避開先前方法可能有的缺點。我們證明了準確地估計互補標籤的分布再加上一個簡單的解碼即可準確地分類未見過的資料。這個框架更可以解釋一些先前互補標籤學習的重要方法,並使它們在有雜訊的資料集中變得更穩健。此外,這個框架揭示了機率估計的準確度能夠用來驗證模型的準確度。由於此框架以機率估計為基礎,因此不論是深度模型或是傳統方法都能在此框架下進行互補標籤學習。我們同時以實驗驗證此框架在不同情境下皆有一定的準確度以及穩健性。最後,我們也收集、分析並公開了一個由真實人類標記,而非人工生成的互補標籤資料集:CLCIFAR。 | zh_TW |
dc.description.abstract | Complementary-Label Learning (CLL) is a weakly supervised learning problem that aims to learn a multi-class classifier from only complementary labels, each of which indicates a class to which an instance does not belong. Existing approaches mainly adopt the paradigm of reduction to ordinary classification, applying specific transformations and surrogate losses to connect CLL back to ordinary classification. Those approaches, however, face several limitations, such as a tendency to overfit. In this thesis, we sidestep those limitations with a novel perspective: reduction to probability estimates of complementary classes. We prove that accurate probability estimates of complementary labels lead to good classifiers through a simple decoding step. The proof establishes a reduction framework from CLL to probability estimates. The framework explains several key CLL approaches as its special cases and allows us to design an improved algorithm that is more robust in noisy environments. The framework also suggests a validation procedure based on the quality of probability estimates, offering a way to validate models with only complementary labels. The flexible framework opens a wide range of unexplored opportunities for using deep and non-deep models for probability estimation to solve CLL. Empirical experiments further verify the framework's efficacy and robustness in various settings. To further analyze the properties of complementary labels in the real world, a CIFAR-based complementary dataset, CLCIFAR, was also collected, analyzed, and released publicly. | en |
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-15T17:18:01Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-08-15T17:18:01Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Acknowledgements i
摘要 iii
Abstract iv
Contents vi
List of Figures viii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Problem Setup 4
2.1 Ordinary-label learning 4
2.2 Complementary-label learning 5
Chapter 3 Proposed Framework 7
3.1 Motivation 7
3.2 Methodology 8
3.3 Connection to Previous Methods 13
Chapter 4 Experiments 15
4.1 Experiment Setup 15
4.2 Discussion 17
4.3 Learn from CL with Traditional Methods 18
4.4 Comparison of Validation Processes 19
Chapter 5 CLCIFAR: Human-Annotated Complementary Datasets 21
5.1 Motivation 21
5.2 Data Collection Protocol 22
5.3 Preliminary Dataset Analysis 23
Chapter 6 Conclusion 27
References 28
Appendix A — Proofs 31
A.1 Proof of Proposition 3.2.1 31
A.2 Proof of Proposition 3.2.2 31
A.3 Proof of Corollary 3.2.3 32
Appendix B — Details of the Connections between Proposed Framework and Previous Methods 34
Appendix C — Experiment Details 40
C.1 Setup 40
C.2 Additional Results 41
Appendix D — Details of CLCIFAR20 44
D.1 Label names of CLCIFAR20 44 | -
dc.language.iso | en | - |
dc.title | 從互補標籤學習化約至機率估計 | zh_TW |
dc.title | Reduction from Complementary-Label Learning to Probability Estimates | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | 碩士 (Master's) | - |
dc.contributor.oralexamcommittee | 林守德;李宏毅;孫紹華 | zh_TW |
dc.contributor.oralexamcommittee | Shou-De Lin;Hung-Yi Lee;Shao-Hua Sun | en |
dc.subject.keyword | 互補標籤學習,弱監督學習,化約,監督式學習,機器學習 | zh_TW |
dc.subject.keyword | Complementary-Label Learning,Weakly Supervised Learning,Reduction,Supervised Learning,Machine Learning | en |
dc.relation.page | 44 | - |
dc.identifier.doi | 10.6342/NTU202302414 | - |
dc.rights.note | Authorized (worldwide public access) | - |
dc.date.accepted | 2023-08-04 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 資訊工程學系 | - |
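The abstract above hinges on a decoding step that turns estimated probabilities of complementary labels back into ordinary-label predictions. Since the thesis text itself is not reproduced in this record, the following is only a minimal sketch of one plausible instantiation: it assumes a known transition matrix `T` (with `T[i, j]` as the probability of observing complementary label `j` given true label `i`) and an L1 nearest-row decoder; both the function names and the choice of distance are illustrative assumptions, not necessarily the thesis's exact algorithm.

```python
import numpy as np

def decode(q_hat: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Map estimated complementary-label probabilities to class predictions.

    q_hat: (n, K) array; q_hat[m, j] estimates P(complementary label = j | x_m).
    T:     (K, K) assumed transition matrix; T[i, j] = P(complementary = j | true = i).

    Assumed decoder: for each instance, pick the class whose row of T is
    closest in L1 distance to the estimated complementary distribution.
    """
    # distances[m, i] = || q_hat[m] - T[i] ||_1, computed via broadcasting
    distances = np.abs(q_hat[:, None, :] - T[None, :, :]).sum(axis=-1)
    return distances.argmin(axis=1)

# Toy usage: K = 3 classes, uniform complementary-label generation.
K = 3
T = (np.ones((K, K)) - np.eye(K)) / (K - 1)  # zero diagonal, uniform off-diagonal
q_hat = np.array([[0.1, 0.5, 0.4]])          # one instance's probability estimate
print(decode(q_hat, T))                      # [0]: class with the smallest q_hat entry
```

With the uniform transition matrix shown, this decoder reduces to predicting the class with the smallest estimated complementary probability, matching the intuition that the least likely complementary class is the most likely ordinary class.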
Appears in Collections: | 資訊工程學系
Files in This Item:
File | Size | Format |
---|---|---|
ntu-111-2.pdf | 2.38 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.