NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86979
Full metadata record (DC field: value [language]):
dc.contributor.advisor: 林軒田 [zh_TW]
dc.contributor.advisor: Hsuan-Tien Lin [en]
dc.contributor.author: 余友竹 [zh_TW]
dc.contributor.author: Yu-Chu Yu [en]
dc.date.accessioned: 2023-05-02T17:12:33Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-05-02
dc.date.issued: 2022
dc.date.submitted: 2023-01-07
dc.identifier.citation:
P. Bachman, O. Alsharif, and D. Precup. Learning with pseudo-ensembles. Advances in Neural Information Processing Systems, 27, 2014.
S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Vaughan. A theory of learning from different domains. Machine Learning, 79:151–175, 2010.
D. M. Chan, R. Rao, F. Huang, and J. F. Canny. GPU accelerated t-distributed stochastic neighbor embedding. Journal of Parallel and Distributed Computing, 131:1–13, 2019.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030, 2016.
Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. Advances in Neural Information Processing Systems, 17, 2004.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
G. Kang, L. Jiang, Y. Yang, and A. G. Hauptmann. Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4893–4902, 2019.
D.-H. Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. ICML 2013 Workshop: Challenges in Representation Learning (WREPL), July 2013.
J. Li, G. Li, Y. Shi, and Y. Yu. Cross-domain adaptive clustering for semi-supervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2505–2514, 2021.
J. Liang, D. Hu, and J. Feng. Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In International Conference on Machine Learning, pages 6028–6039. PMLR, 2020.
J. Liang, D. Hu, and J. Feng. Domain adaptation with auxiliary target domain-oriented classifier. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16632–16642, 2021.
M. Long, H. Zhu, J. Wang, and M. I. Jordan. Unsupervised domain adaptation with residual transfer networks. Advances in Neural Information Processing Systems, 29, 2016.
J. Na, H. Jung, H. J. Chang, and W. Hwang. FixBi: Bridging domain spaces for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1094–1103, June 2021.
G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1944–1952, 2017.
X. Peng, Q. Bai, X. Xia, Z. Huang, K. Saenko, and B. Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1406–1415, 2019.
I. Redko, A. Habrard, and M. Sebban. Theoretical analysis of domain adaptation with optimal transport. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 737–753. Springer, 2017.
S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.
K. Saito, D. Kim, S. Sclaroff, T. Darrell, and K. Saenko. Semi-supervised domain adaptation via minimax entropy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8050–8058, 2019.
K. Saito, K. Watanabe, Y. Ushiku, and T. Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3723–3732, 2018.
J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems, 30, 2017.
K. Sohn, D. Berthelot, N. Carlini, Z. Zhang, H. Zhang, C. A. Raffel, E. D. Cubuk, A. Kurakin, and C.-L. Li. FixMatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33:596–608, 2020.
S. Sukhbaatar, J. Bruna, M. Paluri, L. Bourdev, and R. Fergus. Training convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080, 2014.
D. Tanaka, D. Ikami, T. Yamasaki, and K. Aizawa. Joint optimization framework for learning with noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5552–5560, 2018.
H. Venkateswara, J. Eusebio, S. Chakraborty, and S. Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5018–5027, 2017.
X. Wang, Y. Hua, E. Kodirov, S. S. Mukherjee, D. A. Clifton, and N. M. Robertson. ProSelfLC: Progressive self label correction towards a low-temperature entropy state. arXiv preprint arXiv:2207.00118, 2022.
X. Xia, T. Liu, B. Han, N. Wang, M. Gong, H. Liu, G. Niu, D. Tao, and M. Sugiyama. Part-dependent label noise: Towards instance-dependent label noise. Advances in Neural Information Processing Systems, 33:7597–7610, 2020.
Z. Yan, Y. Wu, G. Li, Y. Qin, X. Han, and S. Cui. Multi-level consistency learning for semi-supervised domain adaptation. arXiv preprint arXiv:2205.04066, 2022.
L. Yang, Y. Wang, M. Gao, A. Shrivastava, K. Q. Weinberger, W.-L. Chao, and S.-N. Lim. Deep co-training with task decomposition for semi-supervised domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8906–8916, 2021.
Y. Yao, T. Liu, B. Han, M. Gong, J. Deng, G. Niu, and M. Sugiyama. Dual T: Reducing estimation error for transition matrix in label-noise learning. Advances in Neural Information Processing Systems, 33:7260–7271, 2020.
H.-J. Ye, H. Hu, D.-C. Zhan, and F. Sha. Few-shot learning via embedding adaptation with set-to-set functions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8808–8817, 2020.
C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021.
Y. Zhang, H. Zhang, B. Deng, S. Li, K. Jia, and L. Zhang. Semi-supervised models are strong unsupervised domain adaptation learners. arXiv preprint arXiv:2106.00417, 2021.
H. Zhao, R. T. Des Combes, K. Zhang, and G. Gordon. On learning invariant representations for domain adaptation. In International Conference on Machine Learning, pages 7523–7532. PMLR, 2019.
Y. Zhao, L. Cai, et al. Reducing the covariate shift by mirror samples in cross domain alignment. Advances in Neural Information Processing Systems, 34:9546–9558, 2021.
G. Zheng, A. H. Awadallah, and S. Dumais. Meta label correction for noisy label learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11053–11061, 2021.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86979
dc.description.abstract: 半監督式域適應涉及到學習使用少量的標記目標數據和許多未標記的目標數據,以及來自相關領域的標記源數據,以對未標記的目標數據進行分類。當前的半監督式域適應方法通常旨在通過特徵空間映射和偽標籤分配將目標數據與標記的源數據對齊。然而,這種源導向的模型有時會將目標數據與錯誤類別的源數據對齊,從而降低分類的表現。我們提出了一種新穎的域適應典範,可以調整源數據以匹配目標數據。我們的核心思想是將源數據視為一種含有噪聲標記的理想目標數據。我們提出了一個半監督式域適應模型,該模型借助從目標的角度設計的清理元件來動態清除源數據的噪聲標籤。由於這種想法與現有的其他半監督式域適應方法背後的核心理念有很大的不同,因此,我們提出的模型可以很容易地與這些方法結合以提高它們的性能。在兩種主流的半監督式域適應方法上的實驗結果表明,我們提出的模型有效地清除了源標籤內的噪聲,並在主流的數據集上得到優於這些方法的表現。 [zh_TW]
dc.description.abstract: Semi-supervised domain adaptation (SSDA) involves learning to classify unseen target data with a few labeled target data and many unlabeled target data, along with many labeled source data from a related domain. Current SSDA approaches typically aim at aligning the target data to the labeled source data with feature space mapping and pseudo-label assignment. Nevertheless, such a source-oriented model sometimes aligns the target data to source data of the wrong class, degrading the classification performance. We present a novel source-adaptive paradigm that adapts the source data to match the target data. Our key idea is to view the source data as a noisily labeled version of the ideal target data. We propose an SSDA model that cleans up the label noise dynamically with the help of a robust cleaner component designed from the perspective of the target. Since this paradigm differs greatly from the core ideas behind existing SSDA approaches, our proposed model can be easily coupled with those approaches to improve their performance. Empirical results on two state-of-the-art SSDA approaches demonstrate that the proposed model effectively cleans up noise within the source labels and exhibits superior performance over those approaches across benchmark datasets. [en]
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-05-02T17:12:33Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2023-05-02T17:12:33Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Acknowledgements i
摘要 (Abstract in Chinese) iii
Abstract iv
Contents vi
List of Figures viii
List of Tables x
Chapter 1 Introduction 1
Chapter 2 Related Work 6
2.1 Problem Setup 6
2.2 Semi-Supervised Domain Adaptation (SSDA) 7
2.3 Noisy Label Learning (NLL) 8
Chapter 3 Proposed Framework 9
3.1 Domain Adaptation as Noisy Label Learning 11
3.2 Protonet with Pseudo Centers 13
3.3 Source Label Adaptation for SSDA 15
3.3.1 Implementation Details 16
3.3.1.1 Warmup Stage 16
3.3.1.2 Dynamic Updates 16
Chapter 4 Experiments 18
4.1 Experimental Setup 18
4.1.1 Datasets 18
4.1.2 Implementation 19
4.2 Comparison with State-of-the-Art Methods 19
4.2.1 DomainNet 20
4.2.2 Office-Home 22
4.3 Analysis 24
4.3.1 MCL Reproducibility 24
4.3.2 PPC for Inference 26
4.3.3 Illustration of Adapted Labels 26
4.3.4 Warmup for MME + SLA 28
4.3.5 Limitations 29
Chapter 5 Conclusion 30
References 31
Appendix A - Introduction 37
A.1 Implementation Detail 37
A.2 Experiment Detail 37
A.3 Reproducibility Issue for MCL 38
dc.language.iso: en
dc.subject: 機器學習 [zh_TW]
dc.subject: 半監督式域適應 [zh_TW]
dc.subject: 域適應 [zh_TW]
dc.subject: 噪聲標籤學習 [zh_TW]
dc.subject: 遷移學習 [zh_TW]
dc.subject: Semi-Supervised Domain Adaptation [en]
dc.subject: Domain Adaptation [en]
dc.subject: Transfer Learning [en]
dc.subject: Machine Learning [en]
dc.subject: Noisy Label Learning [en]
dc.title: 修正源標籤以改進半監督式域適應 [zh_TW]
dc.title: Semi-Supervised Domain Adaptation with Source Label Adaptation [en]
dc.type: Thesis
dc.date.schoolyear: 111-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 陳祝嵩;李宏毅 [zh_TW]
dc.contributor.oralexamcommittee: Chu-Song Chen;Hung-Yi Lee [en]
dc.subject.keyword: 域適應,半監督式域適應,機器學習,遷移學習,噪聲標籤學習 [zh_TW]
dc.subject.keyword: Domain Adaptation,Semi-Supervised Domain Adaptation,Machine Learning,Transfer Learning,Noisy Label Learning [en]
dc.relation.page: 43
dc.identifier.doi: 10.6342/NTU202210189
dc.rights.note: 同意授權(全球公開) (authorized, open access worldwide)
dc.date.accepted: 2023-01-09
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊工程學系 (Department of Computer Science and Information Engineering)
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)
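Note: the abstract in the record above describes the source-label-adaptation mechanism only in words. Below is a minimal, hypothetical PyTorch-style sketch of that idea, written from the abstract alone and not taken from the thesis code: pseudo centers computed on the target side act as the "cleaner", and the noisy one-hot source labels are softened toward the cleaner's predictions. All function names, the blending weight alpha, and the temperature are illustrative assumptions.

import torch
import torch.nn.functional as F

def pseudo_centers(target_feats, pseudo_labels, num_classes):
    # Class centers estimated from pseudo-labeled target features.
    # Assumes every class has at least one pseudo-labeled sample.
    return torch.stack([target_feats[pseudo_labels == c].mean(dim=0)
                        for c in range(num_classes)])

def cleaner_predict(feats, centers, temperature=0.05):
    # Prototypical-network-style soft predictions: a softmax over
    # negative distances to the target pseudo centers.
    dists = torch.cdist(feats, centers)  # shape (n, num_classes)
    return F.softmax(-dists / temperature, dim=1)

def adapt_source_labels(source_feats, source_labels, centers,
                        num_classes, alpha=0.3):
    # Treat the one-hot source labels as noisy and blend them with the
    # target-view predictions: y_adapted = (1 - alpha) * y + alpha * cleaner(x).
    one_hot = F.one_hot(source_labels, num_classes).float()
    return (1 - alpha) * one_hot + alpha * cleaner_predict(source_feats, centers)

def soft_cross_entropy(logits, soft_targets):
    # Cross-entropy against the adapted (soft) source labels.
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

Under this sketch, training would minimize the soft cross-entropy on the adapted source labels alongside the usual SSDA losses, refreshing the adapted labels periodically as the pseudo centers move, which matches the abstract's description of cleaning the label noise dynamically.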

Files in This Item:
File | Size | Format
ntu-111-1.pdf | 1.35 MB | Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
