Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/87235
Full metadata record
(Each entry below lists the DC field, its value, and the language tag.)
dc.contributor.advisor: 陳尚澤 [zh_TW]
dc.contributor.advisor: Shang-Tse Chen [en]
dc.contributor.author: 林楷宸 [zh_TW]
dc.contributor.author: Kai-Chen Lin [en]
dc.date.accessioned: 2023-05-18T16:32:12Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-05-11
dc.date.issued: 2023
dc.date.submitted: 2023-02-16
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/87235
dc.description.abstract: 聯邦學習可以在邊緣設備上協作訓練深度神經網絡,而無需集中數據,以保護數據隱私。以往聯邦學習假設資料是獨立同分佈(iid),但該假設在現實世界中並不成立,因為客戶端的數據是異構的,我們稱這個現象為非獨立同分佈(non-iid)。在這種情況下,傳統聯邦學習框架的性能可能會因為數據分佈的不同而有顯著改變。最近的研究主要集中在標籤上的非獨立同分佈,其數據是基於標籤分佈進行分配。與此設定不同,我們將處理一個更貼近現實的情形,即數據是基於特徵的異質性分配,造成這個情形的原因可能是不同的傳感器、不同的城市、不同的季節。我們稱這種情況為特徵偏移非獨立同分佈(feature shift non-iid)。在這項研究中,我們提出 FedADA,通過消除領域衝突來解決這個問題。我們提出的方法在廣泛的實驗中優於傳統的聯邦學習框架以及現有最先進的資料增強方法。 [zh_TW]
dc.description.abstract: Federated learning (FL) enables training deep neural networks collaboratively on edge devices without centralizing the data, thereby preserving data privacy. The common FL assumption of independent and identically distributed (iid) data does not hold in the real world, since data on the client side are heterogeneous; this setting is called non-iid. Under this scenario, the performance of conventional federated learning frameworks may vary significantly with the data distribution. Recent works focus on label non-iid, where data are distributed based on labels. Unlike this setting, we deal with a more realistic problem in which data are heterogeneous in their features, e.g., collected by different sensors, in different cities, or in different seasons. We view this scenario as feature-shift non-iid, where clients hold data from non-identical domains. In this work, we propose FedADA, which addresses this problem by mitigating domain conflicts. Our proposed method outperforms both classical FL frameworks and state-of-the-art augmentation methods in extensive experiments conducted with various datasets and model architectures. [en]
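For background, the "classical FL frameworks" the abstract compares against follow FedAvg-style training (McMahan et al., 2017), in which a server averages client model parameters weighted by local dataset size. The sketch below illustrates only that baseline aggregation step, not FedADA, whose implementation is not part of this record; the names `fedavg_aggregate`, `client_states`, and `client_sizes` are hypothetical.

```python
# Minimal FedAvg-style server aggregation (a sketch, not the thesis's FedADA).
# Assumes each client returns a PyTorch state_dict and its local dataset size.
from typing import Dict, List

import torch


def fedavg_aggregate(
    client_states: List[Dict[str, torch.Tensor]],
    client_sizes: List[int],
) -> Dict[str, torch.Tensor]:
    """Average client parameters, weighted by each client's dataset size."""
    total = float(sum(client_sizes))
    global_state: Dict[str, torch.Tensor] = {}
    for name, param in client_states[0].items():
        acc = torch.zeros_like(param, dtype=torch.float32)
        for state, size in zip(client_states, client_sizes):
            acc += state[name].float() * (size / total)
        # Under feature-shift non-iid, this blind average mixes parameters
        # (e.g., normalization statistics) learned on different domains,
        # which is the domain conflict the abstract refers to.
        global_state[name] = acc
    return global_state
```

This weighted average is exactly where conventional frameworks suffer under feature-shift non-iid: clients trained on different domains push shared parameters in conflicting directions, which motivates methods that mitigate the conflict before or during aggregation.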
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-05-18T16:32:12Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2023-05-18T16:32:12Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements ii
摘要 iii
Abstract iv
Contents v
List of Figures vi
List of Tables vii
Chapter 1 Introduction 1
Chapter 2 Related Works 5
Chapter 3 Methodology 9
Chapter 4 Experiment 14
4.1 Comparison to Baselines 18
4.2 Convergence 19
4.3 Ablation Study 20
4.4 Comprehensive Study 23
Chapter 5 Conclusion 27
References 28
dc.language.iso: en
dc.title: 通過消除用戶資料特徵衝突來改進特徵非獨立同分布的聯合學習 [zh_TW]
dc.title: Improving Federated Learning on Non-IID Features via Mitigating Client Domain Conflict [en]
dc.type: Thesis
dc.date.schoolyear: 111-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 李政德;邱德泉 [zh_TW]
dc.contributor.oralexamcommittee: Cheng-Te Li; Te-Chuan Chiu [en]
dc.subject.keyword: 聯邦學習, 分散式學習, 非獨立同分佈, 特徵偏移非獨立同分佈, 領域泛化 [zh_TW]
dc.subject.keyword: Federated Learning, Distributed Learning, Non-IID, Non-IID Feature, Domain Generalization [en]
dc.relation.page: 35
dc.identifier.doi: 10.6342/NTU202210155
dc.rights.note: 同意授權(限校園內公開) (consent to release; campus access only)
dc.date.accepted: 2023-02-17
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊工程學系 (Department of Computer Science and Information Engineering)
dc.date.embargo-lift: 2027-12-30
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File: ntu-111-1.pdf
Size: 5.07 MB
Format: Adobe PDF
Access: currently not authorized for public access