Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89904
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳銘憲 | zh_TW |
dc.contributor.advisor | Ming-Syan Chen | en |
dc.contributor.author | 陳奕中 | zh_TW |
dc.contributor.author | Yi-Chung Chen | en |
dc.date.accessioned | 2023-09-22T16:37:07Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-09-22 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-09 | - |
dc.identifier.citation | [1] Sameer Bibikar, Haris Vikalo, Zhangyang Wang, and Xiaohan Chen. Federated dynamic sparse training: Computing less, communicating less, yet learning better. In AAAI Conference on Artificial Intelligence, 2022.
[2] Simina Brânzei. State of the art: Solution concepts for coalitional games. 2010.
[3] Georgios Chalkiadakis, Edith Elkind, and Michael Wooldridge. Computational aspects of cooperative game theory. In Computational Aspects of Cooperative Game Theory, 2011.
[4] Wei-Ting Chen, Zhi-Kai Huang, Cheng-Che Tsai, Hao-Hsiang Yang, Jianping Ding, and Sy-Yen Kuo. Learning multiple adverse weather removal via two-stage knowledge learning and multi-contrastive regularization: Toward a unified model. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17632–17641, 2022.
[5] Yae Jee Cho, Jianyu Wang, and Gauri Joshi. Towards understanding biased client selection in federated learning. In International Conference on Artificial Intelligence and Statistics, 2022.
[6] Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting shared representations for personalized federated learning. In Proceedings of the 38th International Conference on Machine Learning, 2021.
[7] J. S. Cramer. The origins of logistic regression. Econometrics eJournal, 2002.
[8] Enmao Diao, Jie Ding, and Vahid Tarokh. HeteroFL: Computation and communication efficient federated learning for heterogeneous clients. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
[9] Guanyu Ding, Zhengxiong Li, Yusen Wu, Xiaokun Yang, Mehrdad Aliasgari, and Hailu Xu. Towards an efficient client selection system for federated learning. In IEEE International Conference on Cloud Computing, 2022.
[10] Ningning Ding, Zhixuan Fang, and Jianwei Huang. Incentive mechanism design for federated learning with multi-dimensional private information. In International Symposium on Modeling and Optimization in Mobile, Ad-Hoc and Wireless Networks, 2020.
[11] Yann Fraboni, Richard Vidal, Laetitia Kameni, and Marco Lorenzi. Clustered sampling: Low-variance and improved representativity for clients selection in federated learning. In International Conference on Machine Learning, 2021.
[12] Yann Fraboni, Richard Vidal, and Marco Lorenzi. Free-rider attacks on model aggregation in federated learning. In Arindam Banerjee and Kenji Fukumizu, editors, The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of Proceedings of Machine Learning Research, pages 1846–1854. PMLR, 2021.
[13] Hongchang Gao, An Xu, and Heng Huang. On the convergence of communication-efficient local SGD for federated learning. In AAAI Conference on Artificial Intelligence, 2021.
[14] Amirata Ghorbani and James Y. Zou. Data Shapley: Equitable valuation of data for machine learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2242–2251. PMLR, 2019.
[15] Yufei Han and Xiangliang Zhang. Robust federated learning via collaborative machine teaching. In AAAI Conference on Artificial Intelligence, 2020.
[16] Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531, 2015.
[17] R. Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nicholas Hynes, Nezihe Merve Gürel, Bo Li, Ce Zhang, Dawn Xiaodong Song, and Costas J. Spanos. Towards efficient data valuation based on the Shapley value. In International Conference on Artificial Intelligence and Statistics, 2019.
[18] Yongcheng Jing, Yiding Yang, Xinchao Wang, Mingli Song, and Dacheng Tao. Amalgamating knowledge from heterogeneous graph neural networks. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15704–15713, 2021.
[19] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, 2017.
[20] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
[21] Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010.
[22] Anran Li, Lan Zhang, Junhao Wang, Feng Han, and Xiangyang Li. Privacy-preserving efficient federated-learning model debugging. IEEE Transactions on Parallel and Distributed Systems, 33:2291–2303, 2022.
[23] Anran Li, Lan Zhang, Junhao Wang, Juntao Tan, Feng Han, Yaxuan Qin, Nikolaos M. Freris, and Xiangyang Li. Efficient federated-learning model debugging. 2021 IEEE 37th International Conference on Data Engineering (ICDE), pages 372–383, 2021.
[24] Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 6357–6368. PMLR, 2021.
[25] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In Inderjit S. Dhillon, Dimitris S. Papailiopoulos, and Vivienne Sze, editors, Proceedings of Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, March 2-4, 2020. mlsys.org, 2020.
[26] Ying Li, Xingwei Wang, Rongfei Zeng, Mingzhou Yang, Ke-Xin Li, Min Huang, and Schahram Dustdar. VARF: An incentive mechanism of cross-silo federated learning in MEC. IEEE Internet of Things Journal, 2023.
[27] Zhize Li, Dmitry Kovalev, Xun Qian, and Peter Richtárik. Acceleration for compressed gradient descent in distributed and federated optimization. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 5895–5904. PMLR, 2020.
[28] Jierui Lin, Min Du, and Jian Liu. Free-riders in federated learning: Attacks and defenses. ArXiv, abs/1911.12560, 2019.
[29] Zelei Liu, Yuanyuan Chen, Han Yu, Yang Liu, and Lizhen Cui. GTG-Shapley: Efficient and accurate participant contribution evaluation in federated learning. ACM Transactions on Intelligent Systems and Technology (TIST), 13:1–21, 2021.
[30] Sihui Luo, Xinchao Wang, Gongfan Fang, Yao Hu, Dapeng Tao, and Mingli Song. Knowledge amalgamation from heterogeneous networks by common feature learning. In Sarit Kraus, editor, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3087–3093. ijcai.org, 2019.
[31] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Aarti Singh and Xiaojin (Jerry) Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, volume 54 of Proceedings of Machine Learning Research, pages 1273–1282. PMLR, 2017.
[32] Umberto Michieli and Mete Ozay. Prototype guided federated learning of visual feature representations. ArXiv, abs/2105.08982, 2021.
[33] Lokesh Nagalapatti, Ruhi Sharma Mittal, and Ramasuri Narayanam. Is your data relevant?: Dynamic selection of relevant data for federated learning. In AAAI Conference on Artificial Intelligence, 2022.
[34] Shashi Raj Pandey, Nguyen Hoang Tran, Mehdi Bennis, Yan Kyaw Tun, Aunas Manzoor, and Choong Seon Hong. A crowdsourcing framework for on-device federated learning. IEEE Trans. Wirel. Commun., 19(5):3241–3256, 2020.
[35] Krishna Pillutla, Kshitiz Malik, Abdel-Rahman Mohamed, Mike Rabbat, Maziar Sanjabi, and Lin Xiao. Federated learning with partial model personalization. In Proceedings of the 39th International Conference on Machine Learning, 2022.
[36] Aviv Shamsian, Aviv Navon, Ethan Fetaya, and Gal Chechik. Personalized federated learning using hypernetworks. In International Conference on Machine Learning, 2021.
[37] Lloyd S. Shapley. A value for n-person games. 1988.
[38] Chengchao Shen, Xinchao Wang, Jie Song, Li Sun, and Mingli Song. Amalgamating knowledge towards comprehensive classification. In AAAI Conference on Artificial Intelligence, 2019.
[39] Chengchao Shen, Mengqi Xue, Xinchao Wang, Jie Song, Li Sun, and Mingli Song. Customizing student networks from heterogeneous teachers via adaptive knowledge amalgamation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 3503–3512, 2019.
[40] Zhuan Shi, Lan Zhang, Zhenyu Yao, Lingjuan Lyu, Cen Chen, Li Wang, Junhao Wang, and Xiang-Yang Li. FedFAIM: A model performance-based fair incentive mechanism for federated learning. IEEE Transactions on Big Data, 2022.
[41] Rachael Hwee Ling Sim, Yehong Zhang, Mun Choon Chan, and Bryan Kian Hsiang Low. Collaborative machine learning with incentive-aware model rewards. In International Conference on Machine Learning, 2020.
[42] Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. In NIPS, 2017.
[43] Tianshu Song, Yongxin Tong, and Shuyue Wei. Profit allocation for federated learning. 2019 IEEE International Conference on Big Data (Big Data), pages 2577–2586, 2019.
[44] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H. S. Torr, and Timothy M. Hospedales. Learning to compare: Relation network for few-shot learning. pages 1199–1208, 2018.
[45] Yue Tan, Guodong Long, Lu Liu, Tianyi Zhou, Qinghua Lu, Jing Jiang, and Chengqi Zhang. FedProto: Federated prototype learning across heterogeneous clients. In AAAI Conference on Artificial Intelligence, 2022.
[46] Yue Tan, Guodong Long, Jie Ma, Lu Liu, Tianyi Zhou, and Jing Jiang. Federated learning from pre-trained models: A contrastive learning approach. In NeurIPS, 2022.
[47] Minxue Tang, Xuefei Ning, Yitu Wang, Jingwei Sun, Yu Wang, Hai Helen Li, and Yiran Chen. FedCor: Correlation-based active client selection strategy for heterogeneous federated learning. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10092–10101, 2022.
[48] Zuoqi Tang, Feifei Shao, Long Chen, Yunan Ye, Chao Wu, and Jun Xiao. Optimizing federated learning on non-IID data using local Shapley value. In Lu Fang, Yiran Chen, Guangtao Zhai, Z. Jane Wang, Ruiping Wang, and Weisheng Dong, editors, Artificial Intelligence - First CAAI International Conference, CICAI 2021, Hangzhou, China, June 5-6, 2021, Proceedings, Part II, volume 13070 of Lecture Notes in Computer Science, pages 164–175. Springer, 2021.
[49] Sebastian Shenghong Tay, Xinyi Xu, Chuan Sheng Foo, and Bryan Kian Hsiang Low. Incentivizing collaboration in machine learning via synthetic data rewards. In AAAI Conference on Artificial Intelligence, 2022.
[50] Wei Wan, Shengshan Hu, Jianrong Lu, Leo Yu Zhang, Hai Jin, and Yuanyuan He. Shielding federated learning: Robust aggregation with adaptive client selection. In International Joint Conference on Artificial Intelligence, 2022.
[51] Junhao Wang, Lan Zhang, Anran Li, Xuanke You, and Haoran Cheng. Efficient participant contribution evaluation for horizontal and vertical federated learning. 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 911–923, 2022.
[52] Tianhao Wang, Johannes Rausch, Ce Zhang, R. Jia, and Dawn Xiaodong Song. A principled approach to data valuation for federated learning. In Federated Learning, 2020.
[53] Shuyue Wei, Yongxin Tong, Zimu Zhou, and Tianshu Song. Efficient and fair data valuation for horizontal federated learning. In Federated Learning, 2020.
[54] Don Xie, Ruonan Yu, Gongfan Fang, Jie Song, Zunlei Feng, Xinchao Wang, Li Sun, and Mingli Song. Federated selective aggregation for knowledge amalgamation. ArXiv, abs/2207.13309, 2022.
[55] Chencheng Xu, Zhiwei Hong, Minlie Huang, and Tao Jiang. Acceleration of federated learning with alleviated forgetting in local training. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[56] Jingyi Xu, Zihan Chen, Tony Q. S. Quek, and Kai Fong Ernest Chong. FedCorr: Multi-stage federated learning for label noise correction. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10174–10183, 2022.
[57] Yihao Xue, Chaoyue Niu, Zhenzhe Zheng, Shaojie Tang, Chengfei Lyu, Fan Wu, and Guihai Chen. Toward understanding the influence of individual clients in federated learning. In AAAI Conference on Artificial Intelligence, 2021.
[58] Jingwen Ye, Yixin Ji, Xinchao Wang, Kairi Ou, Dapeng Tao, and Mingli Song. Student becoming the master: Knowledge amalgamation for joint scene parsing, depth estimation, and more. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2824–2833, 2019.
[59] Jingwen Ye, Xinchao Wang, Yixin Ji, Kairi Ou, and Mingli Song. Amalgamating filtered knowledge: Learning task-customized student from multi-task teachers. pages 4128–4134, 2019.
[60] Rui Ye, Zhenyang Ni, Chenxin Xu, Jianyu Wang, Siheng Chen, and Yonina C. Eldar. FedFM: Anchor-based feature matching for data heterogeneity in federated learning. ArXiv, abs/2210.07615, 2022.
[61] Han Yu, Zelei Liu, Yang Liu, Tianjian Chen, Mingshu Cong, Xi Weng, Dusit Tao Niyato, and Qiang Yang. A sustainable incentive scheme for federated learning. IEEE Intelligent Systems, 35:58–69, 2020.
[62] Honglin Yuan and Tengyu Ma. Federated accelerated stochastic gradient descent. In Neural Information Processing Systems, 2020.
[63] Rongfei Zeng, Shixun Zhang, Jiaqi Wang, and Xiaowen Chu. FMore: An incentive scheme of multi-dimensional auction for federated learning in MEC. 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), pages 278–288, 2020.
[64] Jingwen Zhang, Yuezhou Wu, and Rong Pan. Incentive mechanism for horizontal federated learning based on reputation and reverse auction. Proceedings of the Web Conference 2021, 2021.
[65] Sai Qian Zhang, Jieyu Lin, and Qi Zhang. A multi-agent reinforcement learning approach for efficient client selection in federated learning. In AAAI Conference on Artificial Intelligence, 2022.
[66] Zhuangdi Zhu, Junyuan Hong, and Jiayu Zhou. Data-free knowledge distillation for heterogeneous federated learning. pages 12878–12889, 2021. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89904 | - |
dc.description.abstract | 聯邦學習(Federated Learning, FL)中參與者貢獻評估近年來受到了廣泛關注,因其在激勵機制、魯棒性增強和客戶選擇等領域的適用性。先前的方法主要依賴於被廣泛採用的夏普利值(Shapley value)進行參與者評估。然而,儘管使用了基於梯度的模型重構和省略評估不必要的子集等技術,夏普利值的計算仍然需要花費大量的時間。因此,我們提出了一種高效的方法,稱為單輪參與者合併貢獻評估(Single-round Participants Amalgamation for Contribution Evaluation, SPACE)。SPACE 包含兩個創新的模組,包括「聯邦知識合併(Federated Knowledge Amalgamation)」和「基於原型的模型評估(Prototype-based Model Evaluation)」,通過消除對驗證集大小的依賴,並在單個通信輪內進行參與者評估,從而減少了評估工作的計算量。實驗結果表明,SPACE 在運行時間和皮爾森相關係數(PCC)兩方面優於現有方法。此外,我們在客戶重加權和客戶選擇等應用進行了廣泛實驗,突出了 SPACE 的有效性。 | zh_TW |
dc.description.abstract | The evaluation of participant contribution in federated learning (FL) has recently gained significant attention due to its applicability in various domains, such as incentive mechanisms, robustness enhancement, and client selection. Previous approaches have predominantly relied on the widely adopted Shapley value for participant evaluation. However, the computation of the Shapley value is expensive, despite using techniques like gradient-based model reconstruction and truncating unnecessary evaluations. Therefore, we present an efficient approach called Single-round Participants Amalgamation for Contribution Evaluation (SPACE). SPACE incorporates two novel components, namely Federated Knowledge Amalgamation and Prototype-based Model Evaluation, to reduce the evaluation effort by eliminating the dependence on the size of the validation set and enabling participant evaluation within a single communication round. Experimental results demonstrate that SPACE outperforms state-of-the-art methods in terms of both running time and Pearson's Correlation Coefficient (PCC). Furthermore, extensive experiments conducted on applications such as client reweighting and client selection highlight the effectiveness of SPACE. | en |
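The abstract above notes that exact Shapley-value computation is expensive even with truncation and model-reconstruction tricks. As a hedged illustration of why (not the thesis's SPACE method), the following sketch computes exact Shapley values by enumerating every coalition of the remaining participants, which costs 2^(n-1) utility evaluations per participant; the `utility` function and the `data_sizes` toy values are hypothetical stand-ins for "performance of a model trained on this coalition of clients".

```python
import itertools
import math

# Hedged sketch: exact (exponential-cost) Shapley value for participant
# contribution. In real FL, utility(S) would require training or
# reconstructing a model for coalition S, which is the expensive part.

def shapley_values(players, utility):
    """Exact Shapley value: average each player's marginal contribution
    over all subsets of the remaining players (2^(n-1) terms per player)."""
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):  # subset sizes 0 .. n-1
            for subset in itertools.combinations(others, r):
                s = len(subset)
                # Standard Shapley weight: s! (n-s-1)! / n!
                weight = math.factorial(s) * math.factorial(n - s - 1) / math.factorial(n)
                total += weight * (utility(set(subset) | {p}) - utility(set(subset)))
        values[p] = total
    return values

# Toy additive utility: coalition value is the sum of (hypothetical)
# per-client data sizes; for additive games the Shapley value of each
# client equals its individual value.
data_sizes = {"A": 3.0, "B": 2.0, "C": 1.0}

def utility(coalition):
    return sum(data_sizes[c] for c in coalition)

print(shapley_values(["A", "B", "C"], utility))
```

The efficiency axiom guarantees the values sum to the grand-coalition utility, which is a useful sanity check on any implementation.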
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-22T16:37:07Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-09-22T16:37:07Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | 誌謝 i
摘要 ii
Abstract iii
Contents iv
List of Figures vi
List of Tables vii
1 Introduction 1
2 Related Work 5
3 Preliminary 7
4 Method 9
4.1 Federated Knowledge Amalgamation 10
4.2 Prototype-based Model Evaluation 13
4.3 Contribution Evaluation 15
4.4 Complexity Analysis 17
4.5 Pseudocode 17
5 Experiment 20
5.1 Experimental Setup 20
5.2 Quantitative Results 22
5.2.1 Participant Contribution 22
5.2.2 Client Reweighting 23
5.2.3 Client Selection 24
5.3 Sensitivity Test of Utility Function 26
5.4 Ablation Study for Smaller G 28
6 Conclusion 29
Bibliography 30 | - |
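The record does not detail the "Prototype-based Model Evaluation" component listed in Section 4.2 of the contents. As a hedged sketch of the general prototype idea only (class means in an embedding space, with a model scored by nearest-prototype matching, so evaluation cost does not grow with a large labeled validation set), one might write something like the following; all names and the synthetic data are hypothetical, not the thesis's actual procedure.

```python
import numpy as np

# Hedged illustration: "prototypes" as per-class mean feature embeddings,
# and evaluation as the fraction of samples whose nearest prototype
# matches their label.

def class_prototypes(features, labels):
    """Mean embedding per class: prototype[c] = mean of features with label c."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_prototype_accuracy(features, labels, prototypes):
    """Fraction of samples whose nearest prototype matches their label."""
    classes = sorted(prototypes)
    proto_matrix = np.stack([prototypes[c] for c in classes])  # shape (C, d)
    # Euclidean distance from every sample to every prototype.
    dists = np.linalg.norm(features[:, None, :] - proto_matrix[None, :, :], axis=2)
    predictions = np.array(classes)[dists.argmin(axis=1)]
    return float((predictions == labels).mean())

rng = np.random.default_rng(0)
# Two well-separated synthetic classes in a 2-D embedding space.
feats = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(1.0, 0.1, (50, 2))])
labs = np.array([0] * 50 + [1] * 50)
protos = class_prototypes(feats, labs)
print(nearest_prototype_accuracy(feats, labs, protos))
```

Because only one prototype per class is compared against, the score depends on the number of classes rather than on the size of a validation set, which is the property the abstract attributes to SPACE's evaluation step.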
dc.language.iso | en | - |
dc.title | SPACE: 聯邦學習中的單輪參與者合併貢獻評估 | zh_TW |
dc.title | SPACE: Single-round Participant Amalgamation for Contribution Evaluation in Federated Learning | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | 碩士 | - |
dc.contributor.oralexamcommittee | 王鈺強;楊奕軒;帥宏翰 | zh_TW |
dc.contributor.oralexamcommittee | Yu-Chiang Wang;Yi-Hsuan Yang;Hong-Han Shuai | en |
dc.subject.keyword | 聯邦學習,貢獻衡量,夏普利值,知識合併, | zh_TW |
dc.subject.keyword | Federated Learning,Contribution Evaluation,Shapley Value,Knowledge Amalgamation, | en |
dc.relation.page | 39 | - |
dc.identifier.doi | 10.6342/NTU202302429 | - |
dc.rights.note | 未授權 | - |
dc.date.accepted | 2023-08-10 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 電信工程學研究所 | - |
Appears in Collections: | 電信工程學研究所 (Graduate Institute of Communication Engineering)
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-111-2.pdf (access currently restricted; not authorized for public release) | 4.6 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their license terms.