Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86529

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | Ja-Ling Wu (吳家麟) | |
| dc.contributor.author | Shao-Ming Lee | en |
| dc.contributor.author | 李紹銘 | zh_TW |
| dc.date.accessioned | 2023-03-20T00:01:18Z | - |
| dc.date.copyright | 2022-09-13 | |
| dc.date.issued | 2022 | |
| dc.date.submitted | 2022-08-12 | |
| dc.identifier.citation | [1] McMahan, Brendan, et al. 'Communication-efficient learning of deep networks from decentralized data.' Artificial Intelligence and Statistics. PMLR, 2017. [2] Kairouz, Peter, et al. 'Advances and open problems in federated learning.' Foundations and Trends® in Machine Learning 14.1–2 (2021): 1-210. [3] Zhao, Yue, et al. 'Federated learning with non-iid data.' arXiv preprint arXiv:1806.00582 (2018). [4] Xiao, Peng, et al. 'Averaging is probably not the optimum way of aggregating parameters in federated learning.' Entropy 22.3 (2020): 314. [5] Zhu, Hangyu, et al. 'Federated learning on non-IID data: A survey.' Neurocomputing 465 (2021): 371-390. [6] Yoshida, Naoya, et al. 'Hybrid-FL for wireless networks: Cooperative learning mechanism using non-IID data.' ICC 2020-2020 IEEE International Conference on Communications (ICC). IEEE, 2020. [7] Duan, Moming, et al. 'Astraea: Self-balancing federated learning for improving classification accuracy of mobile deep learning applications.' 2019 IEEE 37th International Conference on Computer Design (ICCD). IEEE, 2019. [8] Ghosh, Avishek, et al. 'Robust federated learning in a heterogeneous environment.' arXiv preprint arXiv:1906.06629 (2019). [9] Ghosh, Avishek, et al. 'An efficient framework for clustered federated learning.' Advances in Neural Information Processing Systems 33 (2020): 19586-19597. [10] Li, Tian, et al. 'Federated optimization in heterogeneous networks.' Proceedings of Machine Learning and Systems 2 (2020): 429-450. [11] Hsu, Tzu-Ming Harry, Hang Qi, and Matthew Brown. 'Measuring the effects of non-identical data distribution for federated visual classification.' arXiv preprint arXiv:1909.06335 (2019). [12] Wang, Kangkang, et al. 'Federated evaluation of on-device personalization.' arXiv preprint arXiv:1910.10252 (2019). [13] Arivazhagan, Manoj Ghuhan, et al. 'Federated learning with personalization layers.' arXiv preprint arXiv:1912.00818 (2019). [14] Smith, Virginia, et al. 'Federated multi-task learning.' Advances in Neural Information Processing Systems 30 (2017). [15] Liu, Boyi, Lujia Wang, and Ming Liu. 'Lifelong federated reinforcement learning: a learning architecture for navigation in cloud robotic systems.' IEEE Robotics and Automation Letters 4.4 (2019): 4555-4562. [16] Zhou, Yanlin, et al. 'Distilled one-shot federated learning.' arXiv preprint arXiv:2009.07999 (2020). [17] Jeong, Eunjeong, et al. 'Communication-efficient on-device machine learning: Federated distillation and augmentation under non-iid private data.' arXiv preprint arXiv:1811.11479 (2018). [18] Li, Daliang, and Junpu Wang. 'Fedmd: Heterogenous federated learning via model distillation.' arXiv preprint arXiv:1910.03581 (2019). [19] Lin, Tao, et al. 'Ensemble distillation for robust model fusion in federated learning.' Advances in Neural Information Processing Systems 33 (2020): 2351-2363. [20] Chen, Hong-You, and Wei-Lun Chao. 'Fedbe: Making bayesian model ensemble applicable to federated learning.' arXiv preprint arXiv:2009.01974 (2020). [21] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. 'Distilling the knowledge in a neural network.' arXiv preprint arXiv:1503.02531 (2015). [22] Gawlikowski, Jakob, et al. 'A survey of uncertainty in deep neural networks.' arXiv preprint arXiv:2107.03342 (2021). [23] Hendrycks, Dan, and Kevin Gimpel. 'A baseline for detecting misclassified and out-of-distribution examples in neural networks.' arXiv preprint arXiv:1610.02136 (2016). [24] Gal, Yarin, and Zoubin Ghahramani. 'Dropout as a bayesian approximation: Representing model uncertainty in deep learning.' International Conference on Machine Learning. PMLR, 2016. [25] Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. 'Simple and scalable predictive uncertainty estimation using deep ensembles.' Advances in Neural Information Processing Systems 30 (2017). [26] Ovadia, Yaniv, et al. 'Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift.' Advances in Neural Information Processing Systems 32 (2019). [27] Guo, Chuan, et al. 'On calibration of modern neural networks.' International Conference on Machine Learning. PMLR, 2017. [28] Mukhoti, Jishnu, et al. 'Deep Deterministic Uncertainty: A Simple Baseline.' arXiv e-prints (2021): arXiv-2102. [29] Gal, Yarin, Riashat Islam, and Zoubin Ghahramani. 'Deep bayesian active learning with image data.' International Conference on Machine Learning. PMLR, 2017. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/86529 | - |
| dc.description.abstract | In recent years, federated learning has become an important research topic in information science. Federated learning emphasizes that a learning task is solved jointly by loosely coupled devices (clients). Key challenges in this setting include: the data across clients is imbalanced and non-IID, and communication between devices is unreliable due to limited transmission bandwidth. These issues make federated learning intractable. This thesis approaches the performance evaluation of federated learning from the perspective of uncertainty in deep neural networks and proposes a new model aggregation scheme. The scheme builds on knowledge distillation and applies methods for quantifying uncertainty in DNNs to improve learning performance; a minimal illustrative sketch of this idea appears after the metadata table below. Experiments on the task of image classification confirm that the proposed model aggregation scheme effectively mitigates the non-IID data distribution problem and performs well, especially when the transmission cost is constrained. | en |
| dc.description.provenance | Made available in DSpace on 2023-03-20T00:01:18Z (GMT). No. of bitstreams: 1 U0001-1108202215435100.pdf: 2636482 bytes, checksum: f6ab1776922dc93a1225dc8168a0c02e (MD5) Previous issue date: 2022 | en |
| dc.description.tableofcontents | Oral Examination Committee Certification i Acknowledgements ii Abstract (in Chinese) iii Abstract iv Table of Contents v List of Figures vi List of Tables vii Chapter 1 Introduction 1 1.1 Non-IID issues in Federated Machine Learning 1 1.2 Distillation-Based FL 2 1.3 Possible contributions of this thesis 3 Chapter 2 Background 4 2.1 Knowledge Distillation 4 2.2 Uncertainty in DNNs 5 Chapter 3 Proposed Method 8 3.1 Uncertainty Measurement 8 3.2 Sample Assessment 9 3.3 Overall Architecture 10 Chapter 4 Experiments 12 4.1 Setup 12 4.2 Results and Analysis 13 4.2.1 Ablation Analysis 13 4.2.2 Overall Performance 17 Chapter 5 Conclusion & Future Work 21 References 22 | |
| dc.language.iso | zh-TW | |
| dc.subject | Model Aggregation | en |
| dc.subject | Knowledge Distillation | en |
| dc.subject | Federated Learning | en |
| dc.subject | Uncertainty in Deep Neural Networks | en |
| dc.title | FedUA: An Uncertainty-Aware Distillation-Based Federated Learning Scheme for Image Classification | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 110-2 | |
| dc.description.degree | Master's | |
| dc.contributor.oralexamcommittee | Wen-Chin Chen (陳文進), Chau-Yun Hsu (許超雲) | |
| dc.subject.keyword | Federated Learning, Model Aggregation, Knowledge Distillation, Uncertainty in Deep Neural Networks | en |
| dc.relation.page | 24 | |
| dc.identifier.doi | 10.6342/NTU202202304 | |
| dc.rights.note | Authorization granted (worldwide open access) | |
| dc.date.accepted | 2022-08-15 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
| dc.date.embargo-lift | 2022-09-13 | - |
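
The abstract above only summarizes the proposed scheme at a high level, so the following is a minimal sketch of the general idea it describes: combining client predictions on a shared unlabeled transfer set for distillation-based aggregation, with each client's vote down-weighted by a predictive-uncertainty estimate. Everything here is an illustrative assumption — the function names, the entropy-based weighting rule, and the array shapes are hypothetical, not the thesis's actual FedUA algorithm.

```python
# A minimal sketch (not the thesis's exact algorithm): uncertainty-weighted
# ensembling of client logits on a shared unlabeled transfer set.
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def predictive_entropy(probs, eps=1e-12):
    # Shannon entropy of each predictive distribution; higher = more uncertain.
    return -(probs * np.log(probs + eps)).sum(axis=-1)

def aggregate_soft_labels(client_logits):
    """client_logits: (n_clients, n_samples, n_classes) logits on the
    transfer set. Returns (n_samples, n_classes) soft labels, weighting
    each client's vote per sample by its confidence (inverse entropy)."""
    probs = softmax(client_logits)                    # (C, N, K)
    ent = predictive_entropy(probs)                   # (C, N)
    conf = 1.0 / (1.0 + ent)                          # hypothetical weighting rule
    weights = conf / conf.sum(axis=0, keepdims=True)  # normalize over clients
    return (weights[..., None] * probs).sum(axis=0)   # (N, K) distillation targets

# Example: 3 clients, 5 transfer-set samples, 10 classes.
rng = np.random.default_rng(0)
targets = aggregate_soft_labels(rng.normal(size=(3, 5, 10)))
assert np.allclose(targets.sum(axis=-1), 1.0)
```

In a distillation-based aggregation pipeline of this kind, the resulting soft labels would then serve as server-side training targets for the global model, in the spirit of ensemble-distillation approaches such as [19] and [20].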
| Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:
| File | Size | Format |
|---|---|---|
| U0001-1108202215435100.pdf | 2.57 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
