NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/76813

Full metadata record (DC field: value [language])
dc.contributor.advisor: 逄愛君 (Ai-Chun Pang)
dc.contributor.author: Wei-Che Lin [en]
dc.contributor.author: 林偉哲 [zh_TW]
dc.date.accessioned: 2021-07-10T21:37:35Z
dc.date.available: 2021-07-10T21:37:35Z
dc.date.copyright: 2020-09-23
dc.date.issued: 2020
dc.date.submitted: 2020-08-18
dc.identifier.citation:
[1] McMahan, H. Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. “Communication-Efficient Learning of Deep Networks from Decentralized Data.” In Artificial Intelligence and Statistics, 1273–82.
[2] Bonawitz, Keith, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017. “Practical Secure Aggregation for Privacy-Preserving Machine Learning.” In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 1175–91.
[3] Bagdasaryan, Eugene, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. 2018. “How To Backdoor Federated Learning.” In AISTATS, 2938–48.
[4] Xie, Chulin, Keli Huang, Pin-Yu Chen, and Bo Li. 2020. “DBA: Distributed Backdoor Attacks against Federated Learning.” In ICLR 2020: Eighth International Conference on Learning Representations.
[5] Suresh, Ananda Theertha, Brendan McMahan, Peter Kairouz, and Ziteng Sun. 2019. “Can You Really Backdoor Federated Learning?” ArXiv Preprint ArXiv:1911.07963.
[6] Li, Suyi, Yong Cheng, Wei Wang, Yang Liu, and Tianjian Chen. 2020. “Learning to Detect Malicious Clients for Robust Federated Learning.” ArXiv Preprint ArXiv:2002.00211.
[7] Dwork, Cynthia, and Moni Naor. 2008. “On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy.” Journal of Privacy and Confidentiality 2 (1).
[8] Yin, Dong, Yudong Chen, Kannan Ramchandran, and Peter Bartlett. 2018. “Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates.” In ICML 2018: Thirty-Fifth International Conference on Machine Learning, 5636–45.
[9] Abadi, Martín, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18.
[10] McMahan, H. Brendan, Daniel Ramage, Kunal Talwar, and Li Zhang. 2018. “Learning Differentially Private Recurrent Language Models.” In ICLR 2018: International Conference on Learning Representations 2018.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/76813
dc.description.abstract: Federated Learning is regarded as a promising solution to the privacy problem in large-scale deep neural network training on Internet of Things devices, and it is communication-efficient. However, there still exists a technique known as model inversion, which can recover sensitive data from model weights alone. In response, Secure Aggregation has been proposed, in which the aggregator learns only the merged result and cannot learn the weights of individual models. However, when Secure Aggregation is employed, backdoor attacks such as model poisoning become a greater threat, because malicious models can no longer be detected and excluded through anomaly detection.
Therefore, in this thesis we propose a novel Federated Learning scheme with a mechanism called Partial Weights Uploading to mitigate model poisoning attacks while still protecting sensitive data against model inversion. We evaluate our method on image classification tasks built from the standard CIFAR-10 and FEMNIST datasets. Experimental results show that accuracy on poisoned data can be greatly reduced, while accuracy on normal data fluctuates only mildly.
[zh_TW]
dc.description.abstract: Federated Learning is considered one of the promising solutions to the privacy problem of large-scale deep neural network training on Internet of Things (IoT) devices, as it operates in a communication-efficient manner. However, there still exists a technique known as model inversion, by which sensitive data can be recovered from model weights alone. In response to this concern, Secure Aggregation has been proposed, in which the aggregator learns only the merged result, not the individual models. However, backdoor attacks such as model poisoning become a greater threat when Secure Aggregation is employed, since malicious models can no longer be caught and excluded by anomaly detection.
Therefore, in this thesis we propose an innovative Federated Learning scheme with a new mechanism called Partial Weights Uploading that mitigates model poisoning attacks while still protecting sensitive data against model inversion. We evaluate our method on image classification tasks using the CIFAR-10 and FEMNIST benchmark datasets. The experimental results show that accuracy on poisoned data can be greatly reduced, while the fluctuation of accuracy on normal data remains mild.
[en]
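The record itself contains no implementation details, but the mechanism named in the abstract — clients uploading only part of their weight updates into a FedAvg-style aggregation — can be sketched roughly as below. All function names, the random-masking strategy, and the parameters are hypothetical illustrations, not the thesis's actual design.

```python
import numpy as np

def local_update(weights, noise_scale=0.01, rng=None):
    # Stand-in for a round of local training: perturb the weights slightly.
    rng = rng or np.random.default_rng(0)
    return weights + noise_scale * rng.standard_normal(weights.shape)

def partial_upload(old_w, new_w, fraction=0.5, rng=None):
    # Upload only a random fraction of coordinates; the remaining
    # coordinates keep the previous global values, limiting how much
    # of a (possibly poisoned) local model reaches the aggregate.
    rng = rng or np.random.default_rng(1)
    mask = rng.random(new_w.shape) < fraction
    return np.where(mask, new_w, old_w)

def federated_round(global_w, n_clients=5, fraction=0.5):
    rng = np.random.default_rng(42)
    uploads = []
    for _ in range(n_clients):
        local_w = local_update(global_w, rng=rng)
        uploads.append(partial_upload(global_w, local_w, fraction, rng=rng))
    # The server averages the uploads, as in FedAvg; under Secure
    # Aggregation it would observe only this aggregate, never an
    # individual client's upload.
    return np.mean(uploads, axis=0)

w = np.zeros(8)
w_next = federated_round(w)
print(w_next.shape)  # (8,)
```

The masking step is the illustrative core: because each client replaces the unsent coordinates with the old global values, a backdoored model can push only a fraction of its coordinates into the average in any one round.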
dc.description.provenance: Made available in DSpace on 2021-07-10T21:37:35Z (GMT). No. of bitstreams: 1
U0001-1708202011444600.pdf: 2493218 bytes, checksum: f0d8919b54504319f8c311e7f651091a (MD5)
Previous issue date: 2020
[en]
dc.description.tableofcontents:
Certification by the Oral Examination Committee i
Acknowledgements ii
Abstract (in Chinese) iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
1 Introduction 1
1.1 Background 1
1.2 Motivation 3
1.3 Related Works 3
1.4 Contribution 5
2 Methodology Design 7
2.1 System Model 7
2.2 Partial Weights Uploading 8
2.3 Mechanism Design 9
3 Performance Evaluation 11
3.1 Setup for Image Classification 11
3.2 CIFAR-10 Experiments 12
3.3 FEMNIST Experiments 14
3.4 Communication Cost 15
4 Conclusions 17
Bibliography 18
dc.language.iso: en
dc.subject: 安全聚合技術 (Secure Aggregation) [zh_TW]
dc.subject: 模型中毒攻擊 (Model Poisoning Attacks) [zh_TW]
dc.subject: 聯邦式學習 (Federated Learning) [zh_TW]
dc.subject: Model Poisoning Attacks [en]
dc.subject: Secure Aggregation [en]
dc.subject: Federated Learning [en]
dc.title: 以上傳部分權重的方法抵抗針對聯邦式學習的後門攻擊 (A Partial Weights Uploading Approach against Backdoor Attacks on Federated Learning) [zh_TW]
dc.title: A Partial Weights Uploading Approach against Federated Learning Backdoor [en]
dc.type: Thesis
dc.date.schoolyear: 108-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 莊清智 (Ching-Chih Chuang), 施淵耀 (Yuan-Yao Shih), 余亞儒 (Ya-Ju Yu), 邱德泉 (Te-Chuan Chiu)
dc.subject.keyword: 聯邦式學習, 模型中毒攻擊, 安全聚合技術 (Federated Learning, Model Poisoning Attacks, Secure Aggregation) [zh_TW]
dc.subject.keyword: Federated Learning, Model Poisoning Attacks, Secure Aggregation [en]
dc.relation.page: 19
dc.identifier.doi: 10.6342/NTU202003706
dc.rights.note: 未授權 (Not authorized for public access)
dc.date.accepted: 2020-08-19
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 (Graduate Institute of Computer Science and Information Engineering) [zh_TW]
Appears in collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in this item:
File | Size | Format
U0001-1708202011444600.pdf (not authorized for public access) | 2.43 MB | Adobe PDF