Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91508
Full metadata record

DC Field | Value | Language
dc.contributor.advisor | 陳銘憲 | zh_TW
dc.contributor.advisor | Ming-Syan Chen | en
dc.contributor.author | 劉羽忻 | zh_TW
dc.contributor.author | Yu-Hsin Liu | en
dc.date.accessioned | 2024-01-28T16:18:54Z | -
dc.date.available | 2024-01-29 | -
dc.date.copyright | 2024-01-27 | -
dc.date.issued | 2023 | -
dc.date.submitted | 2023-08-09 | -
dc.identifier.citation | [1] Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. In International Conference on Artificial Intelligence and Statistics, pages 2938–2948. PMLR, 2020.
[2] Trishul Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project Adam: Building an efficient and scalable deep learning training system. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 571–582, 2014.
[3] Cynthia Dwork. Differential privacy: A survey of results. In Theory and Applications of Models of Computation: 5th International Conference, TAMC 2008, Xi'an, China, April 25-29, 2008. Proceedings 5, pages 1–19. Springer, 2008.
[4] Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller. Inverting gradients - how easy is it to break privacy in federated learning? In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 16937–16947. Curran Associates, Inc., 2020.
[5] Andrew Hard, Chloé M Kiddon, Daniel Ramage, Francoise Beaufays, Hubert Eichner, Kanishka Rao, Rajiv Mathews, and Sean Augenstein. Federated learning for mobile keyboard prediction, 2018.
[6] Ali Hatamizadeh, Hongxu Yin, Holger R. Roth, Wenqi Li, Jan Kautz, Daguang Xu, and Pavlo Molchanov. GradViT: Gradient inversion of vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10021–10030, 2022.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
[8] Bargav Jayaraman and David Evans. Evaluating differentially private machine learning in practice. In USENIX Security Symposium, 2019.
[9] Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.
[10] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[11] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[12] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273–1282. PMLR, 2017.
[13] H Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas. Federated learning of deep networks using model averaging. arXiv preprint arXiv:1602.05629, 2:2, 2016.
[14] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019.
[15] Reza Shokri and Vitaly Shmatikov. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pages 1310–1321, 2015.
[16] Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M. Alvarez, Jan Kautz, and Pavlo Molchanov. See through gradients: Image batch recovery via GradInversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16337–16346, June 2021.
[17] Hongxu Yin, Pavlo Molchanov, Jose M. Alvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K. Jha, and Jan Kautz. Dreaming to distill: Data-free knowledge transfer via DeepInversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[18] Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. idlg: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610, 2020.
[19] Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-iid data. arXiv preprint arXiv:1806.00582, 2018.
[20] Junyi Zhu and Matthew B. Blaschko. R-GAP: Recursive gradient attack on privacy. In International Conference on Learning Representations, 2021.
[21] Ligeng Zhu, Zhijian Liu, and Song Han. Deep leakage from gradients. Advances in Neural Information Processing Systems, 32, 2019.
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91508 | -
dc.description.abstract | 聯邦學習作為一種新穎的機器學習方式,涉及由中央伺服器構建機器學習模型,遠程客戶端使用其私有數據來訓練該模型。普遍認為,聯邦學習具有保護客戶數據隱私的能力。然而,許多研究已成功利用梯度來還原客戶的個人隱私數據。因此,客戶已經開始採用各種方法來保護其隱私和數據。但是,受這些保護措施的影響,再加上惡意伺服器的數量不多,導致許多普通伺服器在聯邦學習框架內性能下降。為了解決上述挑戰,我們提出了一種創新的解決方案。我們的方法是將安全模塊加入到現有模型中,有效地防止客戶隱私通過梯度逆轉攻擊受到破壞,同時保持一定水平的性能。該方法使一般伺服器能夠向客戶展示其值得信賴的特性,消除了客戶採用各種手段來保護數據的需求,同時保持模型的性能。 | zh_TW
dc.description.abstract | Federated learning is a novel machine learning paradigm in which a central server constructs a model and remote clients train it on their private data. It is widely believed that federated learning protects clients' data privacy. However, numerous studies have successfully exploited shared gradients to reconstruct clients' private data. Clients have therefore resorted to diverse defensive measures to safeguard their data. Yet these protective measures, applied even though malicious servers are rare, degrade performance for the many honest servers in the federated learning framework. To address this challenge, we propose an innovative solution: integrating a secure module into the existing model, which prevents clients' privacy from being compromised by gradient inversion attacks while maintaining a reasonable level of performance. This method enables honest servers to demonstrate their trustworthiness to clients, eliminating the need for client-side defenses while preserving the model's performance. | en
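The abstract's claim that gradients leak private data can be made concrete with a toy example. The sketch below is purely illustrative and is not the thesis's method or code; the model, values, and function names are invented here. It demonstrates the core analytic leak (in the spirit of attacks such as iDLG [18] and R-GAP [20]): for a single-sample update of a linear layer y = Wx + b, dL/dW = (dL/dy) ⊗ x and dL/db = dL/dy, so any server that receives the raw gradients can recover the client's input x exactly by dividing one weight-gradient row by the corresponding bias gradient.

```python
def forward(W, b, x):
    # y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(Wij * xj for Wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def gradients(W, b, x, target):
    # Mean-squared-error loss L = 0.5 * sum_i (y_i - t_i)^2
    y = forward(W, b, x)
    delta = [yi - ti for yi, ti in zip(y, target)]  # dL/dy
    gW = [[di * xj for xj in x] for di in delta]    # dL/dW = delta ⊗ x
    gb = delta[:]                                   # dL/db = delta
    return gW, gb

def invert(gW, gb):
    # Pick any output unit with a nonzero bias gradient; then
    # x[j] = gW[i][j] / gb[i] recovers the input exactly.
    i = next(k for k, g in enumerate(gb) if abs(g) > 1e-12)
    return [gij / gb[i] for gij in gW[i]]

# "Client" data the server never sees directly:
x_private = [0.2, -1.5, 3.0]
W = [[0.1, 0.4, -0.3], [0.7, -0.2, 0.5]]
b = [0.05, -0.1]

# The client shares only its gradients with the server...
gW, gb = gradients(W, b, x_private, target=[1.0, 0.0])

# ...yet the server reconstructs the private input from them.
x_recovered = invert(gW, gb)
print(x_recovered)  # matches x_private up to floating-point error
```

Real attacks on deep networks ([4], [16], [21]) must instead optimize a candidate input to match the observed gradients, but this closed-form case shows why sharing raw gradients is not inherently privacy-preserving, which is the problem the thesis's secure module targets.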
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-01-28T16:18:54Z (No. of bitstreams: 0) | en
dc.description.provenance | Made available in DSpace on 2024-01-28T16:18:54Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents | Oral Examination Committee Approval i
Acknowledgements ii
Abstract (Chinese) iii
Abstract iv
Contents v
List of Figures vii
List of Tables viii
1 Introduction 1
1.1 Introduction 1
2 Related work 4
2.1 Related Work 4
2.1.1 Federated Learning 4
2.1.2 Gradient Inversion Attack 5
2.1.3 Differential Privacy 6
3 Methodology 7
3.1 Method 7
3.1.1 Preliminaries 7
3.1.2 Downsampling Convolutional Layer 8
3.1.3 Model Architecture 8
4 Experiments 11
4.1 Experiment 11
4.1.1 Setup 11
4.1.2 Experiment Result 12
4.1.3 Ablation study 18
5 Limitation 21
5.1 Limitation 21
6 Conclusion 23
6.1 Conclusion 23
Bibliography 24
dc.language.iso | en | -
dc.subject | 確保隱私 | zh_TW
dc.subject | 聯邦學習 | zh_TW
dc.subject | Privacy-Preserving | en
dc.subject | Federated Learning | en
dc.title | 針對梯度逆轉攻擊建構安全卷積類神經網路 | zh_TW
dc.title | Construct a Secure Convolutional Neural Network Against Gradient Inversion Attack | en
dc.type | Thesis | -
dc.date.schoolyear | 111-2 | -
dc.description.degree | 碩士 | -
dc.contributor.oralexamcommittee | 曾新穆;吳尚鴻;吳沛遠 | zh_TW
dc.contributor.oralexamcommittee | Shin-Mu Tseng;Shan-Hung Wu;Pei-Yuan Wu | en
dc.subject.keyword | 聯邦學習,確保隱私 | zh_TW
dc.subject.keyword | Federated Learning,Privacy-Preserving | en
dc.relation.page | 27 | -
dc.identifier.doi | 10.6342/NTU202303369 | -
dc.rights.note | 未授權 | -
dc.date.accepted | 2023-08-10 | -
dc.contributor.author-college | 電機資訊學院 | -
dc.contributor.author-dept | 電機工程學系 | -
Appears in Collections: Department of Electrical Engineering (電機工程學系)

Files in This Item:

File | Size | Format
ntu-111-2.pdf (restricted; not publicly accessible) | 1.75 MB | Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
