Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/81728
Full metadata record
dc.contributor.advisor: 李育杰 (Yuh-Jye Lee)
dc.contributor.author: Jhao-Gu Tai [en]
dc.contributor.author: 戴肇谷 [zh_TW]
dc.date.accessioned: 2022-11-24T09:26:22Z
dc.date.available: 2022-11-24T09:26:22Z
dc.date.copyright: 2022-02-16
dc.date.issued: 2021
dc.date.submitted: 2022-02-08
dc.identifier.citation:
[1] Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318, 2016.
[2] Thierry Bertin-Mahieux, Daniel P.W. Ellis, Brian Whitman, and Paul Lamere. The million song dataset. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011), 2011.
[3] Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. Analyzing federated learning through an adversarial lens. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 634–643. PMLR, 09–15 Jun 2019.
[4] Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
[5] Stephen Boyd, Neal Parikh, and Eric Chu. Distributed optimization and statistical learning via the alternating direction method of multipliers. Now Publishers Inc, 2011.
[6] Ivan Damgård, Valerio Pastro, Nigel Smart, and Sarah Zakarias. Multiparty computation from somewhat homomorphic encryption. In Reihaneh Safavi-Naini and Ran Canetti, editors, Advances in Cryptology – CRYPTO 2012, pages 643–662, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
[7] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
[8] Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS ’14, pages 1054–1067, New York, NY, USA, 2014. Association for Computing Machinery.
[9] Jordi Fonollosa, Sadique Sheik, Ramón Huerta, and Santiago Marco. Reservoir computing compensates slow response of chemosensor arrays exposed to fast varying gas concentrations in continuous monitoring. Sensors and Actuators B: Chemical, 215:618–629, 2015.
[10] Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS ’15, pages 1322–1333, New York, NY, USA, 2015. Association for Computing Machinery.
[11] Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, David Page, and Thomas Ristenpart. Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing. In Proceedings of the 23rd USENIX Conference on Security Symposium, SEC’14, pages 17–32, USA, 2014. USENIX Association.
[12] Anbu Huang. Dynamic backdoor attacks against federated learning. arXiv preprint arXiv:2011.07429, 2020.
[13] Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977, 2019.
[14] Leslie Lamport, Robert Shostak, and Marshall Pease. The Byzantine generals problem. ACM Trans. Program. Lang. Syst., 4(3):382–401, July 1982.
[15] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR, 2017.
[16] Shuang Song, Kamalika Chaudhuri, and Anand D. Sarwate. Stochastic gradient descent with differentially private updates. In 2013 IEEE Global Conference on Signal and Information Processing, pages 245–248, 2013.
[17] Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H. Yang, Farhad Farokhi, Shi Jin, Tony Q. S. Quek, and H. Vincent Poor. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, 15:3454–3469, 2020.
[18] Ruiliang Zhang and James Kwok. Asynchronous distributed ADMM for consensus optimization. In International Conference on Machine Learning, pages 1701–1709, 2014.
[19] Wenting Zheng, Raluca Ada Popa, Joseph E Gonzalez, and Ion Stoica. Helen: Maliciously secure coopetitive learning for linear models. In 2019 IEEE Symposium on Security and Privacy (SP), pages 724–738. IEEE, 2019.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/81728
dc.description.abstract: With the growth of machine learning, more and more organizations and companies are trying to use machine learning to make predictions or decisions. Although machine learning is now widely used, insufficient data remains a major problem for many organizations. To overcome this data scarcity, federated learning plays an important role. Our research focuses on the computational efficiency and network security of federated learning. We apply ADMM to solve the distributed optimization problem in federated learning. In conventional synchronous ADMM, the central server must wait until every local worker has submitted its weight update; because workers differ in computational resources, stragglers can cause synchronization delays. To reduce the delay caused by stragglers, we propose an asynchronous ADMM, with which the central server can update the global weights without waiting for stragglers and thus speed up training. We consider a federated (distributed consensus) LASSO estimation model under the assumption that the local workers and the central server do not trust each other: a curious server may try to infer information about the local data from the workers' updates, while a malicious local worker may disrupt the learning process of the whole distributed system by sending malicious weight updates. We propose an advanced federated LASSO algorithm in which each local worker adds random noise to its data during training, thereby protecting the privacy of its local data. Our algorithm can also detect malicious workers and block them from the learning system, protecting the training process against adversarial attacks such as model poisoning and data poisoning attacks. (A minimal sketch of the consensus ADMM update described here is given below, after the metadata record.)
dc.description.provenance: Made available in DSpace on 2022-11-24T09:26:22Z (GMT). No. of bitstreams: 1
U0001-0312202123561400.pdf: 4489335 bytes, checksum: 3e5f576c3cb5122f58b4705466aaa54b (MD5)
Previous issue date: 2021 [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee  i
誌謝  iii
摘要  v
Abstract  vii
Contents  ix
List of Figures  xiii
List of Tables  xv
Chapter 1  Introduction  1
  1.1 Federated Learning  1
  1.2 Zero Trust  2
  1.3 Contribution  3
Chapter 2  Background  5
  2.1 LASSO Estimation  5
  2.2 ADMM (Alternating Direction Method of Multipliers)  6
    2.2.1 Problem Formulation  7
    2.2.2 Update Scheme  8
    2.2.3 Stopping Criteria  10
Chapter 3  Distributed Consensus (Federated) LASSO Algorithm  11
  3.1 Model Setting  11
  3.2 Synchronous ADMM  12
  3.3 Asynchronous ADMM  17
Chapter 4  Advanced Distributed Consensus (Federated) LASSO Algorithm  21
  4.1 Workers' Privacy  22
    4.1.1 Additive Random Noise Mechanism  24
  4.2 Adversarial Attack  24
    4.2.1 Data Poisoning Attack  25
    4.2.2 Model Poisoning Attack  26
  4.3 Model Robustness  28
    4.3.1 Anomaly Ranking  29
    4.3.2 Interactive Proof System  32
Chapter 5  Experiment  41
  5.1 Experiment Setup  41
    5.1.1 Basic Setting  43
    5.1.2 Adversarial Setting  43
  5.2 Million Song Dataset  44
  5.3 Gas Sensor Array Dataset  46
  5.4 Experiment Results  47
    5.4.1 Asynchronous ADMM  47
    5.4.2 Additive Random Noise Mechanism  48
    5.4.3 Model Poisoning Attack  48
    5.4.4 Data Poisoning Attack  49
Chapter 6  Conclusion  51
References  53
Appendix A — Proof  57
  A.1 Proof of Propositions  57
  A.2 Proof of Lemmata  59
Appendix B — Figures  63
  B.1 Million song dataset  63
  B.2 Gas sensor dataset  66
dc.language.iso: en
dc.subject: 對抗式攻擊 [zh_TW]
dc.subject: 聯邦式學習 [zh_TW]
dc.subject: 分散式共識 [zh_TW]
dc.subject: 最小絕對值收斂和選擇算子 [zh_TW]
dc.subject: 隱私保護 [zh_TW]
dc.subject: 非同步 [zh_TW]
dc.subject: 交替方向乘子法 [zh_TW]
dc.subject: Distributed Consensus [en]
dc.subject: Adversarial Attack [en]
dc.subject: ADMM [en]
dc.subject: Asynchronous [en]
dc.subject: Privacy Preserving [en]
dc.subject: LASSO [en]
dc.subject: Federated Learning [en]
dc.title: 分散式共識最小絕對值收斂和選擇算子 [zh_TW]
dc.title: Distributed Consensus LASSO [en]
dc.date.schoolyear: 110-1
dc.description.degree: 碩士 (Master's)
dc.contributor.coadvisor: 李宏毅 (Hung-Yi Lee)
dc.contributor.oralexamcommittee: 吳金典 (Hsin-Tsai Liu), 蔡志強 (Chih-Yang Tseng)
dc.subject.keyword: 聯邦式學習, 分散式共識, 最小絕對值收斂和選擇算子, 隱私保護, 非同步, 交替方向乘子法, 對抗式攻擊 [zh_TW]
dc.subject.keyword: Federated Learning, Distributed Consensus, LASSO, Privacy Preserving, Asynchronous, ADMM, Adversarial Attack [en]
dc.relation.page: 68
dc.identifier.doi: 10.6342/NTU202104514
dc.rights.note: 未授權 (not authorized)
dc.date.accepted: 2022-02-09
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 資料科學學位學程 (Data Science Degree Program) [zh_TW]
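The abstract describes solving the federated (distributed consensus) LASSO problem with ADMM: each worker fits a local estimate on its own private data block, and the central server aggregates those estimates into a global sparse model. The Python sketch below illustrates only the standard synchronous consensus update under a squared-loss formulation; it is not code from the thesis, and the names consensus_lasso_admm, soft_threshold, lam, and rho are illustrative assumptions.

    import numpy as np

    def soft_threshold(v, kappa):
        # Elementwise soft-thresholding: the proximal operator of the l1 norm.
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    def consensus_lasso_admm(A_list, b_list, lam=0.1, rho=1.0, iters=100):
        # Synchronous consensus (federated) LASSO via ADMM.
        # Worker i holds its private block (A_i, b_i); the server only ever
        # sees the local estimates x_i and maintains the global model z.
        n = A_list[0].shape[1]
        N = len(A_list)
        z = np.zeros(n)
        x = [np.zeros(n) for _ in range(N)]
        u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
        # Each worker factorizes (A_i^T A_i + rho*I) once and reuses it.
        chol = [np.linalg.cholesky(A.T @ A + rho * np.eye(n)) for A in A_list]
        for _ in range(iters):
            # Local step: x_i = argmin 0.5*||A_i x - b_i||^2 + (rho/2)*||x - z + u_i||^2
            for i, (A, b) in enumerate(zip(A_list, b_list)):
                rhs = A.T @ b + rho * (z - u[i])
                x[i] = np.linalg.solve(chol[i].T, np.linalg.solve(chol[i], rhs))
            # Server step: average the local estimates, then soft-threshold.
            x_bar = np.mean(x, axis=0)
            u_bar = np.mean(u, axis=0)
            z = soft_threshold(x_bar + u_bar, lam / (rho * N))
            # Dual update keeps the local estimates consistent with z.
            for i in range(N):
                u[i] = u[i] + x[i] - z
        return z

In the asynchronous variant the abstract proposes, the server would perform its averaging and soft-thresholding step as soon as some subset of workers has reported, rather than waiting for all N; the privacy mechanism would additionally add random noise on the worker side before the local data is used.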
Appears in Collections: 資料科學學位學程

Files in This Item:
File: U0001-0312202123561400.pdf (Restricted Access)
Size: 4.38 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
