Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88572
Full metadata record
DC Field / Value / Language
dc.contributor.advisor雷欽隆zh_TW
dc.contributor.advisorChin-Laung Leien
dc.contributor.author劉品枘zh_TW
dc.contributor.authorPin-Ruei Liuen
dc.date.accessioned2023-08-15T16:53:26Z-
dc.date.available2023-11-09-
dc.date.copyright2023-08-15-
dc.date.issued2023-
dc.date.submitted2023-08-02-
dc.identifier.urihttp://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88572-
dc.description.abstractThis thesis proposes a method for filtering out and defending against potential backdoor attacks in federated learning. The method is based on clustering and anomaly detection from unsupervised learning, and attempts to distinguish the normal clients participating in training from malicious clients trying to plant backdoors.

Our experiments run federated learning training on three different datasets, including two image datasets and one text dataset. For image recognition we adopt a lightweight convolutional neural network with relatively few layers, while for text classification we use the widely adopted Bidirectional Encoder Representations from Transformers (BERT) model (again a smaller pre-trained variant). Backdoors are planted during training, and our method then attempts to exclude the clients that planted them. A client's uploaded model is excluded from the current round of model aggregation only when it is both assigned to the smaller cluster and flagged as anomalous by anomaly detection; depending on the number of clients, the uploaded models can optionally be augmented or duplicated before classification.

We ran many experiments on these models and datasets, varying factors such as the number of participating clients, the proportion of malicious attackers, and the proportion of backdoored data, so that our method remains effective at excluding attackers under a wide range of conditions while preserving a reasonable level of model performance. Lightweight models were chosen to shorten convergence time and let our method demonstrate its results more quickly.
zh_TW
dc.description.abstractThis thesis proposes a method for filtering and defending against potential backdoor attacks in federated learning. The method is based on unsupervised learning techniques, namely clustering and anomaly detection, and aims to distinguish normal clients participating in the training process from malicious clients attempting to embed backdoors.

In the experiments, three different datasets were used for federated learning training, including two image datasets and one text dataset. For image recognition, a lightweight convolutional neural network with fewer layers was used, while for text classification a Bidirectional Encoder Representations from Transformers (BERT) model was used. Backdoors were embedded in some of the client models during training, and the proposed method was used to exclude these models during model aggregation. If a client's uploaded model is both classified into the smaller cluster and flagged as an anomaly, it is excluded from model aggregation; depending on the number of clients, the uploaded models can also be augmented or duplicated before clustering.

Various experiments were conducted to evaluate the effectiveness of the proposed method under different conditions, such as varying the number of participating clients, the proportion of malicious attackers, and the proportion of backdoored data. Lightweight models were used to reduce convergence time and demonstrate the results of the proposed method more quickly while preserving a reasonable level of model performance.
en
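The exclusion rule described in the abstract (a client is dropped only when its uploaded update lands in the smaller cluster AND is flagged as anomalous) can be sketched with off-the-shelf tools. This is an illustrative sketch, not the thesis's implementation: the choice of KMeans and IsolationForest, the `contamination` value, and the `filter_clients` function name are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

def filter_clients(updates, contamination=0.2, random_state=0):
    """Return indices of client updates kept for aggregation.

    A client is excluded only if its update is BOTH assigned to the
    smaller of two clusters AND flagged anomalous (the combined rule
    from the abstract; model choices here are assumptions).
    """
    X = np.asarray(updates, dtype=float)  # one flattened model update per row
    labels = KMeans(n_clusters=2, n_init=10,
                    random_state=random_state).fit_predict(X)
    minority = np.argmin(np.bincount(labels))   # label of the smaller cluster
    flags = IsolationForest(contamination=contamination,
                            random_state=random_state).fit_predict(X)
    return [i for i in range(len(X))
            if not (labels[i] == minority and flags[i] == -1)]

# Toy round: 8 benign updates near 0, 2 far-off "backdoored" updates.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(8, 5))
malicious = rng.normal(5.0, 0.1, size=(2, 5))
kept = filter_clients(np.vstack([benign, malicious]))
print(kept)
```

Requiring both signals to agree, rather than either alone, reflects the abstract's stated design: it lowers the chance of discarding a benign client whose data distribution merely differs from the majority.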
dc.description.provenanceSubmitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-15T16:53:26Z
No. of bitstreams: 0
en
dc.description.provenanceMade available in DSpace on 2023-08-15T16:53:26Z (GMT). No. of bitstreams: 0en
dc.description.tableofcontentsVerification Letter from the Oral Examination Committee i
Acknowledgements iii
中文摘要 v
Abstract vii
Contents ix
List of Figures xi
List of Tables xiii
Chapter 1 Introduction 1
1.1 Overview 1
1.2 Thesis Organization 3
Chapter 2 Related Works 5
2.1 Defense against FL/DL Backdoor Attacks 5
2.2 BatFL 5
2.3 Filter-Based Defense 6
2.4 RFLBAT 7
Chapter 3 Background 9
3.1 Backdoor Attacks 9
3.2 FL and Backdoor Attacks against FL 11
3.3 Anomaly Detection 12
Chapter 4 Methodology 15
4.1 Overview 15
4.2 Method Design 16
4.3 Step by Step 18
Chapter 5 Experiments 21
5.1 Overview 21
5.2 Models and Datasets 21
5.3 Experiment Setup 23
5.4 Performance Evaluation 26
5.5 Experiment Results 35
Chapter 6 Conclusion and Future Works 37
6.1 Conclusion 37
6.2 Future Works 38
References 41
-
dc.language.isoen-
dc.subject文本分類zh_TW
dc.subject影像分類zh_TW
dc.subject非監督式學習zh_TW
dc.subject聯邦式學習zh_TW
dc.subject後門攻擊zh_TW
dc.subject異常檢測zh_TW
dc.subjectBackdoor Attacken
dc.subjectAnomaly Detectionen
dc.subjectImage classificationen
dc.subjectUnsupervised Learningen
dc.subjectFederated Learningen
dc.subjectText Classificationen
dc.title通過分群和異常檢測在聯邦式學習中檢測並移除後門攻擊者zh_TW
dc.titleDetect and remove backdoor attacker in federated learning with clustering and anomaly detectionen
dc.typeThesis-
dc.date.schoolyear111-2-
dc.description.degree碩士-
dc.contributor.oralexamcommittee郭斯彥;王銘宏zh_TW
dc.contributor.oralexamcommitteeSy-Yen Kuo;Ming-Hung Wangen
dc.subject.keyword聯邦式學習,後門攻擊,非監督式學習,異常檢測,影像分類,文本分類,zh_TW
dc.subject.keywordFederated Learning,Backdoor Attack,Unsupervised Learning,Anomaly Detection,Image Classification,Text Classification,en
dc.relation.page43-
dc.identifier.doi10.6342/NTU202302433-
dc.rights.noteAuthorized for release (access limited to campus)-
dc.date.accepted2023-08-04-
dc.contributor.author-college電機資訊學院-
dc.contributor.author-dept電機工程學系-
Appears in Collections: Department of Electrical Engineering

Files in This Item:
ntu-111-2.pdf, 3.94 MB, Adobe PDF
Access limited to NTU campus IPs (use the library VPN service when off campus)