Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88572

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 雷欽隆 | zh_TW |
| dc.contributor.advisor | Chin-Laung Lei | en |
| dc.contributor.author | 劉品枘 | zh_TW |
| dc.contributor.author | Pin-Ruei Liu | en |
| dc.date.accessioned | 2023-08-15T16:53:26Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-08-15 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-08-02 | - |
| dc.identifier.citation | [1] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov. How to backdoor federated learning, 2019.
[2] Y. Cui, Z. Liu, and S. Lian. A survey on unsupervised visual industrial anomaly detection algorithms, 2022.
[3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019.
[4] T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg. BadNets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7:47230–47244, 2019.
[5] B. Hou, J. Gao, X. Guo, T. Baker, Y. Zhang, Y. Wen, and Z. Liu. Mitigating the backdoor attack by federated filters for industrial IoT applications. IEEE Transactions on Industrial Informatics, 18(5):3562–3571, 2022.
[6] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3):50–60, 2020.
[7] F. T. Liu, K. M. Ting, and Z.-H. Zhou. Isolation-based anomaly detection. ACM Transactions on Knowledge Discovery from Data, 6(1), March 2012.
[8] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data, 2023.
[9] N. Mejri, L. Lopez-Fuentes, K. Roy, P. Chernakov, E. Ghorbel, and D. Aouada. Unsupervised anomaly detection in time-series: An extensive evaluation and analysis of state-of-the-art methods, 2023.
[10] K. O’Shea and R. Nash. An introduction to convolutional neural networks, 2015.
[11] J. Shlens. A tutorial on principal component analysis, 2014.
[12] Z. Sun, P. Kairouz, A. T. Suresh, and H. B. McMahan. Can you really backdoor federated learning?, 2019.
[13] H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J.-y. Sohn, K. Lee, and D. Papailiopoulos. Attack of the tails: Yes, you really can backdoor federated learning, 2020.
[14] Y. Wang, D. Zhai, Y. Zhan, and Y. Xia. RFLBAT: A robust federated learning algorithm against backdoor attack, 2022.
[15] Wikipedia. BERT — Wikipedia, the free encyclopedia. http://zh.wikipedia.org/w/index.php?title=BERT&oldid=75585154, 2023. [Online; accessed 11-June-2023].
[16] Wikipedia. Isolation forest — Wikipedia, the free encyclopedia. http://en.wikipedia.org/w/index.php?title=Isolation%20forest&oldid=1154885602, 2023. [Online; accessed 07-June-2023].
[17] Wikipedia. Shapley value — Wikipedia, the free encyclopedia. http://en.wikipedia.org/w/index.php?title=Shapley%20value&oldid=1153942994, 2023. [Online; accessed 07-June-2023].
[18] C. Wu, X. Yang, S. Zhu, and P. Mitra. Toward cleansing backdoored neural networks in federated learning. In 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), pages 820–830, 2022.
[19] B. Xi, S. Li, J. Li, H. Liu, H. Liu, and H. Zhu. BatFL: Backdoor detection on federated learning in e-health. In 2021 IEEE/ACM 29th International Symposium on Quality of Service (IWQoS), pages 1–10, 2021.
[20] T. Zoppi, A. Ceccarelli, and A. Bondavalli. Into the unknown: Unsupervised machine learning algorithms for anomaly-based intrusion detection. In 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S), pages 81–81, 2020. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88572 | - |
| dc.description.abstract | 本論文提出對於聯邦式學習可能潛在的後門攻擊進行過濾與防禦的方法,此方法基於非監督式學習中的分群以及異常檢測,嘗試分辨一般參與訓練的正常客戶端與試圖埋藏後門的惡意客戶端。
在我們的實驗中使用了四種不同的資料集進行聯邦式學習訓練,其中包含了兩個圖像資料集與一個文本資料集。模型的部分圖像辨識的模型我們採用的是較為輕量、疊層較少的卷積神經網絡模型,而文本分類則採用現今較多人使用的「基於變換器的雙向編碼器表示技術」模型(依然選擇較小的預訓練模型),並在訓練過程中埋入後門、再嘗試用我們的方法將有埋入後門的客戶端進行排除。只要客戶端上傳的模型同時被分群分類在較小的群組且異常檢測偵測為異常,就會被排除在這次的模型聚合之外;然而也可以根據客戶端的數量來調整是否要擴增、複製客戶端所上傳的模型之後再進行分類。 我們對這些模型與資料做了許多不同的實驗,包含調整參與訓練的客戶端、惡意攻擊者的比例、資料集埋入後門的比例等,盡量讓我們的方法能夠在多種情況下依然能夠有效地排除攻擊者,並保留相當程度的模型表現;而使用較輕量的模型是為了減少收斂的時間、讓我們的方法能更快的展示出成果。 | zh_TW |
| dc.description.abstract | This thesis proposes a method for filtering out and defending against potential backdoor attacks in federated learning. The method is based on unsupervised learning techniques, namely clustering and anomaly detection, and aims to distinguish normal clients participating in training from malicious clients attempting to embed backdoors.
In our experiments, three different datasets were used for federated learning training, including two image datasets and one text dataset. For image recognition we used a lightweight convolutional neural network with fewer layers, while for text classification we used a Bidirectional Encoder Representations from Transformers (BERT) model, again choosing a smaller pre-trained variant. Backdoors were embedded in some of the client models during training, and the proposed method was then used to exclude those models during aggregation: if a client's uploaded model both falls into the smaller cluster and is flagged as an anomaly by the anomaly detector, it is excluded from model aggregation. Depending on the number of clients, the uploaded models can also be augmented or duplicated before clustering. Various experiments were conducted to evaluate the method under different conditions, such as varying the number of participating clients, the proportion of malicious attackers, and the ratio of backdoor samples embedded in the dataset, so that the method remains effective at excluding attackers across a wide range of settings while preserving a reasonable level of model performance. Lightweight models were chosen to reduce convergence time and demonstrate results more quickly. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-15T16:53:26Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-08-15T16:53:26Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgements iii
中文摘要 v
Abstract vii
Contents ix
List of Figures xi
List of Tables xiii
Chapter 1 Introduction 1
1.1 Overview 1
1.2 Thesis Organization 3
Chapter 2 Related Works 5
2.1 Defense against FL/DL backdoor attack 5
2.2 BatFL 5
2.3 Filter-Based Defense 6
2.4 RFLBAT 7
Chapter 3 Background 9
3.1 Backdoor attack 9
3.2 FL and Backdoor attack against FL 11
3.3 Anomaly detection 12
Chapter 4 Methodology 15
4.1 Overview 15
4.2 Method design 16
4.3 Step by step 18
Chapter 5 Experiments 21
5.1 Overview 21
5.2 Models and Datasets 21
5.3 Experiments Setup 23
5.4 Performance Evaluation 26
5.5 Experiments results 35
Chapter 6 Conclusion and Future works 37
6.1 Conclusion 37
6.2 Future Works 38
References 41 | - |
| dc.language.iso | en | - |
| dc.subject | 文本分類 | zh_TW |
| dc.subject | 影像分類 | zh_TW |
| dc.subject | 非監督式學習 | zh_TW |
| dc.subject | 聯邦式學習 | zh_TW |
| dc.subject | 後門攻擊 | zh_TW |
| dc.subject | 異常檢測 | zh_TW |
| dc.subject | Backdoor Attack | en |
| dc.subject | Anomaly Detection | en |
| dc.subject | Image classification | en |
| dc.subject | Unsupervised Learning | en |
| dc.subject | Federated Learning | en |
| dc.subject | Text Classification | en |
| dc.title | 通過分群和異常檢測在聯邦式學習中檢測並移除後門攻擊者 | zh_TW |
| dc.title | Detect and remove backdoor attacker in federated learning with clustering and anomaly detection | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 郭斯彥;王銘宏 | zh_TW |
| dc.contributor.oralexamcommittee | Sy-Yen Kuo;Ming-Hung Wang | en |
| dc.subject.keyword | 聯邦式學習,後門攻擊,非監督式學習,異常檢測,影像分類,文本分類 | zh_TW |
| dc.subject.keyword | Federated Learning,Backdoor Attack,Unsupervised Learning,Anomaly Detection,Image classification,Text Classification | en |
| dc.relation.page | 43 | - |
| dc.identifier.doi | 10.6342/NTU202302433 | - |
| dc.rights.note | 同意授權(限校園內公開) (authorized for release; campus-only access) | - |
| dc.date.accepted | 2023-08-04 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電機工程學系 | - |
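The exclusion rule described in the English abstract (a client is dropped from aggregation only when its uploaded model both falls into the smaller cluster and is flagged by anomaly detection) can be sketched as follows. This is a minimal illustration using scikit-learn's KMeans and IsolationForest, not the thesis's exact pipeline; the function names, the choice of two clusters, and the `contamination` value are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

def filter_clients(updates, contamination=0.2):
    """Return indices of clients kept for aggregation.

    A client is excluded only when its flattened model update BOTH lands
    in the smaller of two K-means clusters AND is flagged as an anomaly
    by Isolation Forest. `contamination` (the assumed fraction of
    anomalous clients) is an illustrative knob, not a value from the thesis.
    """
    updates = np.asarray(updates)
    # Cluster the flattened updates into two groups; the minority group
    # is the backdoor-suspect one.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(updates)
    minority = int(np.argmin(np.bincount(labels, minlength=2)))
    # Isolation Forest marks anomalies with -1.
    anomalous = IsolationForest(
        contamination=contamination, random_state=0
    ).fit_predict(updates) == -1
    exclude = (labels == minority) & anomalous
    return np.flatnonzero(~exclude)

def aggregate(updates):
    """Plain federated averaging over the surviving clients only."""
    updates = np.asarray(updates)
    kept = filter_clients(updates)
    return updates[kept].mean(axis=0), kept
```

With, say, eight benign updates near the origin and two poisoned updates far away, the two poisoned clients land in the minority cluster, are flagged anomalous, and are dropped before averaging. The abstract's optional step of augmenting or duplicating uploaded models when few clients participate is omitted here.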
Appears in Collections: 電機工程學系 (Department of Electrical Engineering)
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (access restricted to NTU campus IPs; use the VPN service for off-campus access) | 3.94 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
