Please use this Handle URI to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88015

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 蔡政安 | zh_TW |
| dc.contributor.advisor | Chen-An Tsai | en |
| dc.contributor.author | 劉家銘 | zh_TW |
| dc.contributor.author | Chia-Ming Liu | en |
| dc.date.accessioned | 2023-08-01T16:24:45Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-08-01 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-07-06 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88015 | - |
| dc.description.abstract | 本研究旨在解決偵測網絡入侵的挑戰性任務,以應對日益複雜和精密的網絡攻擊。儘管傳統機器學習方法已廣泛應用於網絡攻擊檢測並取得了可觀的成果,但對於網絡流量數據的內在拓撲結構卻鮮有研究。因此,我們提出了一個基於圖神經網絡(GNN)的新型集成學習框架Ensemble-GNN以及一種具有創新性的模型架構E-SageGAT,該模型在圖的邊層級應用了自注意力機制,使得模型更能夠彈性地聚合鄰居訊息。這些創新性的貢獻有助於挖掘網絡結構信息和流量相關性,從而提高檢測惡意活動的準確性。在應用於三個基準數據集時,我們的方法在加權F1分數上比現有方法(E-GraphSAGE / E-ResGAT / XGBoost)提高了1-2%,而集成學習框架則進一步提高了2-3%的性能。另外透過保留原始拓撲結構並將網絡數據包流量表格數據轉換為多向圖,我們提出的方法準確地揭示了潛在的網絡流量結構,為網絡入侵檢測領域的重大進步奠定了基礎。
關鍵詞:圖神經網絡、集成學習、網絡入侵檢測、網絡安全 | zh_TW |
| dc.description.abstract | This work addresses the challenging task of building robust Network Intrusion Detection Systems (NIDS) capable of tackling increasingly sophisticated and complex cyberattacks. Although conventional machine learning methods have been widely applied to cyberattack detection and show promising results, little attention has been given to the inherent topological structure of network flow data. Therefore, we present a novel ensemble framework for Graph Neural Networks (GNN) and a new model architecture, E-SageGAT, which applies a self-attention mechanism at the edge level of the graph, allowing the model to aggregate neighbor information more flexibly. These contributions exploit the structural information of networks and the correlations among traffic flows, thereby enhancing the accuracy of detecting malicious activities. When applied to three benchmark datasets, our approach outperforms existing methods (E-GraphSAGE/E-ResGAT/XGBoost) by 1-2% in weighted F1-score, while the ensemble framework further improves performance by 2-3%. By retaining the original topological structure and transforming network packet flow tabular data into multi-directed graphs, our proposed methods accurately reveal the latent network flow structures and set the stage for significant progress in the field of network intrusion detection.
Keywords: Graph Neural Networks, Ensemble learning, Network Intrusion Detection, Cyber Security | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-01T16:24:45Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-08-01T16:24:45Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 誌謝 I
Table of Contents II
List of Figures IV
List of Tables V
中文摘要 VI
Abstract VII
Chapter 1 Introduction 1
1.1 Background 1
1.2 Main Contribution 2
1.3 Thesis Structure 3
Chapter 2 Literature Review 5
2.1 Research on Graph Neural Networks 5
2.2 Research on Graph Ensemble Learning 9
2.3 Graph-based Network Intrusion Detection Research 11
Chapter 3 Methodology 16
3.1 Graph Construction 16
3.1.1 Tabular Data to Multi-Directed Graph 16
3.1.2 Reversing Multi-Directed Graph Structure 17
3.1.3 Mini-Batch Stratified Graph Construction 19
3.2 The E-SageGAT Algorithm 21
3.3 The Ensemble-GNN Framework 24
Chapter 4 Experiments 28
4.1 Benchmark Datasets 28
4.1.1 The Citation Network Dataset 28
4.1.2 The Network Intrusion Detection Dataset 30
4.2 Evaluation Metric 32
4.3 Ensemble-GNN Experiment Result 34
4.3.1 Node Classification Accuracy 34
4.3.2 Tolerance to feature noise 37
4.3.3 Effect of different levels of imbalance in the training data 38
4.4 Network Intrusion Detection Experiment Result 41
4.4.1 Graph Processing and Execution Setup 42
4.4.2 Hyperparameters for E-SageGAT and Other Models 43
4.4.3 Multi-Label Classification Results 44
4.4.4 Per Class Results on the BoT related dataset 49
4.4.5 Per Class Results on the ToN related dataset 52
Chapter 5 Conclusion 56
Reference 58
List of Figures
Figure 1. Graph construction method for tabular data 17
Figure 2. Reversing graph construction method 19
Figure 3. A given graph (left) and the corresponding E-Sage architecture with 1- 22
Figure 4. Framework of Ensemble-GNN 25
Figure 5. Classification accuracy for the selected dataset with 50% of features removed. 37
Figure 6. The classification accuracy of Ensemble-GNN, Average, SGC, and SAGE on imbalanced datasets. 40
List of Tables
Table 1. Summary statistics of citation network dataset 28
Table 2. Summary statistics of network intrusion detection dataset 30
Table 3. Per class ratio on each selected dataset 30
Table 4. Summary of results in terms of classification accuracy (in percentage). 35
Table 5. Weighted and Macro F1 measure results on the four selected datasets. 44
Table 6. F1-score results per class on the BoT related datasets. 49
Table 7. Per-class description of the BoT related dataset 50
Table 8. F1-score results per class on the ToN related datasets 52
Table 9. Per-class description of the ToN related dataset 53 | - |
| dc.language.iso | en | - |
| dc.subject | 圖神經網路 | zh_TW |
| dc.subject | 網路安全 | zh_TW |
| dc.subject | 網路入侵檢測 | zh_TW |
| dc.subject | 集成學習 | zh_TW |
| dc.subject | Network Intrusion Detection | en |
| dc.subject | Graph Neural Networks | en |
| dc.subject | Ensemble learning | en |
| dc.subject | Cyber Security | en |
| dc.title | 基於圖神經網路之新型網路入侵檢測: E-SageGAT架構與集成框架 | zh_TW |
| dc.title | A novel GNN-based Network Intrusion Detection using Ensemble Framework and E-SageGAT Architecture | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 陳錦華;薛慧敏 | zh_TW |
| dc.contributor.oralexamcommittee | Jin-Hua Chen;Huey-Miin Hsueh | en |
| dc.subject.keyword | 圖神經網路,集成學習,網路入侵檢測,網路安全 | zh_TW |
| dc.subject.keyword | Graph Neural Networks, Ensemble learning, Network Intrusion Detection, Cyber Security | en |
| dc.relation.page | 68 | - |
| dc.identifier.doi | 10.6342/NTU202301354 | - |
| dc.rights.note | 同意授權(限校園內公開) | - |
| dc.date.accepted | 2023-07-10 | - |
| dc.contributor.author-college | 共同教育中心 | - |
| dc.contributor.author-dept | 統計碩士學位學程 | - |
| dc.date.embargo-lift | 2028-07-05 | - |
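
The abstract above describes converting network packet-flow tabular data into a multi-directed graph in which endpoints become nodes and each flow record becomes a directed, feature-carrying edge. The snippet below is a minimal sketch of that idea, not the thesis's actual pipeline: it assumes a NetFlow-style table with `IPV4_SRC_ADDR`, `IPV4_DST_ADDR`, and `Label` columns (all other columns already numeric) and uses pandas with PyTorch Geometric purely for illustration.

```python
# Hypothetical sketch: build a multi-directed graph from a NetFlow-style table.
# Column names and the choice of PyTorch Geometric are assumptions, not taken
# from the thesis itself.
import numpy as np
import pandas as pd
import torch
from torch_geometric.data import Data

def flows_to_graph(df: pd.DataFrame) -> Data:
    # Every distinct endpoint (source or destination address) becomes a node.
    endpoints = pd.unique(df[["IPV4_SRC_ADDR", "IPV4_DST_ADDR"]].values.ravel())
    node_id = {addr: i for i, addr in enumerate(endpoints)}

    # Each flow record becomes one directed edge; repeated (src, dst) pairs are
    # kept as parallel edges, which is what makes the graph multi-directed.
    src = df["IPV4_SRC_ADDR"].map(node_id).to_numpy()
    dst = df["IPV4_DST_ADDR"].map(node_id).to_numpy()
    edge_index = torch.from_numpy(np.stack([src, dst])).long()

    # Remaining columns serve as edge features; the attack label is kept as the
    # edge-level target for intrusion detection.
    feature_cols = [c for c in df.columns
                    if c not in ("IPV4_SRC_ADDR", "IPV4_DST_ADDR", "Label")]
    edge_attr = torch.tensor(df[feature_cols].to_numpy(), dtype=torch.float)
    edge_label = torch.tensor(df["Label"].to_numpy(), dtype=torch.long)

    # Endpoints carry no intrinsic features here, so a constant vector is used.
    x = torch.ones((len(endpoints), 1), dtype=torch.float)
    return Data(x=x, edge_index=edge_index, edge_attr=edge_attr, y=edge_label)
```

An edge-level classifier (for example an E-GraphSAGE-style model) would then be trained on `edge_attr` and `y`, which matches the abstract's framing of detection as classifying flows (edges) rather than hosts (nodes).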
Appears in Collections: 統計碩士學位學程
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (Restricted Access) | 1.69 MB | Adobe PDF |
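
The abstract also reports weighted and macro F1 gains from an ensemble of GNN models. The sketch below shows a generic soft-voting combination of pre-trained member models and the two F1 measures; the member call signature `model(x, edge_index, edge_attr)` is an assumption, and this is an illustration of the general technique, not the thesis's Ensemble-GNN framework.

```python
# Generic soft-voting over GNN members plus weighted/macro F1 scoring.
# The member signature model(x, edge_index, edge_attr) is an assumption.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.metrics import f1_score

@torch.no_grad()
def ensemble_predict(models, data):
    # Average class probabilities of the member models (soft voting), then
    # take the most probable class per edge. Members are put in eval mode.
    probs = []
    for m in models:
        m.eval()
        logits = m(data.x, data.edge_index, data.edge_attr)
        probs.append(F.softmax(logits, dim=-1))
    return torch.stack(probs).mean(dim=0).argmax(dim=-1).cpu()

def report_f1(y_true, y_pred):
    # Weighted F1 weights each class's F1 by its support, which matters for
    # imbalanced intrusion data; macro F1 treats all classes equally.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return (f1_score(y_true, y_pred, average="weighted"),
            f1_score(y_true, y_pred, average="macro"))
```

Soft voting is only one plausible combination rule; majority voting or a learned weighting of members would slot into `ensemble_predict` in the same way.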