Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89000

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 陳銘憲 | zh_TW |
| dc.contributor.advisor | Ming-Syan Chen | en |
| dc.contributor.author | 賴繹文 | zh_TW |
| dc.contributor.author | Yi-Wen Lai | en |
| dc.date.accessioned | 2023-08-16T16:42:48Z | - |
| dc.date.available | 2023-11-09 | - |
| dc.date.copyright | 2023-08-16 | - |
| dc.date.issued | 2023 | - |
| dc.date.submitted | 2023-08-09 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/89000 | - |
| dc.description.abstract | 本文介紹了一種新穎的方法,BAGNN,用於圖形基礎的後門攻擊領域。BAGNN將經典的圖形分類問題的探索範疇擴展到注入攻擊場景,利用強化學習技術,特別是 Q 學習和深度 Q 學習。通過檢視狀態、行動和獎勵動態,構建了精密的後門注入攻擊方法。在多個資料集上的評估顯示了令人鼓舞的結果,展示了 BAGNN在攻擊成功率和誤分類信心方面的效能,並對良性準確性的影響最小。該研究強調了創建防禦對抗攻擊的安全模型的重要性,同時也強調了需要強大的對抗模型。本研究拓寬了我們對圖形任務的對抗攻擊的理解,並識別了進一步探索的可能途徑,如在更大的數據集上進行驗證以及探查更複雜的攻擊情境。 | zh_TW |
| dc.description.abstract | This paper introduces a novel method, BAGNN, to the field of graph-based backdoor attacks. BAGNN extends the exploration of the classic graph classification problem to injection attack scenarios, utilizing reinforcement learning techniques, specifically Q-learning and Deep Q-learning. By examining state, action, and reward dynamics, sophisticated backdoor injection attack methods are constructed. Evaluations reveal promising results across multiple datasets, showcasing BAGNN’s effectiveness in attack success rate and misclassification confidence, with minimal impact on benign accuracy. The study underscores the importance of creating secure models against adversarial attacks while also emphasizing the need for robust adversarial models. This research broadens our understanding of adversarial attacks on graph tasks, identifying potential avenues for further exploration such as validation on larger datasets and probing more complex attack scenarios. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-16T16:42:48Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2023-08-16T16:42:48Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 口試委員會審定書 i; 誌謝 ii; 摘要 iii; Abstract iv; Contents v; List of Figures vii; List of Tables viii; 1 Introduction 1; 2 Related work 4; 2.0.1 Graph Neural Networks 4; 2.0.2 Backdoor Attack and Adversarial Attack on GNNs 5; 2.0.3 Application on GNNs 9; 3 Problem formulation 10; 3.1 Preliminary 10; 3.1.1 Graph Neural Network 10; 3.1.2 Backdoor Attack on GNN 11; 3.1.2.1 Graph Classification 11; 3.1.2.2 Backdoor Injection Attack on Graph Classification 12; 4 Methodology 14; 4.1 Method 14; 4.1.1 BIA via Q-Learning 14; 4.1.2 State Space 15; 4.1.3 Action Space 16; 4.1.4 Q-Table 17; 4.1.5 Training Backdoored GNN 18; 4.1.6 BIA via Deep QL 18; 4.1.7 DQL Training 20; 4.1.8 State Space in DQL 20; 5 Experiments 23; 5.1 Experiment and Discussion 23; 5.1.1 Datasets and Models 23; 5.1.2 Evaluation 26; 5.1.3 Result and Discussion 27; 5.1.4 Ablation Study 30; 5.1.5 Future work 32; 6 Conclusion 34; Bibliography 36 | - |
| dc.language.iso | en | - |
| dc.subject | 圖神經網路 | zh_TW |
| dc.subject | 後門攻擊 | zh_TW |
| dc.subject | Backdoor Attack | en |
| dc.subject | GNN | en |
| dc.title | 圖神經網路之後門注入式攻擊 | zh_TW |
| dc.title | Backdoor Injection Attack via Graph Neural Network | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 111-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 黃俊龍;賴冠廷;楊得年 | zh_TW |
| dc.contributor.oralexamcommittee | Jiun-Long Huang;Kuan-Ting Lai;De-Nian Yang | en |
| dc.subject.keyword | 圖神經網路,後門攻擊 | zh_TW |
| dc.subject.keyword | GNN, Backdoor Attack | en |
| dc.relation.page | 42 | - |
| dc.identifier.doi | 10.6342/NTU202301785 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2023-08-10 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電機工程學系 | - |
Appears in Collections: Department of Electrical Engineering

Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-111-2.pdf (restricted access) | 929.59 kB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
