Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93566

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 郭斯彥 | zh_TW |
| dc.contributor.advisor | Sy-Yen Kuo | en |
| dc.contributor.author | 柯冠宇 | zh_TW |
| dc.contributor.author | Kuan-Yu Ko | en |
| dc.date.accessioned | 2024-08-05T16:37:13Z | - |
| dc.date.available | 2024-08-06 | - |
| dc.date.copyright | 2024-08-05 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-07-27 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93566 | - |
| dc.description.abstract | 作為著名的自監督式學習方法,圖對比學習是現今一大熱門的主題。常見的對比學習方法需要對輸入資料進行資料擴增,然而,在圖上進行資料擴增並不直觀,不適當的方法有可能破壞圖的結構,進而導致模型訓練結果不佳。因此,如何在不破壞結構的情況下對圖進行資料擴增,又或者不使用資料擴增去做圖對比學習是目前在這個領域的一大難題。
這篇論文提出了一種全新的架構,該架構並不限制任何的資料擴增方法,可以自己適應並排除掉被破壞結構的資料,並且生成出一個包含原本資料集和資料擴增後沒有被破壞結構的資料的新資料集。準確來說,我們將一個批次的原本的資料和進行資料擴增後的資料輸入訓練好的模型,並計算這兩個批次間輸出的表徵的L2範數,蒐集L2範數較小的圖形成新的資料集。這是建立在同類別的資料若是沒有遭到資料擴增破壞結構的話,輸入受過訓練的模型,其表徵在潛在空間上距離比較近的觀察下所發想出來的。我們將新的資料集再去訓練一個全新的模型,也顯示這個用新資料集訓練的模型不僅比使用原有資料集訓練表現得更好,更能夠得到和最先進模型相比接近或更好的準確度。 | zh_TW |
| dc.description.abstract | Graph contrastive learning (GCL) has emerged as a prominent self-supervised learning method. Its efficacy often hinges on the generation of positive samples through data augmentation. Unfortunately, applying data augmentation to graphs is not intuitive: inappropriate augmentation methods may destroy graph structure, leading to poor model performance. Thus, developing a data augmentation method that preserves the semantics of the graph, or alternatively a GCL method that needs no data augmentation, has become a significant challenge in this domain.
In this paper, we propose a novel framework that is compatible with all data augmentation methods while being self-adaptive. It excludes data whose graph structure has been destroyed, creating a new dataset that contains the original data together with the augmented data whose semantics were preserved. Specifically, we feed a batch of original data and the corresponding augmented data into a trained model, compute the L2 norm between the representations of the two batches, and collect the graphs with the smallest L2 norms. This is motivated by the observation that, for a trained model, the representations of two graphs with the same label should lie close together in the latent space. We then train a new model on the refined dataset. The results show that this model not only outperforms the model trained on the original dataset but also achieves competitive or better performance compared to state-of-the-art methods. (A minimal code sketch of this selection step follows the metadata table below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-05T16:37:13Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-05T16:37:13Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
摘要 ii
Abstract iii
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Background 4
2.1 Graph Neural Networks 4
2.2 Contrastive Learning 5
2.3 Graph Contrastive Learning 6
Chapter 3 Related Works 7
Chapter 4 Methodology 9
4.1 Inference Encoder Training 9
4.2 Create Augmented Dataset 10
4.2.1 Data Selection 10
4.2.2 Data Shuffling 11
4.3 New Encoder Training 12
Chapter 5 Evaluation 13
5.1 Experimental Setup 13
5.2 Comparison with the State-of-the-art Methods 14
5.3 Analysis on Augmentations 15
5.4 Distance Metrics 16
Chapter 6 Conclusion 17
References 18 | - |
| dc.language.iso | en | - |
| dc.subject | 圖對比學習 | zh_TW |
| dc.subject | 機器學習 | zh_TW |
| dc.subject | 自監督式學習 | zh_TW |
| dc.subject | 資料擴增 | zh_TW |
| dc.subject | 圖神經網路 | zh_TW |
| dc.subject | data augmentation | en |
| dc.subject | graph contrastive learning | en |
| dc.subject | graph neural networks | en |
| dc.subject | self-supervised learning | en |
| dc.subject | Machine learning | en |
| dc.title | 圖對比學習的自適應資料擴增架構 | zh_TW |
| dc.title | Self-Adaptive Data Augmentation Framework for Graph Contrastive Learning | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 林宗男;雷欽隆;呂學坤;袁世一 | zh_TW |
| dc.contributor.oralexamcommittee | Tsung-Nan Lin;Chin-Laung Lei;Shyue-Kung Lu;Shih-Yi Yuan | en |
| dc.subject.keyword | 機器學習,自監督式學習,圖神經網路,圖對比學習,資料擴增 | zh_TW |
| dc.subject.keyword | Machine learning,self-supervised learning,graph neural networks,graph contrastive learning,data augmentation | en |
| dc.relation.page | 21 | - |
| dc.identifier.doi | 10.6342/NTU202401171 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2024-07-29 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電子工程學研究所 | - |
| Appears in Collections: | 電子工程學研究所 | |
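The abstract above describes a representation-distance filter for augmented graphs. Below is a minimal sketch of that selection step, not the thesis's actual implementation: it assumes a trained PyTorch encoder that maps a batch of graphs to a `(batch_size, dim)` tensor, and the function name and `keep_ratio` parameter are illustrative assumptions, since the record does not specify how many graphs are retained.

```python
import torch

def select_augmented(encoder, original_batch, augmented_batch, keep_ratio=0.5):
    """Keep augmented graphs whose representations stay close (small L2
    distance) to their originals, i.e. whose structure was likely preserved.

    Hypothetical sketch based on the abstract; all names are assumptions.
    """
    encoder.eval()
    with torch.no_grad():
        z_orig = encoder(original_batch)   # (B, d) representations of originals
        z_aug = encoder(augmented_batch)   # (B, d) representations of augmented graphs
    # L2 norm between each original/augmented representation pair.
    dist = torch.norm(z_orig - z_aug, p=2, dim=1)  # shape (B,)
    # Retain the fraction of augmented graphs with the smallest distances;
    # these join the original data to form the refined dataset.
    k = max(1, int(keep_ratio * dist.numel()))
    return torch.topk(dist, k, largest=False).indices
```

The refined dataset would then be the union of the original graphs and the augmented graphs at the returned indices, on which a new encoder is trained from scratch, as the abstract describes.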
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (Restricted Access) | 633.67 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
