  1. NTU Theses and Dissertations Repository
  2. College of Electrical Engineering and Computer Science
  3. Graduate Institute of Electronics Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99451
Full metadata record
dc.contributor.advisor: 郭斯彥 (zh_TW)
dc.contributor.advisor: Sy-Yen Kuo (en)
dc.contributor.author: 謝承恩 (zh_TW)
dc.contributor.author: Cheng-En Hsieh (en)
dc.date.accessioned: 2025-09-10T16:19:46Z
dc.date.available: 2025-09-11
dc.date.copyright: 2025-09-10
dc.date.issued: 2025
dc.date.submitted: 2025-07-26
dc.identifier.citation:
B. Adhikari, Y. Zhang, N. Ramakrishnan, and B. A. Prakash. Sub2vec: Feature learning for subgraphs. In Advances in Knowledge Discovery and Data Mining: 22nd Pacific-Asia Conference, PAKDD 2018, Melbourne, VIC, Australia, June 3-6, 2018, Proceedings, Part II, page 170–182, Berlin, Heidelberg, 2018. Springer-Verlag.
T. Akiba, S. Sano, T. Yanase, T. Ohta, and M. Koyama. Optuna: A next-generation hyperparameter optimization framework. In The 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623–2631, 2019.
C.-C. Chang and C.-J. Lin. Libsvm: A library for support vector machines. ACM Trans. Intell. Syst. Technol., 2(3), May 2011.
T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML’20. JMLR.org, 2020.
X. Chen and K. He. Exploring simple siamese representation learning. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15745–15753, 2021.
G. Cui, J. Zhou, C. Yang, and Z. Liu. Adaptive graph encoder for attributed graph embedding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’20, page 976–985, New York, NY, USA, 2020. Association for Computing Machinery.
N. De Cao and T. Kipf. MolGAN: An implicit generative model for small molecular graphs. ICML 2018 workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.
C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV ’15, page 1422–1430, USA, 2015. IEEE Computer Society.
P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371–3408, 2010.
W. Fan, Y. Ma, Q. Li, Y. He, E. Zhao, J. Tang, and D. Yin. Graph neural networks for social recommendation. In The World Wide Web Conference, pages 417–426. ACM, 2019.
M. Fey and J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
L. Girin, S. Sigtia, M. Rowley, A. Senior, and O. Vinyals. DNN-based dimensionality reduction for improved speech representation and autoencoding. IEEE Transactions on Audio, Speech, and Language Processing, 23(1):12–23, 2015.
J.-B. Grill, F. Strub, F. Altché, C. Tallec, P. H. Richemond, E. Buchatskaya, C. Doersch, B. A. Pires, Z. D. Guo, M. G. Azar, B. Piot, K. Kavukcuoglu, R. Munos, and M. Valko. Bootstrap your own latent a new approach to self-supervised learning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS ’20, Red Hook, NY, USA, 2020. Curran Associates Inc.
A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, page 855–864, New York, NY, USA, 2016. Association for Computing Machinery.
K. Hassani and A. H. Khasahmadi. Contrastive multi-view representation learning on graphs. In Proceedings of International Conference on Machine Learning, pages 3451–3461, 2020.
K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9726–9735, 2020.
G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
T. Hua, W. Wang, Z. Xue, S. Ren, Y. Wang, and H. Zhao. On feature decorrelation in self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9598–9608, 2021.
Z. Huang et al. Model-aware contrastive learning: Towards escaping the dilemmas. In Proceedings of the 40th International Conference on Machine Learning, PMLR, 2023.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Y. Bengio and Y. LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
T. N. Kipf and M. Welling. Variational graph auto-encoders. NIPS Workshop on Bayesian Deep Learning, 2016.
T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.
C.-I. Lai. Contrastive predictive coding based feature for automatic speaker verification. arXiv preprint arXiv:1904.01575, 2019.
S. Li, X. Wang, A. Zhang, X. He, and T.-S. Chua. Let invariant rationale discovery inspire graph contrastive learning. In ICML, 2022.
C. Morris, N. M. Kriege, F. Bause, K. Kersting, P. Mutzel, and M. Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. In ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020), 2020.
G. Mou, Y. Li, and K. Lee. Reducing and exploiting data augmentation noise through meta reweighting contrastive learning for text classification. arXiv preprint, 2024.
A. Narayanan, M. Chandramohan, R. Venkatesan, L. Chen, Y. Liu, and S. Jaiswal. graph2vec: Learning distributed representations of graphs. CoRR, abs/1707.05005, 2017.
J. Park, M. Lee, H. J. Chang, K. Lee, and J. Y. Choi. Symmetric graph convolutional autoencoder for unsupervised graph representation learning. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 6518–6527, 2019.
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: an imperative style, high-performance deep learning library. Curran Associates Inc., Red Hook, NY, USA, 2019.
A. Salehi and H. Davulcu. Graph attention auto-encoders. In 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), pages 989–996, 2020.
N. Shervashidze, T. Petri, K. Mehlhorn, K. M. Borgwardt, and S. Vishwanathan. Efficient graphlet kernels for large graph comparison. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pages 488–495, 2009.
N. Shervashidze, P. Schweitzer, E. J. Van Leeuwen, K. Mehlhorn, and K. M. Borgwardt. Weisfeiler-lehman graph kernels. The Journal of Machine Learning Research, 12:2539–2561, 2011.
F.-Y. Sun, J. Hoffman, V. Verma, and J. Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In International Conference on Learning Representations, 2019.
S. Suresh, P. Li, C. Hao, and J. Neville. Adversarial graph augmentation to improve graph contrastive learning. NeurIPS, 2021.
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph Attention Networks. International Conference on Learning Representations, 2018. accepted as poster.
P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning (ICML), pages 1096–1103. ACM, 2008.
T. Wang and P. Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929–9939. PMLR, 2020.
Z. Wu, Y. Xiong, S. X. Yu, and D. Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
J. Xia, L. Wu, J. Chen, B. Hu, and S. Z. Li. Simgrace: A simple framework for graph contrastive learning without data augmentation. In Proceedings of the ACM Web Conference 2022, WWW ’22, page 1070–1079, New York, NY, USA, 2022. Association for Computing Machinery.
J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems (NeurIPS), pages 341–349, 2012.
J. Xu, S. Chen, Y. Ren, X. Shi, H. Shen, G. Niu, and X. Zhu. Self‑weighted contrastive learning among multiple views for mitigating representation degeneration. In NeurIPS, 2023.
K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.
R. Yan, P. Bao, X. Zhang, Z. Liu, and H. Liu. Towards alignment-uniformity aware representation in graph contrastive learning. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, WSDM ’24, page 873–881, New York, NY, USA, 2024. Association for Computing Machinery.
P. Yanardag and S. Vishwanathan. Deep graph kernels. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’15, page 1365–1374, New York, NY, USA, 2015. Association for Computing Machinery.
Y. Yin, Q. Wang, S. Huang, H. Xiong, and X. Zhang. Autogcl: Automated graph contrastive learning via learnable view generators. AAAI, 2022.
Y. You, T. Chen, Y. Shen, and Z. Wang. Graph contrastive learning automated. In ICML, 2021.
Y. You, T. Chen, Y. Sui, T. Chen, Z. Wang, and Y. Shen. Graph contrastive learning with augmentations. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 5812–5823. Curran Associates, Inc., 2020.
Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang. Deep graph contrastive representation learning. In ICML Workshop on Graph Representation Learning and Beyond, 2020.
L. Lin, J. Chen, and H. Wang. Spectral augmentation for self-supervised learning on graphs. In International Conference on Learning Representations, 2023.
Q. Ji, J. Li, J. Hu, R. Wang, C. Zheng, and F. Xu. Rethinking dimensional rationale in graph contrastive learning from causal perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, 2024.
Y. Chen, J. Frias, and Y. R. Gel. TopoGCL: Topological graph contrastive learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2024.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99451
dc.description.abstract (zh_TW): 由於圖神經網路(Graph Neural Networks, GNNs)在實際應用中常面臨標註資料不足的問題,因此採用自監督式學習(self-supervised learning)來訓練圖神經網路已成為一項重要課題。其中,對比學習(contrastive learning)作為非監督式學習中的核心方法,已被廣泛應用於圖神經網路領域。然而,對比學習的效能高度依賴於正樣本與負樣本的選擇方式,如何合理定義正負樣本,一直是對比學習中的關鍵挑戰。
本研究提出 GeoGCL,一個具備幾何感知能力的圖對比學習框架,將幾何先驗資訊引入正樣本的生成與負樣本的加權機制中。不同於以往方法僅依賴隨機或可學的增強策略,GeoGCL 利用預測出的方向與距離比例,在嵌入空間中對錨點進行幾何變換,以生成保留結構語義的正樣本。為了避免生成樣本與原始圖結構偏離過大,我們引入一個基於圖自編碼器(Graph Autoencoder)的重建模組作為正則項,約束生成樣本的拓撲一致性。此外,GeoGCL 依據負樣本與錨點之間的角度與距離關係進行分類,並根據其困難度與誤標潛在性自適應地重新加權對比損失。實驗結果顯示,GeoGCL 在多個基準數據集上均達到優異的準確率,展示了在圖對比學習中納入幾何資訊的重要性。
dc.description.abstract (en): Graph Neural Networks (GNNs) often suffer from a lack of labeled data in real-world scenarios, making self-supervised learning an essential approach for training them. Among unsupervised methods, contrastive learning has gained significant attention and has been widely adopted in the GNN domain. However, the effectiveness of contrastive learning largely depends on how positive and negative samples are defined, which remains a critical challenge in its application.
We propose GeoGCL, a Geometry-aware Graph Contrastive Learning framework that incorporates geometric priors into both the generation of positive samples and the reweighting of negative ones. Unlike previous methods that rely on randomized or learned augmentations without considering embedding geometry, GeoGCL generates structure-preserving positive pairs by perturbing the anchor embedding along a learned direction and distance, guided by predicted angular and radial scaling. To ensure semantic and structural fidelity, we introduce a Graph Autoencoder-based reconstructor as a regularization component, which encourages the generated positives to remain topologically consistent with the original graph. Furthermore, GeoGCL classifies negative samples by their angular and distance proximity to the anchor, and adaptively reweights their contribution to the contrastive loss to better model hard and false negatives. Extensive experiments on benchmark datasets show that GeoGCL achieves superior performance, underscoring the importance of geometric awareness in graph contrastive learning.
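The abstract describes two geometric mechanisms: generating a positive by moving the anchor embedding along a predicted direction and radial scale, and reweighting negatives by their angular and distance proximity to the anchor. The following is a minimal NumPy sketch of those two ideas only; the function names, the fixed `theta`/`rho` values, and the `fn_thresh` false-negative heuristic are illustrative assumptions, not the thesis's learned components.

```python
import numpy as np

def geometric_positive(z, theta=0.1, rho=1.0, rng=None):
    """Perturb anchor embedding z by an angle theta toward a random direction
    orthogonal to z, then scale radially by rho. theta and rho stand in for
    GeoGCL's *predicted* angular and radial factors (placeholder constants here)."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.standard_normal(z.shape)
    u = u - (u @ z) / (z @ z) * z                 # project out the z component
    u = u * (np.linalg.norm(z) / np.linalg.norm(u))  # match the anchor's norm
    return rho * (np.cos(theta) * z + np.sin(theta) * u)

def reweighted_infonce(anchor, positive, negatives, tau=0.5, fn_thresh=0.9):
    """InfoNCE-style loss in which each negative's exp-similarity is scaled by a
    weight: negatives very close to the anchor (cosine > fn_thresh) are treated
    as suspected false negatives and down-weighted; moderately close (hard)
    negatives are up-weighted. The weighting rule is a simple heuristic sketch."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    s_pos = cos(anchor, positive) / tau
    weights, s_neg = [], []
    for n in negatives:
        c = cos(anchor, n)
        weights.append(0.5 if c > fn_thresh else 1.0 + max(c, 0.0))
        s_neg.append(c / tau)
    denom = np.exp(s_pos) + np.sum(np.array(weights) * np.exp(np.array(s_neg)))
    return -s_pos + np.log(denom)
```

Note that with `rho = 1` the generated positive keeps the anchor's norm exactly (the rotation mixes the anchor with an equal-norm orthogonal direction), which is one concrete sense in which an angular-plus-radial perturbation can be structure-preserving in the embedding space.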
dc.description.provenance (en): Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-09-10T16:19:46Z. No. of bitstreams: 0
dc.description.provenance (en): Made available in DSpace on 2025-09-10T16:19:46Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
Acknowledgements i
摘要 ii
Abstract iii
Contents v
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
Chapter 2 Background 6
2.1 Graph Neural Network 6
2.2 Self-supervised Learning 7
2.3 Autoassociative Learning 8
2.4 Graph Autoassociative Learning 8
2.5 Contrastive Learning 10
2.6 Graph Contrastive Learning 12
Chapter 3 Methodologies 16
3.1 Notation and Problem Setup 16
3.2 Framework Overview 17
3.3 Geometric-based Positive Pair Generator (GPPG) 19
3.4 Objective Function 25
Chapter 4 Experimental Results 29
4.1 Benchmarks 29
4.2 Evaluation Protocol 29
4.3 Unsupervised Results 31
4.4 Semi-supervised Results 32
4.5 Ablation Study 33
Chapter 5 Conclusion 34
References 36
dc.language.iso: en
dc.subject: 圖表徵學習 (zh_TW)
dc.subject: 自監督式學習 (zh_TW)
dc.subject: 圖神經網路 (zh_TW)
dc.subject: 機器學習 (zh_TW)
dc.subject: 對比學習 (zh_TW)
dc.subject: Contrastive Learning (en)
dc.subject: Machine Learning (en)
dc.subject: Graph Representation Learning (en)
dc.subject: Self-supervised Learning (en)
dc.subject: Graph Neural Network (en)
dc.title: 具幾何感知能力之圖對比學習框架 (zh_TW)
dc.title: Geometry-aware Graph Contrastive Learning Framework (en)
dc.type: Thesis
dc.date.schoolyear: 113-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 張耀文;雷欽隆;顏嗣鈞;陳俊良 (zh_TW)
dc.contributor.oralexamcommittee: Yao-Wen Chang;Chin-Laung Lei;Hsu-chun Yen;Jiann-Liang Chen (en)
dc.subject.keyword: 機器學習,自監督式學習,圖神經網路,對比學習,圖表徵學習 (zh_TW)
dc.subject.keyword: Machine Learning,Self-supervised Learning,Graph Neural Network,Contrastive Learning,Graph Representation Learning (en)
dc.relation.page: 43
dc.identifier.doi: 10.6342/NTU202501913
dc.rights.note: 未授權
dc.date.accepted: 2025-07-28
dc.contributor.author-college: 電機資訊學院
dc.contributor.author-dept: 電子工程學研究所
dc.date.embargo-lift: N/A
Appears in Collections: Graduate Institute of Electronics Engineering

Files in This Item:
ntu-113-2.pdf, 3.24 MB, Adobe PDF (restricted access, not publicly available)

