Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80948

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 林守德(Shou-De Lin) | |
| dc.contributor.author | Xi Chen | en |
| dc.contributor.author | 陳熙 | zh_TW |
| dc.date.accessioned | 2022-11-24T03:23:12Z | - |
| dc.date.available | 2021-09-17 | |
| dc.date.available | 2022-11-24T03:23:12Z | - |
| dc.date.copyright | 2021-09-17 | |
| dc.date.issued | 2021 | |
| dc.date.submitted | 2021-09-10 | |
| dc.identifier.citation | [1] S. Ji, S. Pan, E. Cambria, P. Marttinen, and P. S. Yu, “A survey on knowledge graphs: Representation, acquisition, and applications,” IEEE Transactions on Neural Networks and Learning Systems, 2021. [2] N. Lao, T. Mitchell, and W. W. Cohen, “Random walk inference and learning in a large scale knowledge base,” Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pp. 529–539, Association for Computational Linguistics. [3] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko, “Translating embeddings for modeling multi-relational data,” Advances in neural information processing systems, vol. 26, 2013. [4] Z. Wang, J. Zhang, J. Feng, and Z. Chen, “Knowledge graph embedding by translating on hyperplanes,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28. [5] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu, “Learning entity and relation embeddings for knowledge graph completion,” in Twenty-ninth AAAI conference on artificial intelligence. [6] G. Ji, S. He, L. Xu, K. Liu, and J. Zhao, “Knowledge graph embedding via dynamic mapping matrix,” in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 687–696. [7] M. Nickel, V. Tresp, and H.-P. Kriegel, “A three-way model for collective learning on multi-relational data,” in ICML. [8] B. Yang, W.-t. Yih, X. He, J. Gao, and L. Deng, “Embedding entities and relations for learning and inference in knowledge bases,” arXiv preprint arXiv:1412.6575, 2014. [9] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, and G. Bouchard, “Complex embeddings for simple link prediction,” in International conference on machine learning, pp. 2071–2080, PMLR. [10] S. Zhang, Y. Tay, L. Yao, and Q. Liu, “Quaternion knowledge graph embeddings,” arXiv preprint arXiv:1904.10281, 2019. [11] D. Q. Nguyen, T. D. Nguyen, D. 
Q. Nguyen, and D. Phung, “A novel embedding model for knowledge base completion based on convolutional neural network,” arXiv preprint arXiv:1712.02121, 2017. [12] S. M. Kazemi and D. Poole, “SimplE embedding for link prediction in knowledge graphs,” in NeurIPS, 2018. [13] Z. Sun, Z.-H. Deng, J.-Y. Nie, and J. Tang, “RotatE: Knowledge graph embedding by relational rotation in complex space,” in International Conference on Learning Representations, 2018. [14] I. Balažević, C. Allen, and T. Hospedales, “TuckER: Tensor factorization for knowledge graph completion,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 5185–5194, 2019. [15] T. Dettmers, P. Minervini, P. Stenetorp, and S. Riedel, “Convolutional 2D knowledge graph embeddings,” in Thirty-second AAAI conference on artificial intelligence, 2018. [16] X. Wang, X. He, Y. Cao, M. Liu, and T.-S. Chua, “KGAT: Knowledge graph attention network for recommendation,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 950–958, 2019. [17] R. Sun, X. Cao, Y. Zhao, J. Wan, K. Zhou, F. Zhang, Z. Wang, and K. Zheng, “Multi-modal knowledge graphs for recommender systems,” in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 1405–1414, 2020. [18] G. Wang, R. Ying, J. Huang, and J. Leskovec, “Direct multi-hop attention based graph neural network,” CoRR, vol. abs/2009.14332, 2020. [19] W. Xiong, T. Hoang, and W. Y. Wang, “DeepPath: A reinforcement learning method for knowledge graph reasoning,” in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 564–573. [20] X. V. Lin, R. Socher, and C. Xiong, “Multi-hop knowledge graph reasoning with reward shaping,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3243–3253. 
[21] X. Lv, Y. Gu, X. Han, L. Hou, J. Li, and Z. Liu, “Adapting meta knowledge graph information for multi-hop reasoning over few-shot relations,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3376–3381. [22] S. Li, H. Wang, R. Pan, and M. Mao, “MemoryPath: A deep reinforcement learning framework for incorporating memory component into knowledge graph reasoning,” Neurocomputing, vol. 419, pp. 273–286, 2021. [23] X. Chen, M. Chen, W. Shi, Y. Sun, and C. Zaniolo, “Embedding uncertain knowledge graphs,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 3363–3370. [24] Z.-M. Chen, M.-Y. Yeh, and T.-W. Kuo, “PASSLEAF: A pool-based semi-supervised learning framework for uncertain knowledge graph embedding,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 4019–4026. [25] J. Wang, K. Nie, X. Chen, and J. Lei, “SUKE: Embedding model for prediction in uncertain knowledge graph,” IEEE Access, vol. 9, pp. 3871–3879, 2020. [26] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” arXiv preprint arXiv:1301.3781, 2013. [27] X. Dong, E. Gabrilovich, G. Heitz, W. Horn, N. Lao, K. Murphy, T. Strohmann, S. Sun, and W. Zhang, “Knowledge vault: A web-scale approach to probabilistic knowledge fusion,” in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 601–610. [28] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. Van Den Berg, I. Titov, and M. Welling, “Modeling relational data with graph convolutional networks,” in European semantic web conference, pp. 593–607, Springer. [29] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” International Conference on Learning Representations, 2018. [30] R. Speer, J. Chin, and C. 
Havasi, “ConceptNet 5.5: An open multilingual graph of general knowledge,” in Thirty-first AAAI conference on artificial intelligence. [31] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka, and T. M. Mitchell, “Toward an architecture for never-ending language learning,” in Twenty-Fourth AAAI conference on artificial intelligence. [32] D. Szklarczyk, J. H. Morris, H. Cook, M. Kuhn, S. Wyder, M. Simonovic, A. Santos, N. T. Doncheva, A. Roth, and P. Bork, “The STRING database in 2017: quality-controlled protein–protein association networks, made broadly accessible,” Nucleic acids research, p. 937, 2016. [33] K. Toutanova, “Observed versus latent features for knowledge base and text inference,” ACL-IJCNLP 2015, p. 57, 2015. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/80948 | - |
| dc.description.abstract | This thesis proposes a new representation learning method for uncertain knowledge graphs. In recent years, representation learning has performed well on deterministic knowledge graphs: by continually updating the embedding vectors of entities and relations during training, it outperforms traditional rule-based and path-ranking approaches. Among methods for deterministic knowledge graph representation learning, some employ deep learning models such as multi-layer perceptrons and graph convolutional networks. On uncertain knowledge graphs, however, deep learnable models have not been applied; effort has instead focused on designing fixed scoring functions, overlooking the potential of deep learning models. By using a multi-layer perceptron as the scoring function together with a relation-specific data augmentation method, we achieve better results on the confidence score prediction and tail entity prediction tasks for uncertain knowledge graph triples. | zh_TW |
| dc.description.provenance | Made available in DSpace on 2022-11-24T03:23:12Z (GMT). No. of bitstreams: 1 U0001-1009202119503300.pdf: 1091113 bytes, checksum: aff34bbc2c9707a5ba4d216c757d2885 (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | Acknowledgements i; Abstract (Chinese) ii; Abstract iii; 1 Introduction 1; 1.1 Motivation 1; 1.2 Main Challenges 2; 1.3 Main Contributions 2; 1.4 Thesis Structure 2; 2 Related Work 3; 2.1 Deterministic Knowledge Graph Representations 3; 2.2 Uncertain Knowledge Graph Representations 7; 3 Preliminaries 9; 3.1 Problem Statement 9; 3.2 Multiple Layer Perceptron 10; 3.3 Optimization 10; 3.3.1 Mean Square Error Loss 10; 3.4 Negative Sampling and Semi-supervised Learning 11; 4 Methodology 12; 4.1 Neural Network Learning Scoring Function 12; 4.1.1 Simple MLP Scoring Function 12; 4.1.2 Combination of Translation-based and Bilinear Model Scoring Function 13; 4.2 Relation Specific Data Augmentation 14; 5 Experiments 15; 5.1 Datasets 15; 5.1.1 CN15k 15; 5.1.2 NL27k 15; 5.1.3 PPI5k 15; 5.1.4 Summary of Datasets 16; 5.2 Experimental Setup 16; 5.3 Evaluation Metrics 17; 5.4 Baselines 17; 5.5 Confidence Score Prediction 17; 5.6 Tail Entity Prediction 19; 5.7 Ablation Study 20; 5.7.1 Confidence Score Prediction 20; 5.7.2 Tail Entity Prediction 21; 5.8 Comparison of Simple MLP and Combined MLP 22; 5.9 Experiments in Deterministic Knowledge Graph 23; 6 Conclusion and Future Work 25; Bibliography 26 | |
| dc.language.iso | en | |
| dc.subject | 不確定知識圖譜 | zh_TW |
| dc.subject | 多層感知器 | zh_TW |
| dc.subject | 表徵學習 | zh_TW |
| dc.subject | 評分函數 | zh_TW |
| dc.subject | 深度學習模型 | zh_TW |
| dc.subject | multi-layer perceptron | en |
| dc.subject | deep learning models | en |
| dc.subject | scoring function | en |
| dc.subject | representation learning | en |
| dc.subject | uncertain knowledge graph | en |
| dc.title | 一種不確定知識圖譜上表徵學習的方法 | zh_TW |
| dc.title | A Representation Learning Method on Uncertain Knowledge Graph | en |
| dc.date.schoolyear | 109-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 葉彌妍(Mi-Yen Yeh),陳怡伶(Yi-Ling Chen) | |
| dc.subject.keyword | 不確定知識圖譜,表徵學習,評分函數,深度學習模型,多層感知器, | zh_TW |
| dc.subject.keyword | uncertain knowledge graph,representation learning,scoring function,deep learning models,multi-layer perceptron, | en |
| dc.relation.page | 30 | |
| dc.identifier.doi | 10.6342/NTU202103116 | |
| dc.rights.note | Authorized for release (campus access only) | |
| dc.date.accepted | 2021-09-11 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
| Appears in Collections: | Department of Computer Science and Information Engineering | |
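The abstract above describes using a multi-layer perceptron as a learned scoring function that maps a triple (head, relation, tail) to a predicted confidence, trained against gold confidence scores with a mean-square-error loss (per the table of contents). The following is a minimal NumPy sketch of that idea; the embedding dimension, hidden width, two-layer architecture, and random initialization are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes chosen for illustration.
NUM_ENTITIES, NUM_RELATIONS, DIM, HIDDEN = 100, 10, 16, 32

# Embedding tables; in practice these are learned during training.
entity_emb = rng.normal(scale=0.1, size=(NUM_ENTITIES, DIM))
relation_emb = rng.normal(scale=0.1, size=(NUM_RELATIONS, DIM))

# A two-layer MLP scoring function: concat(h, r, t) -> hidden -> scalar.
W1 = rng.normal(scale=0.1, size=(3 * DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
w2 = rng.normal(scale=0.1, size=HIDDEN)
b2 = 0.0


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def confidence(h: int, r: int, t: int) -> float:
    """Predicted confidence in (0, 1) for the triple (h, r, t)."""
    x = np.concatenate([entity_emb[h], relation_emb[r], entity_emb[t]])
    hidden = np.maximum(x @ W1 + b1, 0.0)  # ReLU activation
    return float(sigmoid(hidden @ w2 + b2))


def mse_loss(pred, target) -> float:
    """Mean-square-error between predicted and gold confidence scores."""
    pred, target = np.asarray(pred), np.asarray(target)
    return float(np.mean((pred - target) ** 2))


score = confidence(3, 1, 7)
```

In a full implementation, the embeddings and MLP weights would be optimized jointly by backpropagating `mse_loss` over the training triples, with negative sampling supplying low-confidence examples.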
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| U0001-1009202119503300.pdf Access restricted to NTU campus IPs (from off campus, please use the VPN service) | 1.07 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
