Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68329

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 鄭卜壬(Pu-Jen Cheng) | |
| dc.contributor.author | Chun-Chih Wang | en |
| dc.contributor.author | 王俊智 | zh_TW |
| dc.date.accessioned | 2021-06-17T02:17:49Z | - |
| dc.date.available | 2023-07-23 | |
| dc.date.copyright | 2018-07-23 | |
| dc.date.issued | 2017 | |
| dc.date.submitted | 2017-08-31 | |
| dc.identifier.citation | [1] Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247-1250.
[2] George A. Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM, 38(11):39-41.
[3] Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A Semantic Matching Energy Function for Learning with Multi-relational Data. Machine Learning, 94(2):233-259.
[4] Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning Structured Embeddings of Knowledge Bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 301-306.
[5] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-relational Data. In Advances in Neural Information Processing Systems 26, pages 2787-2795.
[6] Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge Graph Embedding by Translating on Hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1112-1119.
[7] Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 2181-2187.
[8] Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge Graph Embedding via Dynamic Mapping Matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 687-696.
[9] Jun Feng, Minlie Huang, Yang Yang, and Xiaoyan Zhu. 2016. GAKE: Graph Aware Knowledge Embedding. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 641-651.
[10] Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016. Neighborhood Mixture Model for Knowledge Base Completion. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 40-50.
[11] Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Ng. 2013. Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Advances in Neural Information Processing Systems 26, pages 926-934.
[12] Shu Guo, Quan Wang, Bin Wang, Lihong Wang, and Li Guo. 2015. Semantically Smooth Knowledge Graph Embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 84-94.
[13] Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka, Jr., and Tom M. Mitchell. 2010. Toward an Architecture for Never-Ending Language Learning. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, pages 1306-1313.
[14] Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural Networks for Machine Learning, Lecture 6a: Overview of Mini-batch Gradient Descent. http://www.cs.toronto.edu/%7Etijmen/csc321/slides/lecture_slides_lec6.pdf
[15] Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.
[16] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.
[17] Xavier Glorot and Yoshua Bengio. 2010. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR 9:249-256. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68329 | - |
| dc.description.abstract | 在人工智慧領域當中,知識圖譜扮演著非常重要的角色,因為它們含有大量有用的資訊,在自然語言處理、問答系統以及搜尋引擎等任務中都被廣泛應用。然而,大部分的知識圖譜都非常不完整。因此,找到一個有效且能處理大量資料的方法,來幫助知識圖譜補足遺失的資料,是一個非常重要的議題。
在目前既有的方法當中,以向量表示法為基礎的模型受到最多關注,因為它們不管在準確性或是效能上都有非常傑出的表現。然而,它們將所有在知識圖譜內的資料都視為獨立的個體;這並不合理,因為在真實資料當中,個體之間其實存在著特定的關係。舉例來說,當我們得知某個人曾經出版過某張專輯,同時也可以推斷此人是一名歌手,因此專輯和歌手並非互相獨立。在這篇論文當中,我們提出一個方法,除了利用節點本身的資訊以外,還透過整合鄰居節點的資訊,來提高知識圖譜補全的準確性。首先,我們提出了一個篩選鄰居節點的方法,避免引入過多的雜訊。再者,我們利用選擇的鄰居節點,動態產生出新的向量表示法,再利用該表示法預測兩節點之間的關係。除此之外,由於所有鄰居節點都有不同程度的重要性及影響力,我們透過注意力機制來調整不同鄰居節點的權重。實驗部分,我們利用基準資料集來驗證我們的模型,不管是連結預測或是三重資料分類都有非常傑出的表現。最後,我們透過一系列分析來證實我們的模型學習到的向量表示法較為合理,顯示出加入鄰居節點的資訊以後,大幅提升了在知識圖譜上的成效。 | zh_TW |
| dc.description.abstract | In the field of AI, knowledge graphs play an important role because of the huge amount of resources they contain. They have been applied to several tasks, such as natural language processing, question answering, and search engines. However, most knowledge graphs are far from complete. Hence, finding an efficient and effective approach to completing them is a significant issue.
Among previous works, embedding models have attracted the most attention due to their performance and efficiency. Nonetheless, they consider triples in the knowledge graph independently. This is unreasonable, since many triples connect to each other in reality, and hidden relations exist between them even when they are not linked directly. For example, if a person has published an album, he or she is a singer by profession; therefore, publishing an album is highly correlated with being a singer. In this paper, in addition to using the information of an entity itself, we enhance knowledge graph completion by integrating the information provided by the entity's neighbors. To start with, we propose a method to select effective neighbors in order to avoid introducing too much noise. Second, we utilize the filtered neighbors to generate neighbor-based entity embeddings dynamically and use these embeddings to predict the relationships between entities. Furthermore, since every neighbor has a different degree of influence, we exploit an attention mechanism to weight neighbors according to their importance. In experiments, we evaluate our model on several benchmark datasets, and it outperforms the baseline methods on both the link prediction and triple classification tasks. In the end, we conduct a series of analyses to justify the results produced by our model, which also demonstrates that neighborhood information is helpful for knowledge graph completion. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T02:17:49Z (GMT). No. of bitstreams: 1 ntu-106-R04922066-1.pdf: 5083819 bytes, checksum: 0ea0a70a451bebb06ba5840532c278c4 (MD5) Previous issue date: 2017 | en |
| dc.description.tableofcontents | 摘要 i
Abstract iii
CONTENTS v
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Translation-based and Other Models 5
2.1.1 General Idea 5
2.1.2 TransE 6
2.1.3 TransH 6
2.1.4 TransR 7
2.1.5 TransD 9
2.1.6 Unstructured and Structured Embedding 9
2.1.7 Semantic Matching Energy 10
2.2 Context-aware Models 11
2.2.1 TransE-NMM 11
2.2.2 GAKE 12
2.3 Summary 13
Chapter 3 Problem Definition 15
3.1 Motivation 15
3.2 Notations 16
3.3 Problem Definition 17
Chapter 4 Methodology 19
4.1 Selecting Effective Neighbors 19
4.1.1 Neighbor Selection for Different Neighbors 19
4.1.2 Neighbor Selection through Time 20
4.2 Translating with Neighbors 21
4.2.1 Neighbor-based Entity Embedding 21
4.2.2 Attention Mechanism 22
4.2.3 Illustration 23
4.2.4 Compare with TransE-NMM 24
4.3 Negative Sampling 25
4.4 Algorithm 26
Chapter 5 Experiments 27
5.1 Datasets and Experimental Settings 27
5.2 Experiments Results 28
5.2.1 Task Description 28
5.2.2 Link Prediction 29
5.2.3 Triple Classification 35
Chapter 6 Qualitative Analysis 37
6.1 Parameter Analysis 37
6.2 Visualization of Representations 39
6.3 Model Comparison 43
Chapter 7 Conclusions and Future Work 45
7.1 Conclusions 45
7.2 Future Work 46
REFERENCE 47 | |
| dc.language.iso | en | |
| dc.subject | 知識圖譜 | zh_TW |
| dc.subject | 向量表示法 | zh_TW |
| dc.subject | 多關係圖 | zh_TW |
| dc.subject | 連結預測 | zh_TW |
| dc.subject | 三重資料分類 | zh_TW |
| dc.subject | 自然語言處理 | zh_TW |
| dc.subject | triple classification | en |
| dc.subject | knowledge graph | en |
| dc.subject | representation learning | en |
| dc.subject | multi-relational graph | en |
| dc.subject | natural language processing | en |
| dc.subject | link prediction | en |
| dc.title | 利用鄰居節點學習知識圖譜表示法 | zh_TW |
| dc.title | Translating Representations of Knowledge Graphs with Neighbors | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 105-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 陳信希(Hsin-Hsi Chen)、蔡銘峰(Ming-Feng Tsai)、陳柏琳(Berlin Chen) | |
| dc.subject.keyword | 知識圖譜,向量表示法,多關係圖,連結預測,三重資料分類,自然語言處理 | zh_TW |
| dc.subject.keyword | knowledge graph, representation learning, multi-relational graph, link prediction, triple classification, natural language processing | en |
| dc.relation.page | 48 | |
| dc.identifier.doi | 10.6342/NTU201704192 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2017-08-31 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
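The abstracts above describe the model only at a high level, and the full text is not openly accessible, so the following is a minimal, hypothetical sketch of the general technique they outline: a TransE-style translation score in which an entity's representation is blended with an attention-weighted aggregate of estimates derived from its selected neighbors. Every name, dimension, the neighbor-scoring rule, and the fixed mixing weight below are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 5 entities, 2 relations, 50-dimensional embeddings.
n_entities, n_relations, dim = 5, 2, 50
E = rng.normal(scale=dim ** -0.5, size=(n_entities, dim))   # entity embeddings
R = rng.normal(scale=dim ** -0.5, size=(n_relations, dim))  # relation embeddings

# Assumed neighborhood structure: entity id -> list of (relation id, tail id).
neighbors = {0: [(0, 1), (1, 2)], 3: [(0, 4)]}


def softmax(x):
    x = x - x.max()  # shift for numerical stability
    e = np.exp(x)
    return e / e.sum()


def neighbor_based_embedding(e, r):
    """Blend an entity's own embedding with an attention-weighted
    aggregate of estimates derived from its neighbors (assumed scheme)."""
    base = E[e]
    ctx = neighbors.get(e, [])
    if not ctx:
        return base
    # Each neighbor triple (e, r', t') suggests e ~ t' - r' under h + r = t.
    estimates = np.stack([E[t] - R[rel] for rel, t in ctx])
    # Attention: weight each estimate by compatibility with the query relation.
    att = softmax(estimates @ R[r])
    mixed = att @ estimates
    return 0.5 * base + 0.5 * mixed  # fixed 50/50 mix, purely an assumption


def score(h, r, t):
    """TransE-style energy ||h + r - t||; lower means a more plausible triple."""
    return np.linalg.norm(
        neighbor_based_embedding(h, r) + R[r] - neighbor_based_embedding(t, r)
    )


print(score(0, 0, 1))  # plausibility of the toy triple (entity 0, relation 0, entity 1)
```

Estimating an entity from a neighbor triple (e, r', t') as t' - r' mirrors the translation assumption h + r ≈ t used by the TransE family; whether the thesis mixes, gates, or replaces the base embedding in this way cannot be recovered from the record alone.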
| Appears in Collections: | Department of Computer Science and Information Engineering |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-106-1.pdf (restricted access) | 4.96 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
