Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93295

| Title: | Human Readable Representation for Graph Neural Network (圖神經網路的可讀表示) |
|---|---|
| Authors: | 李姵徵 Pei-Cheng Li |
| Advisor: | 黃瀚萱 Hen-Hsen Huang |
| Co-Advisor: | 鄭卜壬 Pu-Jen Cheng |
| Keyword: | Large Language Model (LLM), Graph Neural Network (GNN), Recommendation System (RecSys) |
| Publication Year: | 2024 |
| Degree: | Master's |
| Abstract: | This research presents an innovative method for representing nodes in graph neural networks (GNNs) with human-readable natural-language text, diverging from traditional numerical embeddings. By employing a large language model (LLM) as a projector, we train GNNs to aggregate information from neighboring nodes and update node representations iteratively. Our experiments on the MovieLens dataset, widely used for recommendation tasks, demonstrate that human-readable representations effectively capture useful information for recommendation, suggesting that LLMs can successfully aggregate neighborhood information in a graph. Furthermore, fine-tuning the LLMs improves their ability to generate more application-specific human-readable representations. This technique not only facilitates the incorporation of world knowledge into GNNs but also enhances their interpretability and allows for human intervention in their behavior. Our approach shows significant potential for making graph neural networks more understandable and controllable. [A minimal illustrative sketch of this text-based message passing follows the metadata table below.] |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93295 |
| DOI: | 10.6342/NTU202402156 |
| Fulltext Rights: | Not authorized |
| Appears in Collections: | Data Science Degree Program (資料科學學位學程) |
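
The abstract describes GNN message passing in which each node's state is natural-language text and an LLM serves as the aggregation function (the "projector"). The Python sketch below illustrates that idea under stated assumptions: the toy graph, the node texts, and the `aggregate_with_llm` placeholder are all hypothetical, and the placeholder merely concatenates text where the thesis would prompt an LLM to summarize; it is not the author's implementation.

```python
from typing import Dict, List

# Toy graph for a recommendation setting: node id -> neighbor ids.
# (Hypothetical data, for illustration only.)
GRAPH: Dict[str, List[str]] = {
    "user_1": ["movie_a", "movie_b"],
    "movie_a": ["user_1"],
    "movie_b": ["user_1"],
}

# Node representations are readable text rather than numeric embeddings.
TEXT_REPR: Dict[str, str] = {
    "user_1": "A viewer who enjoys character-driven science fiction.",
    "movie_a": "A slow-burn sci-fi drama about first contact.",
    "movie_b": "An action comedy set on a space station.",
}

def aggregate_with_llm(self_text: str, neighbor_texts: List[str]) -> str:
    """Placeholder for the LLM projector: given a node's current text and
    its neighbors' texts, return an updated textual representation.
    A real system would prompt an LLM to fuse and summarize these texts;
    here we only concatenate, to keep the sketch self-contained."""
    return f"{self_text} Neighborhood context: {' '.join(neighbor_texts)}"

def message_passing_round(graph: Dict[str, List[str]],
                          reprs: Dict[str, str]) -> Dict[str, str]:
    """One GNN-style update: every node aggregates its neighbors' text."""
    return {
        node: aggregate_with_llm(reprs[node], [reprs[n] for n in neighbors])
        for node, neighbors in graph.items()
    }

if __name__ == "__main__":
    reprs = TEXT_REPR
    for _ in range(2):  # two rounds, analogous to two GNN layers
        reprs = message_passing_round(GRAPH, reprs)
    print(reprs["user_1"])  # user text now reflects its 2-hop neighborhood
```

Iterating the round corresponds to stacking GNN layers: after k rounds a node's text reflects its k-hop neighborhood, and because the state stays in plain language, a human can read or edit it between rounds, which is the interpretability and intervention property the abstract emphasizes.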
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (Restricted Access) | 4.07 MB | Adobe PDF |
