Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93295

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 黃瀚萱 | zh_TW |
| dc.contributor.advisor | Hen-Hsen Huang | en |
| dc.contributor.author | 李姵徵 | zh_TW |
| dc.contributor.author | Pei-Cheng Li | en |
| dc.date.accessioned | 2024-07-26T16:09:30Z | - |
| dc.date.available | 2024-07-27 | - |
| dc.date.copyright | 2024-07-26 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-07-23 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93295 | - |
| dc.description.abstract | 本研究提出了一種創新方法,使用自然語言中的可讀文本來表示圖神經網絡(GNN) 中的節點,這不同於傳統的數值嵌入。我們使用大型語言模型 (LLM) 作為投影器,訓練 GNN 以從鄰居節點中聚合信息並迭代更新節點表示。我們在推薦任務中廣泛使用的 MovieLens 數據集上的實驗表明,可讀表示能有效捕捉推薦所需的信息,這表明 LLM 可以成功聚合圖中的鄰居信息。此外,微調 LLM 可以提高其生成更具應用特定的可讀表示的能力。這種技術不僅有助於將世界知識融入 GNN,還增強了其可解釋性,並允許人工干預其行為。我們的方法顯示出使圖神經網絡更易理解和控制的顯著潛力。 | zh_TW |
| dc.description.abstract | This research presents an innovative method for representing nodes in graph neural networks (GNNs) as human-readable text in natural language, diverging from traditional numerical embeddings. Employing a large language model (LLM) as a projector, we train GNNs to aggregate information from neighboring nodes and update node representations iteratively. Our experiments on the MovieLens dataset, widely used for recommendation tasks, demonstrate that human-readable representations effectively capture the information needed for recommendation, suggesting that LLMs can successfully aggregate neighborhood information in a graph. Furthermore, fine-tuning the LLMs improves their ability to generate more application-specific human-readable representations. This technique not only facilitates the incorporation of world knowledge into GNNs but also enhances their interpretability and allows for human intervention in their behavior. Our approach shows significant potential for making graph neural networks more understandable and controllable. (A minimal illustrative sketch of this aggregation idea follows the metadata table below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-07-26T16:09:30Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-07-26T16:09:30Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 口試委員審定書 (Oral Examination Committee Certification) i
誌謝 (Acknowledgements) ii
摘要 (Chinese Abstract) iii
Abstract iv
Contents vi
List of Figures viii
List of Tables ix
1 Introduction 1
2 Related Work 4
2.1 LLMs and GNNs 4
2.2 LLMs for Recommendation Systems 5
3 Methodology 6
3.1 GNN with Human-Readable Representation 7
3.2 Training of Our GNN Model 9
3.2.1 Large Language Model Update 10
3.2.2 Graph Representation Update 11
3.2.3 Model Update 11
4 Experiment 13
4.1 Dataset 14
4.2 Baseline Models 14
4.3 Settings 15
4.4 Results 16
4.5 Analysis 20
5 Conclusion 23
References 24 | - |
| dc.language.iso | en | - |
| dc.subject | 圖神經網路 | zh_TW |
| dc.subject | 大型語言模型 | zh_TW |
| dc.subject | 推薦系統 | zh_TW |
| dc.subject | Recommendation System (RecSys) | en |
| dc.subject | Large Language Model (LLM) | en |
| dc.subject | Graph Neural Network (GNN) | en |
| dc.title | 圖神經網路的可讀表示 | zh_TW |
| dc.title | Human Readable Representation for Graph Neural Network | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 碩士 (Master's) | - |
| dc.contributor.coadvisor | 鄭卜壬 | zh_TW |
| dc.contributor.coadvisor | Pu-Jen Cheng | en |
| dc.contributor.oralexamcommittee | 李政德;顏安孜 | zh_TW |
| dc.contributor.oralexamcommittee | Cheng-Te Li;An-Zi Yen | en |
| dc.subject.keyword | 大型語言模型,圖神經網路,推薦系統 | zh_TW |
| dc.subject.keyword | Large Language Model (LLM), Graph Neural Network (GNN), Recommendation System (RecSys) | en |
| dc.relation.page | 28 | - |
| dc.identifier.doi | 10.6342/NTU202402156 | - |
| dc.rights.note | 未授權 (Not authorized) | - |
| dc.date.accepted | 2024-07-26 | - |
| dc.contributor.author-college | 電機資訊學院 (College of Electrical Engineering and Computer Science) | - |
| dc.contributor.author-dept | 資料科學學位學程 (Data Science Degree Program) | - |
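
The abstract above describes a GNN whose node states are natural-language descriptions, with an LLM serving as the projector that aggregates neighbor information into an updated description. The Python snippet below is a minimal illustrative sketch of that idea, not the thesis's actual implementation: the function name `readable_gnn_layer`, the prompt wording, and the `generate` callable (standing in for any LLM completion API) are hypothetical placeholders.

```python
# Illustrative sketch only (not the thesis code): one round of "message
# passing" in which node states are human-readable text and an LLM rewrites
# each node's description to fold in its neighbors' descriptions.
from typing import Callable

def readable_gnn_layer(
    descriptions: dict[str, str],     # node id -> current textual state
    neighbors: dict[str, list[str]],  # node id -> adjacent node ids
    generate: Callable[[str], str],   # LLM completion call (assumed)
) -> dict[str, str]:
    """Update every node's human-readable representation from its neighbors."""
    updated = {}
    for node, text in descriptions.items():
        neighbor_text = "\n".join(
            f"- {descriptions[n]}" for n in neighbors.get(node, [])
        )
        prompt = (
            "Current description of this node:\n"
            f"{text}\n\n"
            "Descriptions of its neighbors:\n"
            f"{neighbor_text}\n\n"
            "Rewrite the node description so it also summarizes the "
            "information carried by its neighbors."
        )
        updated[node] = generate(prompt)  # LLM acts as the projector
    return updated

# Stacking k such layers mimics k-hop aggregation in a conventional GNN:
# for _ in range(k):
#     descriptions = readable_gnn_layer(descriptions, neighbors, generate)
```

Because each intermediate state is plain text, a person can read, audit, or even edit a node's representation between layers, which is the interpretability and human-intervention property the abstract emphasizes; fine-tuning the LLM, as the abstract notes, would specialize the generated descriptions to the recommendation task.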
| Appears in Collections: | 資料科學學位學程 (Data Science Degree Program) | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (Restricted Access) | 4.07 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
