NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97739

Full metadata record (DC field: value [language]):
dc.contributor.advisor: 陳建錦 [zh_TW]
dc.contributor.advisor: Chien-Chin Chen [en]
dc.contributor.author: 賴煒奇 [zh_TW]
dc.contributor.author: Wei-Chi Lai [en]
dc.date.accessioned: 2025-07-16T16:06:47Z
dc.date.available: 2025-07-17
dc.date.copyright: 2025-07-16
dc.date.issued: 2025
dc.date.submitted: 2025-07-08
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97739
dc.description.abstract: In recent years, people have grown accustomed to receiving information and expressing their views through social media platforms. On Twitter (X.com), a "retweet" shares an already published message; it not only conveys a user's stance but also helps reinforce personal viewpoints. Retweet prediction, an important research direction for studying information diffusion on social media, aims to improve prediction accuracy and, by analyzing retweet behavior, to better understand user preferences and the decision factors behind them. To improve predictive performance, many studies have applied deep learning models to the retweet prediction task. Compared with traditional machine learning methods, deep learning not only removes the tedious process of manual feature engineering but also markedly improves prediction results. With the rapid development of large language models (LLMs) in recent years, which have demonstrated strong capabilities in text understanding, summarization, and reasoning, LLMs have been widely applied across natural language processing. However, their application to the retweet prediction task remains rare and calls for further exploration.
This thesis focuses on content-based prediction of user retweet behavior: using the historical posts of both the user and the author of a target tweet, it analyzes the similarity between the target tweet and those posting histories to predict whether the user will retweet the target tweet. We propose a novel prediction framework that pairs a deep learning model amenable to input-weight analysis with large language models: the model identifies the inputs that most strongly influence the prediction, and this information is used to refine the LLM's prompt instructions, improving its performance on the retweet prediction task. The approach offers a valuable reference for future applications of LLMs to similar tasks. In addition, the proposed content-similarity-based deep learning model has a simplified architecture that supports intuitive feature attribution while delivering strong predictive performance and execution efficiency, providing a practical solution that combines interpretability and effectiveness. [zh_TW]
dc.description.abstract: In recent years, social media platforms have become central to how people receive information and express opinions. On Twitter (X.com), retweeting (sharing an existing post) serves both to express a user's stance and to reinforce personal views. Retweet prediction is a key research area for understanding information diffusion, aiming both to improve predictive accuracy and to reveal user preferences and decision-making factors. Deep learning models have been widely adopted for this task, offering superior performance over traditional machine learning while eliminating the need for manual feature engineering. With the rapid development of large language models (LLMs), their capabilities in text understanding and reasoning have been applied across a wide range of NLP tasks; their use in retweet prediction, however, remains underexplored.
This study focuses on content-based retweet prediction, using the posting histories of both the user and the tweet author to analyze their similarity with the target tweet. We propose a novel framework that combines a deep learning model capable of input-weight analysis with prompt refinement for LLMs, improving their predictive performance. This approach offers valuable insights for applying LLMs to similar tasks. Additionally, our proposed Similarity-Based deep learning model features a simplified architecture that enables intuitive feature attribution, strong prediction performance, and efficient execution, making it a practical and interpretable solution for related research. [en]
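The abstract's second paragraph outlines a pipeline in which a similarity-based model scores a target tweet against two posting histories, and the model's input weights then inform how the LLM prompt is phrased. As a rough, self-contained illustration of that idea only (this record gives no implementation details), here is a minimal Python sketch: the hash-based embed function is a toy stand-in for a real sentence encoder, the weights are hand-set rather than learned, and every name in it (embed, retweet_score, refine_prompt) is hypothetical rather than taken from the thesis.

```python
# Illustrative sketch only: a toy content-similarity retweet scorer whose
# feature weights are then used to slant an LLM prompt. None of this is the
# thesis's actual implementation.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic toy embedding; a real system would use a sentence encoder."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def mean_similarity(target: str, history: list[str]) -> float:
    """Mean cosine similarity between the target tweet and a posting history."""
    t = embed(target)
    return float(np.mean([t @ embed(p) for p in history]))

def retweet_score(target: str, user_history: list[str],
                  author_history: list[str], weights: np.ndarray) -> float:
    """Weighted sum of similarity features; higher suggests a likelier retweet."""
    feats = np.array([mean_similarity(target, user_history),
                      mean_similarity(target, author_history)])
    return float(weights @ feats)

def refine_prompt(target: str, weights: np.ndarray) -> str:
    """Name the most influential feature in the prompt, mimicking the idea of
    steering an LLM with feature-attribution results."""
    names = ["the user's own posting history",
             "the tweet author's posting history"]
    top = names[int(np.argmax(np.abs(weights)))]
    return (f"Will the user retweet this tweet: \"{target}\"? "
            f"Base your judgment primarily on {top}.")

if __name__ == "__main__":
    # Hand-set weights for illustration; the thesis learns them from data.
    w = np.array([0.7, 0.3])
    tweet = "New results on prompt refinement for retweet prediction"
    user_hist = ["Reading about LLMs", "Prompt engineering tricks"]
    author_hist = ["Our lab's latest NLP paper"]
    print(retweet_score(tweet, user_hist, author_hist, w))
    print(refine_prompt(tweet, w))
```

In the thesis's own framework, judging from the table of contents, the weights come from the trained Similarity-Based model's weight analysis (Section 3.5.1) and feed LLM prompt refinement (Section 4.6.3); the hand-set constants above merely stand in for that step.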
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-07-16T16:06:47Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2025-07-16T16:06:47Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Oral Examination Committee Certification i
Acknowledgements ii
Chinese Abstract iii
Abstract iv
Table of Contents v
List of Figures viii
List of Tables ix
Chapter 1: Introduction 1
1.1 Background 1
1.2 Research Motivations and Objectives 2
1.3 Research Scope 3
1.4 Research Contributions 4
Chapter 2: Related Works 5
2.1 Retweet Prediction 5
2.1.1 Machine Learning Approaches 5
2.1.2 Deep Learning Approaches 6
2.2 Utilizing LLMs for Recommendation Tasks 7
2.3 Advances in Interpreting and Evaluating Explanations in LLMs 8
Chapter 3: Methodology 10
3.1 Problem Formulation 10
3.2 Overall Framework 11
3.3 Similarity-Based Models 13
3.4 LLM-Based Models 15
3.5 Feature Attribution of Similarity-Based Models 17
3.5.1 Weight Analysis for Similarity-Based Models 18
3.5.2 Prompt Refinement and Self-Explanation of LLM-Based Models 19
3.6 Comparison of Similarity-Based and LLM-Based Models 20
Chapter 4: Experiments 21
4.1 Dataset Construction 21
4.2 Experiment Configuration 22
4.3 Evaluation Metrics 23
4.4 Baseline Models 23
4.5 Retweet Prediction Performance 24
4.6 LLM Prompt Refinement through Feature Attribution 26
4.6.1 Ablation Study of Similarity-Based Models 26
4.6.2 Weight Analysis Results for Similarity-Based Models 27
4.6.3 Leveraging Weight Analysis to Refine LLM Prompts 29
4.7 Experimental Evaluation of Efficiency, Embedding Models, and User Representations 32
4.7.1 Execution Time Analysis 32
4.7.2 Embedding Model Analysis 34
4.7.3 User Representation Method Analysis 35
Chapter 5: Conclusion and Future Work 36
5.1 Conclusion 36
5.2 Future Work 37
References 38
Appendix 44
Appendix A: Prompt Instruction for Retweet Prediction 44
A.1 Prompt Instruction Targeting x1 45
A.2 Prompt Instruction Targeting x2 46
A.3 Prompt Instruction Targeting x3 47
A.4 Prompt Instruction Targeting x4 48
A.5 Prompt Instruction Targeting x5 49
A.6 Prompt Instruction Targeting x6 50
Appendix B: Prompt Instruction for Summarization 51
dc.language.iso: en
dc.subject: 特徵歸因 [zh_TW]
dc.subject: 轉推預測 [zh_TW]
dc.subject: 深度學習 [zh_TW]
dc.subject: 提示詞工程 [zh_TW]
dc.subject: 大型語言模型 [zh_TW]
dc.subject: large language model [en]
dc.subject: feature attribution [en]
dc.subject: prompt engineering [en]
dc.subject: retweet prediction [en]
dc.subject: deep learning [en]
dc.title: 運用內容相似度與大型語言模型提升轉推預測之表現 [zh_TW]
dc.title: Enhancing Retweet Prediction Performance via Content-Based Similarity and Large Language Models [en]
dc.type: Thesis
dc.date.schoolyear: 113-2
dc.description.degree: Master
dc.contributor.coadvisor: 何承遠 [zh_TW]
dc.contributor.coadvisor: Cheng-Yuan Ho [en]
dc.contributor.oralexamcommittee: 盧信銘;詹益禎 [zh_TW]
dc.contributor.oralexamcommittee: Hsin-Min Lu; Yi-Cheng Chan [en]
dc.subject.keyword: 轉推預測,深度學習,大型語言模型,特徵歸因,提示詞工程 [zh_TW]
dc.subject.keyword: retweet prediction, deep learning, large language model, feature attribution, prompt engineering [en]
dc.relation.page: 51
dc.identifier.doi: 10.6342/NTU202501568
dc.rights.note: Authorization granted (open access worldwide)
dc.date.accepted: 2025-07-09
dc.contributor.author-college: College of Management
dc.contributor.author-dept: Department of Information Management
dc.date.embargo-lift: 2025-07-17
Appears in collections: Department of Information Management

Files in this item:
File: ntu-113-2.pdf  Size: 2.78 MB  Format: Adobe PDF

Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
