Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99138

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 許永真 | zh_TW |
| dc.contributor.advisor | Jane Yung-jen Hsu | en |
| dc.contributor.author | 白宜平 | zh_TW |
| dc.contributor.author | Yi-Ping Bai | en |
| dc.date.accessioned | 2025-08-21T16:32:17Z | - |
| dc.date.available | 2025-08-22 | - |
| dc.date.copyright | 2025-08-21 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-08-02 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/99138 | - |
| dc.description.abstract | 本研究將大型語言模型(LLM)視為模擬閱聽者,探討不同對話提示是否能提升其對健康新聞真偽判斷的準確性,並降低其轉傳意願。設計的提示包含蘇格拉底式提問,促使LLM反思,以及來自行為經濟學的簡單行為提醒(nudge),用以警示轉傳假訊息的後果。實驗採用六篇經查證的長篇假新聞與六篇真新聞(來自權威機構),並依Big-Five理論建構十種典型人格與五種高風險人格。
本研究的輕量化干預分為兩種:(1) 蘇格拉底式提問,包含五題深度版與三題簡易版;(2) 行為提醒,內容為「轉傳未經查證的新聞可能造成誤導與社會恐慌,請謹慎評估」。 實驗採3 × 4因子設計(蘇格拉底提問 × nudge時機,共8組),以具長文本處理能力之GPT-4.1模擬閱聽者,針對六則假新聞與六則真新聞,結合十五種人格設定,收集初始與最終的判斷及轉傳意願,總計完成1440條對話紀錄。針對假新聞而言,結果顯示,LLM在基線下已達96%真偽判斷正確率,因此蘇格拉底式提問與nudge對判斷準確性無顯著提升,顯示具長文本處理能力之LLM已具備判斷新聞真偽的能力。然而,nudge明顯降低了轉傳意願:僅0.3%對話由「不轉傳」變為「轉傳」,9.6%則反向改變。合併有無nudge組,轉傳意願下降率分別為14.6%與0.8%;僅施以蘇格拉底提問則無顯著效果。 進一步分析發現,nudge出現於蘇格拉底提問之前或之後,對降低假新聞轉傳意願並無顯著差異,顯示只要給予簡潔的行為提醒,不論在提示流程中的順序如何,都能有效抑制轉傳意圖。 值得注意的是,本研究亦比較了干預措施對假新聞與真新聞的影響。結果顯示,無論是行為提醒還是蘇格拉底提問,都不僅有效抑制假新聞的轉傳意願,也會顯著降低真新聞的轉傳行為。在部分條件下,對真新聞的抑制效果甚至高於假新聞,突顯干預措施存在兩難:在遏止假訊息的同時,也可能抑制正確資訊流通。這一結果提醒,未來設計干預策略時,必須更加精細並兼顧情境,平衡抑制假新聞與促進可信內容的需求,特別是在公共衛生領域。 總結而言,對於模擬閱聽者的LLM而言,簡短的行為提醒比高成本的深度提問更有效且節省資源,能抑制健康假新聞的擴散。此輕量化策略適合大規模平台應用於假訊息防治,但其對真新聞的普遍抑制效應亦值得進一步探討。 | zh_TW |
| dc.description.abstract | This study treats large language models (LLMs) as simulated readers to examine whether different dialogue prompts can improve their accuracy in judging the veracity of health news, both fake and true, and reduce their willingness to share it. The prompts include Socratic questioning, which guides the LLM to reflect, and a simple behavioral nudge drawn from behavioral economics that reminds the LLM of the consequences of sharing misinformation. For the experiment, we used six long, fact-checked fake news articles and six true news articles sourced from authoritative organizations, and built ten typical personalities and five high-risk personalities based on the Big Five theory.
There are two main types of lightweight interventions: (1) Socratic questioning, which guides the model to reflect, in a deep version with five questions and a simple version with three; and (2) a behavioral nudge, a single reminder such as “Sharing unchecked news may mislead others and cause social panic. Please think carefully before sharing.” The experiment adopted a 3 × 4 factorial design (Socratic questioning × nudge timing, 8 groups in total), using GPT-4.1, which has long-context processing capabilities, to simulate readers. Across the six fake and six true news articles, combined with fifteen personality profiles, we collected both initial and final judgments and sharing intentions, for a total of 1,440 dialogue sessions. For fake news, the results show that the LLM already reached 96% accuracy in veracity judgment at baseline, so neither Socratic questioning nor the nudge significantly improved accuracy, suggesting that current long-context LLMs can already judge news veracity well. However, the nudge clearly reduced willingness to share: only 0.3% of conversations changed from “not sharing” to “sharing,” while 9.6% changed in the opposite direction. When conditions were pooled into “with nudge” and “without nudge” groups, sharing intention decreased by 14.6% and 0.8%, respectively; Socratic questioning alone had no significant effect. Further analysis revealed that the timing of the nudge, whether before or after the Socratic questions, did not significantly affect its ability to reduce sharing of fake news, indicating that a single, concise behavioral reminder can effectively suppress sharing intention regardless of its placement in the prompt sequence. Notably, this study also compared the effects of the interventions on fake news and true news.
Results show that while behavioral nudges and Socratic questioning both suppressed sharing of fake news, these interventions also significantly reduced the willingness to share true news. In some conditions, the reduction in sharing intention for true news was even greater than for fake news, highlighting a critical trade-off: interventions that curb misinformation may also inadvertently suppress accurate information. This comparative finding underscores the importance of developing nuanced and context-aware intervention strategies that balance suppression of fake news with the promotion of trustworthy content, especially in public health domains. In conclusion, for LLMs as simulated readers, a brief behavioral reminder is more effective and efficient than costly deep questioning in curbing the spread of health fake news. This lightweight approach offers practical value for scalable deployment on digital platforms to combat misinformation, but its broad suppressive effect on true news warrants further consideration and research. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-21T16:32:17Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-08-21T16:32:17Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
Chinese Abstract ii
English Abstract iv
Table of Contents vii
List of Figures ix
List of Tables x
Chapter 1. Introduction 1
Chapter 2. Related Work 9
2.1. Misinformation and Interventions 9
2.2. Socratic questions and misinformation 10
2.3. Nudging and misinformation 12
2.4. LLMs as Behavioral Simulators 13
2.5. Prompt Hierarchies 15
Chapter 3. Methodology 18
3.1. Dataset 18
3.2. Socratic method 21
3.3. Nudge Design 25
3.4. Personality design 27
Chapter 4. Experiment Design 31
Chapter 5. Results 37
5.1. Effects of the Intervention on News Veracity Judgement 37
5.1.1. Effects of the Intervention on Fake News Veracity Judgement 37
5.1.2. Effects of the Intervention on True News Veracity Judgement 40
5.1.3. Effect of the Interventions on Sharing Intention for Fake News 44
5.1.4. Effect of the Interventions on Sharing Intention for True News 49
5.2. Discussion 54
Chapter 6. Conclusion 56
Bibliography 60
Appendix 69 | - |
| dc.language.iso | en | - |
| dc.subject | 大型語言模型 | zh_TW |
| dc.subject | LLM模擬 | zh_TW |
| dc.subject | 假新聞 | zh_TW |
| dc.subject | 錯誤資訊干預 | zh_TW |
| dc.subject | 行為提醒 | zh_TW |
| dc.subject | 蘇格拉底式提問 | zh_TW |
| dc.subject | 轉傳意願 | zh_TW |
| dc.subject | LLM Simulation | en |
| dc.subject | Sharing Intention | en |
| dc.subject | Socratic Questioning | en |
| dc.subject | Behavioral Nudge | en |
| dc.subject | Misinformation Intervention | en |
| dc.subject | Fake News | en |
| dc.subject | Large Language Model | en |
| dc.title | 行為提醒優於深度提問:在大型語言模型模擬中抑制健康假新聞的輕量化策略 | zh_TW |
| dc.title | Behavioral Nudges Outperform Deep Questioning: A Lightweight Approach to Curb Health Fake News in Large Language Model Simulations | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.coadvisor | 鄭卜壬 | zh_TW |
| dc.contributor.coadvisor | Pu-Jen Cheng | en |
| dc.contributor.oralexamcommittee | 王道一;古倫維;黃喬敬 | zh_TW |
| dc.contributor.oralexamcommittee | Joseph Tao-yi Wang;Lun-Wei Ku;Chiao-Ching Huang | en |
| dc.subject.keyword | 大型語言模型,LLM模擬,假新聞,錯誤資訊干預,行為提醒,蘇格拉底式提問,轉傳意願 | zh_TW |
| dc.subject.keyword | Large Language Model, LLM Simulation, Fake News, Misinformation Intervention, Behavioral Nudge, Socratic Questioning, Sharing Intention | en |
| dc.relation.page | 108 | - |
| dc.identifier.doi | 10.6342/NTU202503358 | - |
| dc.rights.note | 同意授權(限校園內公開) | - |
| dc.date.accepted | 2025-08-06 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資訊工程學系 | - |
| dc.date.embargo-lift | 2030-08-02 | - |
| Appears in Collections: | 資訊工程學系 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-2.pdf (restricted access) | 1.41 MB | Adobe PDF | View/Open |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
