Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98500
Full metadata record (DC field: value, language)
dc.contributor.advisor: 莊裕澤 (zh_TW)
dc.contributor.advisor: Yuh-Jzer Joung (en)
dc.contributor.author: 劉姝豆 (zh_TW)
dc.contributor.author: Shu-Dou Liu (en)
dc.date.accessioned: 2025-08-14T16:21:29Z
dc.date.available: 2025-08-15
dc.date.copyright: 2025-08-14
dc.date.issued: 2025
dc.date.submitted: 2025-08-01
dc.identifier.citation:
Anthropic. (2024). Measuring the persuasiveness of language models. Anthropic Research. https://www.anthropic.com/research/measuring-model-persuasiveness
Bai, H., Voelkel, J., Eichstaedt, J., & Willer, R. (2023). Artificial Intelligence Can Persuade Humans on Political Issues. https://doi.org/10.21203/rs.3.rs-3238396/v1
Bsharat, S. M., Myrzakhan, A., & Shen, Z. (2024). Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4 (No. arXiv:2312.16171). arXiv. https://doi.org/10.48550/arXiv.2312.16171
Cairns-Lee, H., Lawley, J., & Tosey, P. (2022). Enhancing Researcher Reflexivity About the Influence of Leading Questions in Interviews. The Journal of Applied Behavioral Science, 58(1), 164–188. https://doi.org/10.1177/00218863211037446
Chen, L.-H. (2024a). Taiwan’s Election and Democratization Study, 2020-2024(I): Kaohsiung City Mayor By-Elections: Telephone Interview (TEDS2020M_BE-T) [Dataset]. Survey Research Data Archive, Academia Sinica. https://doi.org/10.6141/TW-SRDA-D00213-1
Chen, L.-H. (2024b). Taiwan’s Election and Democratization Study, 2020-2024(I): TEDS Benchmark Survey, 2021 (TEDS2021) [Dataset]. Survey Research Data Archive, Academia Sinica. https://doi.org/10.6141/TW-SRDA-D00219-1
Cheng, S.-F. (2023). The New Development and Effect of Taiwan Identity [Dataset]. Survey Research Data Archive, Academia Sinica. https://doi.org/10.6141/TW-SRDA-E10935-1
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In J. Burstein, C. Doran, & T. Solorio (Eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 4171–4186). Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423
Fricker, S., Galesic, M., Tourangeau, R., & Yan, T. (2005). An Experimental Comparison of Web and Telephone Surveys. Public Opinion Quarterly, 69(3), 370–392. https://doi.org/10.1093/poq/nfi027
Galesic, M., & Bosnjak, M. (2009). Effects of Questionnaire Length on Participation and Indicators of Response Quality in a Web Survey. Public Opinion Quarterly, 73(2), 349–360. https://doi.org/10.1093/poq/nfp031
Hackenburg, K., & Margetts, H. (2024). Evaluating the persuasive influence of political microtargeting with large language models. Proceedings of the National Academy of Sciences, 121(24), e2403116121. https://doi.org/10.1073/pnas.2403116121
Huang, C. (2023). Taiwan’s Election and Democratization Study, 2016-2020(IV): The Survey of Presidential and Legislative Elections, 2020 (TEDS2020) [Dataset]. Survey Research Data Archive, Academia Sinica. https://doi.org/10.6141/TW-SRDA-D00208-1
Huang, M.-H. (2024). Asian Barometer Surveys Research Planning Project (5 year term) [Dataset]. Survey Research Data Archive, Academia Sinica. https://doi.org/10.6141/TW-SRDA-D00247-1
King, N., Horrocks, C., & Brooks, J. (2019). Interviews in qualitative research. Sage Publications Ltd. https://uk.sagepub.com/en-gb/eur/interviews-in-qualitative-research/book241444
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2023). Large Language Models are Zero-Shot Reasoners (No. arXiv:2205.11916). arXiv. https://doi.org/10.48550/arXiv.2205.11916
Krebs, D., & Höhne, J. K. (2021). Exploring Scale Direction Effects and Response Behavior across PC and Smartphone Surveys. Journal of Survey Statistics and Methodology, 9(3), 477–495. https://doi.org/10.1093/jssam/smz058
Kreuter, F., McCulloch, S., Presser, S., & Tourangeau, R. (2011). The Effects of Asking Filter Questions in Interleafed Versus Grouped Format. Sociological Methods & Research, 40(1), 88–104. https://doi.org/10.1177/0049124110392342
Li, G., Hammoud, H. A. A. K., Itani, H., Khizbullin, D., & Ghanem, B. (2023). CAMEL: Communicative Agents for “Mind” Exploration of Large Language Model Society (No. arXiv:2303.17760). arXiv. https://doi.org/10.48550/arXiv.2303.17760
Lin, Z. (2024). How to write effective prompts for large language models. Nature Human Behaviour, 8(4), 611–615. https://doi.org/10.1038/s41562-024-01847-2
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach (No. arXiv:1907.11692). arXiv. https://doi.org/10.48550/arXiv.1907.11692
OpView Trend. (n.d.). Retrieved December 11, 2024, from https://trend.opview.com.tw/trend/Explore
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior (No. arXiv:2304.03442). arXiv. https://doi.org/10.48550/arXiv.2304.03442
Rapp, C. (2002). Aristotle’s rhetoric. In The Stanford Encyclopedia of Philosophy. Stanford University.
Ruch, W., & Heintz, S. (2017). Experimentally Manipulating Items Informs on the (Limited) Construct and Criterion Validity of the Humor Styles Questionnaire. Frontiers in Psychology, 8. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2017.00616
Xu, R., Lin, B. S., Yang, S., Zhang, T., Shi, W., Zhang, T., Fang, Z., Xu, W., & Qiu, H. (2024). The Earth is Flat because...: Investigating LLMs’ Belief towards Misinformation via Persuasive Conversation (No. arXiv:2312.09085). arXiv. https://doi.org/10.48550/arXiv.2312.09085
Yeo, A., Legard, R., Keegan, J., Ward, K., Nicholls, C. M., & Lewis, C. (2013). Qualitative Research Practice: A Guide for Social Science Students and Researchers (J. Ritchie, J. Lewis, C. M. Nicholls, & R. Ormston, Eds.; pp. 177–208). SAGE.
Yin, Z., Sun, Q., Chang, C., Guo, Q., Dai, J., Huang, X., & Qiu, X. (2023). Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication (No. arXiv:2312.01823). arXiv. https://doi.org/10.48550/arXiv.2312.01823
Zhao, X., Wang, K., & Peng, W. (2023). ORCHID: A Chinese Debate Corpus for Target-Independent Stance Detection and Argumentative Dialogue Summarization. In H. Bouamor, J. Pino, & K. Bali (Eds.), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 9358–9375). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.emnlp-main.582
中華民國內政部戶政司. (2018, May 1). 中華民國內政部戶政司全球資訊網 [Department of Household Registration, Ministry of the Interior, R.O.C., official website]. https://www.ris.gov.tw/app/portal/346
國立政治大學選舉研究中心. (2025, January 13). 臺灣民眾政黨偏好趨勢分佈 [Trends in party preferences among the Taiwanese public]. https://esc.nccu.edu.tw/PageDoc/Detail?fid=7806&id=6965
國立臺灣師範大學秘書室公共事務中心 [National Taiwan Normal University, Public Affairs Center, Office of the Secretariat]. (2023, November 16). https://pr.ntnu.edu.tw/news/index.php?mode=data&id=22052
電話訪問電話號碼抽取使用與管理原則 [Principles for the sampling, use, and management of telephone numbers in telephone interviews]. (2025, June 4). https://esc.nccu.edu.tw/PageDoc/Detail?fid=13800&id=34482
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98500
dc.description.abstract (zh_TW):
隨著大型語言模型(Large Language Models, LLMs)技術的迅速發展,人工智慧在語音辨識、自然語言處理與文本生成等領域已展現出高度成熟的應用潛力。特別是以ChatGPT為代表的語言模型,其高度擬人化的語言理解與生成能力,已逐漸成為商業實務與研究實驗中的關鍵工具。近年來的研究指出,LLM不僅能生成流暢的自然語言文本,亦展現出一定程度的說服能力,能在特定議題上影響人類使用者的立場。然而,現有文獻大多集中於單輪互動情境,對於多輪對話中LLM是否能展現持續而有效的誘導能力,尚缺乏系統性探討。

本研究以結構化的電話訪談作為實驗框架,透過Prompt Engineering技術,設計出具誘導策略的提問方式,模擬LLM作為訪問者(Interviewer),並以另一個LLM扮演受訪者(Interviewee),以多輪對話的形式觀察立場變化。研究設計中逐輪紀錄受訪者的立場分數變動,並分析誘導性問題對於受訪者的影響。研究從多個面向進行分析,包括不同主題下的立場變化趨勢、誘導策略的組合效果差異、LLM的誘導能力以及模型間的比較。

綜合而言,本研究不僅驗證了LLM於多輪對話中誘導能力的可行性,也比較了不同模型、主題與策略組合在實際對話中的差異,為理解與規範AI語言互動帶來新的實證依據,並呼籲對誘導性對話可能帶來之倫理風險與使用規範進行更全面的關注與討論。
dc.description.abstract (en):
With the rapid advancement of Large Language Models (LLMs), artificial intelligence has demonstrated mature application potential in fields such as speech recognition, natural language processing, and text generation. Language models such as ChatGPT, with their highly human-like capabilities in language understanding and generation, have become essential tools in both industrial applications and academic research. Recent studies have shown that LLMs not only produce fluent natural-language text but also exhibit a degree of persuasive power, influencing users' stances on specific issues. However, most existing research focuses on single-turn interactions, leaving largely unexplored the question of whether LLMs can exert sustained and effective suggestive influence in multi-turn dialogues.

This study adopts a structured telephone interview as its experimental framework. Using Prompt Engineering, we design questions with embedded persuasive strategies, with one LLM simulating the interviewer and another acting as the interviewee, and observe how the interviewee's stance evolves across multiple dialogue turns. The interviewee's stance score is recorded every turn, and the effects of the leading questions are analyzed accordingly. The analysis is multi-faceted, covering stance-change trends across topics, the effects of different strategy combinations, the overall suggestive capacity of LLMs, and cross-model comparisons.

In summary, this research not only demonstrates that LLMs can exert suggestive influence in multi-turn dialogues, but also compares how different models, topics, and strategy combinations behave in realistic dialogue settings. The findings provide empirical evidence for understanding and regulating AI-mediated language interactions, and call for greater attention to the ethical risks and governance of persuasive AI dialogues.
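
A minimal Python sketch of the interview loop described in the abstracts may make the setup concrete: one LLM asks leading questions, a second answers, and a separate scoring call records the interviewee's stance each turn. Everything here is an illustrative assumption rather than the thesis's actual implementation: the chat function is a hypothetical stand-in for any LLM chat-completion API, and the 1-5 stance scale, the prompts, the strategy names, and the fixed turn limit are placeholders (the thesis defines its own termination conditions in Section 3.2.4).

from typing import Callable, Dict, List

Message = Dict[str, str]                   # {"role": ..., "content": ...}
ChatFn = Callable[[List[Message]], str]    # hypothetical LLM chat-completion call

def run_interview(chat: ChatFn, topic: str, strategies: List[str],
                  max_turns: int = 10) -> List[int]:
    """Run a multi-turn interview; return the stance score recorded each turn."""
    interviewer: List[Message] = [{"role": "system", "content":
        f"You are a telephone interviewer on the topic: {topic}. "
        "Each turn, ask one leading question using these strategies: "
        + ", ".join(strategies)}]
    interviewee: List[Message] = [{"role": "system", "content":
        f"You are a survey respondent with your own views on: {topic}. "
        "Answer each question in a few sentences."}]
    judge = ("Rate the respondent's stance toward the topic from 1 (strongly "
             "opposed) to 5 (strongly supportive). Reply with one integer.")
    scores: List[int] = []
    for _ in range(max_turns):             # stand-in for the thesis's stop conditions
        question = chat(interviewer)       # interviewer asks a leading question
        interviewee.append({"role": "user", "content": question})
        answer = chat(interviewee)         # interviewee answers
        interviewee.append({"role": "assistant", "content": answer})
        interviewer.append({"role": "assistant", "content": question})
        interviewer.append({"role": "user", "content": answer})
        # Automated stance detection: a separate LLM call scores the latest answer.
        scores.append(int(chat([{"role": "system", "content": judge},
                                {"role": "user", "content": answer}])))
    return scores

if __name__ == "__main__":
    # Dummy chat function so the sketch runs end to end; a real experiment
    # would call an actual LLM API (e.g., ChatGPT or Gemini) here.
    dummy: ChatFn = lambda messages: "3"
    print(run_interview(dummy, "abolition of the death penalty",
                        ["presupposition", "emotional framing"], max_turns=2))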
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-08-14T16:21:29Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2025-08-14T16:21:29Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
Acknowledgements
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
1.3 Thesis Organization
Chapter 2 Literature Review
2.1 The Persuasive Capabilities of LLMs
2.2 Prompt Engineering
2.3 Interaction between LLMs
2.4 LLM Stance Detection Capabilities
2.5 Leading Questions: Definition and Strategies
2.6 Summary
Chapter 3 Research Methodology
3.1 Research Framework
3.1.1 Experimental Procedure
3.1.2 The Interviewer
3.1.3 The Interviewee
3.1.4 Stance Identification
3.2 Experimental Methods
3.2.1 Research Topics
3.2.2 Strategy Selection
3.2.3 Interviewee Background Information
3.2.4 Dialogue Termination Conditions
3.2.5 Automated Stance Detection
3.3 Validation Methods
3.3.1 Total Number of Dialogue Turns
3.3.2 Proportion of Interviewees Changing Stance
3.3.3 Magnitude of Stance Changes
3.4 Preliminary Experiments
3.4.1 Convergence of Interviewee Stance Scores
3.4.2 Effect of Prompt Design on Interviewees' Initial Stance Distribution
3.4.3 Stance Variation under Identical Questions
3.4.4 Effect of Persuasion Instructions on the Interviewer's Suggestive Power
3.4.5 Effect of Strategies on the Interviewer's Suggestive Power
3.4.6 Effect of Prompt Design on the Interviewer's Suggestive Power
Chapter 4 Results
4.1 Interview Termination Turns
4.2 Number of Stance-Expressing Sentences
4.2.1 Effect of Topic on the Number of Stance-Expressing Sentences
4.2.2 Effect of Strategy Combinations on the Number of Stance-Expressing Sentences
4.3 Number of Interviewees Changing Stance
4.3.1 Differences across Topics
4.4 Stance Score Changes
4.4.1 Stance Analysis of the Dialogue Content
4.4.2 Analysis of Directly Asking Interviewees Their Stance
4.4.3 Effect of the Number of Combined Strategies
4.4.4 Differences across Topics
4.4.5 Re-asking Interviewees' Stance Given the Preceding Dialogue
4.5 Interaction Analysis: ChatGPT (Interviewer) and Gemini (Interviewee)
4.5.1 Interview Termination Turns
4.5.2 Effect of Topic on the Number of Stance-Expressing Sentences
4.5.3 Effect of Strategies on the Number of Stance-Expressing Sentences
4.5.4 Interviewee Stance Changes on the Death Penalty Abolition Topic
4.5.5 Stance Score Changes
4.5.6 Comparison of Content Produced by ChatGPT and Gemini Interviewees
4.6 Interaction Analysis: ChatGPT (Interviewee) and Gemini (Interviewer)
4.6.1 Interview Termination Turns
4.6.2 Effect of Topic on the Number of Stance-Expressing Sentences
4.6.3 Effect of Strategy Combinations on the Number of Stance-Expressing Sentences
4.6.4 Interviewee Stance Changes on the Death Penalty Abolition Topic
4.6.5 Stance Score Changes
4.7 Gemini as Both Interviewer and Interviewee
4.7.1 Interview Termination Turns
4.7.2 Effect of Topic on the Number of Stance-Expressing Sentences
4.7.3 Effect of Strategy Combinations on the Number of Stance-Expressing Sentences
4.7.4 Interviewee Stance Changes on the Death Penalty Abolition Topic
4.7.5 Stance Score Changes
Chapter 5 Conclusions and Suggestions
5.1 Research Findings
5.2 Research Limitations
5.3 Future Research Directions
References
Appendix A
Appendix B
Appendix C
Appendix D
Appendix E
Appendix F
Appendix G
dc.language.iso: zh_TW
dc.subject: 大型語言模型 (zh_TW)
dc.subject: Prompt Engineering (zh_TW)
dc.subject: 誘導能力 (zh_TW)
dc.subject: 立場改變 (zh_TW)
dc.subject: 對話內容分析 (zh_TW)
dc.subject: Persuasion (en)
dc.subject: Large Language Models (en)
dc.subject: Dialogue Analysis (en)
dc.subject: Stance Change (en)
dc.subject: Prompt Engineering (en)
dc.title: LLM誘導能力與衡量:以電話訪問為例 (zh_TW)
dc.title: Suggestive Power in LLMs: Case Study in Telephone Survey (en)
dc.type: Thesis
dc.date.schoolyear: 113-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 陳建錦; 魏志平; 楊立偉; 林俊叡 (zh_TW)
dc.contributor.oralexamcommittee: Chien-Chin Chen; Chin-Ping Wei; Li-wei Yang; Raymund Lin (en)
dc.subject.keyword: 大型語言模型, Prompt Engineering, 誘導能力, 立場改變, 對話內容分析 (zh_TW)
dc.subject.keyword: Large Language Models, Prompt Engineering, Persuasion, Stance Change, Dialogue Analysis (en)
dc.relation.page: 228
dc.identifier.doi: 10.6342/NTU202502940
dc.rights.note: 同意授權(全球公開) [authorized for worldwide open access]
dc.date.accepted: 2025-08-05
dc.contributor.author-college: 管理學院 (College of Management)
dc.contributor.author-dept: 資訊管理學系 (Department of Information Management)
dc.date.embargo-lift: 2025-08-15
Appears in Collections: 資訊管理學系 (Department of Information Management)

Files in This Item:
File: ntu-113-2.pdf (16.94 MB, Adobe PDF)


Unless their copyright terms are otherwise specified, all items in this repository are protected by copyright, with all rights reserved.
