Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97902

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 畢南怡 | zh_TW |
| dc.contributor.advisor | Nanyi Bi | en |
| dc.contributor.author | 徐尚淵 | zh_TW |
| dc.contributor.author | Shang-Yuan Hsu | en |
| dc.date.accessioned | 2025-07-22T16:08:24Z | - |
| dc.date.available | 2025-07-23 | - |
| dc.date.copyright | 2025-07-22 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-07-15 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97902 | - |
| dc.description.abstract | 隨著人工智慧(AI)技術日益融入各領域的決策過程,AI 已成為輔助人類決策的重要工具,但使用者對其透明性與可靠性仍有存疑。雖然已有研究指出,在不同的任務情境和模型特質下,使用者信任可得到提升,其中包括模型的可解釋性,但不同解釋方式及其對持有不同態度的使用者所產生的影響,尚未得到充分探討。本研究運用心理學中的促發效應(Priming Effect),探討不同態度的使用者與模型使用不同解釋方式的影響。參與者完成十題醫療診斷決策任務,並透過認知信任、情感信任、行為信任、感知公平性、感知有用性及任務表現等指標進行測量。研究結果顯示,相較於受促發為演算法厭惡(Algorithm Aversion)的使用者,演算法欣賞(Algorithm Appreciation)態度的使用者在認知信任、行為信任與感知有用性上顯著較高;而自然語言解釋相較於圖表解釋,更能增強使用者的情感信任。我們的研究驗證了促發效應與模型解釋方式如何分別影響使用者的感知與行為,為人機協作在醫療診斷決策領域的改進提供了方向,並為未來 AI 系統在其他領域的設計提供了有價值的參考。 | zh_TW |
| dc.description.abstract | As artificial intelligence (AI) technologies become increasingly integrated into decision-making processes across various domains, AI has become an important tool to assist human decision-making. However, users still have concerns about its transparency and reliability. While previous research has shown that trust in AI can be enhanced under different task conditions and model characteristics, including the explainability of the model, the impact of different explanation methods and different user attitudes toward AI has not been thoroughly explored. This study applies the psychological concept of priming to examine the effects of different explanation methods and different user attitudes on users. Participants completed ten medical diagnostic decision tasks, and their experiences were measured across cognitive trust, affective trust, behavioral trust, perceived fairness, perceived usefulness, and task performance. The results showed that, compared to participants primed with algorithm aversion, those with an algorithm appreciation attitude exhibited significantly higher cognitive trust, behavioral trust, and perceived usefulness. Additionally, natural language explanations were found to enhance users' affective trust more than chart-based explanations. Our study validates how priming effects and explanation methods independently influence users' perceptions and behaviors, offering directions for improving human-AI collaboration in medical decision-making and providing valuable insights for the design of AI systems in other domains. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-07-22T16:08:24Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-07-22T16:08:24Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
Abstract (in Chinese) iii
Abstract v
Contents vii
List of Figures xi
List of Tables xiii
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 The Role of Explainability in AI Systems 5
2.2 Types of Trust 7
2.3 Explanation Approach 9
2.4 The Application of Priming Effect 12
2.5 Algorithm Aversion and Algorithm Appreciation 13
2.6 Research Model 16
Chapter 3 Method 17
3.1 Study Design 17
3.2 Material 18
3.2.1 Priming Article 18
3.2.2 Dataset and AI Model 20
3.2.3 Explanation Methods 21
3.3 Participants 23
3.4 Procedure 23
3.5 Measurement 25
3.5.1 Manipulation Check 25
3.5.2 Primary Questionnaire 26
3.5.3 Task Data Analysis 27
3.5.4 Control Variables 28
Chapter 4 Results 29
4.1 Data Processing 29
4.1.1 Attention Check 29
4.1.2 Manipulation Check 30
4.1.2.1 Explanation Approach Manipulation Result 31
4.1.2.2 Priming Attitude Manipulation Result 31
4.2 Demographic Statistics 32
4.3 Reliability and Validity Analysis 32
4.4 Results of Dependent Variable Measures 33
4.4.1 Post-task Survey Data Analysis 34
4.4.2 User Behavior Data Analysis 36
4.5 Post-hoc Analysis 38
4.6 Domain Familiarity 39
Chapter 5 Discussion 41
5.1 How Attitude Priming Influences Users' Behaviors and Perceptions Toward a Specific AI Model 41
5.2 The Role of AI Attitudes in Shaping Human-AI Interaction 42
5.3 How Different Explanation Methods Influence User Interaction and Perception of AI Models 44
5.4 The Differential Impacts of Trust Types on Actual User Behavior 45
5.5 Limitations and Future Directions 47
5.5.1 Exploring the Design and Implementation of Priming Techniques 47
5.5.2 Limited Interaction Time with the AI Model 48
5.5.3 Integrating Real Chatbot Interaction in Experimental Design 48
5.5.4 Improving the Evaluation of User Behavior Data 49
5.5.5 Investigating the Effects of Varying Conversational Styles in Chatbot Explanations 49
Chapter 6 Conclusion 51
References 53
Appendix A — Questionnaire 61
A.1 Manipulation Check 61
A.1.1 AI Attitude for General AI 61
A.1.2 AI Attitude for GlucoPredict Model 62
A.1.3 Explanation Approach 62
A.2 Perception Questions 62
A.2.1 Cognitive Trust 62
A.2.2 Affective Trust 62
A.2.3 Perceived Fairness 63
A.2.4 Perceived Usefulness 63
A.2.5 Cognitive Loading 64
Appendix B — Attitude Priming Article 65
B.1 Algorithm Appreciation – Algorithm Breakthrough Achieved: Revolutionizing Early Detection of Diabetes 65
B.2 Algorithm Aversion – Algorithm Alert: The Hidden Risks of Relying on AI for Early Disease Detection 66
B.3 Control – GlucoPredict: A Technical Overview of Diabetes Prediction 67 | - |
| dc.language.iso | en | - |
| dc.subject | 可解釋性 | zh_TW |
| dc.subject | 醫療診斷決策 | zh_TW |
| dc.subject | 信任 | zh_TW |
| dc.subject | 人機互動 | zh_TW |
| dc.subject | 促發效應 | zh_TW |
| dc.subject | 人工智慧 | zh_TW |
| dc.subject | Medical Decision-Making | en |
| dc.subject | Artificial Intelligence | en |
| dc.subject | Explainability | en |
| dc.subject | Priming Effect | en |
| dc.subject | Human-AI Interaction | en |
| dc.subject | Trust | en |
| dc.title | 人工智慧決策支援任務中解釋性對於不同態度的使用者的影響 | zh_TW |
| dc.title | The Impact of Explainability on Users with Divergent Attitudes in AI-Supported Decision-Making Tasks | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 彭志宏;袁千雯 | zh_TW |
| dc.contributor.oralexamcommittee | Chih-Hung Peng;Chien-Wen Yuan | en |
| dc.subject.keyword | 人工智慧,可解釋性,促發效應,人機互動,信任,醫療診斷決策, | zh_TW |
| dc.subject.keyword | Artificial Intelligence,Explainability,Priming Effect,Human-AI Interaction,Trust,Medical Decision-Making, | en |
| dc.relation.page | 67 | - |
| dc.identifier.doi | 10.6342/NTU202501692 | - |
| dc.rights.note | 未授權 | - |
| dc.date.accepted | 2025-07-16 | - |
| dc.contributor.author-college | 管理學院 (College of Management) | - |
| dc.contributor.author-dept | 資訊管理學系 (Department of Information Management) | - |
| dc.date.embargo-lift | N/A | - |
| Appears in Collections: | 資訊管理學系 (Department of Information Management) | |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (restricted access, not publicly available) | 3.05 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
