Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97902
Title: 人工智慧決策支援任務中解釋性對於不同態度的使用者的影響
The Impact of Explainability on Users with Divergent Attitudes in AI-Supported Decision-Making Tasks
Authors: 徐尚淵
Shang-Yuan Hsu
Advisor: 畢南怡
Nanyi Bi
Keyword: 人工智慧, 可解釋性, 促發效應, 人機互動, 信任, 醫療診斷決策
Artificial Intelligence, Explainability, Priming Effect, Human-AI Interaction, Trust, Medical Decision-Making
Publication Year: 2025
Degree: Master's
Abstract: 隨著人工智慧(AI)技術日益融入各領域的決策過程,AI 已成為輔助人類決策的重要工具,但使用者對其透明性與可靠性仍有存疑。雖然已有研究指出,在不同的任務情境和模型特質下,使用者信任可得到提升,其中包括模型的可解釋性,但不同解釋方式及其對持有不同態度的使用者所產生的影響,尚未得到充分探討。本研究運用心理學中的促發效應(Priming Effect),探討不同態度的使用者與模型使用不同解釋方式的影響。參與者完成十題醫療診斷決策任務,並透過認知信任、情感信任、行為信任、感知公平性、感知有用性及任務表現等指標進行測量。研究結果顯示,相較於受促發為演算法厭惡(Algorithm Aversion)的使用者,演算法欣賞(Algorithm Appreciation)態度的使用者在認知信任、行為信任與感知有用性上顯著較高;而自然語言解釋相較於圖表解釋,更能增強使用者的情感信任。我們的研究驗證了促發效應與模型解釋方式如何分別影響使用者的感知與行為,為人機協作在醫療診斷決策領域的改進提供了方向,並為未來 AI 系統在其他領域的設計提供了有價值的參考。
As artificial intelligence (AI) technologies become increasingly integrated into decision-making processes across various domains, AI has become an important tool for assisting human decision-making; however, users still have concerns about its transparency and reliability. While previous research has shown that trust in AI can be enhanced under different task conditions and model characteristics, including model explainability, the effects of different explanation methods on users holding different attitudes toward AI have not been thoroughly explored. This study applies the psychological priming effect to examine how different explanation methods affect users primed with different attitudes toward AI. Participants completed ten medical diagnostic decision tasks, and their experiences were measured in terms of cognitive trust, affective trust, behavioral trust, perceived fairness, perceived usefulness, and task performance. The results showed that, compared to participants primed with algorithm aversion, those primed with algorithm appreciation exhibited significantly higher cognitive trust, behavioral trust, and perceived usefulness; in addition, natural language explanations enhanced users' affective trust more than chart-based explanations did. Our study demonstrates how the priming effect and the explanation method each influence users' perceptions and behaviors, offering directions for improving human-AI collaboration in medical diagnostic decision-making and providing valuable insights for the design of AI systems in other domains.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97902
DOI: 10.6342/NTU202501692
Fulltext Rights: Not authorized
Embargo Lift Date: N/A
Appears in Collections: 資訊管理學系 (Department of Information Management)

Files in This Item:
File: ntu-113-2.pdf (Restricted Access), 3.05 MB, Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
