Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97902
| Title: | The Impact of Explainability on Users with Divergent Attitudes in AI-Supported Decision-Making Tasks |
| Author: | Shang-Yuan Hsu (徐尚淵) |
| Advisor: | Nanyi Bi (畢南怡) |
| Keywords: | Artificial Intelligence, Explainability, Priming Effect, Human-AI Interaction, Trust, Medical Decision-Making |
| Publication Year: | 2025 |
| Degree: | Master's |
| Abstract: | As artificial intelligence (AI) technologies become increasingly integrated into decision-making processes across various domains, AI has become an important tool for assisting human decision-making. However, users still have concerns about its transparency and reliability. While previous research has shown that trust in AI can be enhanced under different task conditions and model characteristics, including model explainability, the combined impact of different explanation methods and different user attitudes toward AI has not been thoroughly explored. This study applies the psychological concept of the priming effect to examine how different explanation methods affect users holding different attitudes toward AI. Participants completed ten medical diagnostic decision-making tasks, and their experiences were measured in terms of cognitive trust, affective trust, behavioral trust, perceived fairness, perceived usefulness, and task performance. The results show that, compared to participants primed toward algorithm aversion, those primed toward algorithm appreciation exhibited significantly higher cognitive trust, behavioral trust, and perceived usefulness, and that natural language explanations enhanced users' affective trust more than chart-based explanations. Our study demonstrates how priming effects and explanation methods independently influence users' perceptions and behaviors, offering directions for improving human-AI collaboration in medical diagnostic decision-making and providing valuable insights for the design of AI systems in other domains. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97902 |
| DOI: | 10.6342/NTU202501692 |
| Full-Text License: | Not authorized |
| Electronic Full-Text Release Date: | N/A |
| Appears in Collections: | Department of Information Management |
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (not authorized for public access) | 3.05 MB | Adobe PDF |
Unless otherwise specified in their copyright terms, all items in this repository are protected by copyright, with all rights reserved.
