NTU Theses and Dissertations Repository

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100233
Title: Ethical and Legal Challenges of Applying Artificial Intelligence in Suicide Prevention Strategies: A Scoping Review
Author: Chun-Lun Hsu (許郡倫)
Advisor: Chien-Chang Wu (吳建昌)
Keywords: Artificial Intelligence, Suicide Prevention, Ethics, Law, Scoping Review, Privacy, Bias, Responsibility Attribution
Publication Year: 2025
Degree: Master's
Abstract:
Research Background: Suicide represents a major global public health problem, with approximately 720,000 deaths annually worldwide, equivalent to roughly one death every 44 seconds. As artificial intelligence (AI) technology rapidly advances in healthcare, its potential applications in suicide prevention are increasingly evident, encompassing risk prediction, early intervention, and personalized treatment. However, applying AI in this highly sensitive and complex domain inevitably raises a series of ethical and legal challenges involving privacy protection, algorithmic fairness, responsibility attribution, and informed consent. Through a scoping review, this study comprehensively examines the ethical and legal issues facing AI applications in suicide prevention, explores potential solutions, and offers concrete, feasible regulatory recommendations for policy development and practical implementation.
Research Methods: This study employed a scoping review as its primary method, which is well suited to mapping the knowledge landscape of an emerging or complex field. Thematic analysis was used to organize the descriptive results, supplemented by comparative legal analysis of the AI-related regulatory frameworks of Taiwan, the European Union, and the United States. Literature searches were conducted in April 2024, retrieving 79 articles from four databases: PubMed, Scopus, PsycINFO, and EMBASE. After duplicate removal, abstract screening, and full-text evaluation, 20 high-quality relevant articles published between 2018 and 2024 were included for in-depth analysis. The study used the qualitative analysis software NVivo 12 for systematic categorization, combined with the World Health Organization (WHO) global suicide prevention strategy guidelines for comprehensive analysis.
Research Results:
1. AI Application Models: AI applications in suicide prevention primarily fall into three categories: (1) Risk prediction and assessment, using machine learning algorithms to integrate electronic health records, social media data, and other sources into personalized risk scores (a minimal illustrative sketch of this category follows the list below); (2) Intervention support and treatment assistance, including AI chatbots providing immediate, anonymous, and personalized support services; (3) Monitoring and community safety maintenance, analyzing online content with natural language processing to identify high-risk communities and implement proactive interventions.
2. Core Ethical Issues: Through thematic analysis, the study identified eight core ethical issues: (1) Privacy and data protection, involving the collection, storage, and safe use of sensitive personal data; (2) Informed consent and autonomy, exploring how to ensure individual informed consent and autonomous decision-making during AI interventions; (3) Algorithmic bias and fairness, addressing concerns about discriminatory assessments AI systems may produce for specific populations; (4) Accuracy and reliability, involving false positive and false negative risks of AI systems and their ethical consequences; (5) Legal responsibility attribution and oversight, exploring responsibility delineation when AI systems cause harm; (6) Beneficence and non-maleficence principles, discussing how to maximize beneficial effects while ensuring adherence to minimal harm principles; (7) Humanized care and trust, emphasizing the importance of interpersonal care and empathy that AI cannot replace; (8) Ethical governance and future directions, involving the establishment of robust AI ethical governance frameworks and policy guidelines.
3. Legal Framework Comparison: Comparative analysis revealed that the EU adopts a comprehensive, mandatory regulatory framework, with the AI Act classifying health prediction systems involving life risks as high-risk, requiring strict pre-assessment and human oversight; the United States tends toward non-mandatory standards and guidelines, providing trustworthy AI development guidance through the National Institute of Standards and Technology (NIST), maintaining greater flexibility but potentially leading to regulatory insufficiency; Taiwan is in a transitional phase, with the "Artificial Intelligence Basic Act" draft establishing seven fundamental principles, but specific regulations for particular fields such as suicide prevention remain to be developed.
4. Stakeholder Responsibilities: The study identified different roles and responsibilities of various stakeholders in AI suicide prevention applications: (1) Policymakers need to establish regulatory frameworks and promote cross-sector cooperation, developing national guidelines to ensure strategy uniformity and operability; (2) Healthcare institutions are responsible for integrating AI technology with clinical practice, establishing internal usage guidelines and interdisciplinary collaboration mechanisms; (3) Technology developers should focus on transparency and explainability, enhancing system reliability and promoting clinical cooperation; (4) The general public needs to improve awareness and participation, cultivating digital health literacy and building supportive communities; (5) Individuals with lived experience (patients and families/survivors) serve as the fifth key stakeholder group, whose personal experiences and unique perspectives are crucial for ensuring AI tool practicality, acceptability, and cultural appropriateness, providing valuable user feedback and helping identify potential bias issues.
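As a concrete illustration of the risk-prediction category in result 1, the following is a minimal Python sketch, assuming entirely synthetic data and hypothetical feature names (e.g., prior attempts, recent emergency visits, a standardized symptom score); it does not reproduce any system or dataset reviewed in the thesis. It shows how a machine-learning classifier turns structured records into a personalized risk score, and how thresholding that score creates the false-positive/false-negative trade-off flagged among the ethical issues in result 2.

    # Toy risk-prediction sketch (illustrative only; all data synthetic).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=42)

    # Hypothetical structured features standing in for EHR-derived
    # variables such as prior attempts, recent emergency visits,
    # and a standardized symptom score.
    n = 1000
    X = rng.normal(size=(n, 3))
    # Synthetic outcome loosely correlated with the first feature.
    y = (X[:, 0] + rng.normal(scale=1.0, size=n) > 1.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y
    )

    model = LogisticRegression().fit(X_train, y_train)

    # The "personalized risk score" is the predicted probability.
    risk_scores = model.predict_proba(X_test)[:, 1]
    print(f"Held-out AUC: {roc_auc_score(y_test, risk_scores):.2f}")

    # Any alert threshold trades false positives (unnecessary
    # interventions) against false negatives (missed risk).
    alerts = risk_scores >= 0.5
    print(f"Flagged {alerts.sum()} of {len(alerts)} cases at threshold 0.5")

Raising or lowering the 0.5 threshold in this sketch shifts the balance between unnecessary interventions and missed cases, which is precisely why the thesis treats accuracy and reliability as an ethical rather than purely technical question.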
Discussion and Conclusions: While AI technology possesses tremendous potential for improving risk identification efficiency, expanding prevention coverage, and providing personalized interventions, it simultaneously faces multi-layered challenges including privacy protection, data security, algorithmic bias, misdiagnosis risks, digital exclusion, and humanized care. The study emphasizes that AI should be positioned as an assistive tool for professionals rather than a replacement, requiring collaborative action from all stakeholders based on shared ethical principles and scientific evidence. The existing ethical issues of AI in healthcare (foundational level), traditional ethical problems in suicide prevention (professional level), and new challenges introduced by AI implementation (technological integration level) are interconnected, necessitating the establishment of an ethical governance model centered on life protection. Specific recommendations include: (1) Establishing comprehensive legal and ethical frameworks that clearly specify standards for personal health data collection, use, and storage; (2) Setting up independent ethical review committees to assess ethical compliance of AI-assisted suicide prevention projects; (3) Promoting cross-sector cooperation to integrate resources from health, education, social services, and other fields; (4) Developing culturally sensitive AI tools to reduce algorithmic bias; (5) Establishing hybrid care models that combine technological efficiency with humanized care. Future research should focus on cross-cultural validation, technology-ethics collaborative development, vulnerable population studies, and policy impact assessment to ensure responsible application of AI technology in suicide prevention, ultimately achieving harmonious coexistence between technology and humanistic care.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100233
DOI: 10.6342/NTU202502994
Full-Text Authorization: Authorized (campus access only)
Electronic Full-Text Release Date: 2030-07-31
Appears in Collections: Graduate Institute of Medical Education and Bioethics (醫學教育暨生醫倫理學科所)

Files in This Item:
File: ntu-113-2.pdf | Size: 1.42 MB | Format: Adobe PDF | Access: restricted (not authorized for public access)


Items in this system are protected by copyright, with all rights reserved, unless otherwise indicated.
