Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100233

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 吳建昌 | zh_TW |
| dc.contributor.advisor | Chien-Chang Wu | en |
| dc.contributor.author | 許郡倫 | zh_TW |
| dc.contributor.author | Chun-Lun Hsu | en |
| dc.date.accessioned | 2025-09-30T16:06:18Z | - |
| dc.date.available | 2025-10-01 | - |
| dc.date.copyright | 2025-09-30 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-08-04 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100233 | - |
| dc.description.abstract | 研究背景: 自殺作為重大公共衛生問題,全球每年約有72萬人死於自殺,相當於每40秒就有一人自殺身亡。隨著人工智慧(AI)技術在健康照護領域的快速發展,其在自殺防治中的應用潛力日益顯現,包括風險預測、早期介入和個人化治療等面向。然而,AI技術應用於此高度敏感且複雜的領域時,不可避免地引發一系列倫理與法律挑戰,涉及隱私保護、演算法公平性、責任歸屬、知情同意等多重議題。本研究旨在透過範疇界定文獻回顧,全面梳理AI技術在自殺防治應用中面臨的倫理與法律議題,探討可能的解決方案,並為政策制定和實務應用提供具體可行的規範建議。
研究方法: 本研究採用範疇界定文獻回顧(Scoping Review)作為主要研究方法,適用於勾勒新興或複雜領域的知識全貌。輔以主題分析法(Thematic Analysis)進行描述性結果整理,並運用法律比較分析(Comparative Legal Analysis)探討台灣、歐盟與美國在AI相關法規框架的差異。文獻檢索於2024年4月進行,從PubMed、SCOPUS、PsycINFO和EMBASE四個資料庫中檢索到79篇文獻,經過重複性檢查、摘要審查和全文評估,最終納入20篇2018-2024年間發表的高品質相關文獻進行深入分析。研究運用質性研究軟體NVivo 12進行系統性歸納整理,並結合世界衛生組織(WHO)的全球自殺防治策略指南進行綜合分析。 研究結果: 1. AI應用模式: AI在自殺防治中主要有三類應用方法:(1)風險預測與評估,透過機器學習演算法整合電子健康紀錄、社群媒體資料等,提供個人化風險評分;(2)介入支援與治療輔助,包括AI聊天機器人提供即時、匿名且個人化的支援服務;(3)監測與社區安全維護,透過自然語言處理技術分析網路內容,識別高風險社群並進行積極介入。 2. 核心倫理議題: 研究透過主題分析歸納出八大核心倫理議題:(1)隱私與資料保護,涉及敏感個人資料的收集、儲存與使用安全;(2)知情同意與自主權,探討在AI介入過程中如何確保個人的知情同意和自主決策權;(3)演算法偏見與公平性,關注AI系統可能對特定族群產生歧視性評估的問題;(4)準確性與可靠性,涉及AI系統的誤報與漏報風險及其倫理後果;(5)法律責任歸屬與監督,探討當AI系統造成傷害時的責任界定問題;(6)行善義務與不傷害原則,討論如何在最大化行善效益的同時確保遵循最小侵害原則;(7)人性化關懷與信任,強調AI無法替代人際關懷與同理心的重要性;(8)倫理治理與未來方向,涉及建立健全的AI倫理治理框架和政策指引。 3. 法律框架比較: 透過比較分析發現,歐盟採取全面性、強制性法規框架,如《AI法案》將涉及生命風險的健康預測系統歸類為高風險,要求嚴格的事前評估和人類監督;美國偏向非強制性標準與指南,透過國家標準與技術機構(NIST)提供可信賴AI發展指引,保留更多彈性但可能導致監管不足;台灣正處於過渡階段,「人工智慧基本法」草案雖確立七大基本原則,但在自殺防治等特定領域的具體規範仍有待發展。 4. 利害關係人責任: 研究識別出各利害關係人在AI自殺防治應用中的不同角色與責任:(1)政策制定者需建立規範框架與促進跨部門合作,制定國家級指南確保策略的統一性和可操作性;(2)醫療機構負責整合AI技術與臨床實踐,建立內部使用指南和跨領域合作機制;(3)技術開發者應注重透明度與可解釋性,增強系統可靠性並促進臨床合作;(4)社會大眾需提高認知與參與度,培養數位健康素養並建立支持性社區;(5)親身經驗者(病人與家屬/遺族)作為第五個關鍵利害關係人,能提供寶貴的使用者回饋並協助識別潛在偏見問題。 討論與結論: AI技術雖具備提升風險識別效率、擴大防治覆蓋面和提供個人化介入的巨大潛力,但同時面臨隱私保護、資料安全、演算法偏見、誤判風險、數位排除與人性化關懷等多層次挑戰。研究強調AI應定位為專業人員的輔助工具而非替代者,需要各方利害關係人基於共同的倫理原則與科學證據協同行動。AI在健康領域的既有倫理問題(基礎層面)、自殺防治的傳統倫理問題(專業層面)與AI導入所產生的新挑戰(技術整合層面)相互交織,因此須建立以生命保護為核心的倫理治理模式。具體建議包括:(1)建立全面的法律和倫理框架,明確規定個人健康資料的收集、使用和儲存標準;(2)設立獨立的倫理審查委員會,評估AI輔助自殺防治項目的倫理合規性;(3)促進跨部門合作,整合衛生、教育、社會服務等領域資源;(4)開發文化敏感性AI工具,減少演算法偏見;(5)建立混合照護模式,結合技術效率與人性關懷。未來研究應著重於跨文化驗證、技術-倫理協同發展、弱勢群體研究及政策影響評估,確保AI技術在自殺防治領域的負責任應用,最終實現科技與人文關懷的和諧共存。 | zh_TW |
| dc.description.abstract | Research Background: Suicide represents a major global public health crisis, with approximately 720,000 deaths annually worldwide, equivalent to one suicide every 40 seconds. As artificial intelligence (AI) technology rapidly advances in healthcare, its potential applications in suicide prevention are increasingly evident, encompassing risk prediction, early intervention, and personalized treatment. However, the application of AI technology in this highly sensitive and complex domain inevitably raises a series of ethical and legal challenges, including privacy protection, algorithmic fairness, responsibility attribution, and informed consent. Through a scoping review, this study aims to comprehensively examine the ethical and legal issues facing AI applications in suicide prevention, explore potential solutions, and provide concrete, feasible regulatory recommendations for policy development and practical implementation.
Research Methods: This study employed a scoping review as the primary research method, appropriate for mapping the knowledge landscape of emerging or complex fields. Thematic analysis was used to organize the descriptive results, supplemented by comparative legal analysis to explore differences in AI-related regulatory frameworks among Taiwan, the European Union, and the United States. Literature searches were conducted in April 2024, retrieving 79 articles from four databases: PubMed, SCOPUS, PsycINFO, and EMBASE. Following duplicate removal, abstract screening, and full-text evaluation, 20 high-quality relevant articles published between 2018 and 2024 were ultimately included for in-depth analysis. The study utilized the qualitative research software NVivo 12 for systematic categorization and organization, combined with the World Health Organization (WHO) global suicide prevention strategy guidelines for comprehensive analysis. Research Results: 1. AI Application Models: AI applications in suicide prevention primarily encompass three categories: (1) Risk prediction and assessment, utilizing machine learning algorithms to integrate electronic health records, social media data, and other sources to provide personalized risk scores; (2) Intervention support and treatment assistance, including AI chatbots providing immediate, anonymous, and personalized support services; (3) Monitoring and community safety maintenance, analyzing online content through natural language processing technologies to identify high-risk communities and implement proactive interventions. 2. Core Ethical Issues: Through thematic analysis, the study identified eight core ethical issues: (1) Privacy and data protection, involving the collection, storage, and safe use of sensitive personal data; (2) Informed consent and autonomy, exploring how to ensure individual informed consent and autonomous decision-making during AI interventions; (3) Algorithmic bias and fairness, addressing concerns about discriminatory assessments that AI systems may produce for specific populations; (4) Accuracy and reliability, involving false positive and false negative risks of AI systems and their ethical consequences; (5) Legal responsibility attribution and oversight, exploring responsibility delineation when AI systems cause harm; (6) Beneficence and non-maleficence principles, discussing how to maximize beneficial effects while ensuring adherence to minimal harm principles; (7) Humanized care and trust, emphasizing the importance of interpersonal care and empathy that AI cannot replace; (8) Ethical governance and future directions, involving the establishment of robust AI ethical governance frameworks and policy guidelines. 3. Legal Framework Comparison: Comparative analysis revealed that the EU adopts a comprehensive, mandatory regulatory framework, with the AI Act classifying health prediction systems involving life risks as high-risk, requiring strict pre-assessment and human oversight; the United States tends toward non-mandatory standards and guidelines, providing trustworthy AI development guidance through the National Institute of Standards and Technology (NIST), maintaining greater flexibility but potentially leading to regulatory insufficiency; Taiwan is in a transitional phase, with the "Artificial Intelligence Basic Act" draft establishing seven fundamental principles, but specific regulations for particular fields such as suicide prevention remain to be developed. 4.
Stakeholder Responsibilities: The study identified different roles and responsibilities of various stakeholders in AI suicide prevention applications: (1) Policymakers need to establish regulatory frameworks and promote cross-sector cooperation, developing national guidelines to ensure consistent and actionable strategies; (2) Healthcare institutions are responsible for integrating AI technology with clinical practice, establishing internal usage guidelines and interdisciplinary collaboration mechanisms; (3) Technology developers should focus on transparency and explainability, enhancing system reliability and promoting clinical cooperation; (4) The general public needs to improve awareness and participation, cultivating digital health literacy and building supportive communities; (5) Individuals with lived experience (patients and families/survivors) serve as the fifth key stakeholder group, whose personal experiences and unique perspectives are crucial for ensuring AI tool practicality, acceptability, and cultural appropriateness, providing valuable user feedback and helping identify potential bias issues. Discussion and Conclusions: While AI technology possesses tremendous potential for improving risk identification efficiency, expanding prevention coverage, and providing personalized interventions, it simultaneously faces multi-layered challenges involving privacy protection, data security, algorithmic bias, misclassification risks, digital exclusion, and the preservation of humanized care. The study emphasizes that AI should be positioned as an assistive tool for professionals rather than a replacement, requiring collaborative action from all stakeholders based on shared ethical principles and scientific evidence. The existing ethical issues of AI in healthcare (foundational level), traditional ethical problems in suicide prevention (professional level), and new challenges introduced by AI implementation (technological integration level) are interconnected, necessitating the establishment of an ethical governance model centered on life protection. Specific recommendations include: (1) Establishing comprehensive legal and ethical frameworks that clearly specify standards for personal health data collection, use, and storage; (2) Setting up independent ethical review committees to assess ethical compliance of AI-assisted suicide prevention projects; (3) Promoting cross-sector cooperation to integrate resources from health, education, social services, and other fields; (4) Developing culturally sensitive AI tools to reduce algorithmic bias; (5) Establishing hybrid care models that combine technological efficiency with humanized care. Future research should focus on cross-cultural validation, technology-ethics collaborative development, vulnerable population studies, and policy impact assessment to ensure responsible application of AI technology in suicide prevention, ultimately achieving harmonious coexistence between technology and humanistic care. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-09-30T16:06:18Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-09-30T16:06:18Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Thesis Committee Approval Certificate i Acknowledgements ii Chinese Abstract iii English Abstract v Chapter 1: Introduction 1 Section 1: Research Background and Motivation 1 Section 2: Literature Review 2 Section 3: Research Aims and Questions 28 Chapter 2: Research Methods 29 Section 1: Search Strategy and Selection 31 Section 2: Data Sources 31 Section 3: Literature Screening Process 32 Section 4: Data Extraction Process 34 Section 5: Thematic Analysis and Synthesis of the Literature 35 Section 6: Comparative Legal Analysis Process 35 Chapter 3: Results 37 Section 1: Overview of Included Literature 37 Section 2: Findings, Limitations, and Ethical Implications of Included Literature 44 Section 3: Ethical Themes in AI Applications for Suicide Prevention Strategies 57 Section 4: Thematic Analysis Results: Core Ethical Issues and Their Legal Challenges 59 Section 5: Comparative Analysis of Current Regulations in Taiwan, Europe, and the United States 66 Chapter 4: Discussion and Conclusions 75 Section 1: Ethical and Legal Challenges and Opportunities of AI-Assisted Suicide Prevention Strategies 75 Section 2: Practical Recommendations for Stakeholders in AI-Assisted Suicide Prevention Strategies 78 Section 3: Multi-Level Analysis of AI-Assisted Suicide Prevention 85 Section 4: Research Conclusions 91 References 96 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 倫理 | zh_TW |
| dc.subject | 法律 | zh_TW |
| dc.subject | 責任歸屬 | zh_TW |
| dc.subject | 偏見 | zh_TW |
| dc.subject | 範疇回顧 | zh_TW |
| dc.subject | 隱私 | zh_TW |
| dc.subject | 人工智慧 | zh_TW |
| dc.subject | 自殺防治 | zh_TW |
| dc.subject | Responsibility Attribution | en |
| dc.subject | Bias | en |
| dc.subject | Privacy | en |
| dc.subject | Scoping Review | en |
| dc.subject | Law | en |
| dc.subject | Ethics | en |
| dc.subject | Suicide Prevention | en |
| dc.subject | Artificial Intelligence | en |
| dc.title | 人工智慧應用於自殺防治策略中的倫理與法律挑戰:範疇回顧研究 | zh_TW |
| dc.title | Ethical and Legal Challenges of Applying Artificial Intelligence in Suicide Prevention Strategies: A Scoping Review | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 張書森;邱文聰 | zh_TW |
| dc.contributor.oralexamcommittee | Shu-Sen Chang;Wen-Tsong Chiou | en |
| dc.subject.keyword | 人工智慧,自殺防治,倫理,法律,範疇回顧,隱私,偏見,責任歸屬 | zh_TW |
| dc.subject.keyword | Artificial Intelligence,Suicide Prevention,Ethics,Law,Scoping Review,Privacy,Bias,Responsibility Attribution | en |
| dc.relation.page | 109 | - |
| dc.identifier.doi | 10.6342/NTU202502994 | - |
| dc.rights.note | Authorized for public access (campus only) | - |
| dc.date.accepted | 2025-08-04 | - |
| dc.contributor.author-college | 醫學院 | - |
| dc.contributor.author-dept | 醫學教育暨生醫倫理研究所 | - |
| dc.date.embargo-lift | 2030-07-31 | - |
| Appears in Collections: | 醫學教育暨生醫倫理學科所 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-2.pdf (Restricted Access) | 1.42 MB | Adobe PDF | View/Open |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
