Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97498

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳肇鴻 | zh_TW |
| dc.contributor.advisor | Chao-hung Chen | en |
| dc.contributor.author | 林建廷 | zh_TW |
| dc.contributor.author | Jian-Ting Lin | en |
| dc.date.accessioned | 2025-07-02T16:10:34Z | - |
| dc.date.available | 2025-07-03 | - |
| dc.date.copyright | 2025-07-02 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-06-22 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97498 | - |
| dc.description.abstract | 人工智慧技術之發展改變了人們的生活,亦為各行各業帶來前所未有之機遇與挑戰,而銀行授信作業作為最早導入人工智慧的領域之一,藉此迎來了服務效率與準確度之全面提升,惟同時亦須面臨因演算法不透明所衍生之黑盒子(Black Box)風險。透明度監理之目的即在於使利害關係人,包含監理機關與終端消費者等,能夠了解人工智慧運作之邏輯,以及人工智慧所做成的決策將帶來什麼樣的影響,藉此達到控制人工智慧風險以及保護消費者之效果。然而,在法制仍存在許多模糊與缺漏之狀態下,應如何調整以引導金融機構回應不同利害關係人對於透明度之需求與保障,即為本文主要之研究問題。
本文從專家透明度與個體透明度之雙重切入角度出發,並透過對我國現行法制之觀察得知,我國法制雖已初具透明度監理之雛形,惟仍存在許多可強化之處,例如揭露內容要求模糊、自我評估機制易生利益衝突、缺乏對消費者之保障機制等,使其難以滿足消費者對人工智慧透明度之要求。經由對美國與歐盟法制之探討,尤其著重於其監理思維與規範模式之差異,以及所對應之具體手段,同時彙整相關學術理論與實務作法,本文認為應分別從專家透明度與個體透明度角度重新思考我國法制,就前者,應加強法定資訊揭露義務,引進獨立第三人審查機制,並考量是否需要在整體規範架構層面上加強監理機關之權限;就後者,則應在現有法制框架上重構事前資訊揭露之內容,並建立事後保障機制,以保障消費者異議與救濟之權利。 | zh_TW |
| dc.description.abstract | The development of artificial intelligence (hereinafter “AI”) technologies has significantly transformed modern life and brought unprecedented opportunities and challenges across various sectors. In particular, credit granting operations in the banking industry—one of the earliest adopters of AI—have benefited from enhanced efficiency and accuracy. However, this transformation also introduces the inherent risk of algorithmic opacity, commonly referred to as the “black box” problem. The objective of transparency regulation is to enable stakeholders, including regulatory authorities and end users, to understand the logic underlying the operations of AI and the potential impact of AI-driven decisions, thereby facilitating effective risk control and ensuring consumer protection. In light of persistent ambiguities and gaps in the existing legal framework, this thesis seeks to explore how regulatory systems may be adjusted to ensure that financial institutions respond appropriately to the diverse transparency needs and interests of different stakeholders.
This thesis adopts a dual-perspective approach by analyzing both expert transparency and individual transparency. Through a detailed examination of the current legal framework in Taiwan, the thesis finds that, although the regulatory structure for AI transparency has begun to take shape, substantial deficiencies remain. These include vague disclosure requirements, potential conflicts of interest in self-assessment mechanisms, and a lack of protective measures for consumers, all of which hinder the fulfillment of transparency expectations. By conducting a comparative analysis of the regulatory approaches in the United States and the European Union, especially highlighting their divergent supervisory philosophies and regulatory techniques, and synthesizing relevant academic theories and practical measures, the study proposes a framework for rethinking Taiwan’s legal system in light of the two transparency dimensions. With respect to expert transparency, the thesis recommends enhancing statutory disclosure obligations, introducing independent third-party audit mechanisms, and considering the expansion of regulatory authority at a structural level. For individual transparency, the thesis suggests reconstructing ex ante disclosure requirements and establishing ex post redress mechanisms, in order to safeguard consumers’ rights to contest and understand AI-based credit decisions. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-07-02T16:10:34Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-07-02T16:10:34Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 誌謝 i
摘要 iii
Abstract v
目次 vii
圖次 xiii
表次 xiv
第一章 緒論 1
第一節 研究動機與目的 1
第二節 研究方法與範圍 4
第三節 研究架構 5
第二章 銀行授信與人工智慧 8
第一節 傳統授信概論 8
第一項 授信基本原則 9
第一款 安全性 9
第二款 流動性 10
第三款 公益性 10
第四款 收益性 10
第五款 成長性 10
第二項 信用評估要素 10
第一款 信用5C要素 11
第二款 授信5P原則 12
第三項 基礎作業流程 14
第四項 傳統授信作業之不足 16
第二節 運用人工智慧之新型態授信 16
第一項 人工智慧概論 17
第一款 人工智慧之發展與演進 18
第二款 現代人工智慧之基石:機器學習 19
第三款 深度學習與類神經網路 21
第二項 人工智慧與授信之結合 24
第三項 人工智慧應用於授信之風險效益分析 25
第一款 應用人工智慧之效益 25
第二款 應用人工智慧之風險 28
第三款 透明度需求之初探 31
第三節 小結:以透明度為監理方向 32
第三章 透明度監理 34
第一節 透明度之意義 34
第一項 美國 35
第二項 歐盟 37
第三項 民間企業 39
第四項 小結 39
第二節 透明度監理之規範內涵 40
第一項 透明度之必要性論辯 40
第二項 透明度概念之細緻化—有限範圍透明度 43
第一款 專家透明度 45
第二款 個體透明度 47
第三款 將技術發展納入規範考量之必要 48
第三節 技術面之視角:可解釋人工智慧 50
第一項 可解釋人工智慧之分類與應用 51
第一款 本質可解釋(Intrinsic)與事後可解釋(Post Hoc) 51
第二款 專用模型解釋(Model-Specific)與通用模型解釋(Model-Agnostic) 52
第三款 全域解釋方法(Global)或局部解釋方法(Local) 52
第二項 全域模型解釋技術 53
第一款 特徵重要性(Permutation Feature Importance) 54
第二款 部分依賴圖(Partial Dependence Plot) 57
第三款 累積局部效應(Accumulated Local Effects) 60
第四款 全域SHAP解釋(Global SHapley Additive exPlanations) 63
第三項 局部模型解釋技術 68
第一款 個別條件期望值(Individual Conditional Expectation) 68
第二款 通用模型局部解釋(Local Interpretable Model-Agnostic Explanations) 71
第三款 局部SHAP解釋(Local SHapley Additive exPlanations) 73
第四款 反事實解釋(Counterfactual Explanations) 75
第四項 可解釋人工智慧技術對透明度監理之應用意義 77
第四節 小結 78
第四章 人工智慧授信應用之透明度監理法制比較 80
第一節 專家透明度之法制比較 82
第一項 我國法制現狀 82
第一款 銀行法暨內部控制相關法規 82
第二款 為因應人工智慧發展而提出之新規範 83
第三款 現有法制之不足 87
第四款 小結 91
第二項 美國模式:分散式監理 92
第一款 法制發展歷程 93
第二款 整體性架構之提出:人工智慧風險管理框架(AI RMF) 96
第三款 立法上之嘗試 100
第四款 小結 104
第三項 歐盟模式:集中式監理 105
第一款 GDPR下之監理措施 106
第二款 AI Act下之監理措施 110
第三款 小結 117
第四項 美國與歐盟之制度比較 118
第二節 個體透明度之法制比較 120
第一項 我國法制現狀 120
第一款 既有法制相關規定與可能之適用情形 120
第二款 AI指引之補充 125
第三款 現有法制之不足 125
第四款 小結 129
第二項 美國模式:以事後保障為核心 129
第一款 平等信貸機會法與不利行為通知 130
第二款 其他保障措施 134
第三款 小結 136
第三項 歐盟模式:以事前揭露為主 136
第一款 GDPR—以事前揭露為核心 137
第二款 從CCD到CCD II 145
第三款 AI Act—事後保障之補充 146
第三款 小結 148
第四項 美國與歐盟之制度比較 149
第三節 小結 150
第五章 對我國透明度監理法制之再檢視 152
第一節 對專家透明度法制之再檢視 153
第一項 我國法制問題回顧 153
第一款 文件紀錄缺乏內容說明與客觀標準 153
第二款 自我評估之侷限 154
第三款 監理機關之權限不足 155
第二項 美國與歐盟法制之啟示與反思 156
第一款 文件紀錄內容之要求 158
第二款 第三方評估機制之引入 159
第三款 規範模式與監理機關得採取之手段 160
第三項 建議改進方向 162
第一款 文件紀錄之內容具體化與制度重整 162
第二款 建立獨立第三人審查機制 163
第三款 規範模式之再思考與監理機關權限之強化 166
第二節 對個體透明度法制之再檢視 167
第一項 我國法制問題回顧 168
第一款 對於事前揭露之內涵與定位不清 168
第二款 事後保障機制之欠缺 169
第二項 美國與歐盟法制之啟示與反思 170
第一款 事前資訊揭露之功能差異 171
第二款 事後保障機制之義務分配與運作 172
第三項 建議改進方向 174
第一款 重新釐清事前資訊揭露對消費者之意義 174
第二款 事後保障機制之建構 175
第六章 結論 183
參考文獻 188
附錄 207 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 透明度 | zh_TW |
| dc.subject | 消費者保護 | zh_TW |
| dc.subject | 金融監理 | zh_TW |
| dc.subject | 銀行授信 | zh_TW |
| dc.subject | 可解釋性 | zh_TW |
| dc.subject | 人工智慧 | zh_TW |
| dc.subject | Bank Credit | en |
| dc.subject | Explainability | en |
| dc.subject | Transparency | en |
| dc.subject | Artificial Intelligence | en |
| dc.subject | Consumer Protection | en |
| dc.subject | Financial Regulation | en |
| dc.title | 論人工智慧之透明度監理法制—以銀行授信之應用為中心 | zh_TW |
| dc.title | The Legal Regulation of AI Transparency: A Study Centered on Its Application in Bank Credit | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 楊岳平;林育廷 | zh_TW |
| dc.contributor.oralexamcommittee | Yueh-Ping Yang;Yu-Ting Lin | en |
| dc.subject.keyword | 人工智慧,透明度,可解釋性,銀行授信,金融監理,消費者保護 | zh_TW |
| dc.subject.keyword | Artificial Intelligence, Transparency, Explainability, Bank Credit, Financial Regulation, Consumer Protection | en |
| dc.relation.page | 207 | - |
| dc.identifier.doi | 10.6342/NTU202501235 | - |
| dc.rights.note | 同意授權(限校園內公開) | - |
| dc.date.accepted | 2025-06-23 | - |
| dc.contributor.author-college | 法律學院 | - |
| dc.contributor.author-dept | 法律學系 | - |
| dc.date.embargo-lift | 2025-07-03 | - |
Appears in Collections: 法律學系
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (access restricted to NTU IP range) | 4.91 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
