  1. NTU Theses and Dissertations Repository
  2. College of Law
  3. Department of Law
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/79478
Title: 公部門中的人工智慧—人為介入作為正當使用人工智慧的必要條件
Artificial Intelligence in the Public Sector: Human Intervention as the Necessary Condition for Legitimate Use of Artificial Intelligence
Author: 呂胤慶 (Yinn-Ching Lu)
Advisor: 林子儀 (Tzu-Yi Lin)
Co-Advisor: 蘇慧婕 (Hui-Chieh Su)
Keywords: Artificial Intelligence, automated decision-making (ADM), algorithm, Big Data, the public sector, human intervention, European Union General Data Protection Regulation (GDPR), the rule of law, legal construction (Rechtsfortbildung), principles and rules, accountability, transparency, the right to explanation (RtE), explainable Artificial Intelligence (xAI)
Publication Year: 2021
Degree: Master's
Abstract: The widespread application of second-wave artificial intelligence has heightened the tension between law and technology. AI systems are built to help humans perform tasks more efficiently, or even to replace humans altogether. Whether the task of applying the law can be handed over to AI, however, is doubtful. The question this thesis addresses is whether using AI to apply the law conflicts with constitutional values. The EU General Data Protection Regulation has already made a preliminary value judgment on this question: in principle, it prohibits the use of AI to apply the law. Whether that judgment is correct requires further analysis. Accordingly, this thesis examines what the constitutional principle of the rule of law demands of the task of legal application, uses those demands to evaluate AI-driven legal application and to review the EU legislator's position, and finally answers the question of whether the public sector's use of AI to apply the law violates the constitution.
This thesis argues that using AI to apply the law would make it impossible to apply the law to new cases or to develop the law (Rechtsfortbildung) by distinguishing between individual cases. That result conflicts with the constitutional principle of the rule of law; in other words, the EU's regulation of AI is legitimate. On this basis, the thesis contends that the public sector may use AI to assist in applying the law only under the condition of "human intervention". To ensure that the purpose of human intervention is achieved, the thesis argues that key information from the AI system's development phase should be disclosed, so that decision-makers in the public sector can substantively assess, in each individual case, whether to adopt or reject the AI's assistance.
As for the structure of the argument, Chapter 2 explains the meaning of "human intervention". Its purpose is to allocate the interaction between machines and humans in carrying out a task, with humans serving as the bearers of accountability. As a regulatory model, human intervention takes two forms: machines assisting humans in making decisions, and humans overriding machine decisions after the fact. Taking the GDPR's requirement that decisions be subject to human determination as an example, the thesis points out that the factual judgment underlying the "human intervention" model is that human decisions and AI decisions differ, and that AI cannot fully replace human decision-making. In other words, the purpose of human intervention is to maintain the status quo: keeping the human as the final decision-maker.
To determine whether the factual judgment implicit in "human intervention" is correct, Chapter 3 analyzes whether humans and AI differ in the task of applying the law and making decisions. After clarifying the nature of AI and its operational characteristics, the thesis takes the content of legal application as its baseline and identifies the differences between AI decisions and human decisions. Given AI's operational characteristics, the thesis identifies two problems that arise when AI applies the law: first, it cannot apply the law to new cases; second, it cannot develop the law by distinguishing the differences between individual cases.
Having described AI's effects on the process of legal application, Chapter 4 evaluates these characteristics normatively, starting from the principle of the rule of law. The thesis examines the place of judicial development of the law within the rule of law, and asks whether there is any special reason that would exempt the public sector from developing the law when it uses AI for legal application. On the premise that the rule of law remains important in contemporary society, the thesis argues that the normative requirement of "human intervention" ensures that the process of applying the law and making decisions remains one in which humans distinguish factual differences and decide individual cases, preserving the possibility of paradigm change in the application of norms; this in turn justifies human intervention as a necessary condition for the public sector's legitimate use of AI. To ensure that human intervention is effectively realized in practice, the thesis proposes a legal framework of transparency directed at the AI design phase: by disclosing key information from the system's development, human decision-makers can effectively evaluate in each case whether to adopt or reject the AI's decision, thereby giving effect to the normative requirement of human intervention.
A decade ago, nobody could have envisaged a world in which robots replace the majority of laborers. Now, the widespread use of data-driven Artificial Intelligence (AI) is intensifying the tension between law and technology. One of the hardest issues is whether AI can replace humans in the field of legal reasoning. In this thesis, the author explores the differences between AI and humans when they conduct legal reasoning, and analyzes under what conditions the public sector can use artificial intelligence legitimately. The author identifies two failures of AI in legal reasoning. First, AI cannot apply the law to new facts that have never arisen before. Second, AI is incapable of distinguishing between cases and creating new reasons to foster the change of the law. Given these two failures, the thesis argues that the public sector violates the constitutional principle of the "rule of law" when it uses AI to apply the law automatically. Finally, the thesis constructs a regulatory framework comprising the obligations of human intervention and of disclosure of the formational information of AI. Under this human-centered framework, the public sector can enjoy the benefits brought by AI and, simultaneously, conform to the obligations set forth by the rule of law.
Chapter 2 illustrates what "human intervention" means. Among the many AI regulation proposals, one of the most familiar approaches is keeping a human in the loop. Unfortunately, preoccupied with the debate over the right to explanation, scholars on both sides of the Atlantic have shed little light on the basic meaning of the human-intervention regulatory model in the European Union General Data Protection Regulation (GDPR). The thesis contributes to the scholarship with a thorough analysis of this model. Interestingly, it shows that underlying this model is an implicit legislative judgment that human-made decisions and AI-made decisions are fundamentally different. Moreover, instead of following the global enthusiasm for AI, the legislator decided not to jump on the train of technological revolution, but to embrace the classic decision approach: the human-made decision. This regulatory measure raises a deeper question: what are the differences between human decisions and AI decisions?
Chapter 3 addresses the question posed by the EU legislator directly. In practical scenarios, AI has two main aims. One is technological: imitating what humans can do. The other is scientific: answering what humans do not know. The author distinguishes between AI and humans in the work of legal reasoning, with a focus on imitative AI. A "human" jurist engaged in legal reasoning must give an articulated reason for how the law should be interpreted in a given situation, justify a meaning-based extension of the legal requirements to be applied to certain facts, and vindicate a principle-based requirement of the legal rules recognized under a given legal order. In other words, the law, as an institution, possesses the ability to adapt to the variances between facts and to the change of society. Contrasted with these characteristics, imitative AI cannot, in principle, apply the law to new facts, distinguish interpretations between individual cases, or contribute to the continual change of the legal paradigm. What these distinctions mean under the constitution then becomes the significant question.
Chapter 4 reviews these distinctions under the "rule of law". On the precondition that the law includes both rules and principles, legal practitioners, including judges and the executive, have the obligation to apply the law according to the facts and to attune its interpretation to a changing environment. The public sector would be in violation of the rule of law if it used AI to apply the law directly and without justification, for AI is incapable of altering legal interpretation itself. This thesis does not call for a complete ban on AI decision-making, but for conditions on the use of AI. Considering the importance of this human ability, the author argues that a "human intervention" model is a suitable safeguard for constructing an accountable and organic system of legal reasoning. Maintaining the human subject as the decision-maker while allowing the assistance of AI are the core obligations of this model. The issue this model triggers, nevertheless, is how to achieve good human-computer interaction. The key to ideal human-computer interaction, in the author's view, is transparency: seeing comes with knowing. The disclosure of the formational information of AI, such as information on data collection (e.g. collection methods, sample range, labelling criteria, data-cleaning methods) and on the algorithm (e.g. the reason for choosing a certain algorithm), helps the decision-maker evaluate whether to accept AI assistance by examining the relationship between each individual case and the database.
Chapter 5, finally, concludes with key points extracted from each chapter, clarifies the limits of this study, and sets out an agenda for future research.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/79478
DOI: 10.6342/NTU202103631
Full-Text License: Authorized (open access worldwide)
Appears in Collections: Department of Law

Files in This Item:
ntu-109-2.pdf (2.47 MB, Adobe PDF)


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
