Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/79478

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 林子儀 | zh_TW |
| dc.contributor.advisor | Tzu-Yi Lin | en |
| dc.contributor.author | 呂胤慶 | zh_TW |
| dc.contributor.author | Yinn-Ching Lu | en |
| dc.date.accessioned | 2022-11-23T09:01:30Z | - |
| dc.date.available | 2023-11-10 | - |
| dc.date.copyright | 2021-11-04 | - |
| dc.date.issued | 2021 | - |
| dc.date.submitted | 2002-01-01 | - |
| dc.identifier.citation | Greco, Luís(著)、鍾宏彬(譯)(2021),〈沒有法官責任的法官權力:為什麼不許有機器人法官〉,《月旦法學雜誌》,315期,頁170-195。 Larenz, Karl(著),陳愛娥(譯)(2013),《法學方法論》,初版九刷,臺北:五南。 Mayer-Schönberger, Viktor(著),林俊宏(譯)(2015),《大數據:隱私篇》,1,臺北:天下。 王怡蘋(2020),〈人工智慧創作與著作權之相關問題〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁563-598,臺北:元照。 王紀軒(2019),〈人工智慧於司法實務的應用〉,《月旦法學雜誌》,293期,頁93-114。 王鵬翔(2004),〈目的性限縮之論證結構〉,《月旦民商法雜誌》,4期,頁16-29。 --------(2005),〈論基本權的規範結構〉,《臺大法學論叢》,34卷2期,頁1-60。 --------(2005),〈論涵攝的邏輯結構—兼評Larenz的類型理論〉,《成大法學》,9期,頁1-45。 --------(2007),〈規則、原則與法律說理〉,《月旦法學教室》,53期,頁74-83。 --------(2008),〈規則是法律推理的排它性理由嗎?〉,收於:王鵬翔(編),《2008法律思想與社會變遷》,頁345-386,臺北:中央研究院法律研究所籌備處。 王鵬翔、張永健(2015),〈經驗面向的規範意義-論實證研究在法學中的角色〉,《中研院法學期刊》,17期,頁205-294。 何之行、廖貞(2020),〈AI個資爭議在英國與歐盟之經驗-以Google DeepMind一案為例〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁333-383,臺北:元照。 吳全峰(2020),〈初探人工智慧與生命倫理之關係〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁167-222,臺北:元照。 吳從周(2018),〈初探AI的民事責任-聚焦反思臺灣之實務見解〉,收於:劉靜怡(編),《人工智慧相關法律議題芻議》,頁87-117,臺北:元照。 李建良(2020),〈人工智慧與法學變遷-法律人面對科技的反思〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁1-87,臺北:元照。 --------(2018),〈法學方法與基本權解釋方法導論〉,《人文及社會科學集刊》,30卷2期,頁237-277。 李榮耕(2018),〈初探刑事程序法的人工智慧應用-以犯罪熱區為例〉,收於:劉靜怡(編),《人工智慧相關法律議題芻議》,頁118-148,臺北:元照。 沈宗倫(2018),〈人工智慧科技與智慧財產法治的交會與調和-以著作權法與專利法之權利歸屬為中心〉,收於:劉靜怡(編),《人工智慧相關法律議題芻議》,頁177-214,臺北:元照。 --------(2020),〈人工智慧科技對於專利侵權法制的衝擊與因應之道〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁523-562,臺北:元照。 林子儀(2015),〈隱私權法制的新議題:監控與隱私自我管理〉,收於:國立臺灣大學法律學院(編),《第五屆馬漢寶講座論文彙編》,頁65-115,臺北:財團法人馬氏思上文教基金會。 林勤富(2018),〈人工智慧法律議題初探〉,《月旦法學雜誌》,274期,頁195-215。 林勤富、李怡俐(2020),〈人工智慧時代下的國際人權法-規範與制度的韌性探索與再建構〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁413-465,臺北:元照。 邱文聰(2018),〈初探人工智慧中的個資保護發展趨勢與潛在的反歧視難題〉,收於:劉靜怡(編),《人工智慧相關法律議題芻議》,頁145-175,臺北:元照。 --------(2020),〈第二波人工智慧知識學習與生產對法學的挑戰—資訊、科技與社會研究及法學的對話〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁137-168,臺北:元照。 --------(2021),〈亦步亦趨的模仿還是超前部署的控制?AI的兩種能力和它們帶來的挑戰〉,作者手稿。即將出版,收於:《智慧新世界?人文社科學人眼中的AI》,新竹:國立清華大學。 洪德欽(2015),〈歐盟法的淵源〉,收於:洪德欽、陳淳文(編),《歐盟法之基礎原則與實務發展(上)》,頁1-56,臺北:國立臺灣大學出版中心。 張嘉尹(2019),〈憲法解釋作為法律續造—一個方法論的反思〉,《中原財經法學》,43期,頁1-38。 張陳弘、莊植寧(2019),《新時代之個人資料保護法制:歐盟GDPR與臺灣個人資料保護法的比較說明》,臺北:新學林。 張嘉尹(2002),〈法律原則、法律體系與法概念論-Robert 
Alexy法律原則理論初探〉,《輔仁法學》,24期,頁1-48。 陳弘儒(2020),〈初探目的解釋在法律人工智慧之運用可能〉,《歐美研究》,50卷2期,頁293-347。 --------(2020),〈初探目的解釋在法律人工智慧系統之運用可能〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁227-302,臺北:元照。 陳柏良(2020),〈AI時代之分裂社會與民主-以美國法之表意自由與觀念市場自由競爭理論為中心〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁301-331,臺北:元照。 陳顯武(2005),〈論法學上規則與原則的區分-由非單調邏輯之觀點出發〉,《臺大法學論叢》,34卷1期,頁1-45。 黃居正(2018),〈與人工智慧相關的國際法議題-從國際人道法到生命體法〉,收於:劉靜怡(編),《人工智慧相關法律議題芻議》,頁215-241,臺北:元照。 黃舒芃(2013),〈法律保留原則在德國法秩序下的意涵與特徵〉,收於:黃舒芃(編),《民主國家的憲法及其守護者》,頁7-53,臺北:元照。 黃詩淳、邵軒磊(2017),〈運用機器學習預測法院裁判─法資訊學之實踐〉,《月旦法學雜誌》,270期,頁86-96。 --------(2019),〈人工智慧與法律資料分析之方法與應用:以單獨親權酌定裁判的預測模型為例〉,《臺大法學論叢》,48卷4期,頁2023-2073。 --------(2020),〈以人工智慧讀取親權酌定裁判文本:自然語言與文字探勘之實踐〉,《臺大法學論叢》,49卷1期,頁195-224。 黃銘傑(2019),〈人工智慧發展對法律及法律人的影響〉,《月旦法學教室》,200期,頁51-54。 楊岳平(2020),〈人工智慧時代下的金融監理議題-以理財機器人監理為例〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁467-500,臺北:元照。 雷磊(2009),〈法律規範的同位階衝突及解決:以法律規則與法律原則的關係為出發點〉,《臺大法學論叢》,38卷4期,頁1-66。 劉靜怡(2018),〈人工智慧潛在倫理與法律議題鳥瞰與初步分析〉,收於:劉靜怡(編),《人工智慧相關法律議題芻議》,頁1-45,臺北:元照。 --------(2020),〈人工智慧時代的法學研究路徑初探〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁91-134,臺北:元照。 --------(2021),〈科技正當法律程序的憲法意涵—美國判決與學說發展的檢視〉,收錄於氏著:《網路時代的隱私保護困境》,頁356-411,臺北:元照。 謝碩駿(2021),〈論全自動作成之行政處分〉,收於:黃丞儀(編),《2017行政管制與行政爭訟》,先期出版,頁1-86,臺北:中央研究院法律研究所。 http://publication.iias.sinica.edu.tw/92015091.pdf 顏厥安(1998),〈法、理性與論證〉,收於:顏厥安(編),《法與實踐理性》,頁95-212,臺北:允晨。 --------(1998),〈法與道德-由一個法哲學的核心問題檢討德國戰後法思想的發展〉,收於:顏厥安(編),《法與實踐理性》,頁31-76,臺北:允晨。 --------(2002),〈規則、理性與法治〉,《臺大法學論叢》,31卷2期,頁1-58。 --------(2018),〈人之苦難,機器恩典必看顧安慰-人工智慧、心靈與演算法社會〉,收於:劉靜怡(編),《人工智慧相關法律議題芻議》,頁47-85,臺北:元照。 Alarie, B. (2016). The path of the law: Towards legal singularity. University of Toronto Law Journal, 66(4), 443-455. https://doi.org/10.3138/UTLJ.4008 Alderucci, D. (2020). The Automation of Legal Reasoning: Customized AI Techniques for the Patent Field. Duquesne Law Review, 58(1), 50-81. Alexy, R. (1978). A Theory of Legal Argumentation. Translated by R. Adler and N. MacCormick. U.S.: Oxford University Press. -------- (1994). A Theory of Constitutional Rights. Translated by J. Rivers (2nd. ed). 
New York: Oxford University Press. Alpaydin, E. (2014). Introduction to Machine Learning (3rd. ed). Massachusetts: The MIT Press. Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989. https://doi.org/10.1177/1461444816676645 van Andel, P., & Bourcier, D. (2002). Serendipity and Abduction in Proofs, Presumptions and Emerging Laws. In M. MacCrimmon & P. Tillers (Eds.), The Dynamics of Judicial Proof: Computation, Logic, and Common Sense (pp. 273-286). Heidelberg: Physica. Appel, S. M., & Coglianese, C. (2020). Algorithmic Governance and Administrative Law. In W. Barfield (Ed.), The Cambridge Handbook of the Law of Algorithms (pp. 162-181). New York: Cambridge University Press. Article 29 Data Protection Working Party. (2018). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. https://ec.europa.eu/newsroom/article29/redirection/document/49826 -------- (2018). Guidelines on Transparency under Regulation 2016/679. https://ec.europa.eu/newsroom/article29/redirection/document/51025 Ashley, K. D. (2017). Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (1 ed.). New York: Cambridge University Press. Barocas, S., & Selbst, A. D. (2016). Big Data's Disparate Impact. California Law Review, 104, 671-732. Bayamlıoğlu, E. (2018). Contesting Automated Decisions. European Data Protection Law Review, 4, 433-446. Berman, E. (2018). A Government of Laws and not of Machines. Boston University Law Review, 98, 1277-1355. Bermejo, J. M. P. (2012). Principles, Conflicts, and Defeats: An Approach from a Coherentist Theory. In J. F. Beltrán and G. B. Ratti (Eds.), The Logic of Legal Requirements: Essays on Defeasibility (pp. 288-308). U.K.: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199661640.003.0018 Berryhill, J., Heang, K. K., Clogher, R., & McBride, K. 
(2019). Hello, World: Artificial intelligence and its use in the public sector (OECD Working Papers on Public Governance), available at: https://www.oecd-ilibrary.org/governance/hello-world_726fd39d-en [ogy.de/v9ie]. Binns, R. (2017). Algorithmic Accountability and Public Reason. Philosophy & Technology, 31(4), 543-556. https://doi.org/10.1007/s13347-017-0263-5 -------- (2021). Human Judgment in algorithmic loops: Individual justice and automated decision-making. Regulation & Governance. https://doi.org/10.1111/rego.12358 Bishop, C. M. (2006). Pattern Recognition and Machine Learning (1 ed.). U.K.: Springer. Bloch-Wehba, H. (2021). Transparency's AI Problem. Knight First Amendment Institute and Law and Political Economy Project's Data & Democracy Essay Series. Texas A&M University School of Law Legal Studies Research Paper No. 21-13. Available at: https://knightcolumbia.org/content/transparencys-ai-problem [ogy.de/cuq1]. Boden, M. A. (2018). Artificial Intelligence: A Very Short Introduction. U.K.: Oxford University Press. Borgesius, F. J. Z. (2020). Strengthening legal protection against discrimination by algorithms and artificial intelligence. The International Journal of Human Rights, 24, 1572-1593. https://doi.org/10.1080/13642987.2020.1743976 boyd, d., & Crawford, K. (2012). Critical Questions for Big Data: Provocations for A Cultural, Technological, and Scholarly Phenomenon. Information, Communication & Society, 15, 662-679. https://doi.org/10.1080/1369118X.2012.678878 Branting, L. K. (2017). Data-centric and Logic-based Models for automated legal problem solving. Artificial Intelligence and Law, 25, 5-27. https://doi.org/10.1007/s10506-017-9193-x Brennan-Marquez, K., & Henderson, S. E. (2019). Artificial Intelligence and Role-Reversible Judgement. The Journal of Criminal Law & Criminology, 109(2), 137-164. Brennan-Marquez, K., Susser, D., & Levy, K. (2019). Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making. 
Berkeley Technology Law Journal, 34, 745-771. Brennan-Marquez, K. (2017). “Plausible Cause”: Explanatory Standards in the Age of Powerful Machines. Vanderbilt Law Review, 70(4), 1249-1301. Brkan, M., & Bonnet, G. (2020). Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas. European Journal of Risk Regulation, 11(1), 18-50. https://doi.org/10.1017/err.2020.10 Brkan, M. (2019). Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. International Journal of Law and Information Technology, 27, 91-121. https://doi.org/10.1093/ijlit/eay017 Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 16. https://doi.org/10.1177/2053951715622512 Bygrave, L. A. (2019). Minding the Machine v2.0: The EU General Data Protection Regulation and Automated Decision Making. In K. Yeung and M. Lodge (Eds.), Algorithmic Regulation (pp. 248-262). U.K.: Oxford University Press. https://doi.org/10.1093/oso/9780198838494.003.0011 -------- (2020). Article 22 Automated individual decision-making, including profiling. In C. Kuner, L. A. Bygrave and C. Docksey (Eds.), The EU General Data Protection Regulation (GDPR): A Commentary (1 ed., pp. 522-542). U.K.: Oxford University Press. https://doi.org/10.1093/oso/9780198826491.003.0055 Calo, R., & Citron, D. K. (2021). The Automated Administrative State: A Crisis of Legitimacy. Emory Law Journal, 70, 797-845. Bayón, J. C. (2001). Why is Legal Reasoning Defeasible? In A. Soeteman (Ed.), Pluralism and Law (pp. 327-346). Switzerland: Springer. https://doi.org/10.1007/978-94-017-2702-0_17 Casey, A., & Niblett, A. (2015). The Death of Rules and Standards. Indiana Law Journal, 92, 1401-1448. Casey, B., Farhangi, A., & Vogl, R. (2019). 
Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise. Berkeley Technology Law Journal, 34, 145-189. Chander, A. (2016). The Racist Algorithm. Michigan Law Review, 115, 1023-1046. Citron, D. K., & Pasquale, F. (2014). The Scored Society: Due Process For Automated Predictions. Washington Law Review, 89, 1-33. Citron, D. K. (2007). Technological Due Process. Washington University Law Review, 85, 1249-1314. Cobbe, J., Lee, M. S. A., & Singh, J. (2021). Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), 598-609. https://doi.org/10.1145/3442188.3445921 Cobbe, J. (2019). Administrative Law and the Machines of Government: Judicial Review of Automated Public Sector Decision-making. Legal Studies, 39, 636-655. https://doi.org/10.1017/lst.2019.9 Coglianese, C., & Lampmann, E. (2021). Contracting for Algorithmic Accountability. Administrative Law Review Accord, 6, 175-199. Coglianese, C., & Lehr, D. (2017). Regulating by Robot: Administrative Decision Making in the Machine-Learning Era. The Georgetown Law Journal, 105, 1147-1223. -------- (2019). Transparency and Algorithmic Governance. Administrative Law Review, 71, 1-56. Cohen, J. E. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (1 ed.). New Haven: Yale University Press. -------- (2013). What Privacy is for. Harvard Law Review, 126, 1904-1933. -------- (2019). Between Truth and Power: The Legal Constructions of Information Capitalism (1 ed.). U.S.: Oxford University Press. Cohen, M. (2010). The Rule of Law as the Rule of Reasons. Archiv für Rechts- und Sozialphilosophie, 96(1), 1-16. Comment. (2017). State v. Loomis: Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing. Harvard Law Review, 130, 1530-1537. Crawford, K., & Schultz, J. (2014). 
Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms. Boston College Law Review, 55, 93-128. Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (1 ed.). New Haven: Yale University Press. Crootof, R. (2019). “Cyborg Justice” and the Risk of Technological-Legal Lock-In. Columbia Law Review Forum, 119, 233-251. Cukier, K., Mayer-Schönberger, V., & de Vericourt, F. (2021). Framers: Human Advantage in an Age of Technology and Turmoil (1 ed.). U.S.: Dutton. Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous Factors in Judicial Decisions. Proceedings of the National Academy of Sciences of the United States of America, 108, 6889-6892. Deakin, S., & Markou, C. (2020). Ex Machina Lex: The Limits of Legal Computability. In C. Markou and S. Deakin (Eds.), Is Law Computable? Critical Perspectives on Law and Artificial Intelligence (pp. 31-66). U.K.: Hart Publishing. Deeks, A. (2019). The Judicial Demand for Explainable Artificial Intelligence. Columbia Law Review, 119, 1829-1850. -------- (2020). Secret Reason-Giving. Yale Law Journal, 129, 612-689. Diakopoulos, N. (2020). Transparency. In M. D. Dubber, F. Pasquale & S. Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 197-214). New York: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.11 Djeffal, C. (2020). Artificial Intelligence and Public Governance: Normative Guidelines for Artificial Intelligence in Government and Public Administration. In T. Wischmeyer and T. Rademacher (Eds.), Regulating Artificial Intelligence (pp. 277-293). Switzerland: Springer Nature Switzerland AG. https://doi.org/10.1007/978-3-030-32361-5_12 Dworkin, R. (1977). Taking Rights Seriously (1 ed.). Cambridge: Harvard University Press. Eaglin, J. M. (2017). Constructing Recidivism Risk. Emory Law Journal, 67, 59-122. Edwards, L., & Veale, M. (2017). 
Slave to the Algorithm: Why a Right to an Explanation Is Probably Not the Remedy You Are Looking for. Duke Law & Technology Review, 16, 18-84. Engstrom, D. F., Ho, D. E., Sharkey, C. M., & Cuéllar, M.-F. (2020). Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies. Retrieved from https://www.acus.gov/sites/default/files/documents/Government%20by%20Algorithm.pdf [https://ogy.de/b1kr]. Ernst, C. (2020). Artificial Intelligence and Autonomy: Self-Determination in the Age of Automated Systems. In T. Wischmeyer and T. Rademacher (Eds.), Regulating Artificial Intelligence (1 ed., pp. 53-73). Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-030-32361-5_3 European Commission. (2019). Building Trust in Human-Centric Artificial Intelligence. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52019DC0168 [ogy.de/26vz] -------- (2020). White Paper on Artificial Intelligence – A European Approach to excellence and trust. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en [ogy.de/kgps] Fagan, F., & Levmore, S. (2019). The Impact of Artificial Intelligence on Rules, Standards, and Judicial Discretion. Southern California Law Review, 93(1), 1-36. Fallon, R. H., Jr. (1997). “The Rule of Law” as a Concept in Constitutional Discourse. Columbia Law Review, 97, 1-56. Franklin, S. (2014). History, motivation, and core themes. In K. Frankish and W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 15-33). U.K.: Cambridge University Press. https://doi.org/10.1017/CBO9781139046855.003 Fuller, L. (1969). The Morality of Law (revised edition). U.S.: Yale University Press. Gillis, T. B., & Simons, J. (2019). Explanation < Justification: GDPR and the Perils of Privacy. Journal of Law and Innovation, 2, 71-99. Gillis, T. B., & Spiess, J. L. (2019). Big Data and Discrimination. The University of Chicago Law Review, 86, 459-487. Goldenfein, J. 
(2019). Algorithmic Transparency and Decision-Making Accountability: Thought for Buying Machine Learning Algorithms. In S. Bluemmel (Ed.), Closer to the Machine: Technical, Social, and Legal Aspects of AI (pp. 41-61). Australia: Office of the Victorian Information Commissioner. Goodman, B., & Flaxman, S. (2017). European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”. AI Magazine, 38(3), 50-57. https://doi.org/10.1609/aimag.v38i3.2741 Government Accountability Office. (2021). Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities. https://www.gao.gov/assets/gao-21-519sp.pdf [ogy.de/zqze] Hage, J., & Peczenik, A. (2000). Law, Morals and Defeasibility. Ratio Juris, 13(3), 305-325. Hage, J. (1997). Reasoning with Rules: An Essay on Legal Reasoning and Its Underlying Logic (1 ed.). Switzerland: Springer Science. -------- (2003). Law and Defeasibility. Artificial Intelligence and Law, 11, 221-243. https://doi.org/10.1023/B:ARTI.0000046011.13621.08 Hart, H. L. A. (1948-1949). The Ascription of Responsibility and Rights. Proceedings of the Aristotelian Society, 49, 171-194. Hildebrandt, M. (2008). A Vision of Ambient Law. In R. Brownsword and K. Yeung (Eds.), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (1 ed., pp. 175-192). U.S.: Hart Publishing. https://doi.org/10.5040/9781472564559.ch-008 -------- (2015). Smart Technologies and the End(s) of Law (1 ed.). U.K.: Edward Elgar. -------- (2019). Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning. Theoretical Inquiries in Law, 20, 83-121. https://doi.org/10.1515/til-2019-0004 Hong, S.-H. (2020). Technologies of Speculation: The Limits of Knowledge in a Data-Driven Society (1 ed.). New York: New York University Press. Hosanagar, K. (2019). A Human's Guide to Machine Intelligence (1 ed.). N.Y.: Viking. Huq, A. Z. (2018). Racial Equity in Algorithmic Criminal Justice. 
Duke Law Journal, 68, 1043-1134. -------- (2020). A Right to a Human Decision. Virginia Law Review, 106, 611-688. Jones, M. L. (2017). The right to a human in the loop: Political constructions of computer automation and personhood. Social Studies of Science, 47(2), 216-239. https://doi.org/10.1177/0306312717699716 Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise (1 ed.). U.S.: William Collins. Kaminski, M. E., & Malgieri, G. (2021). Algorithmic impact assessments under the GDPR: producing multi-layered explanations. International Data Privacy Law, 11, 125-144. https://doi.org/10.1093/idpl/ipaa020 Kaminski, M. E. (2018). Binary Governance: Lessons from the GDPR's Approach to Algorithmic Accountability. Southern California Law Review, 92, 1529-1616. -------- (2020). Understanding Transparency in Algorithmic Accountability. In W. Barfield (Ed.), The Cambridge Handbook of the Law of Algorithms (pp. 121-138). New York: Cambridge University Press. http://doi.org/10.1017/9781108680844.006 Katyal, S. K. (2019). Private Accountability in the Age of Artificial Intelligence. UCLA Law Review, 66, 54-141. Kennedy, R. (2020). The Rule of Law and Algorithmic Governance. In W. Barfield (Ed.), The Cambridge Handbook of the Law of Algorithms (pp. 209-232). New York: Cambridge University Press. https://doi.org/10.1017/9781108680844.012 Kim, P. T. (2017). Data-Driven Discrimination at Work. William & Mary Law Review, 58, 857-936. Klimas, T., & Vaičiukaitė, J. (2008). The Law of Recitals in European Community Legislation. ILSA Journal of International & Comparative Law, 15(1), 61-93. Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable Algorithms. University of Pennsylvania Law Review, 165, 633-706. Larson, E. J. (2021). The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (1 ed.). U.S.: Harvard University Press. Launchbury, J. (2017). A DARPA Perspective on Artificial Intelligence. 
https://www.darpa.mil/attachments/AIFull.pdf [ogy.de/nh5g] Lehr, D., & Ohm, P. (2017). Playing with the Data: What Legal Scholars Should Learn About Machine Learning. University of California, Davis Law Review, 51, 653-717. Leonelli, S. (2020). Scientific Research and Big Data. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/sum2020/entries/science-big-data [ogy.de/zbca] Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2021). Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy, 23(1)(18), 1-45. https://doi.org/10.3390/e23010018 Liu, H.-W., Lin, C.-F., & Chen, Y.-J. (2019). Beyond State v Loomis: artificial intelligence, government algorithmization and accountability. International Journal of Law and Information Technology, 27(2), 122-141. http://doi.org/10.1093/ijlit/eaz001 MacCormick, N. (2005). Rhetoric and the Rule of Law: A Theory of Legal Reasoning (1 ed.). New York: Oxford University Press. Malgieri, G., & Comandé, G. (2017). Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation. International Data Privacy Law, 7, 243-265. https://doi.org/10.1093/idpl/ipx019 Malgieri, G. (2019). Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations. Computer Law & Security Review, 35(5). https://doi.org/10.1016/j.clsr.2019.05.002 Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think (1 ed.). New York: Houghton Mifflin Harcourt Publishing Company. Mayson, S. G. (2019). Bias In, Bias Out. Yale Law Journal, 128, 2218-2300. Mazzocchi, F. (2015). Could Big Data be the end of theory in science? A few remarks on the epistemology of data-driven science. EMBO Reports, 16, 1250-1255. https://doi.org/10.15252/embr.201541001 Medvedeva, M., Vols, M., & Wieling, M. (2020). 
Using machine learning to predict decisions of the European Court of Human Rights. Artificial Intelligence and Law, 28, 237-266. Mendoza, I., & Bygrave, L. A. (2017). The Right Not to be Subject to Automated Decisions Based on Profiling. In T.-E. Synodinou, P. Jougleux, C. Markou and T. Prastitou (Eds.), EU Internet Law: Regulation and Enforcement (pp. 77-98): Springer. https://doi.org/10.1007/978-3-319-64955-9_4 Michaels, A. C. (2020). Artificial intelligence, Legal Change, and Separation of Powers. University of Cincinnati Law Review, 88, 1083-1104. Miller, T. (2019). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 267, 1-38. https://doi.org/10.1016/j.artint.2018.07.007 Mormann, F. (2021). Beyond Algorithms: Toward a Normative Theory of Automated Regulation. Boston College Law Review, 62, 1-58. Moses, L. B. (2020). Not a Single Singularity. In S. Deakin and C. Markou (Eds.), Is Law Computable?: Critical Perspectives on Law and Artificial Intelligence (pp. 205-222). U.K.: Hart Publishing. Mulligan, D. K., & Bamberger, K. A. (2019). Procurement As Policy: Administrative Process for Machine Learning. Berkeley Technology Law Journal, 34, 781-858. Nay, J., & Strandburg, K. J. (2021). Generalizability: Machine Learning and humans-in-the-loop. In R. Vogl (Ed.), Research Handbook on Big Data Law (pp. 285-303). USA: Edward Elgar Publishing. https://doi.org/10.4337/9781788972826.00020 Organisation for Economic Co-operation and Development. (2019). OECD Principles on AI. https://www.oecd.org/going-digital/ai/principles/ [ogy.de/gvhs] Oswald, M. (2018). Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(20170359), 1-20. https://doi.org/10.1098/rsta.2017.0359 Pasquale, F. A. (2019). 
A Rule of Persons, Not Machines: The Limits of Legal Automation. George Washington Law Review, 87, 1-55. -------- (2016). The Black Box Society: The Secret Algorithms That Control Money and Information (1 ed.). MA: Harvard University Press. -------- (2020). New Laws of Robotics: Defending Human Expertise in the Age of AI (1 ed.). Cambridge, Massachusetts: Harvard University Press. Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. New York: Basic Books. Personal Data Protection Commission in Singapore. (2020). Model Artificial Intelligence Governance Framework (Second Edition). https://www.pdpc.gov.sg/help-and-resources/2020/01/second-edition-of-model-artificial-intelligence-governance-framework [ogy.de/7bra] Postman, N. (1993). Technopoly: The Surrender of Culture to Technology (1 ed.). New York: Vintage Books. Prince, A. E. R., & Schwarcz, D. (2020). Proxy Discrimination in the Age of Artificial Intelligence and Big Data. Iowa Law Review, 105, 1257-1318. Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending certain Union Legislative acts. (2021). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 [ogy.de/j92v] Re, R. M., & Solow-Niederman, A. (2019). Developing Artificially Intelligent Justice. Stanford Technology Law Review, 22, 242-289. Remus, D., & Levy, F. (2017). Can Robots Be Lawyers: Computers, Lawyers, and the Practice of Law. Georgetown Journal of Legal Ethics, 30(3), 501-558. Rissland, E. L. (1990). Artificial Intelligence and Law: Stepping Stones to a Model of Legal Reasoning. Yale Law Journal, 99(8), 1957-1982. Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4 ed.). U.K.: Pearson. Sancho, D. (2020). Automated Decision-Making under Article 22 GDPR: Towards a More Substantial Regime for Solely Automated Decision-Making. In M. Ebers and S. 
Navas (Eds.), Algorithms and Law (pp. 136-156). U.K.: Cambridge University Press. https://doi.org/10.1017/9781108347846.005 de Sio, F. S., & Mecacci, G. (2021). Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Philosophy & Technology, 1-28. https://doi.org/10.1007/s13347-021-00450-x Sartor, G. (1995). Defeasibility in Legal Reasoning. In Z. Bankowski, I. White and U. Hahn (Eds.), Informatics and the Foundations of Legal Reasoning (pp. 119-158). Switzerland: Springer. Schauer, F. (2012). Is Defeasibility an Essential Property of Law? In J. F. Beltrán and G. B. Ratti (Eds.), The Logic of Legal Requirements: Essays on Defeasibility (pp. 77-88). U.K.: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199661640.003.0005 Selbst, A. D., & Barocas, S. (2018). The Intuitive Appeal of Explainable Machines. Fordham Law Review, 87, 1085-1139. Selbst, A. D., & Powles, J. (2017). Meaningful Information and the Right to Explanation. International Data Privacy Law, 7, 233-242. https://doi.org/10.1093/idpl/ipx022 Sunstein, C. R. (2021). Governing by Algorithm? No Noise and (Potentially) Less Bias. Duke Law Journal, forthcoming. Available at: https://ssrn.com/abstract=3925240 [ogy.de/d8t8] Surden, H. (2014). Machine Learning and Law. Washington Law Review, 89, 87-115. -------- (2019). Artificial Intelligence and Law: An Overview. Georgia State University Law Review, 35, 1305-1337. Tene, O., & Polonetsky, J. (2013). Judged by the Tin Man: Individual Rights in the Age of Big Data. Journal on Telecommunications and High Technology Law, 11, 351-368. Tischbirek, A. (2020). Artificial Intelligence and Discrimination: Discriminating Against Discriminatory Systems. In T. Wischmeyer and T. Rademacher (Eds.), Regulating Artificial Intelligence (1 ed., pp. 103-121). Switzerland: Springer International Publishing. https://doi.org/10.1007/978-3-030-32361-5_5 Tosoni, L. (2021). 
The right to object to automated individual decisions: resolving the ambiguity of Article 22(1) of the General Data Protection Regulation. International Data Privacy Law, 11, 145-162. https://doi.org/10.1093/idpl/ipaa024 Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433-460. https://doi.org/10.1093/mind/LIX.236.433 Veale, M., & Edwards, L. (2018). Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling. Computer Law & Security Review, 34(2), 398-404. https://doi.org/10.1016/j.clsr.2017.12.002 Voigt, P., & von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR): A Practical Guide (1 ed.). Switzerland: Springer International Publishing. Volokh, E. (2019). Chief Justice Robots. Duke Law Journal, 68, 1135-1192. Vrabec, H. U. (2021). Data Subject Rights under the GDPR (1 ed.). U.K.: Oxford University Press. Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7, 76-99. https://doi.org/10.1093/idpl/ipx005 Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology, 31, 841-888. Wachter, S. (2020). Affinity Profiling and Discrimination by Association in Online Behavioural Advertising. Berkeley Technology Law Journal, 35, 367-430. Waldman, A. E. (2019). Power, Process, and Automated Decision-Making. Fordham Law Review, 88, 613-632. Waldron, J. (2011). The Rule of Law and the Importance of Procedure. In J. E. Fleming (Ed.), Getting to the Rule of Law (pp. 3-31). New York: New York University Press. Wilkinson, J. H., III. (1989). The Role of Reason in the Rule of Law. The University of Chicago Law Review, 56, 779-809. Yeung, K. (2018). 
Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4), 505-523. https://doi.org/10.1111/rego.12158 -------- (2019). Why Worry about Decision-Making by Machine? In K. Yeung and M. Lodge (Eds.), Algorithmic Regulation (pp. 21-48). New York: Oxford University Press. Yu, P. K. (2020). Artificial Intelligence, the Law-Machine Interface, and Fair Use Automation. Alabama Law Review, 72(1), 187-238. Zanfir-Fortuna, G. (2020). Article 13 Information to be provided where personal data are collected from the data subject. In C. Kuner, L. A. Bygrave & C. Docksey (Eds.), The EU General Data Protection Regulation (GDPR): A Commentary (pp. 413-433). U.K.: Oxford University Press. https://doi.org/10.1093/oso/9780198826491.003.0044 -------- (2020). Article 15 Right of access by the data subject. In C. Kuner, L. A. Bygrave & C. Docksey (Eds.), The EU General Data Protection Regulation (GDPR): A Commentary (pp. 449-468). U.K.: Oxford University Press. https://doi.org/10.1093/oso/9780198826491.003.0046 Zarsky, T. Z. (2013). Transparency in Data Mining: From Theory to Practice. In B. Custers, T. Calders, B. Schermer and T. Zarsky (Eds.), Discrimination and Privacy in the Information Society (pp. 301-324). Berlin: Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-30487-3_17 -------- (2013). Transparent Predictions. University of Illinois Law Review, 2013, 1503-1570. Wolff, H. A., & Brink, S. (Hrsg.) (2020). BeckOK Datenschutzrecht (32. Aufl.). München: C.H.Beck. Paal, B. P., & Pauly, D. A. (Hrsg.) (2018). DS-GVO BDSG (2. Aufl.). München: C.H.Beck. Kumkar, L. K., & Roth-Isigkeit, D. (2020), Erklärungspflichten bei automatisierten Datenverarbeitungen nach der DSGVO. JuristenZeitung, 277-286. Käde, L., & von Maltzan, S. (2020), Die Erklärbarkeit von Künstlicher Intelligenz (KI): Entmystifizierung der Black Box und Chancen für das Recht. Computer und Recht, 66-72. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/79478 | - |
| dc.description.abstract | 第二波人工智慧的廣泛應用,激起了法律與科技之間的緊張關係。人工智慧的建置目的,在於輔助人類更有效率從事任務,甚至根本的取代人類。然而,法律適用的任務是否能由人工智慧加以取代,有所疑義。本文的問題意識在於釐清,透過人工智慧從事法律適用的任務,是否與憲法的價值有所違背。關於這個問題意識,歐盟的一般資料保護規則已經作成了初步的價值判斷,原則上禁止以人工智慧從事法律適用的任務。這個價值判斷是否正確則必須進一步釐清。基於此,本文的研究目的即在探究憲法法治原則關於法律適用任務的要求,藉以評價人工智慧的法律適用,並審視歐盟立法者的前述立場,最後再針對「公部門使用人工智慧適用法律是否牴觸憲法」的問題提出回答。 本文認為若使用人工智慧從事法律適用的任務,將造成無法對於新的個案從事法律適用,以及區分個案的差異進行法律的續造。這樣的結果與憲法法治原則有所違背。換言之,歐盟法對於人工智慧的管制具有正當性。基於此立場,本文指出,公部門僅有在「人為介入」的條件中,方能夠利用人工智慧輔助公部門從事法律適用的任務。為確保人為介入的目的能夠達成,本文主張應揭露人工智慧在建置階段中的重要資訊,使公部門中的決定者能在個案中實質的評估是否採納或拒絕人工智慧的輔助。 在論證順序上,本文於第二章說明「人為介入」的內涵。人為介入目的在於分配機器與人類在任務執行上的互動關係,以人類作為任務課責的對象。在規範模式上,人為介入可以分為「機器輔助人類作成決定」與「人類事後推翻機器決定」的兩種類型。以目前歐盟一般資料保護規則中的「受人為決定」為例,本文指出「人為介入」管制模式背後的事實判斷是「人為決定」與「人工智慧的決定」有所不同,人工智慧無法完全取代人為決定。換言之,人為介入的目的在於維持現狀,維持以「人類」作為最終決策者的決定方式。 為釐清蘊含於「人為介入」的事實判斷是否正確,本文於第三章分析人為與人工智慧在適用法律、作成決定的任務上是否有差距。在釐清人工智慧的本質,及其在運作上的特色後,本文以法律適用的內涵作為基礎,指出人工智慧與人類決定的差異。針對人工智慧在運作上的特性,本文指出人工智慧在從事法律適用任務上所生的兩個問題:一、沒有辦法針對新個案從事法律適用;二、沒有辦法區分個案之間的差異從事法律之續造。 在說明人工智慧對於適用法律過程所造成的影響後,本文於第四章從法治原則出發,從規範面上評價這些特色。本文檢視法律續造在法治原則中的地位,並分析公部門若使用人工智慧從事法律適用的任務,是否存在不用進行法律續造的特殊原因。在法治原則在現今的社會中仍有其重要性的前提下,本文認為「人為介入」的規範性要求,使適用法律、作成決定的過程,能夠由人類區分事實的差異而作成個案的決定,保持規範適用典範變遷的可能性,並進一步證成人為介入是公部門正當使用人工智慧的必要條件。為確保「人為介入」在現實中能夠有效的落實,本文主張應建構人工智慧設計階段為揭露對象的透明法制框架,透過揭露人工智慧在建置階段之中的重要資訊,使人類決策者能夠有效的評估個案之中是否採納或者拒絕人工智慧的決定,以有效的落實人為介入的規範性要求。 | zh_TW |
| dc.description.abstract | A decade ago, nobody could have envisaged a world in which robots replaced the majority of laborers. Now, the widespread use of data-based Artificial Intelligence (AI) is already intensifying the tension between law and technology. One of the most contentious issues is whether AI can replace humans in the field of legal reasoning. In this thesis, the author explores the differences between AI and humans when they conduct legal reasoning, and analyzes under what conditions the public sector can use artificial intelligence legitimately. The author identifies two failures AI makes in legal reasoning. First, AI cannot apply the law to new facts that have never occurred before. Second, AI is incapable of distinguishing between cases and creating new reasons to foster the change of the law. Given these two failures, the thesis argues that the public sector violates the constitutional principle of the “rule of law” when it uses AI to apply the law automatically. Finally, the thesis constructs a regulatory framework comprising the obligations of human intervention and of disclosure of the formational information of AI. Under this human-centered framework, the public sector can enjoy the benefits brought by AI and, simultaneously, conform to the obligations set forth by the rule of law. Chapter 2 illustrates what “human intervention” means. Among all the dazzling proposals for AI regulation, one of the most familiar approaches is keeping a human in the loop. Unfortunately, preoccupied with the debate over the right to explanation, scholars across the Atlantic have shed little light on the basic meaning of the human intervention regulatory model in the European Union General Data Protection Regulation (GDPR). The thesis contributes to the scholarship with a thorough analysis of this model. Interestingly, it shows that underlying this model is the implicit legislative judgment that human-made decisions and AI-made decisions are fundamentally different. 
Moreover, instead of following the global trend of AI enthusiasm, the legislator decided not to jump on the bandwagon of the technological revolution, but to embrace the classic approach: the human-made decision. This regulatory measure, obviously, raises a deeper question: what are the differences between human decisions and AI decisions? Chapter 3 addresses the question posed by the EU legislator directly. In practical scenarios, AI has two main aims. One is technological: imitating what humans can do. The other is scientific: answering what humans do not know. The author distinguishes between AI and humans in the work of legal reasoning, with a focus on imitating AI. A “human” jurist engaged in legal reasoning should give an elaborated reason for how the law shall be interpreted under certain circumstances, justify a meaning-based extension of the legal requirements to be applied to certain facts, and vindicate a principle-based requirement of the legal rules to be recognized under a certain legal order. In other words, the law, as an institution, possesses the ability to adapt to the variance between facts and to the change of society. Contrasting these characteristics with AI, imitating AI cannot, in theory, apply the law to new facts, distinguish interpretations between individual cases, or contribute to the continual change of the legal paradigm. What these distinctions mean under the constitution therefore becomes a significant question. Chapter 4 reviews these distinctions under the “rule of law”. On the precondition that the law includes both rules and principles, legal practitioners, including judges and executive bodies, have the obligation to apply the law according to the facts and to attune their interpretation to the changing environment. The public sector would be in violation of the rule of law if it used AI to apply the law directly without justification, for AI is incapable of altering legal interpretations itself. 
This thesis does not call for a complete ban on AI decision-making, but rather for conditions on the use of AI. Considering the importance of these human abilities, the author argues that a “human intervention” model is a suitable safeguard for constructing an accountable and organic legal reasoning system. Maintaining the human subject as the decision-maker while allowing the assistance of AI is the core obligation of this model. The issue this model raises, nevertheless, is how to achieve good human-computer interaction. The key to ideal human-computer interaction, in the author's opinion, is transparency: seeing comes with knowing. The disclosure of the formational information of AI, such as information on data collection (e.g., collection methods, sample range, labelling criteria, data-cleaning methods) and on the algorithm (e.g., the reason for choosing a certain algorithm), helps the decision-maker evaluate whether to accept AI assistance by examining the relationship between each individual case and the database. Chapter 5, finally, concludes with the key points of each chapter, clarifies the limits of this study, and sets a future research agenda. | en |
| dc.description.provenance | Made available in DSpace on 2022-11-23T09:01:30Z (GMT). No. of bitstreams: 1 U0001-0910202111094900.pdf: 2525089 bytes, checksum: 36b31294546d01bfc785ff4d8645f10d (MD5) Previous issue date: 2021 | en |
| dc.description.tableofcontents | 第一章 緒論 1 1.1 問題意識與研究目的 3 1.2 研究範圍 10 1.2.1 「公部門」的決定 10 1.2.2 適用法律產生效果的決定 10 1.2.3 發現並應用「資料關聯性」為原理的自動化決定 11 1.3 名詞定義 13 1.4 本文論點 18 1.5 本文架構 20 第二章 「人為介入」的內涵 23 2.1 人為介入作為自動化決定的管制手段 25 2.2 歐盟一般資料保護規則對於自動化決定的管制內涵 29 2.2.1 歐盟一般資料保護規則的規範結構與適用範圍 29 2.2.2 歐盟一般資料保護規則對於自動化決定的管制內涵 31 2.2.3 管制自動化決定規範的要件釐清 32 2.2.3.1 自動化決定拘束拒絕權與人為介入權 32 2.2.3.2 涉及自動化決定重要資訊的告知權/義務與近用權 41 2.2.3.3 定位不明的說明權 45 2.2.4 小結 49 2.3 歐盟一般資料保護規則立法者的事實判斷 50 第三章 人類與人工智慧在法律適用上的差異 55 3.1 人工智慧的技術本質 57 3.2 人工智慧的兩種能力 60 3.2.1 模仿型的人工智慧 60 3.2.2 洞見發現型的人工智慧 63 3.3 法律適用的本質 67 3.3.1 法律論證作為法律適用的方法 67 3.3.2 法律面對多變事實的方式 70 3.3.3 小結 75 3.4 模仿型人工智慧對於法律適用的影響 76 3.4.1 無法提供法律適用的理由? 77 3.4.2 無法針對新事物進行法律適用 80 3.4.3 無法進行法律的續造 83 第四章 人為介入作為公部門正當使用人工智慧的必要條件 87 4.1 法律續造與法治原則間的關係 89 4.1.1 法律續造的實踐 89 4.1.2 法律續造與法治原則 92 4.2 法治原則下的自動化決定 96 4.3 人為介入為核心的人工智慧管制框架 99 4.3.1 人為介入的正當性 99 4.3.2 以人為介入為核心的人工智慧管制框架 103 4.3.2.1 透明作為人機互動挑戰的解方 103 4.3.2.2 以設計內涵透明為目的的法制框架建構 108 第五章 結論 113 5.1 論證總結 113 5.2 設例解析 115 5.3 未來展望 117 參考文獻 119 | - |
| dc.language.iso | zh_TW | - |
| dc.subject | 歐盟一般資料保護規則 | zh_TW |
| dc.subject | 法治原則 | zh_TW |
| dc.subject | 演算法 | zh_TW |
| dc.subject | 自動化決定 | zh_TW |
| dc.subject | 人工智慧 | zh_TW |
| dc.subject | 公部門 | zh_TW |
| dc.subject | 法律的續造 | zh_TW |
| dc.subject | 原則與規則 | zh_TW |
| dc.subject | 課責 | zh_TW |
| dc.subject | 透明 | zh_TW |
| dc.subject | 巨量資料 | zh_TW |
| dc.subject | 說明權 | zh_TW |
| dc.subject | 具說明能力的人工智慧 | zh_TW |
| dc.subject | 人為介入 | zh_TW |
| dc.subject | explainable Artificial Intelligence (xAI) | en |
| dc.subject | Artificial Intelligence | en |
| dc.subject | automated decision-making (ADM) | en |
| dc.subject | algorithm | en |
| dc.subject | Big Data | en |
| dc.subject | the public sector | en |
| dc.subject | human intervention | en |
| dc.subject | European Union General Data Protection Regulation (GDPR) | en |
| dc.subject | the Rule of Law | en |
| dc.subject | legal construction (Rechtsfortbildung) | en |
| dc.subject | principle and rule | en |
| dc.subject | accountability | en |
| dc.subject | transparency | en |
| dc.subject | the right to explanation (RtE) | en |
| dc.title | 公部門中的人工智慧—人為介入作為正當使用人工智慧的必要條件 | zh_TW |
| dc.title | Artificial Intelligence in the Public Sector: Human Intervention as the Necessary Condition for Legitimate Use of Artificial Intelligence | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 109-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.coadvisor | 蘇慧婕 | zh_TW |
| dc.contributor.coadvisor | Hui-Chieh Su | en |
| dc.contributor.oralexamcommittee | 劉靜怡;邱文聰 | zh_TW |
| dc.contributor.oralexamcommittee | Ching-Yi Liu;Wen-Tsong Chiou | en |
| dc.subject.keyword | 人工智慧,自動化決定,演算法,巨量資料,公部門,人為介入,歐盟一般資料保護規則,法治原則,法律的續造,原則與規則,課責,透明,說明權,具說明能力的人工智慧, | zh_TW |
| dc.subject.keyword | Artificial Intelligence,automated decision-making (ADM),algorithm,Big Data,the public sector,human intervention,European Union General Data Protection Regulation (GDPR),the Rule of Law,legal construction (Rechtsfortbildung),principle and rule,accountability,transparency,the right to explanation (RtE),explainable Artificial Intelligence (xAI), | en |
| dc.relation.page | 134 | - |
| dc.identifier.doi | 10.6342/NTU202103631 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2021-10-27 | - |
| dc.contributor.author-college | 法律學院 | - |
| dc.contributor.author-dept | 法律學系 | - |
| Appears in Collections: | Department of Law | |
Files in this item:
| File | Size | Format | |
|---|---|---|---|
| ntu-109-2.pdf | 2.47 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
