Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68960
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 葉俊榮(Jiunn-Rong Yeh) | |
dc.contributor.author | Hau-Chiau Ku | en |
dc.contributor.author | 辜厚僑 | zh_TW |
dc.date.accessioned | 2021-06-17T02:44:18Z | - |
dc.date.available | 2020-10-01 | |
dc.date.copyright | 2020-08-21 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-08-18 | |
dc.identifier.citation | Chinese-language sources: 朱迪亞・珀爾、達納・麥肯錫(合著),甘錫安(譯)(2019),《因果革命》,新北市:行路。 艾莉斯‧楊(著),陳雅馨(譯)(2017),《正義與差異政治》,臺北:商周。 李國偉(03/27/2018),〈人工智慧的名稱政治學〉,《AI 人工智慧時代來臨》,科學月刊 580 期,載於:http://scimonth.blogspot.com/2018/03/blog-post_35.html。 林明鏘(1993),〈公務機密與行政資訊公開〉,《臺大法學論叢》,23卷1期,頁51-86。 林勤富、劉漢威(2018),〈人工智慧法律議題初探〉,《月旦法學雜誌》,第274期,頁195-215。 國立政治大學金融科技研究中心(2018),《我國保險業金融科技(FinTech/InsurTech)發展趨勢之風險管理及監理機制研究期末報告》。 張陳弘(2018),〈新興科技下的資訊隱私保護:「告知後同意原則」的侷限性與修正方法之提出〉,《臺大法學論叢》,47卷1期,頁201-297。 陳宣廷(2018),《從時尚社群網路之使用者評論進行作者分析》,國立臺灣大學資訊工程學研究所碩士論文(未出版),臺北。 黃子潔(2019),《論人工智慧演算法時代的解釋權:歐盟GDPR與我國個人資料保護法的比較研究》,國立臺灣大學科際整合法律學研究所碩士論文(未出版),臺北。 葉俊榮(1998),〈邁向「電子化政府」:資訊公開與行政程序的挑戰〉,《經社法制論叢》,22期(7月),頁1-35。 葉俊榮(1999),《行政法案例分析與研究方法》,臺北:三民。 葉俊榮(2010),《面對行政程序法:轉型臺灣的程序建制》,臺北:元照。 葉俊榮(2015),〈氣候變遷的歷史排放量比例責任:市場佔有率責任理論的啟示〉,《月旦法學雜誌》,239期,頁5-17。 葉俊榮(2015),《氣候變遷治理與法律》,臺北:臺大出版中心。 葉俊榮(2016),〈探尋隱私權的空間意涵:大法官對基本權利的脈絡論證〉,《中研院法學期刊》,18期,頁1-40。 漢娜・弗萊(著),林志懋(譯)(2019),《打開演算法黑箱》,臺北:臉譜。 English-language sources: Arntz, M., Gregory, T., Zierahn, U. (2016). The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis. OECD Social, Employment and Migration Working Papers, No. 189. Paris, France: OECD Publishing. https://doi.org/10.1787/5jlz9h56dvq7-en Balkin, J.M. (2017). 2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The Three Laws of Robotics in the Age of Big Data, Ohio State Law Journal, 78, 1217-1241. Bambauer, J., Zarsky, T. (2018). The Algorithm Game, Notre Dame Law Review, 94, 1-48. Barocas, S., Selbst, A.D. (2016). Big Data’s Disparate Impact. California Law Review, 104, 671-732. Calders, T., Žliobaitė, I. (2013). Why Unbiased Computational Processes Can Lead to Discriminative Decision Procedures. In B. Custers, T. Calders, B. Schermer, T. Zarsky (Eds.), Discrimination and Privacy in the Information Society (pp. 43-57). Berlin, Germany: Springer. Cole, D. (2020). The Chinese Room Argument. In E. Zalta (Ed.), Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu Doshi-Velez, F., Kortz, M.
(2017, Sep.). Accountability of AI Under the Law: The Role of Explanation (Berkman Klein Center Working Group on Explanation and the Law, Berkman Klein Center for Internet & Society working paper). Retrieved from http://nrs.harvard.edu/urn-3:HUL.InstRepos:34372584 Edwards, L., Veale, M. (2017). Slave to the Algorithm: Why a Right to an Explanation Is Probably Not the Remedy You Are Looking for. Duke Law & Technology Review, 18, 18-84. Feyerabend, P. (2010). Against Method (4th ed.). New York, NY: Verso Books. Glaeser, E.L., Sunstein, C.R. (2009). Extremism and Social Learning, Journal of Legal Analysis, 1, 263-324. Goodman, B., Flaxman, S. (2017). European Union Regulations on Algorithmic Decision Making and a “Right to Explanation”, AI Magazine, 38(3), 50-57. Heald, D.A. (2006). Varieties of Transparency. In C. Hood, D.A. Heald (Eds.), Transparency: The Key to Better Governance? (pp. 25-43). New York, NY: Oxford University Press. Irion, K., Williams, J. (2019). Prospective Policy Study on Artificial Intelligence and EU Trade Policy. Amsterdam, the Netherlands: Institute for Information Law. Kahneman, D. (2012). Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux. Kasparov, G., Friedel, F. (2017). Reconstructing Turing's 'paper machine'. EasyChair Preprint, 3, 1-11. Kennedy, D. (2006). Three Globalizations of Law and Legal Thought: 1850-2000. In D.M. Trubek, A. Santos (Eds.), The New Law and Economic Development: A Critical Appraisal (pp. 19-73). Cambridge, UK: Cambridge University Press. Kingsbury, B., Krisch, N., Stewart, R.B. (2005). The Emergence of Global Administrative Law. Law and Contemporary Problems, 68, 15-61. Kosinski, M., Stillwell, D., Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110, 5802-5805. Krizhevsky, A., Sutskever, I., Hinton, G.E. (2017). ImageNet Classification with Deep Convolutional Neural Networks. Communications of the ACM, 60, 84-90.
Kroll, J.A., Huey, J., Barocas, S., Felten, E.W., Reidenberg, J.R., Robinson, D.G., Yu, H. (2017). Accountable Algorithms. University of Pennsylvania Law Review, 165, 633-705. Kuhn, T. (2012). The Structure of Scientific Revolutions (4th ed.). Chicago, IL: University of Chicago Press. Lanvin, B., Monteiro, F. (2019). The Global Talent Competitiveness Index 2019. Fontainebleau, France: INSEAD. Loyola-González, O. (2019). Black-Box vs. White-Box: Understanding Their Advantages and Weaknesses from a Practical Point of View. IEEE Access, 7, 154096-154113. McKinsey Global Institute (2018). Notes from the AI Frontier: Modeling the Impact of AI on the World Economy. New York, NY: McKinsey & Company. Mead, C., Ismail, M. (Eds.). (1989). Analog VLSI Implementation of Neural Systems. Norwell, MA: Kluwer. Mittelstadt, B. (2017). From Individual to Group Privacy in Big Data Analytics. Philosophy & Technology, 30, 475-494. National Science and Technology Council (2016). The National Artificial Intelligence Research and Development Strategic Plan. Retrieved from https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf Pasquale, F. (2016). The Black Box Society. Cambridge, MA: Harvard University Press. Pearl, J. (2018). Theoretical Impediments to Machine Learning with Seven Sparks from the Causal Revolution. In Association for Computing Machinery (Ed.), WSDM '18: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (pp. 3-3). Quisquater, J.-J., Guillou, L. (1989). How to Explain Zero-Knowledge Protocols to Your Children. In G. Brassard (Ed.), Advances in Cryptology: CRYPTO '89 Proceedings. New York, NY: Springer. Richards, M.N., King, J.H. (2014). Big Data Ethics. Wake Forest Law Review, 49, 393-432. Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead, Nature Machine Intelligence, 1, 206-215. Matz, S.C., Kosinski, M., Nave, G., Stillwell, D.J. (2017). Psychological targeting as an effective approach to digital mass persuasion.
Proceedings of the National Academy of Sciences, 114(48), 12714-12719. DOI: 10.1073/pnas.1710966114 Searle, J.R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417-457. Sweeney, L. (2013). Discrimination in Online Ad Delivery, Communications of the ACM, 56, 44-54. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. IEEE. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html Turing, A.M. (1950). Computing Machinery and Intelligence. Mind: A Quarterly Review of Psychology and Philosophy, 59(236), 433-460. Wachter, S., Mittelstadt, B., Floridi, L. (2017). Why a Right to Explanation of Automated Decision-making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76-99. Warrington, G.S. (2019). A Comparison of Partisan-Gerrymandering Measures, Election Law Journal, 18(3), 262-281. Wittgenstein, L. (2009). Philosophical Investigations (G.E.M. Anscombe, Trans.). Oxford, UK: Blackwell. Wofford, N., Defever, A.M., Chopik, W.J. (2019). The vicarious effects of discrimination: How partner experiences of discrimination affect individual health. Social Psychological and Personality Science, 10(1), 121-130. https://doi.org/10.1177/1948550617746218 Zittrain, J. (2014). Engineering an Election. Harvard Law Review Forum, 127, 335-342. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/68960 | - |
dc.description.abstract | 演算法在現代生活中對法社會的行動者影響層面十分廣闊,同時左右著個人生活及政治生活。針對演算法的相關議題,國內文獻多關注於如何透過法律建制達到個人資料保護(涵蓋對於大數據科技的討論,以及人工智能的法律議題討論)的目的。而在這些議題的背後,較少針對在這些科技產品的背後扮演著重要角色的演算法技術本身進行法律議題的形塑。演算法既無形體,也並非法社會中的行動者,但其之所以進入到法律世界中,仍有著使用它的個人。 本文藉由:第一,作為人工智慧核心的演算法有哪些特色?在這裡,本文試圖對演算法科技進行淺略的介紹,並討論目前以人工智慧的方式稱呼這些科技是否合適。並試圖找出,演算法作為自動化決策機制之所以會有問題最可能的原因為何。 第二,演算法的大量使用對社會可能有什麼樣的影響?在該章中,將以對個人生活及政治生活的可能影響進行分析。因為這種種影響,認為應該有管制的必要性。 第三,面對演算法社會,有哪些管制的可能性?在這章中,討論了目前主流的以個人資料為中心的管制方式的利弊。接著介紹以演算法為中心的管制方式的可能性。最後,討論以公害為類比的觀點,並引出需要從全球的角度來看待演算法社會的問題。 第四,如何以全球的角度看待演算法社會的成形?在該章中,先討論全球化可能的型態,最後引入全球行政法的觀點,試圖預測將來可能的發展。 | zh_TW |
dc.description.abstract | In modern society, the use of algorithms has a significant impact on actors in the legal order, shaping both personal and political life. In Taiwan, current legal studies on algorithm-related subjects focus mostly on the legal construction of personal information protection, including big data technology and AI issues. Behind these issues, the algorithm itself, a key component underlying most modern technologies, is seldom discussed by the legal research community in Taiwan. Although an algorithm has no physical form and cannot itself act in the society of law, it enters the world of law through the persons who use it. This thesis focuses on four questions. First, what are the characteristics of algorithms as the core of artificial intelligence? In the first part, we give a brief introduction to algorithms and discuss whether the name “artificial intelligence” suits this technology, and we try to identify the most likely reason why algorithms acting as automated decision-makers may cause problems. Second, what are the impacts of the heavy use of algorithms in society? In this part, we analyze the possible effects on personal life and political life; because of these impacts, we conclude that regulation may be needed. Third, what kinds of regulation are possible in the face of the algorithmic society? Here we discuss the pros and cons of the mainstream approach, regulation centered on personal information, then introduce the possibility of algorithm-centered regulation, and finally use an analogy to public nuisance to discuss the regulation of the algorithmic society, which leads us to view the problem from a global perspective. Fourth, how does a global perspective on the algorithmic society take shape? In this part, we discuss possible forms of the globalization of the algorithmic society and draw on global administrative law to anticipate future developments. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T02:44:18Z (GMT). No. of bitstreams: 1 U0001-1708202001482700.pdf: 3526589 bytes, checksum: 838cf1853d90b897acebd3def0aa617a (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents | Chinese Abstract i English Abstract ii I. Introduction 1 1. Research Motivation 1 2. Research Questions and Scope 2 II. Characteristics and Challenges of Algorithms 3 1. Algorithms and Artificial Intelligence 3 (1) Types of Algorithms 4 1 The "Black Box" 5 2 The "White Box" 8 (2) Machine Learning and Artificial Intelligence 9 1 The Chinese Room Argument 10 2 The Distance between Algorithms and Artificial Intelligence 13 2. The "De-Causation" Problem of Machine Learning 14 (1) Deep Learning and Causal Language: A Paradigm of De-Causation 14 (2) False Positives and False Negatives 15 3. Summary 16 III. Observing the Impact of the Algorithmic Society on Personal and Political Life 17 1. The Impact of the Algorithmic Society on Personal Life 18 (1) Impact on Identity 18 (2) Impact on Personal Opportunities 22 2. The Impact of the Algorithmic Society on Political Life 24 (1) The Cambridge Analytica Incident 24 (2) Digital Gerrymandering: Possible Effects on the Equal Weight of Votes 26 3. Summary 27 IV. Different Regulatory Tools: Theory and Practice 29 1. Regulation Centered on Individuals and Personal Data 29 (1) The GDPR 29 1 The Right to Explanation 30 2 The Right Not to Become a Data Subject 31 (2) Current Taiwanese Law 31 (3) Possible Problems 33 2. Algorithm-Centered Regulation 35 (1) Establishing the Accountability of Algorithms 35 (2) The Transparency Problem 37 1 Transparency of Source Code 37 2 Transparency of Policy Decisions 37 (3) How Regulating Opaque Algorithms Is Possible 39 1 Software Verification 39 2 Cryptographic Commitment 39 3 Zero-Knowledge Proof 39 4 Fair Random Choices 40 3. Regulation from the Perspective of Public Nuisance 42 (1) Algorithms as a Public Nuisance 42 1 The Homunculus Fallacy 43 2 Algorithmic Harm Is a Matter of Degree 43 3 This Concept Shows That the Harms of the Algorithmic Society Arise from Decisions by a Broad Range of Public and Private Actors 43 (2) Possible Problems 44 V. Algorithmic Globalization and Global Administrative Law 46 1. Globalization Problems Arising from the Algorithmic Society 46 (1) Algorithms as a Technology 46 (2) Algorithms as a Service 46 (3) Algorithms as a Commodity 47 (4) The Algorithmic Wave and the Global Economy 48 (5) Human Rights and Uneven Regional Development in Algorithmic Globalization 49 (6) Summary 52 2. Whether Algorithmic Regulation Could Establish a New Field of Global Administrative Law 52 (1) Formation of the Global Administrative Law Space 53 1 International Administration 53 2 Transnational Networks and Cooperation Agreements 53 3 Distributed Administration 54 4 Hybrid Intergovernmental-Private Administration 54 5 Administration by Private Actors 54 6 The Global Administrative Space of Algorithms 55 (2) Development Paths of Global Administrative Law 56 1 Bottom-Up 56 2 Top-Down 57 (3) Overall Discussion 57 3. The Possible Role of Taiwan 58 VI. Conclusion 60 | |
dc.language.iso | zh-TW | |
dc.title | 演算法社會的管制——多元管制以及全球行政法的可能 | zh_TW |
dc.title | Regulating the Algorithmic Society: The Possibility of Diverse Regulation and Global Administrative Law | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-2 | |
dc.description.degree | 碩士 (Master's) | |
dc.contributor.oralexamcommittee | 張文貞(Wen-Chen Chang),王明禮(Ming-Li Wang) | |
dc.subject.keyword | 演算法,人工智慧,管制理論,全球行政法,GDPR | zh_TW |
dc.subject.keyword | algorithm,artificial intelligence,regulation theory,global administrative law,GDPR | en |
dc.relation.page | 66 | |
dc.identifier.doi | 10.6342/NTU202003653 | |
dc.rights.note | 有償授權 (paid authorization) | |
dc.date.accepted | 2020-08-19 | |
dc.contributor.author-college | 法律學院 | zh_TW |
dc.contributor.author-dept | 科際整合法律學研究所 | zh_TW |
Appears in Collections: | 科際整合法律學研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-1708202001482700.pdf (currently not authorized for public access) | 3.44 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.