Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96384

Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: 蘇慧婕 [zh_TW]
dc.contributor.advisor: Hui-Chieh Su [en]
dc.contributor.author: 劉逸文 [zh_TW]
dc.contributor.author: Yi-Wen Liu [en]
dc.date.accessioned: 2025-02-13T16:13:42Z
dc.date.available: 2025-02-14
dc.date.copyright: 2025-02-13
dc.date.issued: 2025
dc.date.submitted: 2025-02-10
dc.identifier.citation:
Douglas E. Comer(著),鄭王駿等(譯)(2019),《電腦與網際網路國際版》,全華。
Frank Pasquale(著),李姿儀(譯)(2023),《二十一世紀機器人新律:如何打造有AI參與的理想社會?》,左岸文化。
Shoshana Zuboff(著),溫澤元等(譯)(2020),《監控資本主義時代(下卷):機器控制力量》,時報。
王志弘(2015),〈拼裝都市論與都市政治經濟學之辯〉,《地理研究》,62期,頁109-122。
王德瀛(2020),〈簡評加州消費者隱私保護法-規範重點與其對美國隱私保護的影響〉,《科技法律透析》,32卷3期,頁19-27。
王澤鑑(2012),《人格權法:法釋義學、比較法、案例研究》,自版。
呂胤慶(2021),《公部門中的人工智慧—人為介入作為正當使用人工智慧的必要條件》,國立臺灣大學法律學研究所碩士論文。
李榮耕(2022),〈刑事程序中人工智慧於風險評估上的應用〉,《政大法學評論》,168期,頁117-186。
李震山(2011),〈論資訊自決權〉,氏著,《人性尊嚴與人權保障》,頁709-756,元照。
林子儀(2015),〈公共隱私權〉,收於:國立臺灣大學法律學院(編),《馬漢寶講座論文彙編,第五屆》,頁7-62,財團法人馬氏思上文教基金會。
林建中(1999),《隱私權概念之再思考—關於概念範圍、定義及權利形成方法》,國立臺灣大學法律學研究所碩士論文。
邱文聰(2009),〈從資訊自決與資訊隱私的概念區分-評「電腦處理個人資料保護法修正草案」的結構性問題〉,《月旦法學雜誌》,168期,頁172-189。
邱文聰(2018),〈人工智慧相關法律議題芻議〉,收於:劉靜怡(編),《初探人工智慧中的個資保護發展趨勢與潛在的反歧視難題》,元照。
邱文聰(2020),〈第二波人工智慧知識學習與生產對法學的挑戰——資訊、科技與社會研究及法學的對話〉,收於:李建良(編),《法律思維與制度的智慧轉型》,頁135-166,元照。
陳起行(2000),〈資訊隱私權法理探討——以美國法為中心〉,《政大法學評論》,64期,頁297-341。
翁逸泓(2022),〈資料治理法制:歐盟模式之啟發〉,《東海大學法學研究》,64期,頁55-116。
孫森焱(2020),《民法債編總論上冊》,自版。
張陳弘(2018),〈新興科技下的資訊隱私保護:「告知後同意原則」的侷限性與修正方法之提出〉,《臺大法學論叢》,47卷1期,頁201-297。
張陳弘(2018),〈隱私之合理期待標準於我國司法實務的操作—我的期待?你的合理?誰的隱私?〉,《法令月刊》,69卷2期,頁82-112。
張陳弘(2022),〈美國加州消費者隱私保護法制之最新發展與比較法啟示〉,《當代法律》,6期,頁24-39。
黃銘輝(2009),〈法治行政、正當程序與媒體所有權管制-借鏡美國管制經驗析論NCC對「旺旺入主三中」案處分之合法性與正當性〉,《法學新論》,17期,頁105-149。
楊岳平(2021),〈重省我國法下資料的基本法律議題——以資料的法律定性為中心〉,《歐亞研究》,17期,頁31-39。
劉定基(2009),〈欺罔與不公平資訊行為之規範—以美國聯邦交易委員會的管制案例為中心〉,《公平交易季刊》,17卷4期,頁57-91。
劉定基(2013),〈析論個人資料保護法上「當事人同意」的概念〉,《月旦法學雜誌》,218期,頁146-167。
劉定基(2017),〈大數據與物聯網時代的個人資料自主權〉,《憲政時代》,42卷3期,頁265-308。
劉定基(2023),〈資訊的保鮮期限?──論被遺忘權幾個待解的習題〉,《政大法學評論》,174期,頁217-263。
劉靜怡(2006),〈言論自由:第六講:言論自由、媒體類型規範與民主政治〉,《月旦法學教室》,42期,頁34-44。
鄭詠綺(2023),《論歐盟社群媒體平台精準投放之管制架構:以私生活權之保障為中心》,國立臺灣大學法律學研究所碩士論文。
蘇慧婕(2022),〈歐盟被遺忘權的內國保障:德國聯邦憲法法院第一、二次被遺忘權判決評析〉,《臺大法學論叢》,51卷1期,頁1-65。
Ackoff, R. L. (1999). Ackoff’s Best: His Classic Writings on Management. Wiley.
Acquisti, A. & Grossklags, J. (2005). Privacy and Rationality: A Survey. In Privacy and Technologies of Identity: A Cross-Disciplinary Conversation (pp. 15-29). Springer Science and Business Media, LLC.
Advisory Committee on Automated Personal Data Systems of the Department of Health, Education, and Welfare (1973). Records, Computers, and the Rights of Citizens. https://www.justice.gov/opcl/docs/rec-com-rights.pdf.
Alpaydin, E. (2014). Introduction to Machine Learning. The MIT Press.
Altman, I. (1977). Privacy Regulation: Culturally Universal or Culturally Specific?. Journal of Social Issues, 33(3), 66-84.
Austin, L. (2014). Enough About Me: Why Privacy Is About Power, Not Consent (or Harm). In Sara, A. (Eds.), A World without Privacy: What Law Can and Should Do? (pp. 131-189). Cambridge University Press.
Balkin, J. M. (2016). Information Fiduciaries and the First Amendment. UC Davis Law Review, 49(4), 1183-1234.
Bamberger, K. A. & Lobel, O. (2017). Platform Market Power. Berkeley Technology Law Journal, 32, 1051-1092.
Barocas, S. & Nissenbaum, H. (2014). Big Data’s End Run around Anonymity and Consent. In Lane, J., Stodden, V., Bender, S. & Nissenbaum, H. (Eds.), Privacy, Big Data, and the Public Good (pp. 44-75). Cambridge University Press.
Burdon, M. & Andrejevic, M. (2016). Big Data in the Sensor Society. In Sugimoto, C. R., Ekbia, H. R. & Mattioli, M. (Eds.), Big Data Is Not a Monolith. The MIT Press.
Carey, P. (2018). Data Protection: A Practical Guide to UK and EU Law. Oxford University Press.
Cohen, J. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. Yale University Press.
Cohen, J. (2013). What Privacy Is For. Harvard Law Review, 126, 1904-1933.
Cohen, J. (2019). Turning Privacy inside Out. Theoretical Inquiries in Law, 20, 1-31.
Cohen, J. (2024). How (Not) to Write a Privacy Law. https://knightcolumbia.org/content/how-not-to-write-a-privacy-law.
Crawford, K. & Schultz, J. (2013). Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms. Boston College Law Review, 55, 93-128.
Desouza, K. C. & Smith, K. L. (2014). Big Data for Social Innovation. Stanford Social Innovation Review, 2014, 39-43.
Fairfield, J. & Engel, C. (2015). Privacy as a Public Good. Duke Law Journal, 65, 385-457.
Federal Trade Commission (2014). Data Brokers: A Call for Transparency and Accountability.
Ferrer, E. C., Rudovic, O., Hardjono, T. & Pentland, A. (2018). RoboChain: A Secure Data-Sharing Framework for Human-Robot Interaction. https://arxiv.org/abs/1802.04480.
Fried, C. (1968). Privacy. The Yale Law Journal, 77(3), 475-493.
Froomkin, A. M. (2019). Big Data: Destroyer of Informed Consent. Yale Journal of Law & Technology, 21(3), 27-54.
Fuster, G. G. & Scherrer, A. (2015). Big Data and Smart Devices and Their Impact on Privacy. European Parliament. https://www.europarl.europa.eu/RegData/etudes/STUD/2015/536455/IPOL_STU(2015)536455_EN.pdf.
Gavison, R. (1980). Privacy and the Limits of Law. The Yale Law Journal, 89(3), 421-471.
Goffman, E. (1959). The Presentation of Self in Everyday Life. Anchor Books.
Gupta, S. B. & Mittal, A. (2017). Introduction to Database Management System. Laxmi.
Hildebrandt, M. (2014). Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology. Edward Elgar Publishing.
Hildebrandt, M. (2019). Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning. Theoretical Inquiries in Law, 20(1), 83-121.
Hirsch, D. D. (2016). Privacy, Public Goods and the Tragedy of the Trust Commons: A Response to Professors Fairfield and Engel. Duke Law Journal Online, 65, 67-93.
Hirsch, D. D. (2020). From Individual Control to Social Protection: New Paradigms for Privacy Law in the Age of Predictive Analytics. Maryland Law Review, 79(2), 439-505.
Koops, B. J. & Galič, M. (2021). Unite in Privacy Diversity: A Kaleidoscopic View of Privacy Definitions. South Carolina Law Review, 73, 465-499.
Krasnow, E. G. & Goodman, J. N. (1998). The “Public Interest” Standard: The Search for the Holy Grail. Federal Communications Law Journal, 50(3), 605-635.
Lazaro, C. & Le Métayer, D. (2015). Control over Personal Data: True Remedy or Fairytale? SCRIPTed, 12(1), 3-34.
Lubarsky, B. (2016). Re-Identification of “Anonymized” Data. Georgetown Law Technology Review, 1, 202-213.
Lynskey, O. (2015). The Foundations of EU Data Protection Law. Oxford University Press.
Matsumi, H. & Solove, D. J. (forthcoming 2025). The Prediction Society: Algorithms and the Problems of Forecasting the Future. University of Illinois Law Review.
Miller, A. R. (1971). The Assault on Privacy: Computers, Data Banks, and Dossiers. University of Michigan Press.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Pentland, A., Lipton, A. & Hardjono, T. (2021). Building the New Economy: Data as Capital. The MIT Press.
Prosser, W. L. (1960). Privacy. California Law Review, 48, 383-423.
Regan, P. (2020). A Design for Public Trustee and Privacy Protection Regulation. Seton Hall Journal of Legislation and Public Policy, 44(3), 487-513.
Richards, N. M. (2013). The Dangers of Surveillance. Harvard Law Review, 126, 1934-1965.
Richards, N. M. & Hartzog, W. (2016). Taking Trust Seriously in Privacy Law. Stanford Technology Law Review, 19, 431-472.
Richards, N. M. & Hartzog, W. (2019). The Pathologies of Digital Consent. Washington University Law Review, 96(6), 1461-1503.
Richards, N. M. & Hartzog, W. (2021). A Duty of Loyalty for Privacy Law. Washington University Law Review, 99, 961-1021.
Schwartz, P. M. (1999). Privacy and Democracy in Cyberspace. Vanderbilt Law Review, 52, 1609-1702.
Schwartz, P. M. & Peifer, K. (2017). Transatlantic Data Privacy Law. The Georgetown Law Journal, 106, 115-179.
Solove, D. J. (2004). The Digital Person: Technology and Privacy in the Information Age. NYU Press.
Solove, D. J. (2006). A Taxonomy of Privacy. University of Pennsylvania Law Review, 154, 477-560.
Solove, D. J. (2023). The Limitations of Privacy Rights. Notre Dame Law Review, 98(3), 975-1036.
Solove, D. J. (2024). Murky Consent: An Approach to the Fictions of Consent in Privacy Law. Boston University Law Review, 104, 593-639.
Solove, D. J. & Schwartz, P. M. (2024). Information Privacy Law. Aspen.
Solove, D. J. & Hartzog, W. (2024). Kafka in the Age of AI and the Futility of Privacy as Control. Boston University Law Review, 104, 1021-1042.
Solow-Niederman, A. (2022). Information Privacy and the Inference Economy. Northwestern University Law Review, 117(2), 357-454.
Tene, O. & Polonetsky, J. (2013). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11(5), 240-273.
Viljoen, S. (2021). A Relational Theory of Data Governance. The Yale Law Journal, 131, 573-654.
Wiggins, C. & Jones, M. L. (2023). How Data Happened: A History from the Age of Reason to the Age of Algorithms. W. W. Norton & Company.
Waldman, A. E. (2018). Privacy as Trust: Information Privacy for an Information Age. Cambridge University Press.
Waldman, A. E. (2022). Privacy, Practice, and Performance. California Law Review, 110(4), 1221-1280.
Warren, S. D. & Brandeis, L. D. (1890). The Right to Privacy. Harvard Law Review, 4(5), 193-220.
Westin, A. F. (1967). Privacy and Freedom. Ig Publishing.
Hartzog, W. & Solove, D. J. (2015). The Scope and Potential of FTC Data Protection. George Washington Law Review, 83(6), 2230-2300.
Zuboff, S. (2018). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96384
dc.description.abstract: 本文旨在探索機器學習時代資訊隱私權之概念與應有的保障機制。機器學習模仿人類學習過程,其「學習」的方法係將資料視作現實的替代品,匯聚大量的資料、在資料中探索潛藏的規律。其中,以認識、剖析「人」為目標的機器學習技術,匯聚巨量的個人資料,以統計原理分析資料、製作人格模版,以人格模版作為認識與評價個人與人口群體的素材,進而對個人與人口群體之行為與偏好進行推論、預測。機器學習之個人資料蒐用活動,使既有的資訊隱私權典範理論——個人資料自主控制理論發生保障不足的問題。首先,個人實際上無從追蹤個人資料的流向與控制蒐用活動,亦無法拒絕資料的蒐用要求。此外,以群體為資料分析規模之機器學習技術之運作邏輯與結果已超出個人可理解與控制範圍。
因此在機器學習為重要的數位化資訊技術工具的時代,資訊隱私權應如何有效地給予個人保障?此為本文最原始的問題意識。而本文研究目的是提出適宜於機器學習時代的資訊隱私權理論。該理論所提出的資訊隱私權概念,必須能夠捕捉機器學習之個人資料蒐用活動的新興危害、風險,以及給予適當的回應機制。本文以美國學者提出之信任隱私理論與隱私間隙理論,進行理論之分析比較。
本文主張應採Julie E. Cohen的隱私間隙理論作為機器學習時代的資訊隱私權典範理論,以社會主體理論、實質自主觀點、基本權利符擔性理論重新建構資訊隱私權概念與保障機制。由於社會與主體是相互建構關係、自主性之發展受到外在環境影響,資訊隱私為個人能夠管理與他人資訊性邊界之動態間隙。基於主體性與人格之自由發展,資訊隱私權保障個人設定與他人資訊性邊界之決定與能力,使個人得以免於受社會結構過度認識、形塑而使主體性與人格僵化。機器學習利用巨量個人資料進行人格剖繪,係以非主體決定之人格模版附加於該主體之上、建構對其之認識與理解,構成資訊隱私權之干預。為適當保障個人資訊隱私權,機器學習技術之使用應受語義不連續原則與運作可課責原則之限制,限制個人資料之匯流,以及機器學習之運作應對受影響之個人與人口群體透明、開放以確保可課責性。
在論證上,本文在第二章檢視自主控制典範下的資訊隱私權在機器學習時代產生的問題。自主控制典範係為回應電腦與資料庫時代發生之國家監控問題,確保個人自主控制個人資料揭露對象與條件,以及決定自己如何被認識與認識之程度。然機器學習技術匯聚、分析巨量資料、製作人口模版而對個人與群體進行評價、推論與預測,透過資訊環境之支配力使個人與群體的行為、偏好逐漸貼合推論與預測結果,個人與群體因此凝著於預決之人口模版。然而,在機器學習時代,因被蒐用的個資多、蒐用者多,而現實上個人無法實現資料自主控制;此外,自主控制理論也無法給予推論個人資料保護,以及機器學習之預測被當成真實,而使人類未來被機器預言寫定的問題。
在第三章,本文比較信任隱私理論與隱私間隙理論之隱私、資訊隱私概念與保障機制。相較於信任隱私理論從個人資訊分享揭露關係對社會之意義,論證資訊隱私保障之正當性,以及以私人民事關係汲取出資訊受託人義務作為主要規範模型。隱私間隙理論提出隱私對主體性與人格發展之功能、隱私權之構成要件、隱私權保障體系與機制,建構融貫、具體系性的隱私權理論。隱私間隙理論主張隱私為個人控管邊界之動態間隙,以自主保障取徑、權能保障取徑、符擔性保障取徑建構隱私權保障體系。符擔性保障取徑下,隱私權保障領域擴張,包含限制個人被轉譯程度之語義不連續原則,以及個人與群體參與社會結構之形塑的運作可課責原則。
確認本文採取隱私間隙理論為新興資訊隱私權典範理論之後,第四章以隱私間隙理論建構機器學習時代的資訊隱私概念與保障機制。機器學習對個人資訊隱私造成的新興危害來自以下二途徑:推論與預測分析、機器學習邏輯的怪異與混亂。首先,機器技術對個人與人口群體之推論與預測分析,匯聚大量個人資料、進行人口剖析、製作人格模版,以該模版作為認識與評價個人與群體之依據,並藉由其對資訊環境之掌控能力,使人類之行為與人格發展貼合人格模版。再者,個人與裝配機器學習技術之數位服務與工具頻繁互動下,因機器邏輯之混亂與難以理解,影響個人自我認識的過程。在保障機制上,語義不連續原則限制個人被轉譯的細緻程度,原則上禁止個人資料跨脈絡之匯聚,且數位服務工具應設運作之中斷機制;運作可課責原則下,機器學習、數位服務與工具、資訊環境之運作的設計與佈局,應對受影響之個人與群體透明,開放其參與事前決策過程,並確保事後課責之實踐。
[zh_TW]
dc.description.abstract: This thesis explores the paradigm of information privacy and the appropriate protection strategies for a machine learning society. Machine learning is a technology that imitates the human learning process, “learning” from big data that is treated as a substitute for reality. With respect to information privacy, it collects huge amounts of personal data, sorts and categorizes those data, and creates personality profiles in order to know a person or a population group and to make judgments, inferences, or predictions about them. This process challenges the viability of informational self-determination, the existing paradigm of the right to information privacy. Individuals cannot in practice trace how their personal data are processed and retained, so self-determination over personal information cannot be realized. Moreover, because machine learning collects and processes data at population scale, its logic and results lie beyond what individuals can imagine, understand, and manage.
This thesis therefore asks how the right to information privacy can be realized effectively in the era of machine learning, and seeks an information privacy theory suited to a machine learning society. Such a theory must identify the risks and harms caused by machine learning and supply a response mechanism that ensures the realization of the right. To that end, this thesis illustrates, analyzes, and compares two theories: privacy as trust and privacy as room for boundary management.
This thesis argues that Julie E. Cohen’s theory of privacy as room for boundary management should serve as the paradigm of the right to information privacy in a machine learning society, and that the concept of information privacy and its protection mechanisms should be reconstructed on the basis of socially situated subjectivity, a substantive theory of autonomy, and an affordance-based account of fundamental rights. Subjectivity emerges from the society and environment in which subjects are situated, and information privacy is the dynamic room within which subjects manage their informational boundaries. To ensure that subjectivity and personality can develop freely, the right to information privacy safeguards this room, guaranteeing that people can push back against the social shaping exerted by the particular institutional, cultural, and material constraints they encounter in everyday life. Evaluating people on the basis of personality profiles produced by machine learning interferes with this right. Machine learning should therefore satisfy the principles of semantic discontinuity and operational accountability. The semantic discontinuity principle aims to frustrate seamless flows of personal data. The operational accountability principle requires that data collection and processing be transparent at an appropriate level and that the people affected by machine learning have a say in co-determining how they are read.
Chapter 2 examines the problems of the informational self-determination paradigm. That paradigm arose in response to the government surveillance of the computer and database era. It safeguards individuals’ ability to control and determine the conditions under which their personal information is used, and thereby to determine how they are known. Today, however, personal data self-management is hard to realize. Furthermore, machine learning evaluation, inference, and prediction raise the problems of preemptive intervention and fossilization, which fall outside the coverage of a right to information privacy grounded in self-determination theory.
Chapter 3 illustrates, analyzes, and compares the privacy-as-trust theory and the boundary management theory. The privacy-as-trust theory builds on the social function of information-sharing relationships and proposes information fiduciary duties as its regulatory model. The boundary management theory, by contrast, constructs a coherent and systematic theory: privacy is a dynamic room in which subjects engage in processes of boundary management, and the right to privacy rests on three pillars, the liberty-based, capability-based, and affordance-based approaches, which also locate self-determination privacy and personal data protection within the system. The coverage of the right to privacy derived from the affordance-based approach comprises two principles: semantic discontinuity and operational accountability.
Chapter 4 constructs the concept and protection mechanisms of the right to information privacy on the basis of the boundary management theory. Machine learning inference and prediction accumulate masses of personal data, create personality profiles, and derive patterns of human behavior and preference; these processes interfere with the right to information privacy. Moreover, because machine learning rests on logics that humans cannot understand, interaction with those logics disrupts processes of self-formation. The protection strategies rest on the two principles. To keep the self incomputable, the semantic discontinuity principle prevents seamless data collection and processing and requires gaps and breakdowns in translation. The operational accountability principle guarantees that people have a say in the operation and design of digital environments and technologies, machine learning included: they should have access to the relevant information, participate in decision-making, and be supported by mechanisms that realize accountability ex post.
[en]
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-13T16:13:42Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2025-02-13T16:13:42Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Oral Examination Committee Certification i
Acknowledgements ii
Chinese Abstract vi
Abstract ix
Brief Contents xii
Detailed Contents xiii
Chapter 1 Introduction 1
  Section 1 Research Questions and Objectives 1
  Section 2 Scope, Methodology, and Terminology 9
  Section 3 Structure of the Thesis 12
Chapter 2 The Existing Paradigm of the Right to Information Privacy: The Theory, Institutions, and Limits of Informational Self-Determination 14
  Section 1 Early Development of Information Privacy: "The Right to Be Let Alone" and the Secrecy Paradigm 14
    Item 1 Emerging Mass Communication Technologies and "The Right to Privacy" 14
    Item 2 The Secrecy Paradigm of Information Privacy and Its Limits 17
      Paragraph 1 Secrecy and Intimacy Theories of Information Privacy 17
      Paragraph 2 Information Privacy Regulation under the Secrecy Paradigm 19
      Paragraph 3 Problems of the Secrecy Paradigm 21
  Section 2 The Current Paradigm: The Theory of Informational Self-Determination 22
    Item 1 Technological Background: Computers, Databases, and the Internet 23
    Item 2 The Theory of Informational Self-Determination 26
  Section 3 The Self-Determination Paradigm: Informed Consent, Ex Post Control Rights, and Data Processing Principles 29
    Item 1 Ex Ante Consent: The Informed Consent Principle 29
    Item 2 Ex Post Control Rights 32
    Item 3 Principles of Personal Data Collection and Use 33
  Section 4 Problems of the Informational Self-Determination Theory 34
    Item 1 The Weakening of Autonomous Control: Unequal Bargaining Power and Trivial Personal Data 35
      Paragraph 1 Unequal Bargaining Power 35
      Paragraph 2 Trivial Personal Data 36
    Item 2 The Limits of Human Rational Decision-Making 37
    Item 3 New Challenges from Big Data and Machine Learning: Inferred Personal Data and Predictive Analytics 38
      Paragraph 1 Big Data and Machine Learning 39
      Paragraph 2 Inferred Personal Data 41
      Paragraph 3 Predictive Analytics 46
Chapter 3 Rethinking the Information Privacy Paradigm: Trust Theory and the Boundary Management Theory 49
  Section 1 Trust Theory: The Information Fiduciary Model, the Public Trustee Model, and the Public Trust Theory 49
    Item 1 The Trust Theory of the Right to Information Privacy 50
    Item 2 Regulatory Models under the Trust Theory 59
      Paragraph 1 The Information Fiduciary Model 59
        Sub-paragraph 1 Fiduciary Duties of Discretion in Disclosure, Honesty, and Security 61
        Sub-paragraph 2 The Fiduciary Duty of Loyalty 63
      Paragraph 2 The Public Trustee Model 66
      Paragraph 3 The Public Trust Theory 68
  Section 2 Privacy as Breathing Room for Boundary Management 71
    Item 1 Theoretical Foundations: Substantive Autonomy and the Affordances of Fundamental Rights 73
      Paragraph 1 Substantive Autonomy: Socially Situated Subjectivity 73
        Sub-paragraph 1 Embodied Perception and Cognition 74
        Sub-paragraph 2 Embodied Spatiality 75
        Sub-paragraph 3 "Play" as the Mode of Interaction between Subject and Society 75
        Sub-paragraph 4 Summary: The Theory's Account of Autonomy 77
      Paragraph 2 The Affordances (符擔性) of Fundamental Rights 77
    Item 2 The Boundary Management Theory 79
      Paragraph 1 Privacy as Room for Boundary Management 80
      Paragraph 2 Privacy Harms in the Digital Society: Digital Surveillance 81
        Sub-paragraph 1 Digital Geographic Space 81
        Sub-paragraph 2 Digital Surveillance: Standardizing, Spatially Exposing, Subject-Assisted and Self-Surveillance, and Modulated Surveillance 83
    Item 3 The Coverage of the Right to Privacy: The Semantic Discontinuity and Operational Accountability Principles 89
      Paragraph 1 The Semantic Discontinuity Principle 89
      Paragraph 2 The Operational Accountability Principle 93
  Section 3 Theoretical Analysis 95
    Item 1 Comparing the Trust Theory and the Boundary Management Theory 95
      Paragraph 1 Analysis at the Theoretical Level 96
      Paragraph 2 Analysis at the Regulatory Level 97
    Item 2 This Thesis's Position: The Boundary Management Theory as the Information Privacy Paradigm for the Machine Learning Era 98
Chapter 4 Reconstructing the Concept and Protection Mechanisms of the Right to Information Privacy in the Machine Learning Era 102
  Section 1 Reconstructing the Right to Information Privacy 102
    Item 1 Redefining Privacy and Information Privacy 103
    Item 2 Three Approaches to Protecting the Right: Liberty-Based, Capability-Based, and Affordance-Based 105
    Item 3 The Coverage of the Right: The Semantic Discontinuity and Operational Accountability Principles 107
  Section 2 Protection Mechanisms for Machine Learning 110
    Item 1 Machine Learning 110
    Item 2 Machine Learning's Interference with the Right to Information Privacy 112
    Item 3 Affordance-Based Protection Mechanisms for Machine Learning 114
      Paragraph 1 General Privacy Protection Mechanisms 115
      Paragraph 2 Internal Organizational Procedures and External Oversight 115
      Paragraph 3 Building Privacy-Conscious Professional Ethics and Social Culture 116
Chapter 5 Conclusion 117
Bibliography 121
dc.language.iso: zh_TW
dc.title: 機器學習時代下資訊隱私權的概念重構與保護機制:Julie Cohen的隱私間隙理論觀點 [zh_TW]
dc.title: Information Privacy’s Paradigm and Protection Strategies in Machine Learning Society: Julie Cohen’s “Privacy as Room for Boundary Management” Theory [en]
dc.type: Thesis
dc.date.schoolyear: 113-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 林子儀;劉定基 [zh_TW]
dc.contributor.oralexamcommittee: Tzu-yi Lin;Ting-Chi Liu [en]
dc.subject.keyword: 機器學習,隱私,資訊隱私,個人資料自主控制,個人資料保護,Julie Cohen [zh_TW]
dc.subject.keyword: Machine Learning,Privacy,Information Privacy,Personal Data Self-determination,Personal Data Protection,Julie Cohen [en]
dc.relation.page: 129
dc.identifier.doi: 10.6342/NTU202500550
dc.rights.note: 同意授權(全球公開) (authorized for worldwide open access)
dc.date.accepted: 2025-02-10
dc.contributor.author-college: 法律學院 (College of Law)
dc.contributor.author-dept: 科際整合法律學研究所 (Graduate Institute of Interdisciplinary Legal Studies)
dc.date.embargo-lift: 2025-02-14
Appears in Collections: Graduate Institute of Interdisciplinary Legal Studies (科際整合法律學研究所)

Files in This Item:
File: ntu-113-1.pdf | Size: 2.27 MB | Format: Adobe PDF


Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
