Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96384
Title: | 機器學習時代下資訊隱私權的概念重構與保護機制:Julie Cohen的隱私間隙理論觀點 Information Privacy’s Paradigm and Protection Strategies in Machine Learning Society: Julie Cohen’s “Privacy as Room for Boundary Management” Theory |
Author: | 劉逸文 Yi-Wen Liu |
Advisor: | 蘇慧婕 Hui-Chieh Su |
Keywords: | Machine Learning, Privacy, Information Privacy, Personal Data Self-determination, Personal Data Protection, Julie Cohen |
Publication Year: | 2025 |
Degree: | Master’s |
Abstract: | This thesis explores the concept of the right to information privacy and the protection mechanisms appropriate to the machine learning era. Machine learning imitates the human learning process: it treats data as a substitute for reality, aggregates large volumes of it, and searches it for latent patterns. When directed at knowing and analyzing people, machine learning aggregates massive amounts of personal data, analyzes it statistically, and builds personality profiles that serve as the basis for knowing and evaluating individuals and population groups and for inferring and predicting their behavior and preferences. These data practices expose the inadequacy of the prevailing paradigm of the right to information privacy, the theory of informational self-determination. Individuals cannot in practice trace where their personal data flows, control its collection and use, or refuse demands for it; moreover, machine learning analyzes data at population scale, so its logic and outputs lie beyond what any individual can understand or control.

How, then, can the right to information privacy effectively protect individuals in an era in which machine learning is a central digital information technology? This is the thesis’s original research question. Its aim is to propose a theory of information privacy suited to the machine learning era: one whose concept of information privacy can capture the emerging harms and risks of machine learning’s personal data practices and supply appropriate response mechanisms. To that end, the thesis analyzes and compares two theories proposed by American scholars: privacy as trust and privacy as room for boundary management.

The thesis argues that Julie E. Cohen’s theory of privacy as room for boundary management should serve as the paradigm for the right to information privacy in the machine learning era, reconstructing the concept of information privacy and its protection mechanisms on the basis of socially situated subjectivity, a substantive conception of autonomy, and an affordance theory of fundamental rights. Because society and the subject are mutually constitutive and the development of autonomy is shaped by the external environment, information privacy is the dynamic room within which individuals manage their informational boundaries with others. To secure the free development of subjectivity and personality, the right to information privacy protects individuals’ decisions about, and capacity for, setting informational boundaries with others, so that social structures cannot know and shape them so thoroughly that subjectivity and personality ossify. When machine learning profiles a person from massive personal data, it attaches to that subject a personality template the subject did not choose and builds its understanding of the subject upon it; this constitutes an interference with the right to information privacy. To protect that right adequately, the use of machine learning should be constrained by the principles of semantic discontinuity and operational accountability: the aggregation of personal data should be limited, and the operation of machine learning should be transparent and open to the individuals and population groups it affects so that accountability is ensured.

Chapter 2 examines the problems that the self-determination paradigm encounters in the machine learning era. That paradigm arose in response to state surveillance in the computer and database era; it secures the individual’s control over to whom and under what conditions personal data is disclosed, and thus over how, and to what extent, one is known. Machine learning, however, aggregates and analyzes massive data, builds population templates, and on that basis evaluates, draws inferences about, and makes predictions concerning individuals and groups; through its power over the information environment it steers behavior and preferences toward its predictions, fixing individuals and groups within predetermined templates. In this setting self-determination cannot be realized in practice, because the data collected and the parties collecting it are too numerous; the theory also fails to protect inferred personal data, and machine predictions, once treated as truth, threaten to write the human future in advance.

Chapter 3 compares the concepts of privacy and information privacy, and the protection mechanisms, offered by the two theories. Privacy as trust justifies information privacy protection by the social value of information-sharing and disclosure relationships, and derives from private-law fiduciary relationships a set of information fiduciary duties as its principal regulatory model. Privacy as room for boundary management, by contrast, offers a coherent and systematic theory of the right to privacy: it specifies privacy’s function for subjectivity and personality development, the elements of the right, and a protection system built on three approaches — liberty-based, capability-based, and affordance-based. Under the affordance-based approach, the right’s coverage expands to include the semantic discontinuity principle, which limits the degree to which a person may be translated into data, and the operational accountability principle, which secures individual and group participation in the shaping of social structures.

Having adopted privacy as room for boundary management as the new paradigm, Chapter 4 uses it to construct the concept of information privacy and its protection mechanisms for the machine learning era. Machine learning harms information privacy through two channels: inference and predictive analytics, and the strangeness and opacity of machine logic. First, machine learning’s inferences and predictions about individuals and population groups are produced by aggregating massive personal data, profiling populations, and building personality templates that serve as the basis for knowing and evaluating individuals and groups; through its control of the information environment, it makes human behavior and personality development conform to those templates. Second, as individuals interact constantly with digital services and tools equipped with machine learning, the confusion and incomprehensibility of machine logic disrupts their processes of self-understanding. As to protection, the semantic discontinuity principle limits the granularity with which a person may be translated into data, prohibits in principle the aggregation of personal data across contexts, and requires digital services and tools to provide mechanisms for interrupting their operation. Under the operational accountability principle, the design and deployment of machine learning, digital services and tools, and the information environment must be transparent to the individuals and groups they affect, open to their participation in ex ante decision-making, and subject to ex post accountability. |
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96384 |
DOI: | 10.6342/NTU202500550 |
Full-text Authorization: | Authorized (open access worldwide) |
Electronic Full-text Release Date: | 2025-02-14 |
Appears in Collections: | Graduate Institute of Interdisciplinary Legal Studies |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-113-1.pdf | 2.27 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.