Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90038
Title: | Towards Machine Learning Attack Resistant Adversarial-based PUFs |
Author: | Chia-Chih Lin (林佳志) |
Advisor: | Ming-Syan Chen (陳銘憲) |
Keywords: | Physically Unclonable Functions, Machine Learning-based Modeling Attacks, Adversarial Machine Learning, Poisoning Attacks, Hardware Security |
Publication Year: | 2023 |
Degree: | Doctoral |
Abstract: | With the rise of personalized artificial intelligence applications and growing concerns about user privacy, the security of Internet of Things (IoT) devices faces various challenges. Due to their resource-constrained nature, IoT devices are unsuited to traditional cryptographic methods. To address this challenge, Physically Unclonable Functions (PUFs) have emerged as a promising security primitive for lightweight security applications such as device authentication. PUFs are circuits that generate a unique identifying code from a chip's unique physical properties. However, studies have shown that the behavior of many PUFs can be predicted by machine learning (ML) algorithms, which contradicts PUFs' original intent of being unclonable. This study explores how the concepts of adversarial machine learning (AML) can be used to establish an ML-resistant PUF that balances reliability and cost-effectiveness. We first propose a multi-mode framework that establishes several lightweight working modes to transform the input and output of a learnable PUF.
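As a rough illustration only (the thesis itself provides no code, and the actual mode-selection logic may differ), the mode-switching idea can be sketched in Python: a toy additive-delay model stands in for an Arbiter PUF, and a hypothetical selector derives the working mode from the challenge bits themselves, applying a flip mode that inverts the raw response to poison an attacker's training data. The `k`-bit XOR selector and all function names are illustrative assumptions.

```python
import random

def make_apuf(n_stages, seed=0):
    # Toy additive delay model of an n-stage Arbiter PUF; the weights are
    # random stand-ins for silicon delay differences, not real measurements.
    rng = random.Random(seed)
    w = [rng.gauss(0, 1) for _ in range(n_stages + 1)]

    def respond(challenge):
        # Standard parity feature transform: phi_i = prod_{j >= i} (1 - 2*c_j).
        phi = [1] * (n_stages + 1)
        for i in range(n_stages - 1, -1, -1):
            phi[i] = phi[i + 1] * (1 - 2 * challenge[i])
        delay = sum(wi * pi for wi, pi in zip(w, phi))
        return 1 if delay > 0 else 0

    return respond

def mmp_respond(apuf, challenge, k=2):
    # Hypothetical mode selector: the XOR of the first k challenge bits picks
    # the working mode, so the target function an attacker tries to learn
    # keeps changing across the collected training set (the poisoning effect).
    mode = 0
    for bit in challenge[:k]:
        mode ^= bit
    raw = apuf(challenge)
    return raw if mode == 0 else 1 - raw  # flip mode inverts the response
```

Because the selector is a function of the challenge, an attacker who ignores it mixes samples from two inconsistent target functions, which is the convergence difficulty the abstract describes.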
Because each working mode represents an objective function that ML algorithms attempt to learn, the proposed multi-mode poisoning PUF (MMP PUF) makes it difficult for ML algorithms to converge during training by switching between the working modes non-linearly. Two types of working modes, flip and shift modes, are proposed to verify the concept. Experimental results show that the proposed multi-mode approaches are more cost-efficient in terms of area and more resilient to deep learning attacks than other APUF-based designs. Although the proposed MMP PUF is effective under conventional ML attacks, its mode selection mechanism uses the randomness of the challenge itself as a source of entropy, which raises security concerns for the MMP PUF. In fact, for most ML-resistant PUFs, the bit positions of a challenge do not contribute equally to the response. To investigate the impact of chosen challenge attacks, we propose a chosen challenge strategy based on the output transition property of PUFs. The proposed strategy, the Differential Chosen Challenge Attack (DCCA), forces ML algorithms to focus on output transitions by restricting the intra-Hamming distance (HD) of the training set. The experimental results show that our chosen challenge strategy improves the data efficiency of ML attacks against the XOR APUF, the IPUF, and adversarial APUFs. Finally, a new PUF architecture, the Configurable Poisoning PUF (CP PUF), based on an NP-hard problem, is proposed to establish a provably secure strong PUF. The CP PUF ensures security in terms of computational complexity not only against conventional ML attacks and their variations but against any modeling algorithm in general. A reliability trapdoor is constructed in the CP PUF to provide noisy responses when an adversary actively queries the PUF.
Additionally, an authentication mechanism is designed using the characteristics of the traditional Arbiter PUF to enhance the CP PUF's reliability and mitigate the impact of environmental noise. The experimental results show that the CP PUF is ML-resistant and much more reliable in different noisy environments than conventional PUFs and several newly developed adversarial-based PUFs. |
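The intra-HD restriction at the heart of DCCA can be sketched briefly. In this minimal illustration (an assumption about the data-collection step, not the thesis's exact procedure), each base challenge is paired with all of its single-bit-flip neighbors, so every pair in the training set has Hamming distance exactly 1 and the learner is forced to fit the PUF's output transitions.

```python
def hamming_distance(a, b):
    # Number of differing bit positions between two equal-length challenges.
    return sum(x != y for x, y in zip(a, b))

def dcca_pairs(base_challenges):
    # Sketch of a differential chosen-challenge data-collection step: pair
    # each base challenge with every single-bit-flip neighbor, restricting
    # the intra-Hamming distance of the training set to 1. Whether a bit
    # flip toggles the response then reveals how that bit position
    # contributes to the output, which is the transition property DCCA
    # exploits.
    pairs = []
    for c in base_challenges:
        for i in range(len(c)):
            neighbor = list(c)
            neighbor[i] ^= 1
            pairs.append((list(c), neighbor))
    return pairs
```

Querying the PUF on both members of each pair and recording whether the responses differ yields the transition-focused training data the abstract refers to.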
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90038 |
DOI: | 10.6342/NTU202302234 |
Full-text License: | Authorized (open access worldwide) |
Appears in Collections: | Department of Electrical Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf | 10.87 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.