Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90038
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳銘憲 | zh_TW |
dc.contributor.advisor | Ming-Syan Chen | en |
dc.contributor.author | 林佳志 | zh_TW |
dc.contributor.author | Chia-Chih Lin | en |
dc.date.accessioned | 2023-09-22T17:09:33Z | - |
dc.date.available | 2023-11-09 | - |
dc.date.copyright | 2023-09-22 | - |
dc.date.issued | 2023 | - |
dc.date.submitted | 2023-08-02 | - |
dc.identifier.citation | [1] https://github.com/d06921014/codesisss2022.
[2] https://github.com/d06921014/islped2023.
[3] B. Applebaum, D. Cash, C. Peikert, and A. Sahai. Fast cryptographic primitives and circular-secure encryption based on hard learning problems. In CRYPTO, pages 595–618, 2009.
[4] H. Awano and T. Sato. Ising-PUF: A machine learning attack resistant PUF featuring lattice like arrangement of arbiter-PUFs. In 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 1447–1452, 2018.
[5] A. Babaei and G. Schiele. Physical unclonable functions in the internet of things: State of the art and open challenges. Sensors, 19(14), 2019.
[6] G. T. Becker. The gap between promise and reality: On the insecurity of XOR arbiter PUFs. In CHES, pages 535–555, 2015.
[7] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In Proceedings of the 29th International Conference on Machine Learning, ICML '12, pages 1467–1474, Madison, WI, USA, 2012. Omnipress.
[8] A. Blum, A. Kalai, and H. Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM, 50(4):506–519, 2003.
[9] A. Braeken. PUF based authentication protocol for IoT. Symmetry, 10(8), 2018.
[10] F. Chollet et al. Keras. https://keras.io, 2015.
[11] J. Delvaux, R. Peeters, D. Gu, and I. Verbauwhede. A survey on lightweight entity authentication with strong PUFs. ACM Comput. Surv., 48(2), 2015.
[12] E. Dubrova, O. Näslund, B. Degen, A. Gawell, and Y. Yu. CRC-PUF: A machine learning attack resistant lightweight PUF construction. In 2019 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), pages 264–271, 2019.
[13] S. D. Galbraith. Space-efficient variants of cryptosystems based on learning with errors, 2013.
[14] F. Ganji, S. Tajik, F. Fäßler, and J.-P. Seifert. Strong machine learning attack against PUFs with no mathematical model. In CHES, pages 391–411. Springer-Verlag, 2016.
[15] B. Gassend, D. Clarke, M. van Dijk, and S. Devadas. Controlled physical random functions. In 18th Annual Computer Security Applications Conference (ACSAC), pages 149–160, 2002.
[16] B. Gassend, D. Clarke, M. van Dijk, and S. Devadas. Silicon physical random functions. In CCS, pages 148–160, 2002.
[17] E. Grigorescu, L. Reyzin, and S. Vempala. On noise-tolerant learning of sparse parities and related problems. In Algorithmic Learning Theory, pages 413–424, 2011.
[18] C. Herder, L. Ren, M. van Dijk, M.-D. Yu, and S. Devadas. Trapdoor computational fuzzy extractors and stateless cryptographically-secure physical unclonable functions. IEEE Trans. Dependable Secure Comput., 14(1):65–82, 2017.
[19] M. Kearns and M. Li. Learning in the presence of malicious errors. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, STOC '88, pages 267–280, New York, NY, USA, 1988. Association for Computing Machinery.
[20] M. Khalafalla and C. Gebotys. PUFs deep attacks: Enhanced modeling attacks using deep learning techniques to break the security of double arbiter PUFs. In 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 204–209, 2019.
[21] Y. Lao and K. K. Parhi. Statistical analysis of MUX-based physical unclonable functions. IEEE TCAD, 33(5):649–662, 2014.
[22] D. Lim, J. Lee, B. Gassend, G. Suh, M. van Dijk, and S. Devadas. Extracting secret keys from integrated circuits. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 13(10):1200–1205, 2005.
[23] C.-C. Lin and M.-S. Chen. Attack is the best defense: A multi-mode poisoning PUF against machine learning attacks. In PAKDD, pages 176–187. Springer, 2021.
[24] C.-C. Lin and M.-S. Chen. Enhancing reliability and security: A configurable poisoning PUF against modeling attacks. IEEE TCAD, 41(11):4301–4312, 2022.
[25] V. Lyubashevsky. The parity problem in the presence of noise, decoding random linear codes, and the subset sum problem. In APPROX 2005 and RANDOM 2005, pages 378–389, 2005.
[26] Q. Ma, C. Gu, N. Hanley, C. Wang, W. Liu, and M. O'Neill. A machine learning attack resistant multi-PUF design on FPGA. In 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC), pages 97–104, 2018.
[27] T. Machida, D. Yamamoto, M. Iwamoto, and K. Sakiyama. A new arbiter PUF for enhancing unpredictability on FPGA. The Scientific World Journal, 2015:864812, Sep 2015.
[28] R. Maes. PUF-based entity identification and authentication. In Physically Unclonable Functions, pages 117–141. Springer, 2013.
[29] M. Majzoobi, F. Koushanfar, and M. Potkonjak. Lightweight secure PUFs. In 2008 IEEE/ACM International Conference on Computer-Aided Design, pages 670–673, 2008.
[30] M. Majzoobi, F. Koushanfar, and M. Potkonjak. Testing techniques for hardware security. In 2008 IEEE International Test Conference, pages 1–10, 2008.
[31] M. Majzoobi, M. Rostami, F. Koushanfar, D. S. Wallach, and S. Devadas. Slender PUF protocol: A lightweight, robust, and secure authentication by substring matching. In IEEE SPW, pages 33–44, 2012.
[32] A. May, A. Meurer, and E. Thomae. Decoding random linear codes in Õ(2^0.054n). In ASIACRYPT 2011, pages 107–124, 2011.
[33] M. S. Mispan, H. Su, M. Zwolinski, and B. Halak. Cost-efficient design for modeling attacks resistant PUFs. In 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 467–472, 2018.
[34] H. Nassar, L. Bauer, and J. Henkel. CaPUF: Cascaded PUF structure for machine learning resiliency. IEEE TCAD, 41(11):4349–4360, 2022.
[35] P. H. Nguyen, D. P. Sahoo, R. S. Chakraborty, and D. Mukhopadhyay. Security analysis of arbiter PUF and its lightweight compositions under predictability test. ACM Trans. Des. Autom. Electron. Syst., 22(2), Dec. 2016.
[36] P. H. Nguyen, D. P. Sahoo, C. Jin, K. Mahmood, U. Rührmair, and M. van Dijk. The interpose PUF: Secure PUF design against state-of-the-art machine learning attacks. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2019(4):243–290, Aug. 2019.
[37] K. Pietrzak. Cryptography from learning parity with noise. In SOFSEM, pages 99–114, 2012.
[38] M. A. Qureshi and A. Munir. PUF-IPA: A PUF-based identity preserving protocol for Internet of Things authentication. In IEEE CCNC, pages 1–7, 2020.
[39] U. Rührmair, F. Sehnke, J. Sölter, G. Dror, S. Devadas, and J. Schmidhuber. Modeling attacks on physical unclonable functions. In Proceedings of the 17th ACM Conference on Computer and Communications Security, CCS '10, pages 237–249, New York, NY, USA, 2010. Association for Computing Machinery.
[40] U. Rührmair, J. Sölter, F. Sehnke, X. Xu, A. Mahmoud, V. Stoyanova, G. Dror, J. Schmidhuber, W. Burleson, and S. Devadas. PUF modeling attacks on simulated and silicon data. IEEE Transactions on Information Forensics and Security, 8(11):1876–1891, 2013.
[41] D. P. Sahoo, D. Mukhopadhyay, R. S. Chakraborty, and P. H. Nguyen. A multiplexer-based arbiter PUF composition with enhanced reliability and security. IEEE Transactions on Computers, 67(3):403–417, 2018.
[42] D. P. Sahoo, P. H. Nguyen, R. S. Chakraborty, and D. Mukhopadhyay. On the architectural analysis of arbiter delay PUF variants. Cryptology ePrint Archive, 2016.
[43] D. P. Sahoo, P. H. Nguyen, D. Mukhopadhyay, and R. S. Chakraborty. A case of lightweight PUF constructions: Cryptanalysis and machine learning attacks. IEEE TCAD, 34(8):1334–1343, 2015.
[44] P. Santikellur, A. Bhattacharyay, and R. S. Chakraborty. Deep learning based model building attacks on arbiter PUF compositions. Cryptology ePrint Archive, Report 2019/566, 2019.
[45] B. Settles. From theories to queries: Active learning in practice. In Active Learning and Experimental Design Workshop, in conjunction with AISTATS 2010, volume 16 of PMLR, pages 1–18, 2011.
[46] G. E. Suh and S. Devadas. Physical unclonable functions for device authentication and secret key generation. In Proceedings of the 44th Annual Design Automation Conference, DAC '07, pages 9–14, New York, NY, USA, 2007. Association for Computing Machinery.
[47] J. Tobisch, A. Aghaie, and G. T. Becker. Combining optimization objectives: New machine-learning attacks on strong PUFs. Cryptology ePrint Archive, Paper 2020/957, 2020.
[48] A. Wang, W. Tan, Y. Wen, and Y. Lao. NoPUF: A novel PUF design framework toward modeling attack resistant PUFs. IEEE Trans. Circuits Syst. I: Regul. Pap., 68(6):2508–2521, 2021.
[49] Q. Wang, M. Gao, and G. Qu. A machine learning attack resistant dual-mode PUF. In Proceedings of the 2018 Great Lakes Symposium on VLSI, GLSVLSI '18, pages 177–182, New York, NY, USA, 2018. Association for Computing Machinery.
[50] S.-J. Wang, Y.-S. Chen, and K. S.-M. Li. Adversarial attack against modeling attack on PUFs. In 2019 56th ACM/IEEE Design Automation Conference (DAC), pages 1–6, 2019.
[51] S.-J. Wang, Y.-S. Chen, and K. S.-M. Li. Modeling attack resistant PUFs based on adversarial attack against machine learning. IEEE J. Emerg. Sel. Top. Circuits Syst., 11(2):306–318, 2021.
[52] Y. Wang, C. Wang, C. Gu, Y. Cui, M. O'Neill, and W. Liu. A dynamically configurable PUF and dynamic matching authentication protocol. IEEE Trans. Emerg. Top. Comput., pages 1–1, 2021.
[53] Y. Wang, X. Xi, and M. Orshansky. Lattice PUF: A strong physical unclonable function provably secure against machine learning attacks. In HOST, pages 273–283, 2020.
[54] Y. Wen and Y. Lao. PUF modeling attack using active learning. In Proc. of IEEE ISCAS, pages 1–5, 2018.
[55] N. Wisiol, C. Mühl, N. Pirnay, P. H. Nguyen, M. Margraf, J.-P. Seifert, M. van Dijk, and U. Rührmair. Splitting the interpose PUF: A novel modeling attack strategy. TCHES, 2020(3):97–120, 2020.
[56] N. Wisiol, B. Thapaliya, K. T. Mursi, J.-P. Seifert, and Y. Zhuang. Neural network modeling attacks on arbiter-PUF-based designs. IEEE TIFS, 17:2719–2731, 2022.
[57] X. Xu, W. Burleson, and D. E. Holcomb. Using statistical models to improve the reliability of delay-based PUFs. In IEEE ISVLSI, pages 547–552, 2016.
[58] Y. Yilmaz, S. R. Gunn, and B. Halak. Lightweight PUF-based authentication protocol for IoT devices. In IEEE IVSW, pages 38–43, 2018.
[59] S. S. Zalivaka, A. A. Ivaniuk, and C.-H. Chang. Low-cost fortification of arbiter PUF against modeling attack. In 2017 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1–4, 2017.
[60] C. Zhou, K. K. Parhi, and C. H. Kim. Secure and reliable XOR arbiter PUF design: An experimental study based on 1 trillion challenge response pair measurements. In Proc. of ACM/IEEE DAC, pages 1–6, 2017.
[61] C. Zhou, S. Satapathy, Y. Lao, K. K. Parhi, and C. H. Kim. Soft response generation and thresholding strategies for linear and feed-forward MUX PUFs. In ISLPED '16, pages 124–129. ACM, 2016.
[62] S. Zhu, Y. Tang, J. Zheng, Y. Cao, H. Wang, Y. Huang, and M. Margraf. Sample essentiality and its application to modeling attacks on arbiter PUFs. ACM TECS, 18(5):42:1–42:25, 2019. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90038 | - |
dc.description.abstract | 隨著個人化人工智慧應用的興起和對用戶隱私的日益關注,物聯網(IoT)設備的安全面臨著各種挑戰。由於資源受限的特性,傳統的加密方法不適用於 IoT 設備。為了應對這個挑戰,物理不可克隆函數(Physically Unclonable Functions, PUFs)已成為有前途的安全基元,能提供如設備驗證等輕量級安全應用。PUF 是基於晶片獨特的物理特性生成唯一識別代碼的電路。然而,研究顯示,許多 PUF 的行為可以使用機器學習(ML)算法預測,這與 PUF 的原始意圖,即無法複製之特性,相矛盾。
本研究旨在探索如何利用對抗機器學習(Adversarial Machine Learning, AML)的概念來建立一個兼具可靠性和成本效益的抗機器學習 PUF。我們首先提出了一個多工作模式之框架,建立了幾個輕量級工作模式,以轉換可學習 PUF 的輸入和輸出。因為每個工作模式代表了 ML 算法嘗試學習的目標函數,所提出的多模式毒化 PUF(MMP PUF)通過非線性地在不同的工作模式之間切換,使 ML 算法在訓練過程中難以收斂。為了驗證我們的想法,我們提出了翻轉模式和位移模式兩種工作模式作為範例。實驗結果顯示,所提出的多模式方法在面積成本方面比其他基於 APUF 的設計更具成本效益,且更能抵抗深度學習攻擊。

雖然所提出的 MMP PUF 在傳統的機器學習攻擊下顯示出有效性,但其模式選擇機制使用挑戰本身的隨機性作為熵的來源,這給 MMP PUF 帶來了安全上的疑慮。事實上,對於大多數抗機器學習 PUF 而言,挑戰中的每個位元位置對回應的影響並不均等。為了研究選擇挑戰攻擊的影響,我們提出了一種基於 PUF 輸出過渡特性之選擇挑戰策略。所提出的選擇挑戰策略,差分選擇挑戰攻擊(DCCA),通過限制訓練集中的內漢明距離(Hamming Distance, HD),強制機器學習算法專注於 PUF 之輸出過渡特性。實驗結果表明,當攻擊 XOR APUF、IPUF 和對抗性 APUF 時,我們的選擇挑戰策略提高了機器學習攻擊的數據效率。

最後,我們提出了一種基於 NP-hard 問題的新型 PUF 架構,可配置毒化 PUF(CP PUF),以建立安全可證明之強 PUF。CP PUF 不僅在計算複雜度上保證了對傳統機器學習攻擊及其變形的安全性,其安全性亦適用於任何建模算法。我們在 CP PUF 中構建了一個可靠性後門,以在對手主動查詢 PUF 時提供含噪聲的回應。此外,我們還使用傳統 Arbiter PUF 的特性設計了一個身份驗證機制,以提高 CP PUF 之可靠性並減輕環境噪聲對其之影響。實驗結果表明,CP PUF 對機器學習攻擊具有抵抗能力,而且在不同噪聲環境中比傳統 PUF 和幾種新開發的對抗性 PUF 更可靠。 | zh_TW |
dc.description.abstract | With the rise of personalized artificial intelligence applications and growing concerns about user privacy, the security of Internet of Things (IoT) devices faces various challenges. Due to their resource-constrained nature, traditional cryptographic methods are not suitable for IoT devices. To address this challenge, Physically Unclonable Functions (PUFs) have emerged as a promising security primitive for lightweight security applications such as device authentication. PUFs are circuits that generate a unique identifying code based on a chip's unique physical properties. However, studies have shown that the behavior of many PUFs can be predicted using machine learning (ML) algorithms, which contradicts PUFs' original intent of being unclonable.

This study explores the concept of adversarial machine learning to establish an ML-resistant PUF that balances reliability and cost-effectiveness. We first propose a multi-mode framework that establishes several lightweight working modes to transform the input and output of a learnable PUF. Because each working mode represents an objective function that ML algorithms attempt to learn, the proposed multi-mode poisoning PUF (MMP PUF) makes it difficult for ML algorithms to converge during training by switching between working modes non-linearly. Two types of working modes, flip and shift, are proposed to verify the concept. Experimental results show that the proposed multi-mode approaches are more cost-efficient in area and more resilient to deep learning attacks than other APUF-based designs. Although the MMP PUF is effective under conventional ML attacks, its mode selection mechanism uses the randomness of the challenge itself as its source of entropy, which raises security concerns. In fact, for most ML-resistant PUFs, each bit position in a challenge does not contribute equally to the response. To investigate the impact of chosen challenge attacks, we propose a chosen challenge strategy based on the output transition property of PUFs. The proposed strategy, the Differential Chosen Challenge Attack (DCCA), forces ML algorithms to focus on output transitions by restricting the intra-Hamming distance within the training set. Experimental results show that our chosen challenge strategy improves the data efficiency of ML attacks against the XOR APUF, IPUF, and adversarial APUFs.

Finally, a new PUF architecture, the Configurable Poisoning PUF (CP PUF), based on an NP-hard problem, is proposed to establish a provably secure strong PUF. The CP PUF guarantees security in terms of computational complexity not only against conventional ML attacks and their variations but against any modeling algorithm in general. A reliability trapdoor is constructed in the CP PUF to provide noisy responses when an adversary queries the PUF actively. Additionally, an authentication mechanism is designed using the characteristics of traditional arbiter PUFs to enhance reliability and mitigate the impact of environmental noise. Experimental results show that the CP PUF is ML-resistant and far more reliable in different noisy environments than conventional PUFs and several newly developed adversarial-based PUFs. | en |
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-09-22T17:09:33Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2023-09-22T17:09:33Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Thesis Certification by the Oral Defense Committee
Acknowledgments
摘要 (Chinese Abstract)
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Motivation and Overview
1.1.1 MMP PUF: A Multi-mode Adversarial-based Approach
1.1.2 Learning from Output Transitions: A Chosen Challenge Strategy for ML Attacks on PUFs
1.1.3 Enhancing Reliability and Security: A Configurable Poisoning PUF against Modeling Attacks
1.2 Organization of the dissertation
Chapter 2 MMP PUF: A Multi-mode Adversarial-based Approach
2.1 Introduction
2.2 Background
2.2.1 Assumptions and Notations
2.2.1.1 Modeling attack on Arbiter PUF
2.2.1.2 Quality Metrics of PUF
2.2.2 Related Work
2.3 Multi-mode Poisoning PUF
2.3.1 Multi-Mode Poisoning PUF
2.3.2 Working mode design
2.3.3 Hiding the working modes
2.4 Experiments
2.4.1 Deep Learning Attack
2.4.2 Experimental setup
2.4.3 Performance of working modes design
2.4.4 Impact of increasing the number of working modes
2.4.5 Impact of imbalanced working modes
2.4.6 Impact of output transition probability
2.4.7 PUF quality metrics evaluation
2.5 Summary
Chapter 3 Learning from Output Transitions: A Chosen Challenge Strategy for ML Attacks on PUFs
3.1 Introduction
3.2 Related Work
3.2.1 Threat Model
3.2.2 Arbiter PUF and its variants
3.2.3 Related Work
3.2.3.1 Hamming Distance Test
3.2.3.2 ML attacks incorporating prior knowledge
3.3 Differential Chosen Challenge Attacks
3.3.1 The Proposed Differential Chosen-Challenge Attacks
3.3.2 Mitigate the Data Mismatch
3.3.3 Deep Learning Attacks
3.4 Experiments
3.4.1 Output Transition Probability
3.4.2 Performance of the Proposed Chosen-Challenge Strategy
3.4.3 The Ratio of Biased and Unbiased Data in Training Sets
3.5 Summary
Chapter 4 Enhancing Reliability and Security: A Configurable Poisoning PUF against Modeling Attacks
4.1 Introduction
4.2 Related Work
4.2.1 Threat Model
4.2.2 Arbiter PUF and Its Parametric Model
4.2.3 Related Works
4.2.3.1 PUF Modeling Attacks and Countermeasures
4.2.3.2 Adversarial-based PUFs
4.3 Configuring the Reliability and Security
4.3.1 Proposed PUF Construction
4.3.2 Reliability Trapdoor Design
4.3.2.1 Strategy for Noise Amplification
4.3.2.2 Configuring the Reliability
4.3.3 Security of the CP PUF
4.3.3.1 Provably-Secure Approach
4.3.3.2 Security Analysis of the CP PUF
4.3.4 Balancing the Reliability and Security
4.4 Authentication Protocol Design
4.4.1 Error Correction
4.4.2 Protocol for the CP PUF
4.4.3 Performance Analysis of the Protocol
4.4.3.1 False Rejection
4.4.3.2 False Acceptance
4.5 Experiments
4.5.1 PUF Performance metrics
4.5.2 Reliability
4.5.3 Performance of the Protocol
4.5.3.1 False Rejection
4.5.3.2 False Acceptance
4.5.4 Empirical Machine Learning Attacks
4.6 Summary
Chapter 5 Conclusion and Future Work
References | - |
dc.language.iso | en | - |
dc.title | 朝向抵抗機器學習攻擊的對抗型物理不可克隆函數 | zh_TW |
dc.title | Towards Machine Learning Attack Resistant Adversarial-based PUFs | en |
dc.type | Thesis | - |
dc.date.schoolyear | 111-2 | - |
dc.description.degree | Doctoral | - |
dc.contributor.oralexamcommittee | 雷欽隆;張原豪;蕭旭君;修丕承;鄭文皇 | zh_TW |
dc.contributor.oralexamcommittee | Chin-Laung Lei;Yuan-Hao Chang;Hsu-Chun Hsiao;Pi-Cheng Hsiu;Wen-Huang Cheng | en |
dc.subject.keyword | 物理不可克隆函數,基於機器學習之建模攻擊,對抗機器學習,毒化攻擊,硬體安全 | zh_TW |
dc.subject.keyword | Physically Unclonable Functions, Machine Learning-based Modeling Attacks, Adversarial Machine Learning, Poisoning Attacks, Hardware Security | en |
dc.relation.page | 106 | - |
dc.identifier.doi | 10.6342/NTU202302234 | - |
dc.rights.note | Consent to authorization (open access worldwide) | - |
dc.date.accepted | 2023-08-04 | - |
dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
dc.contributor.author-dept | Department of Electrical Engineering | - |
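The abstract above describes the Differential Chosen Challenge Attack (DCCA) as restricting the intra-Hamming distance of the training set so that ML models learn from the PUF's output transitions. As a rough, hypothetical sketch of that data-generation idea (not the thesis's exact algorithm), the following simulates an arbiter PUF with the standard additive delay model from the modeling-attack literature and builds challenge pairs at Hamming distance 1; the challenge length, sample count, and the choice of distance 1 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # challenge length (illustrative)

# Simulated arbiter PUF: standard additive delay model,
# response = sign(w . phi(c)) for a parity feature vector phi.
w = rng.normal(size=n + 1)

def phi(c):
    """Map a 0/1 challenge to its parity feature vector."""
    s = 1 - 2 * c                  # bits -> +/-1
    f = np.cumprod(s[::-1])[::-1]  # phi_i = prod_{j >= i} s_j
    return np.append(f, 1.0)       # bias term

def response(c):
    return int(np.dot(w, phi(c)) > 0)

# DCCA-style data generation (sketch): build the training set from
# challenge pairs at Hamming distance 1, so a learner trained on it
# is forced to focus on output transitions rather than random CRPs.
base = rng.integers(0, 2, size=(1000, n))
pairs = []
for c in base:
    c2 = c.copy()
    c2[rng.integers(0, n)] ^= 1    # intra-HD restricted to 1
    pairs.append((c, response(c)))
    pairs.append((c2, response(c2)))

transitions = sum(pairs[i][1] != pairs[i + 1][1]
                  for i in range(0, len(pairs), 2))
print(f"{transitions} of {len(base)} single-bit-flip pairs flip the response")
```

Pairs whose responses differ sit near the simulated PUF's decision boundary; the thesis's DCCA presumably exploits such transition information to improve the data efficiency of modeling attacks, as the abstract reports for the XOR APUF, IPUF, and adversarial APUFs.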
Appears in Collections: | Department of Electrical Engineering
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf | 10.87 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.