Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101689

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 洪士灝 | zh_TW |
| dc.contributor.advisor | Shih-Hao Hung | en |
| dc.contributor.author | 顧昱得 | zh_TW |
| dc.contributor.author | Yu-Te Ku | en |
| dc.date.accessioned | 2026-02-26T16:44:08Z | - |
| dc.date.available | 2026-02-27 | - |
| dc.date.copyright | 2026-02-26 | - |
| dc.date.issued | 2026 | - |
| dc.date.submitted | 2026-02-02 | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101689 | - |
| dc.description.abstract | 本論文旨在提升隱私保護的深度神經網路(DNN)推論之可部署性,聚焦於解決主要的效率瓶頸,也就是非線性運算(例如 ReLU),同時維持可量化的安全性與隱私保證。第一條研究主線聚焦於第三代全同態加密(FHE),特別是 FHEW/TFHE。為了降低加密推論成本,我提出 FHE-Neuron,透過重新配置 FHEW/TFHE 參數與 bootstrapping 的資料流,並採用動態密文精度切換,以取得更佳的準確度與延遲折衷。我也進一步發展一套 FHE-aware 的量化與微調框架,使預訓練模型能更適應加密執行環境,並減輕由精度限制與噪聲所引入的誤差。第二條研究主線提出 DP-Protocol,這是一種受差分隱私啟發的協定安全性放寬定義,要求在相鄰輸入下,對手所觀察到的視圖分佈必須保持接近,從而能在可調整的隱私界限下達到更高效率。基於此概念,我設計一個伺服器輔助的兩方安全計算(2PC)推論協定:線性層由兩方以秘密分享方式計算;非線性運算則在「隱私增強」資料上以明文執行,而這些資料由加噪與秘密打亂所產生。我並以形式化方式證明該協定滿足 DP-Protocol 的保證。總結而言,本論文從加密執行與協定設計兩個面向推進實務可用的安全推論,並展示共同設計密碼機制、模型架構,以及誤差與隱私分析,能夠顯著提升隱私保護 DNN 推論的效率與可用性。 | zh_TW |
| dc.description.abstract | This dissertation improves the deployability of privacy-preserving deep neural network (DNN) inference by addressing the dominant efficiency bottleneck in nonlinear computation (e.g., ReLU), while maintaining quantifiable security and privacy guarantees.
The first line of work focuses on third-generation Fully Homomorphic Encryption (FHE), particularly FHEW/TFHE. To reduce the cost of encrypted inference, I propose FHE-Neuron, which reconfigures FHEW/TFHE parameters and the bootstrapping dataflow, and employs dynamic ciphertext precision switching to achieve improved accuracy–latency trade-offs. I further develop an FHE-aware quantization and fine-tuning framework to adapt pretrained models to encrypted execution and mitigate precision- and noise-induced errors. The second line of work introduces DP-Protocol, a Differential Privacy–inspired relaxation of protocol security that requires an adversary’s view distributions to remain close for neighboring inputs, enabling higher efficiency under tunable privacy bounds. Building on this notion, I design a server-aided two-party computation (2PC) inference protocol in which linear layers are evaluated via two-party secret sharing, while nonlinearities are executed in plaintext on “privacy-enhanced” data produced by noise injection and secret shuffling. I formally prove that the protocol satisfies the DP-Protocol guarantee. Overall, this dissertation advances practical secure inference from both encrypted execution and protocol design perspectives, demonstrating that co-designing cryptographic mechanisms, model architecture, and error and privacy analysis can substantially improve the efficiency and usability of privacy-preserving DNN inference. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2026-02-26T16:44:08Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2026-02-26T16:44:08Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i
摘要 ii
Abstract iii
Contents v
List of Figures x
List of Tables xiii
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Challenges 2
1.3 Approach and Thesis Overview 3
1.4 Contributions 5
1.5 Thesis Organization 6
Chapter 2 Related Work 7
2.1 FHE-Based Encrypted Inference 7
2.2 SMPC-Based Private Inference and Reducing the Cost of Nonlinearities 9
2.2.1 Summary 11
Chapter 3 Preliminaries and Notation 12
3.1 Neural Networks (NN) 13
3.2 FHEW/TFHE 13
3.2.1 LWE Symmetric Encryption 13
3.2.2 FHEW/TFHE Functional Bootstrapping 15
3.2.3 FHE Parameters 16
3.2.4 Post-Training Quantization 17
3.2.5 Pseudo-Noise Tuning 17
3.3 Secure Multi-Party Computation (SMPC) 21
3.3.1 SMPC-based Neural Network Inference: Notation and Protocols 21
3.3.2 Defining Security for Semi-Honest Adversaries 22
3.4 Differential Privacy and Composition 23
3.5 Basic Transformations: Histogram, Merge, and Shuffle (with Inverses) 25
Part I Practical Encrypted DNN Inference with Third-Generation FHE (FHEW/TFHE) 27
Chapter 4 FHE-Neuron Architecture and Precision Switching 28
4.1 Overview of FHE-Neuron 29
4.2 FHE-Neuron Procedure 31
4.3 Why Precision Switching is Essential 32
Chapter 5 Noise and Error Analysis of FHE-ACT 35
5.1 Error Decomposition of FHE-ACT 37
5.2 Empirical Estimation of FHE-ACT Noise 38
5.3 Case Study and Practical Implications 39
Chapter 6 FHE-Aware Quantization and Tuning Pipeline 40
6.1 FHE-Aware Quantization 41
6.1.1 Estimating activation scaling factors 43
6.2 FHE-Aware Tuning 44
6.2.1 Estimating the FHE-ACT noise distribution 45
6.2.2 Noise proxy for training-time simulation 45
6.2.3 Pseudo FHE-Noise Tuning algorithm 46
6.3 Implementation Notes and Practical Observations 47
Chapter 7 Implementation and Evaluation 49
7.1 Evaluation 50
7.1.1 Experimental Settings 50
7.1.2 Results on MNIST and Fashion-MNIST 53
7.1.3 Results on CIFAR-10 54
7.1.4 My Method’s Verification in Various GPUs 55
7.1.5 Accuracy–Latency Trade-offs in B_g and the Role of FHE-Aware Fine-Tuning 58
7.1.6 Activation Functions as Performance Bottlenecks in Encrypted DNN Inference 59
7.2 Summary of Findings 59
Part II DP-Protocol: Efficient Server-Aided Secure Inference via Relaxed SMPC 61
Chapter 8 DP-Protocol NN: Security Notion and Protocol Design 62
8.1 My Methodology 62
8.1.1 DP-Protocol for NN Inference 63
8.1.2 Neural Network Inference Protocol Π_NN 64
8.1.3 Activation Function Protocol Π_Act 66
8.1.4 Noise Generation 67
Chapter 9 Formal Security & Composition Analysis 69
9.1 Security Analysis of the NN Protocol Π_NN 69
9.1.1 Full Security 70
9.1.2 Security of the DP-Protocol 74
Chapter 10 Offline Calibration and Parameterization for DP-Protocol NN 80
10.1 Estimating Layerwise Distance Thresholds for Neighboring Inputs 80
10.1.1 Workflow for Estimating Thresholds 81
10.1.2 Estimating Distance Sample Multisets 85
Chapter 11 Evaluation & Model Configuration 89
11.1 Evaluation 89
11.1.1 Experimental Settings 90
11.1.2 Setting DP Parameters ε and δ 91
11.1.3 Setting threshold t⃗ 92
11.1.4 Efficiency Comparison 92
11.1.5 Privacy Analysis 94
Chapter 12 Conclusion 100
References 103
Appendix A — Symbol Table 113
A.1 Shared Conventions 113
A.2 Part I: FHEW/TFHE-Based Encrypted DNN Inference 114
A.2.1 Neural network pipeline and layer-wise quantities 114
A.2.2 Quantization and pseudo-noise tuning (PNT) 114
A.2.3 FHE-ACT bootstrapping, modulus spaces, and error terms 115
A.3 Part II: DP-Protocol NN and Server-Aided 2PC 116
A.3.1 Parties, inputs, and outputs 116
A.3.2 Additive secret sharing (2-out-of-2) and modulus conventions 116
A.3.3 Protocols and sub-protocols 116
A.3.4 Basic transformations: shuffle, merge, and histogram 117
A.3.5 DP and DP-Protocol parameters 117
A.3.6 Intermediate variables in Π_Act 118
A.3.7 Noise generation variables (histogram-based construction) 118 | - |
| dc.language.iso | en | - |
| dc.subject | FHEW/TFHE | - |
| dc.subject | 加密深度神經網路推論 | - |
| dc.subject | 精度切換 | - |
| dc.subject | FHE-aware 量化 | - |
| dc.subject | DP-Protocol | - |
| dc.subject | 伺服器輔助兩方計算 | - |
| dc.subject | 洗牌與噪聲注入 | - |
| dc.subject | FHEW/TFHE | - |
| dc.subject | Encrypted DNN Inference | - |
| dc.subject | Precision Switching | - |
| dc.subject | FHE-aware Quantization | - |
| dc.subject | DP-Protocol | - |
| dc.subject | Server-aided Two-Party Computation (2PC) | - |
| dc.subject | Shuffling and Noise Injection | - |
| dc.title | 隱私保護神經網路推論之優化方法 | zh_TW |
| dc.title | Optimization Methods for Privacy-Preserving Neural Network Inference | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 114-1 | - |
| dc.description.degree | 博士 | - |
| dc.contributor.coadvisor | 杜憶萍 | zh_TW |
| dc.contributor.coadvisor | I-Ping Tu | en |
| dc.contributor.oralexamcommittee | 陳維超;張明清;許之凡;劉峰豪 | zh_TW |
| dc.contributor.oralexamcommittee | Wei-Chao Chen;Ming-Ching Chang;Chih-Fan Hsu;Feng-Hao Liu | en |
| dc.subject.keyword | FHEW/TFHE,加密深度神經網路推論,精度切換,FHE-aware 量化,DP-Protocol,伺服器輔助兩方計算,洗牌與噪聲注入 | zh_TW |
| dc.subject.keyword | FHEW/TFHE,Encrypted DNN Inference,Precision Switching,FHE-aware Quantization,DP-Protocol,Server-aided Two-Party Computation (2PC),Shuffling and Noise Injection | en |
| dc.relation.page | 119 | - |
| dc.identifier.doi | 10.6342/NTU202600450 | - |
| dc.rights.note | 同意授權(限校園內公開) | - |
| dc.date.accepted | 2026-02-03 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 資料科學學位學程 | - |
| dc.date.embargo-lift | 2028-12-31 | - |
| Appears in Collections: | 資料科學學位學程 | |

Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-114-1.pdf (Restricted access) | 9.87 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
