Please use this Handle URI to cite this document:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101689

| Title: | 隱私保護神經網路推論之優化方法 Optimization Methods for Privacy-Preserving Neural Network Inference |
| Author: | 顧昱得 Yu-Te Ku |
| Advisor: | 洪士灝 Shih-Hao Hung |
| Co-advisor: | 杜憶萍 I-Ping Tu |
| Keywords: | FHEW/TFHE, Encrypted DNN Inference, Precision Switching, FHE-aware Quantization, DP-Protocol, Server-aided Two-Party Computation (2PC), Shuffling and Noise Injection |
| Publication Year: | 2026 |
| Degree: | Doctoral |
| Abstract: | This dissertation improves the deployability of privacy-preserving deep neural network (DNN) inference by addressing the dominant efficiency bottleneck in nonlinear computation (e.g., ReLU), while maintaining quantifiable security and privacy guarantees. The first line of work focuses on third-generation Fully Homomorphic Encryption (FHE), particularly FHEW/TFHE. To reduce the cost of encrypted inference, I propose FHE-Neuron, which reconfigures FHEW/TFHE parameters and the bootstrapping dataflow, and employs dynamic ciphertext precision switching to achieve improved accuracy–latency trade-offs. I further develop an FHE-aware quantization and fine-tuning framework to adapt pretrained models to encrypted execution and mitigate precision- and noise-induced errors. The second line of work introduces DP-Protocol, a Differential Privacy–inspired relaxation of protocol security that requires an adversary's view distributions to remain close for neighboring inputs, enabling higher efficiency under tunable privacy bounds. Building on this notion, I design a server-aided two-party computation (2PC) inference protocol in which linear layers are evaluated via two-party secret sharing, while nonlinearities are executed in plaintext on "privacy-enhanced" data produced by noise injection and secret shuffling. I formally prove that the protocol satisfies the DP-Protocol guarantee. Overall, this dissertation advances practical secure inference from both encrypted execution and protocol design perspectives, demonstrating that co-designing cryptographic mechanisms, model architecture, and error and privacy analysis can substantially improve the efficiency and usability of privacy-preserving DNN inference. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101689 |
| DOI: | 10.6342/NTU202600450 |
| Full-Text Authorization: | Authorized (campus-only access) |
| Electronic Full-Text Release Date: | 2028-12-31 |
| Appears in Collections: | Data Science Degree Program |
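The "privacy-enhanced" nonlinear step described in the abstract (noise injection plus secret shuffling before a plaintext ReLU) can be illustrated with a minimal sketch. The function names, the choice of Gaussian noise, and all parameters below are illustrative assumptions for exposition, not the dissertation's actual construction:

```python
import random

def privacy_enhance(values, noise_scale=0.1, seed=None):
    """Obscure pre-activations before a plaintext nonlinearity.

    Adds noise to each value and applies a secret random permutation,
    so the party evaluating the nonlinearity sees neither exact
    magnitudes nor positions. Returns the obscured list and the
    permutation needed to undo the shuffle. (Illustrative sketch;
    noise distribution and scale are assumptions.)
    """
    rng = random.Random(seed)
    noised = [v + rng.gauss(0.0, noise_scale) for v in values]
    perm = list(range(len(values)))
    rng.shuffle(perm)  # perm[pos] = original index placed at pos
    shuffled = [noised[i] for i in perm]
    return shuffled, perm

def relu(xs):
    """Plaintext nonlinearity applied to the obscured data."""
    return [max(0.0, x) for x in xs]

def unshuffle(values, perm):
    """Invert the secret permutation after the nonlinearity."""
    out = [0.0] * len(values)
    for pos, src in enumerate(perm):
        out[src] = values[pos]
    return out

# Toy end-to-end step: obscure, apply ReLU in plaintext, restore order.
pre_activations = [1.5, -2.0, 0.3, -0.7]
seen_by_helper, perm = privacy_enhance(pre_activations, noise_scale=0.01, seed=7)
post_activations = unshuffle(relu(seen_by_helper), perm)
```

In the protocol itself, the linear layers would be computed on additive secret shares by the two parties, and only this obscured view would ever reach a party in the clear; the sketch shows only the shuffle/unshuffle bookkeeping around the plaintext nonlinearity.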
Files in this item:
| File | Size | Format | |
|---|---|---|---|
| ntu-114-1.pdf (restricted public access) | 9.87 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
