Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88484
Title: | SepMM: A General Matrix Multiplication Improvement Approach for Privacy-Preserving Machine Learning |
Author: | 蔡東霖 Tung-Lin Tsai |
Advisor: | 吳沛遠 Pei-Yuan Wu |
Keywords: | privacy-preserving machine learning, secure multi-party computation, secure two-party computation, secure inference, matrix multiplication |
Publication Year: | 2023 |
Degree: | Master's |
Abstract: | Privacy-preserving machine learning (PPML) has gained significant attention in recent years due to the increasing need to protect sensitive data while obtaining accurate predictions. This is made possible by secure encryption schemes such as homomorphic encryption and secure multi-party computation, which allow multiple parties to jointly compute a result without learning one another's data. However, practical implementations of these methods require secure computation under encryption, which significantly increases communication and computation costs compared to plaintext computation. Since matrix multiplication is a key operation in machine learning applications, and typically becomes the bottleneck in secure multi-party computation frameworks due to the massive communication it requires between parties, we propose SepMM, a novel secure matrix multiplication optimization approach for two-party computation. SepMM ensures that, assuming a uniform prior distribution, the adversary's chance of revealing any entry in the matrix decreases exponentially with the bitlength. SepMM can be integrated with any secure matrix multiplication method whose results are bitwise equivalent to plaintext execution. Experimental results show that, by integrating SepMM with the state-of-the-art PPML framework SIRNN, communication cost and inference time are reduced by 4.67x-13.29x and 3.64x-9.44x, respectively, for the widely adopted neural networks SqueezeNet, ResNet50, and DenseNet121. |
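The two-party setting the abstract describes can be illustrated with a minimal sketch of secret-shared matrix multiplication using a Beaver triple from a trusted dealer. This is a generic 2PC building block, not SepMM itself; the 32-bit ring, the dealer role, and all function names here are assumptions for illustration. The opened masked matrices `e` and `f` are the per-multiplication communication that, as the abstract notes, dominates secure inference cost:

```python
import numpy as np

MOD = 1 << 32  # 32-bit ring Z_{2^32}, a common choice in 2PC frameworks (assumption)
rng = np.random.default_rng(0)

def share(x):
    """Additively secret-share a matrix over Z_{2^32}: each share alone is uniform."""
    r = rng.integers(0, MOD, size=x.shape, dtype=np.uint64)
    return r, (x - r) % MOD

def beaver_matmul(x0, x1, y0, y1):
    """Two-party secure matrix product Z = X @ Y from shares, via a Beaver triple."""
    m, k = x0.shape
    _, n = y0.shape
    # Trusted dealer samples a random triple (A, B, C) with C = A @ B mod 2^32
    # and hands one share of each to every party.
    a = rng.integers(0, MOD, size=(m, k), dtype=np.uint64)
    b = rng.integers(0, MOD, size=(k, n), dtype=np.uint64)
    c = (a @ b) % MOD
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(c)
    # The parties exchange masked inputs and open E = X - A and F = Y - B;
    # this opening is the communication step that dominates secure matmul cost.
    e = (x0 - a0 + x1 - a1) % MOD
    f = (y0 - b0 + y1 - b1) % MOD
    # Local share computation: summing z0 + z1 yields
    # E@F + E@B + A@F + C = (E + A) @ (F + B) = X @ Y (mod 2^32).
    z0 = (e @ f + e @ b0 + a0 @ f + c0) % MOD
    z1 = (e @ b1 + a1 @ f + c1) % MOD
    return z0, z1
```

Reconstructing `(z0 + z1) % MOD` recovers `X @ Y` in the ring, while each party's view stays uniformly random, which is the bit-equivalence-to-plaintext property the abstract requires of methods SepMM composes with.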
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88484 |
DOI: | 10.6342/NTU202302160 |
Full-Text Access: | Authorized (publicly available worldwide) |
Appears in Collections: | Department of Electrical Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-111-2.pdf | 8.89 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.