Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93930

| Title: | 深度神經網絡的機器反學習 Machine Unlearning for Deep Neural Network |
| Authors: | 黃玉婷 Yu-Ting Huang |
| Advisor: | 王釧茹 Chuan-Ju Wang |
| Co-Advisor: | 吳沛遠 Pei-Yuan Wu |
| Keyword: | Machine Unlearning, Parametric Programming, CP Algorithm, KKT Condition, Dual Support Vector Machine |
| Publication Year: | 2024 |
| Degree: | Master's |
| Abstract: | This paper introduces ECO, an efficient computational optimization framework that adapts the CP algorithm, originally proposed by Cauwenberghs & Poggio (2000) [4], for exact unlearning within deep neural network (DNN) models. ECO uses a single model architecture that integrates a DNN-based feature transformation function with the CP algorithm, enabling exact data removal without full model retraining. We demonstrate that ECO not only improves efficiency but also preserves the performance of the original base DNN model and, surprisingly, even surpasses naive retraining, the gold standard in machine unlearning, in effectiveness. Crucially, we are the first to adapt the CP algorithm's decremental learning, originally designed for leave-one-out evaluation, to achieve exact unlearning in DNN models by fully removing a specific data instance's influence. We plan to open-source our code to facilitate further research on this approach in machine unlearning. (An illustrative sketch of this pipeline follows the metadata table below.) |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93930 |
| DOI: | 10.6342/NTU202402829 |
| Fulltext Rights: | Not authorized |
| Appears in Collections: | Data Science Degree Program |
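
The abstract describes ECO's two-part design: a frozen DNN-based feature transformation feeding a dual SVM head, with the CP algorithm's decremental step used to remove a training instance exactly. The sketch below only illustrates that pipeline under assumed names (`feature_net`, `extract_features`, and the toy data are hypothetical, not from the thesis); in particular, refitting the SVM head on the retained features stands in for the CP decremental update, which the actual method uses precisely to avoid such a refit.

```python
# Illustrative sketch only. The thesis applies the CP (Cauwenberghs & Poggio)
# decremental update to the dual SVM; here a refit of the SVM head on the
# retained features stands in for that update to keep the example short.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Hypothetical frozen DNN feature extractor (randomly initialized here,
# purely a stand-in for a trained base model).
feature_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
feature_net.eval()

def extract_features(x: np.ndarray) -> np.ndarray:
    """Map raw inputs to the feature space consumed by the SVM head."""
    with torch.no_grad():
        return feature_net(torch.from_numpy(x).float()).numpy()

# Toy data standing in for a real training set.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 32))
y = (X_raw[:, 0] > 0).astype(int)

# 1) Train the dual SVM head on the frozen DNN features.
Z = extract_features(X_raw)
svm_head = SVC(kernel="rbf", C=1.0).fit(Z, y)

# 2) "Unlearn" the instance at index forget_idx: only the SVM head changes,
#    the DNN feature extractor is untouched. In ECO this step would be the
#    CP decremental update rather than a full refit.
forget_idx = 17
keep = np.ones(len(Z), dtype=bool)
keep[forget_idx] = False
svm_head_unlearned = SVC(kernel="rbf", C=1.0).fit(Z[keep], y[keep])

print("accuracy on retained data:", svm_head_unlearned.score(Z[keep], y[keep]))
```

Because only the SVM head depends on individual training points in this setup, removing a point never requires touching the DNN weights, which is the efficiency argument the abstract makes.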
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (Restricted Access) | 5.55 MB | Adobe PDF |
