Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93930

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 王釧茹 | zh_TW |
| dc.contributor.advisor | Chuan-Ju Wang | en |
| dc.contributor.author | 黃玉婷 | zh_TW |
| dc.contributor.author | Yu-Ting Huang | en |
| dc.date.accessioned | 2024-08-09T16:29:32Z | - |
| dc.date.available | 2024-08-10 | - |
| dc.date.copyright | 2024-08-09 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-03 | - |
| dc.identifier.citation | [1] L. Bourtoule, V. Chandrasekaran, C. A. Choquette-Choo, H. Jia, A. Travers, B. Zhang, D. Lie, and N. Papernot. Machine Unlearning. In Proceedings of the 42nd IEEE Symposium on Security and Privacy, pages 141–159, 2021. [2] J. Brophy and D. Lowd. Machine Unlearning for Random Forests. In Proceedings of the 38th International Conference on Machine Learning, volume 139, pages 1092–1104, 2021. [3] Y. Cao and J. Yang. Towards Making Systems Forget with Machine Unlearning. In Proceedings of the 36th IEEE Symposium on Security and Privacy, pages 463–480, 2015. [4] G. Cauwenberghs and T. Poggio. Incremental and Decremental Support Vector Machine Learning. In Proceedings of the 13th International Conference on Neural Information Processing Systems, 2000. [5] C. Chen, F. Sun, M. Zhang, and B. Ding. Recommendation Unlearning. In Proceedings of the 31st ACM Web Conference, pages 2768–2777, 2022. [6] C. J. Hoofnagle, B. van der Sloot, and F. Z. Borgesius. The European Union General Data Protection Regulation: What It Is and What It Means. Information & Communications Technology Law, 28:65–98, 2019. [7] L. de la Torre. A Guide to the California Consumer Privacy Act. SSRN Electronic Journal, 2018. [8] C. Fan, J. Liu, Y. Zhang, E. Wong, D. Wei, and S. Liu. SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation. In Proceedings of the 12th International Conference on Learning Representations, 2024. [9] J. Foster, S. Schoepf, and A. Brintrup. Fast Machine Unlearning without Retraining through Selective Synaptic Dampening. In Proceedings of the 38th AAAI Conference on Artificial Intelligence, volume 38, pages 12043–12051, 2024. [10] A. A. Ginart, M. Y. Guan, G. Valiant, and J. Zou. Making AI Forget You: Data Deletion in Machine Learning. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2019. [11] A. Golatkar, A. Achille, A. Ravichandran, M. Polito, and S. Soatto. Mixed-Privacy Forgetting in Deep Networks. In Proceedings of the 34th IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 792–801, 2021. [12] A. Golatkar, A. Achille, and S. Soatto. Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations. In Proceedings of the 16th European Conference on Computer Vision, pages 383–398, 2020. [13] C. Guo, T. Goldstein, A. Hannun, and L. van der Maaten. Certified Data Removal from Machine Learning Models. In Proceedings of the 37th International Conference on Machine Learning, volume 119, pages 3832–3842, 2020. [14] Y. He, G. Meng, K. Chen, J. He, and X. Hu. DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks. arXiv preprint arXiv:2105.06209, 2021. [15] J. Jia, J. Liu, P. Ram, Y. Yao, G. Liu, Y. Liu, P. Sharma, and S. Liu. Model Sparsity Can Simplify Machine Unlearning. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pages 51584–51605, 2023. [16] M. Karasuyama and I. Takeuchi. Multiple Incremental Decremental Learning of Support Vector Machines. In Proceedings of the 22nd International Conference on Neural Information Processing Systems, 2009. [17] M. Kurmanji, P. Triantafillou, J. Hayes, and E. Triantafillou. Towards Unbounded Machine Unlearning. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pages 1957–1987, 2023. [18] P. Laskov, C. Gehl, S. Krüger, and K.-R. Müller. Incremental Support Vector Learning: Analysis, Implementation and Applications. Journal of Machine Learning Research, 7(69):1909–1936, 2006. [19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [20] Y. Li, C. Chen, Y. Zhang, W. Liu, L. Lyu, X. Zheng, D. Meng, and J. Wang. UltraRE: Enhancing RecEraser for Recommendation Unlearning via Error Decomposition. In Proceedings of the 36th International Conference on Neural Information Processing Systems, pages 12611–12625, 2023. [21] A. Mahadevan and M. Mathioudakis. Certifiable Machine Unlearning for Linear Models. arXiv preprint arXiv:2106.15093, 2021. [22] Q. P. Nguyen, B. K. H. Low, and P. Jaillet. Variational Bayesian Unlearning. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 16025–16036, 2020. [23] T. T. Nguyen, T. T. Huynh, P. L. Nguyen, A. W.-C. Liew, H. Yin, and Q. V. H. Nguyen. A Survey of Machine Unlearning. arXiv preprint arXiv:2209.02299, 2022. [24] J. Rosen. The Right to Be Forgotten. Stanford Law Review, 64, 2012. [25] A. Sekhari, J. Acharya, G. Kamath, and A. T. Suresh. Remember What You Want to Forget: Algorithms for Machine Unlearning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pages 18075–18086, 2021. [26] S. Shen, C. Zhang, Y. Zhao, A. Bialkowski, W. T. Chen, and M. Xu. Label-Agnostic Forgetting: A Supervision-Free Unlearning in Deep Models. In Proceedings of the 12th International Conference on Learning Representations, 2024. [27] A. Thudi, G. Deza, V. Chandrasekaran, and N. Papernot. Unrolling SGD: Understanding Factors Influencing Machine Unlearning. In Proceedings of the 7th IEEE European Symposium on Security and Privacy, pages 303–319, 2022. [28] H. Yan, X. Li, Z. Guo, H. Li, F. Li, and X. Lin. ARCANE: An Efficient Architecture for Exact Machine Unlearning. In Proceedings of the 31st International Joint Conference on Artificial Intelligence, pages 4006–4013, 2022. [29] Z. Zhang, Y. Zhou, X. Zhao, T. Che, and L. Lyu. Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization. In Proceedings of the 35th International Conference on Neural Information Processing Systems, pages 13433–13455, 2022. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93930 | - |
| dc.description.abstract | 本論文介紹了一個高效的計算優化框架 ECO,它將 CP 算法(最初由 Cauwenberghs & Poggio (2000)[4] 提出)調適於深度神經網絡(DNN)模型中的精確反學習。ECO 採用單一的模型架構,結合了基於 DNN 的特徵轉換函數和 CP 算法,實現了精確的數據刪除,而無需完全重新訓練模型。我們證明了 ECO 不僅提高了效率,還保持了原始基礎 DNN 模型的性能;令人驚訝的是,它甚至在效果上超越了機器反學習領域的黃金標準——重新訓練。最重要的是,我們是第一個將 CP 算法中原本為逐一排除評估設計的遞減學習調適至 DNN 模型,以實現精確反學習,徹底移除特定數據資料的影響。我們計劃開源我們的程式碼,以促進此方法在機器反學習領域的進一步研究。 | zh_TW |
| dc.description.abstract | This paper introduces ECO, an efficient computational optimization framework that adapts the CP algorithm—originally proposed by Cauwenberghs & Poggio (2000) [4]—for exact unlearning within deep neural network (DNN) models. ECO uses a single model architecture that integrates a DNN-based feature transformation function with the CP algorithm, enabling exact data removal without full model retraining. We demonstrate that ECO not only improves efficiency but also preserves the performance of the original base DNN model; surprisingly, it even surpasses retraining, the gold standard in machine unlearning, in effectiveness. Crucially, we are the first to adapt the CP algorithm's decremental learning, originally designed for leave-one-out evaluation, to achieve exact unlearning in DNN models by fully removing a specific data instance's influence. We plan to open-source our code to facilitate further research in machine unlearning. (A hypothetical code sketch of this two-stage design follows the record below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-09T16:29:31Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-09T16:29:32Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements i 摘要 iii Abstract v Contents vii List of Figures xi List of Tables xiii Chapter 1 Introduction 1 Chapter 2 Problem Definition and Preliminaries 5 2.1 Problem Definition 5 2.1.1 Revisit the Dual SVM and the CP Algorithm 6 Chapter 3 Our Proposed Method: ECO 9 3.1 Model Preparation 10 3.1.1 A Strategy for CP Learning Acceleration 12 3.2 Model Serving 13 3.2.1 The Unlearning Procedure of the CP Algorithm 14 Chapter 4 Experiments 17 4.1 Dataset and Models Used 17 4.2 Evaluation Metrics 17 4.3 Implementation Details 18 4.4 Compared Methods 19 4.5 Experimental Results 19 4.5.1 Main Results 19 4.5.2 The Forgetfulness Quality: MIA 22 4.5.3 Sensitivity Analysis on kGD 23 Chapter 5 Conclusion 25 References 27 Appendix A — Impact, Limitations, and Resources 31 A.1 Broader Impact 31 A.2 Limitations 32 A.3 Computer Resources 33 Appendix B — More about the CP Algorithms 35 B.1 Preliminaries 35 B.1.1 Generalized Lagrangian 35 B.1.2 Distance to a Hyperplane 36 B.1.3 The Primal SVMs 38 B.1.4 The KKT Condition 41 B.1.5 The Dual SVMs 42 B.1.6 The Sufficiency of the Karush-Kuhn-Tucker (KKT) Conditions 45 B.1.7 Slater's Condition 45 B.1.8 SVMs and Their KKT Conditions 46 B.2 The Complete Derivation of the CP Algorithm 47 | - |
| dc.language.iso | en | - |
| dc.subject | 參數化規劃設計 | zh_TW |
| dc.subject | 對偶支持向量機 | zh_TW |
| dc.subject | KKT 條件 | zh_TW |
| dc.subject | CP 演算法 | zh_TW |
| dc.subject | 機器反學習 | zh_TW |
| dc.subject | Parametric Programming | en |
| dc.subject | Dual Support Vector Machine | en |
| dc.subject | KKT Condition | en |
| dc.subject | CP Algorithm | en |
| dc.subject | Machine Unlearning | en |
| dc.title | 深度神經網絡的機器反學習 | zh_TW |
| dc.title | Machine Unlearning for Deep Neural Network | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | Master's | - |
| dc.contributor.coadvisor | 吳沛遠 | zh_TW |
| dc.contributor.coadvisor | Pei-Yuan Wu | en |
| dc.contributor.oralexamcommittee | 林軒田;陳駿丞 | zh_TW |
| dc.contributor.oralexamcommittee | Hsuan-Tien Lin;Jun-Cheng Chen | en |
| dc.subject.keyword | 機器反學習,參數化規劃設計,CP 演算法,KKT 條件,對偶支持向量機 | zh_TW |
| dc.subject.keyword | Machine Unlearning, Parametric Programming, CP Algorithm, KKT Condition, Dual Support Vector Machine | en |
| dc.relation.page | 51 | - |
| dc.identifier.doi | 10.6342/NTU202402829 | - |
| dc.rights.note | Not authorized | - |
| dc.date.accepted | 2024-08-07 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Data Science Degree Program | - |
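
The abstract above describes a two-stage design: a frozen DNN provides the feature transformation, and the CP algorithm maintains a dual SVM head on those features, so that removing a training point only ever touches the convex head. The Python sketch below is purely illustrative, and everything in it is an assumption rather than the thesis's implementation: the random-projection `dnn_features` stand-in, the toy data, and the use of scikit-learn's `SVC`. In particular, `unlearn` simply refits the head on the retained features; this is a stand-in for the CP algorithm's analytic decremental update, which reaches the same solution without re-solving the full problem.

```python
# Hypothetical sketch of the two-stage design described in the abstract;
# not the thesis's implementation. A frozen random projection stands in
# for the trained DNN feature extractor, and refitting scikit-learn's SVC
# stands in for the CP algorithm's decremental (unlearning) update.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 64))  # frozen "DNN" weights (stand-in)

def dnn_features(x: np.ndarray) -> np.ndarray:
    # Frozen feature transformation: in ECO this would be the trained
    # network up to its penultimate layer; here, a projection + ReLU.
    return np.maximum(x @ W, 0.0)

# Toy training set standing in for a real dataset such as MNIST.
X = rng.standard_normal((200, 20))
y = (X[:, 0] > 0).astype(int)
Z = dnn_features(X)  # features are computed once and cached

head = SVC(kernel="linear", C=1.0).fit(Z, y)  # convex dual-SVM head

def unlearn(forget_idx):
    # Exact unlearning confined to the head: drop the requested rows and
    # refit the convex model; the feature extractor is never retrained.
    # The CP algorithm reaches the same solution with an incremental/
    # decremental update instead of re-solving the full QP from scratch.
    keep = np.setdiff1d(np.arange(len(Z)), forget_idx)
    return SVC(kernel="linear", C=1.0).fit(Z[keep], y[keep])

head = unlearn([3, 17, 42])  # remove three training points' influence
```

Because the head's training problem is convex, refitting without a point yields the model that would have been trained had that point never been seen, which is the sense in which unlearning on the head is exact; the efficiency claim in the abstract rests on the CP update reaching that same solution far more cheaply than a refit.
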
Appears in Collections: Data Science Degree Program
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-112-2.pdf (not authorized for public access) | 5.55 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.