Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91775
Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 逄愛君 [zh_TW]
dc.contributor.advisor: Ai-Chun Pang [en]
dc.contributor.author: 鄭力誠 [zh_TW]
dc.contributor.author: Li-Chen Cheng [en]
dc.date.accessioned: 2024-02-22T16:40:21Z
dc.date.available: 2024-02-23
dc.date.copyright: 2024-02-22
dc.date.issued: 2024
dc.date.submitted: 2024-02-10
dc.identifier.citation:
[1] C. M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.
[2] Y.-L. Boureau, J. Ponce, and Y. LeCun. A theoretical analysis of feature pooling in visual recognition. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML’10, page 111–118, 2010.
[3] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
[4] A. Dosovitskiy and T. Brox. Inverting visual representations with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4829–4837, 2016.
[5] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3, pages 265–284. Springer, 2006.
[6] H. Edwards and A. Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
[7] E. Erdoğan, A. Küpçü, and A. E. Çiçek. Unsplit: Data-oblivious model inversion, model stealing, and label inference attacks against split learning. In Proceedings of the 21st Workshop on Privacy in the Electronic Society, WPES’22, page 115–124, 2022.
[8] M. Fredrikson, S. Jha, and T. Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS ’15, page 1322–1333, 2015.
[9] B. Graham. Fractional max-pooling. arXiv preprint arXiv:1412.6071, 2014.
[10] O. Gupta and R. Raskar. Distributed learning of deep neural network over multiple agents. Journal of Network and Computer Applications, 116:1–8, 2018.
[11] Z. He, T. Zhang, and R. B. Lee. Model inversion attacks against collaborative inference. In Proceedings of the 35th Annual Computer Security Applications Conference, ACSAC ’19, page 148–162, 2019.
[12] Z. He, T. Zhang, and R. B. Lee. Attacking and protecting data privacy in edge–cloud collaborative inference systems. IEEE Internet of Things Journal, 8(12):9706–9716, 2021.
[13] A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.
[14] J. Li, A. S. Rakin, X. Chen, Z. He, D. Fan, and C. Chakrabarti. Ressfl: A resistance transfer framework for defending model inversion attack in split federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10194–10202, 2022.
[15] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[16] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273–1282. PMLR, 2017.
[17] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[18] F. Pittaluga, S. Koppal, and A. Chakrabarti. Learning privacy preserving encodings through adversarial training. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 791–799, Jan 2019.
[19] N. Raval, A. Machanavajjhala, and L. P. Cox. Protecting visual secrets using adversarial nets. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1329–1332, 2017.
[20] A. Singh, P. Vepakomma, O. Gupta, and R. Raskar. Detailed comparison of communication efficiency of split learning and federated learning. arXiv preprint arXiv:1909.09145, 2019.
[21] C. Thapa, P. C. M. Arachchige, S. Camtepe, and L. Sun. Splitfed: When federated learning meets split learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 8485–8493, 2022.
[22] T. Titcombe, A. J. Hall, P. Papadopoulos, and D. Romanini. Practical defences against model inversion attacks for split neural networks. arXiv preprint arXiv:2104.05743, 2021.
[23] P. Vepakomma, O. Gupta, T. Swedish, and R. Raskar. Split learning for health: Distributed deep learning without sharing raw patient data. arXiv preprint arXiv:1812.00564, 2018.
[24] P. Vepakomma, T. Swedish, R. Raskar, O. Gupta, and A. Dubey. No peek: A survey of private distributed deep learning. arXiv preprint arXiv:1812.03288, 2018.
[25] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3462–3471, 2017.
[26] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
[27] Q. Yang, Y. Liu, T. Chen, and Y. Tong. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol., 10(2), Jan 2019.
[28] Z. Yang, J. Zhang, E.-C. Chang, and Z. Liang. Neural network inversion in adversarial setting via background knowledge alignment. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS ’19, page 225–240, 2019.
[29] M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. arXiv preprint arXiv:1301.3557, 2013.
[30] S. Zhai, H. Wu, A. Kumar, Y. Cheng, Y. Lu, Z. Zhang, and R. Feris. S3pool: Pooling with stochastic spatial sampling. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4003–4011, 2017.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/91775
dc.description.abstract [zh_TW]: 拆分式學習是一種很有前景的協作學習架構,用於解決深度學習應用中的隱私問題。它有助於在不損害個人資料隱私的情況下進行協作神經網路訓練。然而,由於潛在的資料重建攻擊,即使參與者只分享中間特徵,也會威脅到參與者的隱私,因此如何在拆分式學習中保護隱私,仍然是一個巨大的挑戰。以往針對資料重建攻擊的防禦策略通常會導致模型效用顯著下降或需要高昂的計算成本。為了解決這些問題,我們提出了一種新的防禦方法--差分隱私隨機降採樣。這種防禦策略將隨機降採樣和雜訊應用到中間特徵中,在不增加大量計算成本的情況下,有效地實現了隱私與效用的平衡。在各種資料集上進行的實證分析表明,所提出的防禦方法優於現有的最先進方法,突出了它在不犧牲效用的情況下維護隱私的功能。
dc.description.abstract [en]: Split learning has emerged as a promising collaborative learning framework addressing privacy concerns in deep learning applications. It enables collaborative neural network training without compromising individual data privacy. However, preserving privacy in split learning remains a substantial challenge: data reconstruction attacks threaten participants' privacy even when only intermediate features are shared. Previous defense strategies against data reconstruction attacks typically either degrade model utility significantly or incur high computational costs. To address these issues, we propose a novel defense method, differentially private stochastic downsampling. This defense applies stochastic downsampling and noise addition to intermediate features, effectively striking a privacy–utility balance without imposing substantial computational burdens. Empirical evaluations on diverse datasets demonstrate that the proposed defense outperforms existing state-of-the-art methods, highlighting its efficacy in maintaining privacy without sacrificing utility.
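The abstract describes a two-step client-side defense: stochastically downsample the intermediate feature map, then add calibrated noise, before the features leave the client. A minimal illustrative sketch of those two steps is shown below; this is not the thesis's actual implementation, and the function names, the 2×2 window size, and the Laplace mechanism parameters are all assumptions for illustration.

```python
import numpy as np

def stochastic_downsample(features, ratio=2, rng=None):
    """Keep one randomly chosen element from each ratio x ratio window.

    `features` has shape (C, H, W); H and W are assumed divisible by
    `ratio`. Because the selected position varies per window and per
    call, an attacker cannot rely on a fixed spatial correspondence
    between the shared features and the input image.
    """
    rng = rng or np.random.default_rng()
    C, H, W = features.shape
    out = np.empty((C, H // ratio, W // ratio))
    for i in range(H // ratio):
        for j in range(W // ratio):
            # Random offset inside the window, shared across channels.
            di, dj = rng.integers(0, ratio, size=2)
            out[:, i, j] = features[:, i * ratio + di, j * ratio + dj]
    return out

def add_laplace_noise(features, sensitivity, epsilon, rng=None):
    """Add Laplace noise with scale sensitivity / epsilon, the
    standard mechanism for epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return features + rng.laplace(0.0, scale, size=features.shape)

# Client-side defense applied before sending intermediate features:
feat = np.random.rand(8, 32, 32)  # toy (channels, height, width) feature map
protected = add_laplace_noise(stochastic_downsample(feat), sensitivity=1.0, epsilon=4.0)
```

The downsampling ratio and the noise scale together control the privacy–utility tradeoff the abstract refers to: a larger ratio or smaller epsilon discards or perturbs more information, hindering reconstruction at some cost in task accuracy.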
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-02-22T16:40:21Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2024-02-22T16:40:21Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
口試委員會審定書 (Thesis Committee Certification) i
Acknowledgements ii
中文摘要 (Chinese Abstract) iv
Abstract v
目次 (Contents) vi
圖次 (List of Figures) viii
表次 (List of Tables) x
Denotation xii
Chapter 1 Introduction 1
Chapter 2 Related Works 6
2.1 Collaborative Learning 6
2.2 Data Reconstruction Attacks 7
2.3 Defense Strategies 8
2.4 Image Pooling 9
Chapter 3 System Model 10
3.1 Overview 10
3.2 Attack Model 13
3.2.1 Data Reconstruction Attacks 13
3.2.2 Identity of Attackers 15
3.2.3 Assumptions 15
3.2.4 Attack Procedure 16
Chapter 4 Methodology 19
4.1 Design Objectives 19
4.2 Design Concept 20
4.3 Stochastic Downsampling (SDS) 22
4.4 Noise Addition 25
4.4.1 Differential Privacy Formulation 26
4.4.2 Implementation 27
4.5 Differentially Private Stochastic Downsampling (DPSDS) 28
Chapter 5 Experiment 29
5.1 Implementation Details 29
5.2 Baselines 31
5.3 Tradeoff between Utility and Privacy 31
5.4 Visualization 34
5.5 Synergy between Stochastic Downsampling and Noise Addition 42
5.6 Impact of Downsampling Ratio 44
5.7 Computational and Communication Cost 46
Chapter 6 Conclusion 49
References 50
dc.language.iso: en
dc.title: 防禦拆分式學習中的資料重建攻擊 [zh_TW]
dc.title: Defending against Data Reconstruction Attacks in Split Learning [en]
dc.type: Thesis
dc.date.schoolyear: 112-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 王志宇;邱德泉;陳尚澤;李奇育 [zh_TW]
dc.contributor.oralexamcommittee: Chih-Yu Wang;Te-Chuan Chiu;Shang-Tse Chen;Chi-Yu Li [en]
dc.subject.keyword: 隱私,差分隱私,隨機降採樣,拆分式學習,資料重建攻擊,模型逆向攻擊,影像分類 [zh_TW]
dc.subject.keyword: Privacy, Differential Privacy, Stochastic Downsampling, Split Learning, Data Reconstruction Attack, Model Inversion Attack, Image Classification [en]
dc.relation.page: 54
dc.identifier.doi: 10.6342/NTU202400487
dc.rights.note: 未授權 (Not authorized)
dc.date.accepted: 2024-02-10
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)
Appears in Collections: 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)

Files in This Item:
File: ntu-112-1.pdf (Restricted Access), 5.95 MB, Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
