NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73433
Full metadata record
DC Field / Value / Language
dc.contributor.advisor: 吳沛遠 (Pei-Yuan Wu)
dc.contributor.author: Bo-Wei Tseng [en]
dc.contributor.author: 曾柏偉 [zh_TW]
dc.date.accessioned: 2021-06-17T07:34:36Z
dc.date.available: 2019-06-12
dc.date.copyright: 2019-06-12
dc.date.issued: 2019
dc.date.submitted: 2019-05-14
dc.identifier.citation: [1] D. Bisson, “The 10 biggest data breaches of 2018... so far,” https://blog.barkly.com/biggest-data-breaches-2018-so-far.
[2] D. Kwok, “Cathay Pacific faces probe over massive data breach,” Reuters Technology News, https://www.reuters.com/article/uscathaypacific-cyber/cathay-pacific-faces-probe-over-massive-data-breach-idUSKCN1NB0JY, November 2018.
[3] C. Arthur, “Businesses unwilling to share data, but keen on government doing it,” The Guardian, https://www.theguardian.com/technology/2010/jun/29/business-data-sharing-unwilling, June 2010.
[4] S. Chang and C. Li, “Privacy in neural network learning: Threats and countermeasures,” IEEE Network, vol. 32, no. 4, pp. 61–67, July 2018.
[5] M. Veale, R. Binns, and L. Edwards, “Algorithms that remember: Model inversion attacks and data protection law,” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 376, Nov 2018. [Online]. Available: https://doi.org/10.1098/rsta.2018.0083
[6] M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’15, 2015, pp. 1322–1333. [Online]. Available: http://doi.acm.org/10.1145/2810103.2813677
[7] J. Feng and A. K. Jain, “Fingerprint reconstruction: From minutiae to phase,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 2, pp. 209–223, Feb 2011.
[8] M. Al-Rubaie and J. M. Chang, “Reconstruction attacks against mobile-based continuous authentication systems in the cloud,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 12, pp. 2648–2663, Dec 2016.
[9] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in 2017 IEEE Symposium on Security and Privacy (SP), May 2017, pp. 3–18.
[10] C. Dwork, “Differential privacy: A survey of results,” in Theory and Applications of Models of Computation, M. Agrawal, D. Du, Z. Duan, and A. Li, Eds. Springer Berlin Heidelberg, 2008, pp. 1–19.
[11] C. Dwork, K. Talwar, A. Thakurta, and L. Zhang, “Analyze gauss: Optimal bounds for privacy-preserving principal component analysis,” in Proceedings of the Forty-sixth Annual ACM Symposium on Theory of Computing, ser. STOC ’14, 2014, pp. 11–20.
[12] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’16, 2016, pp. 308–318.
[13] M. Hardt and E. Price, “The noisy power method: A meta algorithm with applications,” in International Conference on Neural Information Processing Systems (NIPS), 2014.
[14] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate, “Differentially private empirical risk minimization,” Journal of Machine Learning Research (JMLR), vol. 12, pp. 1069–1109, Jul. 2011.
[15] K. Chaudhuri, A. D. Sarwate, and K. Sinha, “A near-optimal algorithm for differentially-private principal components,” Journal of Machine Learning Research (JMLR), vol. 14, no. 1, pp. 2905–2943, Jan 2013.
[16] J. C. Duchi, M. I. Jordan, and M. J. Wainwright, “Local privacy and statistical minimax rates,” in Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, ser. FOCS ’13. IEEE Computer Society, 2013, pp. 429–438.
[17] U. Erlingsson, V. Pihur, and A. Korolova, “Rappor: Randomized aggregatable privacy-preserving ordinal response,” in Proceedings of the 21st ACM Conference on Computer and Communications Security, 2014. [Online]. Available: https://arxiv.org/abs/1407.6981
[18] G. Cormode, S. Jha, T. Kulkarni, N. Li, D. Srivastava, and T. Wang, “Privacy at scale: Local differential privacy in practice,” in Proceedings of the 2018 International Conference on Management of Data, ser. SIGMOD ’18. ACM, 2018, pp. 1655–1658.
[19] C. Huang, P. Kairouz, X. Chen, L. Sankar, and R. Rajagopal, “Generative adversarial privacy,” arxiv preprint arXiv:1807.05306, 2018. [Online]. Available: http://arxiv.org/abs/1807.05306
[20] C. Gentry, “A fully homomorphic encryption scheme,” Ph.D. dissertation, Stanford University, 2009, crypto.stanford.edu/craig.
[21] L. T. Phong, Y. Aono, T. Hayashi, L. Wang, and S. Moriai, “Privacy-preserving deep learning via additively homomorphic encryption,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 5, pp. 1333–1345, May 2018.
[22] S. Y. Kung, “Compressive privacy: From information/estimation theory to machine learning [lecture notes],” IEEE Signal Processing Magazine, vol. 34, no. 1, pp. 94–112, Jan 2017.
[23] S. Y. Kung, T. Chanyaswad, J. M. Chang, and P. Y. Wu, “Collaborative pca/dca learning methods for compressive privacy,” ACM Transactions on Embedded Computing Systems (TECS), vol. 16, p. 76, Jul. 2017.
[24] S. Y. Kung, “A compressive privacy approach to generalized information bottleneck and privacy funnel problems,” Journal of the Franklin Institute, vol. 355, no. 4, pp. 1846 – 1872, 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0016003217303162
[25] T. Chanyaswad, J. M. Chang, and S. Y. Kung, “A compressive multi-kernel method for privacy-preserving machine learning,” in 2017 International Joint Conference on Neural Networks (IJCNN), May 2017, pp. 4079–4086.
[26] M. Al, T. Chanyaswad, and S. Y. Kung, “Multi-kernel, deep neural network and hybrid models for privacy preserving machine learning,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2018, pp. 2891–2895.
[27] A. Alemi, I. Fischer, J. Dillon, and K. Murphy, “Deep variational information bottleneck,” in International Conference on Learning Representations (ICLR), 2017. [Online]. Available: https://arxiv.org/abs/1612.00410
[28] D. Rebollo-Monedero, J. Forné, and J. Domingo-Ferrer, “From t-closeness-like privacy to postrandomization via information theory,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 11, pp. 1623–1636, Nov 2010.
[29] F. du Pin Calmon and N. Fawaz, “Privacy against statistical inference,” in 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Oct 2012, pp. 1401–1408.
[30] Y. O. Basciftci, Y. Wang, and P. Ishwar, “On privacy-utility tradeoffs for constrained data release mechanisms,” in 2016 Information Theory and Applications Workshop (ITA), Jan 2016, pp. 1–6.
[31] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in International Conference on Neural Information Processing Systems (NIPS), 2014.
[32] H. Edwards and A. Storkey, “Censoring representations with an adversary,” in International Conference on Learning Representations (ICLR), 2016.
[33] J. Hamm, “Enhancing utility and privacy with noisy minimax filters,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2017, pp. 6389–6393.
[34] A. Tripathy, Y. Wang, and P. Ishwar, “Privacy-preserving adversarial networks,” arXiv preprint arXiv:1712.07008, 2017. [Online]. Available: http://arxiv.org/abs/1712.07008
[35] X. Chen, P. Kairouz, and R. Rajagopal, “Understanding compressive adversarial privacy,” arXiv preprint arXiv:1809.08911, 2018. [Online]. Available: http://arxiv.org/abs/1809.08911
[36] S. Liu, A. Shrivastava, J. Du, and L. Zhong, “Better accuracy with quantified privacy: representations learned via reconstructive adversarial network,” arXiv preprint arXiv:1901.08730, 2019. [Online]. Available: http://arxiv.org/abs/1901.08730
[37] J. Schmidhuber, “Learning factorial codes by predictability minimization,” Neural Computation, vol. 4, no. 6, pp. 863–879, 1992. [Online]. Available: https://doi.org/10.1162/neco.1992.4.6.863
[38] A. Odena, C. Olah, and J. Shlens, “Conditional image synthesis with auxiliary classifier gans,” in International Conference on Machine Learning (ICML), 2017.
[39] M.-Y. Liu, T. Breuel, and J. Kautz, “Unsupervised image-to-image translation networks,” in International Conference on Neural Information Processing Systems (NIPS), 2017.
[40] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “Infogan: Interpretable representation learning by information maximizing generative adversarial nets,” in International Conference on Neural Information Processing Systems (NIPS), 2016.
[41] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in International Conference on Machine Learning (ICML), 2017.
[42] M. Arjovsky and L. Bottou, “Towards principled methods for training generative adversarial networks,” in International Conference on Learning Representations (ICLR), 2017.
[43] I. Durugkar, I. Gemp, and S. Mahadevan, “Generative multi-adversarial networks,” in International Conference on Learning Representations (ICLR), 2017.
[44] C. Huang, P. Kairouz, X. Chen, L. Sankar, and R. Rajagopal, “Context-aware generative adversarial privacy,” Entropy, vol. 19, no. 12, 2017.
[45] T. T. Nguyen and S. Sanner, “Algorithms for direct 0-1 loss optimization in binary classification,” in International Conference on International Conference on Machine Learning (ICML), 2013.
[46] S. Y. Kung, Kernel Methods and Machine Learning. Cambridge University Press, 2014.
[47] J. Mercer, “Functions of positive and negative type, and their connection with the theory of integral equations,” Philosophical Transactions of the Royal Society, London, vol. 209, pp. 415–446, 1909.
[48] B. Schölkopf, R. Herbrich, and A. J. Smola, “A generalized representer theorem,” in Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory, ser. COLT ’01/EuroCOLT ’01. London, UK: Springer-Verlag, 2001, pp. 416–426.
[49] C. K. I. Williams and M. Seeger, “Using the Nyström method to speed up kernel machines,” in International Conference on Neural Information Processing Systems (NIPS), 2001.
[50] A. Rahimi and B. Recht, “Random features for large-scale kernel machines,” in International Conference on Neural Information Processing Systems (NIPS), 2007.
[51] Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist/
[52] D. Dua and C. Graff, “UCI machine learning repository,” 2017. [Online]. Available: http://archive.ics.uci.edu/ml
[53] “The MPLab GENKI-4K database,” http://mplab.ucsd.edu/.
[54] H. Abdi and L. J. Williams, “Principal component analysis,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 2, no. 4, pp. 433–459, Jul. 2010. [Online]. Available: https://doi.org/10.1002/wics.101
[55] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. [Online]. Available: http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf
[56] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Master’s thesis, Department of Computer Science, University of Toronto, 2009.
[57] X. Gastaldi, “Shake-shake regularization,” arXiv preprint arXiv:1705.07485, 2017. [Online]. Available: http://arxiv.org/abs/1705.07485
[58] Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in 2015 IEEE International Conference on Computer Vision (ICCV), Dec 2015, pp. 3730–3738.
[59] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 815–823.
[60] S. Zagoruyko and N. Komodakis, “Wide residual networks,” in Proceedings of the British Machine Vision Conference (BMVC), 2016.
[61] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org. [Online]. Available: http://tensorflow.org/
[62] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European Conference on Computer Vision (ECCV), 2016.
[63] D. Gao, P. Yuan, N. Sun, X. Wu, and Y. Cai, “Face attribute prediction with convolutional neural networks,” in 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dec 2017, pp. 1294–1299.
[64] H. Han, A. K. Jain, F. Wang, S. Shan, and X. Chen, “Heterogeneous face attribute estimation: A deep multi-task learning approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 11, pp. 2597–2609, Nov 2018.
[65] D. Yi, Z. Lei, S. Liao, and S. Z. Li, “Learning face representation from scratch,” arXiv preprint arXiv:1411.7923, 2014. [Online]. Available: http://arxiv.org/abs/1411.7923
[66] Y. Zhong, J. Sullivan, and H. Li, “Face attribute prediction using off-the-shelf cnn features,” in 2016 International Conference on Biometrics (ICB), June 2016, pp. 1–7.
[67] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 770–778.
[68] N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural Networks, vol. 12, no. 1, pp. 145–151, Jan. 1999. [Online]. Available: http://dx.doi.org/10.1016/S0893-6080(98)00116-6
[69] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), 2014.
[70] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning (ICML), 2015.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73433
dc.description.abstract: Machine learning as a service (MLaaS) has brought much convenience to our daily lives in recent years, but deploying such services on the cloud has in fact created serious privacy leakage problems. This thesis proposes the Compressive Privacy Generative Adversarial Network (CPGAN), a data-driven model built on the now-popular idea of adversarial learning. Our goal is to pass data through a purpose-designed nonlinear compressive neural network (the privatizer) before uploading it to the cloud, so that the compressed signal preserves the utility of the original sensitive data while removing privacy-infringing information. This framework provides two levels of privacy protection: the raw data never leave the local device, and the compressed signal can defend against reconstruction attacks. The quality of the compression network is measured by CPGAN's classifier, which evaluates data utility, and by a separately trained reconstruction network (the adversary reconstructor), which evaluates the degree of privacy protection. Experiments on several kinds of datasets, compared against methods from the prior literature, confirm that CPGAN achieves a better trade-off between data utility and privacy preservation. [zh_TW]
dc.description.abstract: Machine learning as a service (MLaaS) has brought much convenience to our daily lives recently. However, the fact that the service is provided through the cloud raises privacy leakage issues. In this work we propose the compressive privacy generative adversarial network (CPGAN), a data-driven adversarial learning framework for generating compressing representations that retain utility comparable to the state of the art, with the additional feature of defending against reconstruction attacks. This is achieved by applying an adversarial learning scheme to the design of the compression network (privatizer), whose utility and privacy performances are evaluated by the utility classifier and the adversary reconstructor, respectively. Experimental results demonstrate that CPGAN achieves a better utility/privacy trade-off in comparison with previous work, and is applicable to real-world large datasets. [en]
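As a concrete illustration of the three-network interplay described in the abstracts (privatizer, utility classifier, adversary reconstructor), the sketch below shows one possible CPGAN-style training step in TensorFlow. It assumes a standard minimax formulation, roughly minimizing over (privatizer, classifier) the quantity [utility loss - LAMBDA * (reconstruction error achieved by the best reconstructor)]; the layer sizes, the trade-off weight LAMBDA, and the network definitions are illustrative assumptions, not the architecture actually used in the thesis.

    # Minimal CPGAN-style training sketch (illustrative assumptions, not the thesis's exact model).
    # privatizer compresses x -> z; classifier predicts the label from z;
    # reconstructor is the adversary that tries to recover x from z.
    import tensorflow as tf

    DIM_X, DIM_Z, N_CLASSES, LAMBDA = 784, 32, 10, 1.0  # assumed sizes and trade-off weight

    privatizer = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(DIM_X,)),
        tf.keras.layers.Dense(DIM_Z)])
    classifier = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(DIM_Z,)),
        tf.keras.layers.Dense(N_CLASSES)])
    reconstructor = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(DIM_Z,)),
        tf.keras.layers.Dense(DIM_X)])

    ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    mse = tf.keras.losses.MeanSquaredError()
    opt_util = tf.keras.optimizers.Adam(1e-3)  # updates privatizer + classifier
    opt_adv = tf.keras.optimizers.Adam(1e-3)   # updates reconstructor only

    @tf.function
    def train_step(x, y):
        # 1) Adversary step: the reconstructor learns to invert the compressed signal.
        with tf.GradientTape() as tape:
            z = privatizer(x, training=True)
            rec_loss = mse(x, reconstructor(z, training=True))
        grads = tape.gradient(rec_loss, reconstructor.trainable_variables)
        opt_adv.apply_gradients(zip(grads, reconstructor.trainable_variables))

        # 2) Utility/privacy step: the privatizer keeps z useful for classification
        #    while pushing the reconstruction error up (the -LAMBDA * rec_loss term).
        with tf.GradientTape() as tape:
            z = privatizer(x, training=True)
            util_loss = ce(y, classifier(z, training=True))
            rec_loss = mse(x, reconstructor(z, training=False))
            total = util_loss - LAMBDA * rec_loss
        train_vars = privatizer.trainable_variables + classifier.trainable_variables
        grads = tape.gradient(total, train_vars)
        opt_util.apply_gradients(zip(grads, train_vars))
        return util_loss, rec_loss

Alternating the two steps keeps the reconstructor a strong adversary while the privatizer learns a compressed representation that preserves classification accuracy and degrades reconstruction, which is the utility/privacy trade-off the experiments measure.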
dc.description.provenance: Made available in DSpace on 2021-06-17T07:34:36Z (GMT). No. of bitstreams: 1
ntu-108-R06942098-1.pdf: 7899800 bytes, checksum: 5443f0436844f0f694eee7286e92f8c0 (MD5)
Previous issue date: 2019 [en]
dc.description.tableofcontents:
誌謝 (Acknowledgements, in Chinese)
Acknowledgements
摘要 (Abstract, in Chinese)
Abstract
1 Introduction
1.1 Threats behind MLaaS
1.2 Differential privacy (DP)
1.3 Local Differential Privacy
1.4 Homomorphic Encryption
1.5 Compressive privacy
1.6 Information theoretic privacy
1.7 Generative Adversarial Privacy
1.8 Our contributions
2 Methodology
2.1 Mathematical formulation
2.1.1 Utility perspective
2.1.2 Privacy perspective
2.2 Multiple Adversarial Reconstruction Attack Strategies
2.2.1 Linear Ridge Regression (LRR)
2.2.2 Kernel Ridge Regression (KRR)
2.3 Algorithm
3 Theoretical Analysis for Binary Gaussian Mixture Model
3.1 Gaussian Mixture Model Settings
3.2 Privacy Perspective
3.3 Utility Perspective
3.4 Comparison between empirical and theoretical results
4 Empirical Results
4.1 Privacy and Utility trade-off on small data set
4.2 CPGAN on Real Datasets
4.2.1 Experiment on SVHN and CIFAR-10 datasets
4.2.2 Experiment on CelebA datasets
4.2.3 Dimension of the funnel layer
4.3 Summarizing Remarks
5 Conclusion
6 Future works
Bibliography
dc.language.iso: en
dc.title: 壓縮隱私生成式對抗網路 [zh_TW]
dc.title: Compressive Privacy Generative Adversarial Networks [en]
dc.type: Thesis
dc.date.schoolyear: 107-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 林宗男 (Tsung-Nan Lin), 林昌鴻 (Chang-Hong Lin)
dc.subject.keyword: 隱私維護機器學習 (privacy-preserving machine learning), 生成式對抗網路 (generative adversarial networks), 壓縮隱私 (compressive privacy), 網路資訊安全 (cyber security), 對抗式學習 (adversarial learning), 機器學習服務 (machine learning as a service) [zh_TW]
dc.subject.keyword: Compressive Privacy, Cyber Security, Privacy Preserving Machine Learning, Adversarial Learning, Generative Adversarial Networks, Machine Learning as a Service [en]
dc.relation.page: 50
dc.identifier.doi: 10.6342/NTU201900761
dc.rights.note: 有償授權 (authorization granted with compensation)
dc.date.accepted: 2019-05-14
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering) [zh_TW]
Appears in Collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in this item:
File: ntu-108-1.pdf (currently not authorized for public access)
Size: 7.71 MB
Format: Adobe PDF

