NTU Theses and Dissertations Repository › 電機資訊學院 (College of Electrical Engineering and Computer Science) › 電信工程學研究所 (Graduate Institute of Communication Engineering)
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/758
Full metadata record
dc.contributor.advisor: 貝蘇章 (Soo-Chang Pei)
dc.contributor.author: Yi-Lin Sung [en]
dc.contributor.author: 宋易霖 [zh_TW]
dc.date.accessioned: 2021-05-11T05:00:41Z
dc.date.available: 2019-07-31
dc.date.available: 2021-05-11T05:00:41Z
dc.date.copyright: 2019-07-31
dc.date.issued: 2019
dc.date.submitted: 2019-07-17
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/handle/123456789/758
dc.description.abstract: Unseen (novel) data are data that do not fall within the distribution of the training data, and they are important in applications such as semi-supervised learning, enhancing network robustness, and anomaly detection. Unseen data are usually difficult to obtain, but a model could be greatly strengthened if an algorithm could generate such data for use during training, so how to generate them is a common research topic. The unseen data required by different applications often differ, and different methods currently target each application. In this thesis we propose an algorithm, the Difference-Seeking Generative Adversarial Network (DSGAN), that can generate various kinds of unseen data. We observe that the distribution of unseen data is often the set difference of two known distributions, whose samples are relatively easy to collect and can even be derived from the training data. We apply DSGAN to semi-supervised learning, enhancing the robustness of deep networks, and anomaly detection, and experimental results demonstrate that our method is effective. In addition, we provide theoretical proofs that guarantee the convergence of the algorithm. [zh_TW]
dc.description.abstract: Unseen data, which are not samples from the distribution of the training data and are difficult to collect, have exhibited their importance in many applications (e.g., novelty detection, semi-supervised learning, adversarial training, and so on). In this paper, we introduce a general framework, called Difference-Seeking Generative Adversarial Network (DSGAN), to create various kinds of unseen data. The novelty is to consider the probability density of the unseen data distribution to be the difference between those of two distributions, p_bar_d and p_d, whose samples are relatively easy to collect. DSGAN can learn the target distribution p_t (i.e., the unseen data distribution) via only the samples from the two distributions p_d and p_bar_d. Under our scenario, p_d is the distribution of seen data and p_bar_d can be obtained from p_d via simple operations, implying that we only need the samples of p_d during training. Three key applications, semi-supervised learning, increasing the robustness of neural networks, and novelty detection, are taken as case studies to illustrate that DSGAN is able to produce various unseen data. We also provide theoretical analyses of the convergence of DSGAN. [en]
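The abstract's central construction (deriving p_bar_d from p_d by a simple operation, then targeting the density difference between the two) can be illustrated numerically. The following is a minimal 1-D numpy sketch under assumed toy choices (a Gaussian p_d, noise-blurring as the "simple operation", histogram density estimates); it is not the thesis's actual DSGAN model, and all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Seen data: samples from p_d, here a narrow Gaussian (assumed for illustration).
seen = rng.normal(loc=0.0, scale=1.0, size=20000)

# p_bar_d obtained from p_d via a "simple operation": blur the seen
# samples with extra noise so their support widens beyond p_d's.
blurred = seen + rng.normal(loc=0.0, scale=2.0, size=seen.shape)

# Histogram density estimates of both distributions on a common grid.
bins = np.linspace(-8.0, 8.0, 81)
p_d, _ = np.histogram(seen, bins=bins, density=True)
p_bar_d, _ = np.histogram(blurred, bins=bins, density=True)

# Target density of "unseen" data: the clipped difference p_bar_d - p_d,
# renormalized to integrate to one. It is high exactly where p_bar_d has
# mass but the seen distribution p_d does not.
width = bins[1] - bins[0]
diff = np.clip(p_bar_d - p_d, 0.0, None)
diff /= diff.sum() * width

centers = 0.5 * (bins[:-1] + bins[1:])
```

Under these assumptions the difference density vanishes near the seen mode and concentrates just outside the seen support, which is the region from which unseen samples for, e.g., novelty detection or robustness training would be drawn; in DSGAN itself, a generator would be trained adversarially to sample from this difference region.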
dc.description.provenance: Made available in DSpace on 2021-05-11T05:00:41Z (GMT). No. of bitstreams: 1; ntu-108-R06942076-1.pdf: 4003890 bytes, checksum: 42413a7dd090a87982527536913a0883 (MD5); Previous issue date: 2019 [en]
dc.description.tableofcontents:
誌謝 iii
Acknowledgements v
摘要 vii
Abstract ix
1 Introduction 1
2 Backgrounds 5
2.1 Deep Generative Model 5
2.2 Generative Adversarial Network 5
2.3 Wasserstein GAN 6
2.4 Semi-Supervised Learning with GANs 8
2.5 Robust Issue of Neural Networks 8
2.6 Novelty Detection by Reconstruction Method 8
2.7 Related Works 8
3 Proposed Method-DSGAN 11
3.1 Formulation 11
3.2 Case Study on Synthetic Data and MNIST 13
3.2.1 Case Study on Various Unseen Data Generation 13
3.3 Discussions about the Objective Function of DSGAN 15
3.4 Tricks for Stable Training 16
3.5 Appendix: More Results for Case Study 17
4 Theoretical Results 21
5 Applications 27
5.1 Semi-Supervised Learning 27
5.2 Robustness Enhancement of Deep Networks 28
5.3 Novelty Detection 30
6 Experiments 33
6.1 DSGAN in Semi-Supervised Learning 33
6.1.1 Datasets: MNIST, SVHN, and CIFAR-10 34
6.1.2 Main Results 35
6.1.3 Appendix: Experimental Details 36
6.2 DSGAN in Robustness Enhancement of Deep Networks 37
6.2.1 Experiments Settings 40
6.2.2 Main Results 40
6.2.3 Appendix: Experimental Details 42
6.3 DSGAN in Novelty Detection 43
6.3.1 Main Results 44
6.3.2 Experimental Details 46
7 Conclusions 47
Bibliography 49
dc.language.iso: en
dc.title: 差集生成網路--新穎資料生成 [zh_TW]
dc.title: Difference-Seeking Generative Adversarial Network--Unseen Data Generation [en]
dc.date.schoolyear: 107-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 丁建均 (Jian-Jiun Ding), 曾建誠 (Chien-Cheng Tseng), 黃文良 (Wen-Liang Hwang), 鍾國亮 (Kuo-Liang Chung)
dc.subject.keyword: 差集學習, 生成對抗網路, 半監督式學習, 強健的深度網路, 異常偵測 [zh_TW]
dc.subject.keyword: Difference-Seeking, Generative Adversarial Network, Semi-Supervised Learning, Robustness of Neural Network, Novelty Detection [en]
dc.relation.page: 53
dc.identifier.doi: 10.6342/NTU201901502
dc.rights.note: Consent granted (worldwide open access)
dc.date.accepted: 2019-07-18
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電信工程學研究所 (Graduate Institute of Communication Engineering) [zh_TW]
Appears in collections: 電信工程學研究所 (Graduate Institute of Communication Engineering)

Files in this item:
File: ntu-108-1.pdf (3.91 MB, Adobe PDF)


All items in this system are protected by copyright, with all rights reserved, unless otherwise indicated in their individual license terms.
