Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/1144
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 林守德(Shou-De Lin) | |
dc.contributor.author | Tsung-Hsing Lin | en |
dc.contributor.author | 林宗興 | zh_TW |
dc.date.accessioned | 2021-05-12T09:33:16Z | - |
dc.date.available | 2020-08-08 | |
dc.date.available | 2021-05-12T09:33:16Z | - |
dc.date.copyright | 2018-08-08 | |
dc.date.issued | 2018 | |
dc.date.submitted | 2018-08-01 | |
dc.identifier.citation | [1] Fake news challenge. http://www.fakenewschallenge.org, 2017.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] A. M. Dai and Q. V. Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pages 3079–3087, 2015.
[4] J. Ebrahimi, A. Rao, D. Lowd, and D. Dou. HotFlip: White-box adversarial examples for NLP. arXiv preprint arXiv:1712.06751, 2017.
[5] H. Guo. Generating text with deep reinforcement learning. arXiv preprint arXiv:1510.09202, 2015.
[6] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[7] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[8] M. Iyyer, V. Manjunatha, J. Boyd-Graber, and H. Daumé III. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681–1691, 2015.
[9] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
[10] T. Miyato, A. M. Dai, and I. Goodfellow. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016.
[11] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[12] K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.
[13] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[14] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[15] R. S. Sutton, A. G. Barto, et al. Reinforcement learning: An introduction. MIT Press, 1998.
[16] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[17] X. Zhang, J. Zhao, and Y. LeCun. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657, 2015. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/handle/123456789/1144 | - |
dc.description.abstract | 文本分類問題是自然語言處理中的一類問題,目標是學習一個模型可以去理解句子的語意,進而去分類出不同類別。雖然現今深度類神經網路越來越熱門並且被應用到各式各樣的領域包括自然語言處理,也有許多研究在討論深度模型脆弱的地方。藉由一種人工產生的資料 ── 對抗樣本 (adversarial example),某方面說明了一個機器學習模型的弱點。在這篇論文中我們試圖去改進文本分類問題中的對抗樣本尋找方法。並更進一步的利用找到的對抗樣本於對抗訓練 (adversarial training) 中,發現這樣可以增進模型的泛化能力在沒看過的資料上。我們期望這發現可以幫助我們訓練出更強健的模型,特別在資料不多的情況下。 | zh_TW |
dc.description.abstract | Text classification is a task in natural language processing that aims to learn a model that understands the meaning of given sentences and assigns them to categories. While deep neural networks are becoming increasingly popular and are widely applied in many domains, including natural language processing, a number of works have discussed the vulnerability of deep models. Adversarial examples, a kind of synthetic data, expose the weaknesses of a machine learning model. In this work we aim to improve the efficiency of finding adversarial examples in text classification tasks. Moreover, we use the adversarial examples found for adversarial training, and find that this may improve generalization on unseen data. We hope this finding can help train sufficiently robust models in the future, especially when the dataset is not large. | en |
dc.description.provenance | Made available in DSpace on 2021-05-12T09:33:16Z (GMT). No. of bitstreams: 1 ntu-107-R05922007-1.pdf: 3836122 bytes, checksum: 6bc23ab93b22af6520802672b4b3091c (MD5) Previous issue date: 2018 | en |
dc.description.tableofcontents | Thesis Committee Certification i
Acknowledgements ii
Chinese Abstract iii
Abstract iv
1 Introduction 1
2 Related 3
3 Problem Definition 5
4 Model 8
5 Experiments 17
6 Conclusion 23
Bibliography 24 | |
dc.language.iso | en | |
dc.title | 應用強化學習於文本分類問題中對抗樣本的尋找方法 | zh_TW |
dc.title | Finding Adversarial Examples for Text Classification: A Reinforcement Learning Approach | en |
dc.type | Thesis | |
dc.date.schoolyear | 106-2 | |
dc.description.degree | Master's | |
dc.contributor.oralexamcommittee | 陳信希 (Hsin-Hsi Chen), 陳縕儂 (Yun-Nung Chen), 鄭卜壬 (Pu-Jen Cheng), 林軒田 (Hsuan-Tien Lin) | |
dc.subject.keyword | 文本分類, 對抗樣本, 對抗訓練, 強化學習 | zh_TW |
dc.subject.keyword | text classification, adversarial example, adversarial training, reinforcement learning | en |
dc.relation.page | 25 | |
dc.identifier.doi | 10.6342/NTU201802182 | |
dc.rights.note | Authorized for release (open access worldwide) | |
dc.date.accepted | 2018-08-01 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
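The abstract above describes searching for adversarial examples in text classification and reusing them for adversarial training. The thesis's reinforcement-learning method is not detailed in this record; as a purely hypothetical baseline illustration (in the spirit of word-substitution attacks such as HotFlip [4]), a greedy search might look like the following sketch, where the toy bag-of-words classifier, the `SYNONYMS` candidate map, and all function names are invented for demonstration:

```python
# Toy bag-of-words sentiment "classifier": positive score minus negative
# score; a score > 0 means the model predicts the class "positive".
POSITIVE = {"good": 2.0, "great": 3.0, "fine": 1.0}
NEGATIVE = {"bad": 2.0, "awful": 3.0, "poor": 1.0}

def score(tokens):
    """Signed class score of a token list under the toy classifier."""
    pos = sum(POSITIVE.get(t, 0.0) for t in tokens)
    neg = sum(NEGATIVE.get(t, 0.0) for t in tokens)
    return pos - neg

# Candidate substitutions per word (a stand-in for nearest neighbours
# in an embedding space, which a real attack would use).
SYNONYMS = {"great": ["fine", "poor"], "good": ["fine", "bad"]}

def greedy_attack(tokens, max_edits=2):
    """Greedily substitute words to flip the classifier's prediction.

    Returns the perturbed token list on success, or None if no
    prediction-flipping sequence of at most max_edits substitutions
    was found.
    """
    tokens = list(tokens)
    want_positive = score(tokens) > 0  # the original prediction
    for _ in range(max_edits):
        best = None  # (resulting score, position, replacement word)
        for i, tok in enumerate(tokens):
            for sub in SYNONYMS.get(tok, []):
                s = score(tokens[:i] + [sub] + tokens[i + 1:])
                # Keep the single edit that pushes the score furthest
                # toward the opposite class.
                if best is None or (s < best[0] if want_positive else s > best[0]):
                    best = (s, i, sub)
        if best is None:  # no substitutable word left
            return None
        s, i, sub = best
        tokens[i] = sub
        if (s > 0) != want_positive:  # prediction flipped: done
            return tokens
    return None
```

For example, `greedy_attack("the movie was great".split())` replaces "great" with "poor" and flips the toy prediction in one edit. The thesis instead frames this search as a reinforcement-learning problem, which can be more efficient than exhaustive greedy scoring when the candidate space is large.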
Appears in Collections: | 資訊工程學系 (Computer Science and Information Engineering) |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-107-1.pdf | 3.75 MB | Adobe PDF | View/Open |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.