Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/643
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: 林守德 (Shoe-De Lin)
dc.contributor.author: Yen-Ting Lee (en)
dc.contributor.author: 李彥霆 (zh_TW)
dc.date.accessioned: 2021-05-11T04:51:52Z
dc.date.available: 2020-08-20
dc.date.available: 2021-05-11T04:51:52Z
dc.date.copyright: 2019-08-20
dc.date.issued: 2019
dc.date.submitted: 2019-08-15
dc.identifier.citation:
[1] Yelp open dataset. https://www.yelp.com/dataset/.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
[3] S. Bird, E. Klein, and E. Loper. Natural Language Processing with Python. O’Reilly Media, Inc., 1st edition, 2009.
[4] S. R. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany, Aug. 2016. Association for Computational Linguistics.
[5] C. K. Chen, Z. F. Pan, M. Sun, and M. Liu. Unsupervised stylish image description generation via domain layer norm. CoRR, abs/1809.06214, 2018.
[6] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 2672–2680, Cambridge, MA, USA, 2014. MIT Press.
[7] K. Guu, T. B. Hashimoto, Y. Oren, and P. Liang. Generating sentences by editing prototypes. CoRR, abs/1709.08878, 2017.
[8] M. Honnibal and I. Montani. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear, 2017.
[9] A. Karpathy, J. Johnson, and F. Li. Visualizing and understanding recurrent networks. CoRR, abs/1506.02078, 2015.
[10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.
[11] J. Li, X. Chen, E. Hovy, and D. Jurafsky. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 681–691, San Diego, California, June 2016. Association for Computational Linguistics.
[12] T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal, Sept. 2015. Association for Computational Linguistics.
[13] D. Rezende and S. Mohamed. Variational inference with normalizing flows. In F. Bach and D. Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1530–1538, Lille, France, 07–09 Jul 2015. PMLR.
[14] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3483–3491. Curran Associates, Inc., 2015.
[15] S. Subramanian, S. R. Mudumba, A. Sordoni, A. Trischler, A. C. Courville, and C. Pal. Towards text generation with adversarially learned neural outlines. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7551–7563. Curran Associates, Inc., 2018.
[16] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc., 2014.
[17] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017.
[18] L. Yu, W. Zhang, J. Wang, and Y. Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17, pages 2852–2858. AAAI Press, 2017.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/handle/123456789/643
dc.description.abstract (zh_TW): Natural language generation has flourished recently, with strong work published based on both generative adversarial networks (GAN) and variational autoencoders (VAE). Within this field, conditional rephrasing has received comparatively little attention. In this thesis we give the problem a formal definition: rewrite a sentence according to a given condition, such that the generated sentence is similar to the original and satisfies the condition. We propose a model based on a sequential variational autoencoder to solve this problem. The model is trained like an autoencoder, with the same sentence as input and target, but we add a condition-reminding mechanism so that the model attends to the given condition during generation, achieving controllability. Our experimental results show that the model generates good-quality sentences that satisfy the conditional rewriting.
dc.description.abstract (en): Natural language generation has become a popular field, with many quality works published based on generative adversarial networks (GAN) or variational autoencoders (VAE). However, rephrasing under a given condition is a problem that few have focused on. In this work, the problem is formally defined as follows: rephrase a sentence under a given condition such that the generated sentence is similar to the original sentence and satisfies the condition. We propose a conditional model based on a sentence-VAE to solve this problem. The model is trained as an autoencoder, yet it lets us control the condition of the generated sentence; and because it inherits the nature of an autoencoder, the generated sentences remain similar to the input sentence. Our experimental results show that the model solves the problem with quality sentences.
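The abstract's core idea can be sketched at a toy level: an encoder maps the input to the parameters of a latent distribution, a sample z is drawn via the reparameterization trick, and the decoder is fed z concatenated with a condition vector c, so generation can be steered by the condition. Everything below (the dimensions, the linear encoder/decoder, the condition encoding) is a hypothetical illustration of the general conditional-VAE pattern, not the thesis's actual architecture.

```python
import math
import random

random.seed(0)

# Illustrative dimensions (not from the thesis):
D_IN, D_Z, D_C = 8, 4, 2  # input, latent, and condition sizes

def rand_matrix(rows, cols):
    """Small random weight matrix, standing in for learned parameters."""
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    """Plain matrix-vector product over Python lists."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

W_enc_mu = rand_matrix(D_Z, D_IN)   # encoder head for the mean of q(z|x)
W_enc_lv = rand_matrix(D_Z, D_IN)   # encoder head for the log-variance
W_dec = rand_matrix(D_IN, D_Z + D_C)  # decoder sees [z; c]

def encode(x):
    """Map an input vector to (mu, logvar) of the latent distribution."""
    return matvec(W_enc_mu, x), matvec(W_enc_lv, x)

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    return [m + math.exp(0.5 * lv) * random.gauss(0, 1)
            for m, lv in zip(mu, logvar)]

def decode(z, c):
    """Decode from the latent z concatenated with the condition vector c."""
    return matvec(W_dec, z + c)  # list concatenation plays the role of [z; c]

x = [random.gauss(0, 1) for _ in range(D_IN)]  # stand-in for an input sentence
c = [1.0, 0.0]                                 # stand-in for the given condition
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z, c)
print(len(x_hat))  # 8
```

At training time the input and target are the same sentence (autoencoding); at generation time only c is changed, which is what makes the output controllable while staying close to the input.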
dc.description.provenance (en): Made available in DSpace on 2021-05-11T04:51:52Z (GMT). No. of bitstreams: 1
ntu-108-R06922008-1.pdf: 1595243 bytes, checksum: 0fd00218f3fc984d2e8af5ba0d011a1c (MD5)
Previous issue date: 2019
dc.description.tableofcontents:
Acknowledgements (in Chinese) i
Acknowledgements ii
Abstract (in Chinese) iii
Abstract iv
1 Introduction 1
2 Related Works 3
2.1 Sentence-VAE 3
2.2 Prototype Editing 4
3 Problem Definition 5
3.1 Definition 5
3.2 Example 6
4 Proposed Method 7
4.1 Conditional Sentence Variational Autoencoder 7
4.2 Condition Mechanism 9
5 Experiments 10
5.1 Dataset 10
5.2 Condition Evaluation 11
5.2.1 Condition Function 11
5.2.2 Accuracy 12
5.2.3 Generated Sentences 14
5.3 Generation Quality 17
5.4 Can an Autoencoder Also Work? 17
6 Conclusions and Future Work 19
Bibliography 20
dc.language.iso: en
dc.subject: unsupervised machine learning (zh_TW: 非監督式機器學習)
dc.subject: natural language generation (zh_TW: 自然語言生成)
dc.subject: unsupervised machine learning (en)
dc.subject: natural language generation (en)
dc.subject: variational autoencoder (en)
dc.title: Conditional Sentence Rephrasing without Pairwise Training Corpus (zh_TW: 條件式句子改寫且不使用成對資料訓練)
dc.title: Conditional Sentence Rephrasing without Pairwise Training Corpus (en)
dc.date.schoolyear: 107-2
dc.description.degree: Master
dc.contributor.oralexamcommittee: 陳信希 (Hsin-Hsi Chen), 陳縕儂 (Yun-Nung Chen), 李宏毅 (Hung-Yi Lee)
dc.subject.keyword: natural language generation, unsupervised machine learning (zh_TW)
dc.subject.keyword: natural language generation, variational autoencoder, unsupervised machine learning (en)
dc.relation.page: 22
dc.identifier.doi: 10.6342/NTU201903579
dc.rights.note: Authorized for release (open access worldwide)
dc.date.accepted: 2019-08-15
dc.contributor.author-college: College of Electrical Engineering and Computer Science (電機資訊學院) (zh_TW)
dc.contributor.author-dept: Graduate Institute of Computer Science and Information Engineering (資訊工程學研究所) (zh_TW)
Appears in Collections: Department of Computer Science and Information Engineering (資訊工程學系)

Files in This Item:
File: ntu-108-1.pdf (1.56 MB, Adobe PDF)


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
