Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21408

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳信希 | |
| dc.contributor.author | Tsung Lin | en |
| dc.contributor.author | 林聰 | zh_TW |
| dc.date.accessioned | 2021-06-08T03:33:15Z | - |
| dc.date.copyright | 2019-08-13 | |
| dc.date.issued | 2019 | |
| dc.date.submitted | 2019-08-07 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/21408 | - |
| dc.description.abstract | With the rise of neural networks, many areas of natural language processing research have made brand-new progress. Text generation is one of them: neural networks can grasp complex linguistic logic and generate sentences that resemble human writing. Beyond using neural networks to strengthen traditional text generation tasks such as machine translation and text summarization, other studies have begun to add conditions such as tense, length, and sentiment during generation. Besides text generation, neural networks are also often applied to NLP classification tasks, and stance detection and classification is one of the popular research topics. Inspired by text generation and stance classification, this thesis attempts to generate news headlines that match the stances of specific Taiwanese media outlets. | zh_TW |
| dc.description.abstract | As neural network models thrive, natural language processing has entered a new chapter. Powerful models motivate the innovation and renovation of text generation tasks: these are no longer limited to tasks such as text summarization or machine translation, but now generate text under a variety of novel conditions, e.g., sentence length, tense, and sentiment. Neural models have also achieved great success in classification tasks, among which stance classification is a popular research topic. Inspired by conditional text generation and stance classification, we propose a new task: generating news headlines with the specific stances of Taiwan's news media (an illustrative sketch of this stance-conditioning idea appears after the metadata table below). | en |
| dc.description.provenance | Made available in DSpace on 2021-06-08T03:33:15Z (GMT). No. of bitstreams: 1 ntu-108-R06922121-1.pdf: 799906 bytes, checksum: 90746dbdff22cba61378553a2e63a3e7 (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | Acknowledgements i; Abstract (Chinese) ii; Abstract iii; Table of Contents iv; Content of Figures vi; Content of Tables vii; Chapter 1 Introduction 1; 1.1 Motivation 1; 1.2 Organization 3; Chapter 2 Related Work 4; 2.1 Text Summarization and Headline Generation 4; 2.1.2 Attention Mechanism 5; 2.1.3 Models for Sequence to Sequence 6; 2.1.5 Chinese Text Summarization 7; 2.2.1 Condition Transfer or Controllable Text Generation 8; 2.2.2 Controllable Abstractive Summarization 9; 2.3 NLP Researches about Stance 9; 2.3.1 Stance Classification 9; 2.3.2 Stance Transfer 10; Chapter 3 Dataset 11; 3.1 Data Scrapping & Cleaning 11; 3.1.1 LTN 11; 3.1.2 CTS & UDN 12; 3.2 News Alignment 14; 3.3 Character as Token 16; Chapter 4 Methodology and Experiments 18; 4.1 Transformer 18; 4.2 Generate News Headline with Stance 20; 4.2.1 Independent Model 21; 4.2.2 Independent Decoder 21; 4.2.3 Stance Token 23; 4.2.4 Stance Query 23; 4.3 Experimental Settings 24; 4.3.1 Tools 24; 4.3.2 Training Settings 25; 4.3.3 Testing Settings 26; 4.3.4 Metrics 26; 4.4 Pre-Experiment 28; 4.4.1 Bi-LSTM versus Transformer 28; 4.4.2 Number of Paragraphs 29; 4.4.3 Addition of Aligned Data 30; Chapter 5 Method Improvement 35; 5.1 Ensemble Generation 35; 5.1.1 Methodology 35; 5.1.2 Experiments 36; 5.2 Multi-Task Learning 39; 5.2.1 Methodology 39; 5.2.2 Experiments 40; Chapter 6 Human Evaluation and Examples 42; 6.1 Metric 42; 6.2 Generation Pairs and Dataset 43; 6.3 Experimental Results 45; 6.6.1 China Concern 47; 6.6.2 NTU President Amid Concern 48; 6.6.3 Nuclear Power Concern 50; 6.6.4 Hong Kong Anti-Extradition Bill Protests 51; Chapter 7 Conclusion and Future Work 54; Reference 55 | |
| dc.language.iso | en | |
| dc.title | A Study on Headline Generation Methods that Take News Stance into Account | zh_TW |
| dc.title | Learning to Generate News Headlines with Media’s Stance | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 107-2 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 陳冠宇,鄭卜壬,蔡宗翰 | |
| dc.subject.keyword | Automatic Headline Generation, Automatic Text Summarization, Controllable Text Generation, Stance, Transformer | zh_TW |
| dc.subject.keyword | Headline Generation, Abstractive Summarization, Controllable Text Generation, Stance, Transformer | en |
| dc.relation.page | 56 | |
| dc.identifier.doi | 10.6342/NTU201902714 | |
| dc.rights.note | Not authorized | |
| dc.date.accepted | 2019-08-07 | |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | zh_TW |
| dc.contributor.author-dept | Graduate Institute of Computer Science and Information Engineering | zh_TW |
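
The table of contents above lists "Stance Token" (section 4.2.3) among the methods for generating headlines with a target stance. As a rough, hypothetical sketch (not the thesis's actual code), the snippet below illustrates the usual form of this idea: reserve one special token per news outlet and prepend it to the character-level source sequence before training a sequence-to-sequence model on (article, headline) pairs. All names (`STANCE_TOKENS`, `build_example`) and the toy strings are invented for illustration.

```python
# Hypothetical sketch of stance-token conditioning (not the thesis's code):
# one reserved token per outlet in the dataset chapter (LTN, CTS, UDN) is
# prepended to the source text, so a single seq2seq model learns to write
# headlines in each outlet's style.

STANCE_TOKENS = {
    "LTN": "<stance_ltn>",
    "CTS": "<stance_cts>",
    "UDN": "<stance_udn>",
}

def build_example(article: str, headline: str, outlet: str):
    """Build one character-level (source, target) training pair.

    The thesis tokenizes Chinese text per character (ToC section 3.3,
    "Character as Token"), so both sequences are lists of characters;
    the stance token is a single extra symbol at the front of the source.
    """
    src = [STANCE_TOKENS[outlet]] + list(article)
    tgt = list(headline)
    return src, tgt

if __name__ == "__main__":
    # Toy strings for demonstration; an encoder-decoder model such as a
    # Transformer would then be trained on many such pairs.
    src, tgt = build_example("核能發電的存廢問題再度成為焦點", "核電存廢再掀論戰", "UDN")
    print(src[:4])        # ['<stance_udn>', '核', '能', '發']
    print("".join(tgt))   # 核電存廢再掀論戰
```

At inference time, keeping the article fixed and swapping the stance token would ask the same model for headlines in different outlets' styles, which matches the task described in the abstract.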
Appears in Collections: Department of Computer Science and Information Engineering
Files in this item:
| File | Size | Format |
|---|---|---|
| ntu-108-1.pdf (not authorized for public access) | 781.16 kB | Adobe PDF |
Except where otherwise noted, all items in this repository are protected by copyright, with all rights reserved.
