NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84381
Full metadata record
DC field: value [language]
dc.contributor.advisor: 陳信希(Hsin-Hsi Chen)
dc.contributor.author: Po-Cheng Wu [en]
dc.contributor.author: 吳柏承 [zh_TW]
dc.date.accessioned: 2023-03-19T22:09:53Z
dc.date.copyright: 2022-04-26
dc.date.issued: 2022
dc.date.submitted: 2022-03-23
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/84381
dc.description.abstract: 同儕審查(Peer review)為學術領域中相當重要的一環,在審查的過程中,研究論文會交由數位審稿人進行評估。大多數的頂尖會議上皆會舉行作者答辯的階段,提供投稿作者答覆審稿人評論論點的機會,以穩固他們的投稿作品。審稿人對於投稿文章所指出的優缺點,以及投稿作者相對應的答覆,將一併交由領域主席(area chair)做最後的評估,而在做此最終決定的同時也會撰寫綜合審稿評論(meta-review)說明予以錄用或拒絕的原因。過往的研究中有嘗試透過基於Transformer結構的摘要生成模型以做綜合審稿評論生成,然而,較少研究有考慮到作者答覆的內容以及評論及答辯中論點的交互關係,其答辯論證中的說服力在做最終決定時有著重要的影響力。為了生成能夠良好彙整審稿人論點與作者答覆的綜合審稿評論,我們提出了一個新的生成模型得以明確地引入審稿時複雜的論證結構,以了解審稿人與作者間、以及跨評論間論點的交互關係。實驗結果顯示我們的模型在自動化評估及人工評斷下相較於其他現行模型皆取得更好的表現,說明我們所提出的方法的有效性。 [zh_TW]
dc.description.abstract: Peer review is an essential part of the scientific process, in which research papers are assessed by several reviewers. The author rebuttal phase, held at most top conferences, gives authors an opportunity to defend their work against the arguments raised by the reviewers. The strengths and weaknesses pointed out by the reviewers, together with the authors' responses, are then evaluated by the area chair, and the final decision is generally accompanied by a meta-review explaining the reasons for acceptance or rejection. Previous research has studied meta-review generation with transformer-based summarization models, but few studies consider the content of the rebuttals and the interaction between review and rebuttal arguments, even though the persuasiveness of this argumentation strongly influences the final decision. To generate a comprehensive meta-review that organizes reviewers' opinions and authors' responses well, we present a novel generation model that explicitly models the complicated argumentation structure, covering not only the arguments exchanged between reviewers and authors but also the inter-reviewer discussions. Experimental results show that our model outperforms baselines in both automatic and human evaluation, demonstrating the effectiveness of our approach. [en]
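As a rough illustration of the structure the abstract describes (and not code from the thesis), the sketch below builds a toy argumentation graph that links reviewer arguments to the rebuttal sentences countering them, then linearizes it into plain text that a sequence-to-sequence summarizer could take as auxiliary input. The node IDs, sentences, and the networkx-based representation are all hypothetical assumptions made for this example.

# Hypothetical sketch, assuming a networkx graph representation; NOT the thesis
# implementation. All node IDs and sentences below are invented for illustration.
import networkx as nx

review_args = {
    "R1-a1": "The proposed model lacks comparison with recent baselines.",
    "R2-a1": "The experiments are convincing and well organized.",
}
rebuttal_args = {
    "A-c1": "We added comparisons with two recent baselines in the revision.",
}
# Edges record which rebuttal sentence counters which review argument.
counter_edges = [("A-c1", "R1-a1")]

g = nx.DiGraph()
for node_id, text in {**review_args, **rebuttal_args}.items():
    role = "review" if node_id.startswith("R") else "rebuttal"
    g.add_node(node_id, text=text, role=role)
for src, dst in counter_edges:
    g.add_edge(src, dst, relation="counters")

def linearize(graph: nx.DiGraph) -> str:
    # Flatten the graph into a text sequence usable as extra generator input.
    parts = []
    for node_id, data in graph.nodes(data=True):
        parts.append(f"[{data['role']}] {data['text']}")
        for _, dst, edge in graph.out_edges(node_id, data=True):
            parts.append(f"  ({edge['relation']} {dst})")
    return "\n".join(parts)

print(linearize(g))

In the thesis itself, judging from the table of contents below, the extracted structure is instead consumed by a dedicated graph encoder alongside a content encoder and a multi-granularity decoder; the sketch above only conveys the general idea of pairing review and rebuttal arguments.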
dc.description.provenance: Made available in DSpace on 2023-03-19T22:09:53Z (GMT). No. of bitstreams: 1; U0001-1803202220402900.pdf: 4545027 bytes, checksum: 6ccbf220996020adebb557e4e6cccbea (MD5); Previous issue date: 2022 [en]
dc.description.tableofcontents:
Acknowledgements i
摘要 ii
Abstract iii
Contents v
List of Figures viii
List of Tables ix
Chapter 1 Introduction 1
  1.1 Background 1
  1.2 Motivation 2
  1.3 Thesis Organization 6
Chapter 2 Related Work 7
  2.1 Argument Mining 7
    2.1.1 Methods 7
    2.1.2 Applications 8
  2.2 Summarization 9
    2.2.1 Long Document Summarization 10
    2.2.2 Multi-Document Summarization 11
Chapter 3 Datasets 13
  3.1 Peer Review Content 13
  3.2 Argumentative Structure 14
  3.3 Aspect Typology 15
Chapter 4 Methodology 17
  4.1 Argumentative Structure Construction 17
    4.1.1 Intra-discussion Relation Extraction 18
      4.1.1.1 Sentence Encoder 19
      4.1.1.2 Multi-Cross Encoder 21
      4.1.1.3 Predictor 24
    4.1.2 Graph Construction 25
    4.1.3 Graph Augmentation 26
  4.2 Structure Enhanced Meta-Review Generation 27
    4.2.1 Content Encoder 27
    4.2.2 Graph Encoder 29
    4.2.3 Multi-Granularity Decoder 30
Chapter 5 Experiments 32
  5.1 Data Filtering 32
  5.2 Experimental Setup 33
  5.3 Results of Argumentative Structure Extraction 35
  5.4 Results of Meta-review Generation 37
Chapter 6 Discussion 42
  6.1 Dataset Analytics 42
  6.2 Error Analysis 43
    6.2.1 Performance Difference in terms of Rating Score 44
    6.2.2 Performance Difference in terms of Divergence of Rating Scores 46
    6.2.3 Performance Difference in terms of Number of Extracted Arguments 47
  6.3 Human Evaluation 49
  6.4 Case Study 50
Chapter 7 Conclusion 53
References 55
dc.language.iso: en
dc.subject: 綜合審稿評論生成 [zh_TW]
dc.subject: 論點探勘 [zh_TW]
dc.subject: 反論點辨識 [zh_TW]
dc.subject: Counter-Argument Identification [en]
dc.subject: Meta-Review Generation [en]
dc.subject: Argument Mining [en]
dc.title: 引入同儕審查及答辯中反論點之綜合審稿評論生成 [zh_TW]
dc.title: Incorporating Peer Reviews and Rebuttal Counter-Arguments for Meta-Review Generation [en]
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士
dc.contributor.oralexamcommittee: 陳建錦(Chien-Chin Chen), 古倫維(Lun-Wei Ku), 蔡銘峰(Ming-Feng Tsai)
dc.subject.keyword: 綜合審稿評論生成, 論點探勘, 反論點辨識 [zh_TW]
dc.subject.keyword: Meta-Review Generation, Argument Mining, Counter-Argument Identification [en]
dc.relation.page: 64
dc.identifier.doi: 10.6342/NTU202200644
dc.rights.note: 同意授權(限校園內公開)
dc.date.accepted: 2022-03-23
dc.contributor.author-college: 電機資訊學院 [zh_TW]
dc.contributor.author-dept: 資訊工程學研究所 [zh_TW]
dc.date.embargo-lift: 2022-04-26
Appears in Collections: 資訊工程學系

Files in This Item:
File | Size | Format
U0001-1803202220402900.pdf (restricted to NTU campus IP addresses; use the VPN service for off-campus access) | 4.44 MB | Adobe PDF