Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93183
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳信希 | zh_TW |
dc.contributor.advisor | Hsin-Hsi Chen | en |
dc.contributor.author | 曾奕崴 | zh_TW |
dc.contributor.author | I-Wei Tseng | en |
dc.date.accessioned | 2024-07-23T16:10:36Z | - |
dc.date.available | 2024-07-24 | - |
dc.date.copyright | 2024-07-23 | - |
dc.date.issued | 2024 | - |
dc.date.submitted | 2024-07-19 | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/93183 | - |
dc.description.abstract | 隨著大型語言模型的能力愈發強健,多輪對話式規章問答任務的主軸也從如何理解文句間的蘊含關係(entailment)、如何訓練模型產生自然語言的回覆或反問,轉變為如何使用大型語言模型提供更精確的答案以及更細緻、更多元的對話歷程。
本文以國立臺灣大學的校園問題為發想,實作了一個專注於校園場域常見問題的問答系統。我們探究了如何借助大型語言模型生成 HyDE、RHyDE 等 expansion 資料增加資料檢索的準確度,也設計了一個供問答系統反問使用者、適時擷取資料的框架,同時實驗了雜訊文章對於問答系統的效用。另外,因為實驗主題是臺灣大學,我們也收集了現實生活中師生會遇到的議題,並結合數百篇學校各處室的條文,彙集成一個力求忠實於現實情境的資料集。 總的來說,本篇論文的主要貢獻有二。一者,為製作了以校園場域為主題的多輪對話式問答資料集。二者,為設計了一個準確率提升、對話更細緻的問答系統框架。 | zh_TW |
dc.description.abstract | As the capabilities of large language models (LLMs) continue to grow, the focus of multi-turn conversational rule-based question answering (QA) has shifted away from understanding entailment relations between sentences and training models to generate natural-language responses or follow-up questions, and toward using LLMs to provide more accurate answers and more nuanced, diverse dialogue.
This paper presents a question-answering system tailored to common inquiries on the National Taiwan University (NTU) campus. We examine how LLM-generated expansion data, such as HyDE and RHyDE, improve retrieval accuracy; propose a framework that lets the system pose follow-up questions to users and retrieve relevant information on demand; and investigate the impact of noise passages on system performance. Because our work focuses on NTU, we carefully curated a dataset of real-world issues faced by students and faculty, combined with hundreds of documents and regulations from various university offices, so that it stays faithful to realistic scenarios. Overall, this paper makes two main contributions. First, it presents a multi-turn conversational question-answering dataset for the campus domain. Second, it proposes a question-answering system framework that improves accuracy and produces more nuanced dialogues. | en |
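The abstract describes HyDE-style expansion: instead of embedding the user's query directly, an LLM first writes a hypothetical answer passage, and retrieval is performed against that passage. The sketch below illustrates the idea only; the LLM call is stubbed out, and the corpus, function names, and bag-of-words similarity are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of HyDE-style retrieval. generate_hypothetical_doc stands in
# for an LLM call; in a real system it would prompt a model with the query.
from collections import Counter
import math

CORPUS = [
    "Students must submit the course add/drop form within the first three weeks.",
    "The library is open from 8 a.m. to 10 p.m. on weekdays.",
    "Dormitory applications are processed by the Office of Student Affairs.",
]

def generate_hypothetical_doc(query: str) -> str:
    # Stub for an LLM: returns an imagined answer passage for the query.
    return "Course forms for add and drop are due in the first weeks of the semester."

def bow(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a dense encoder would be used in practice.
    return Counter(text.lower().replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hyde_retrieve(query: str, corpus=CORPUS) -> str:
    # Key step: score passages against the hypothetical document,
    # not the raw query.
    hypo = bow(generate_hypothetical_doc(query))
    return max(corpus, key=lambda p: cosine(hypo, bow(p)))

best = hyde_retrieve("When do I have to drop a course?")
```

Here the short question shares few terms with any passage, but the generated pseudo-answer overlaps with the add/drop regulation, steering retrieval toward it.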
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-07-23T16:10:36Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2024-07-23T16:10:36Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | Acknowledgements
摘要
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Background
1.2 Motivation and Research Goals
1.3 Thesis Organization
Chapter 2 Related Works
2.1 Information Retrieval
2.2 Conversational Question-Answering Task
2.2.1 Dataset
2.2.2 Common Approaches on Conversational Rule-Based Question-Answering Tasks Before Large Language Models
2.3 Retrieval-Augmented Generation
Chapter 3 Dataset
3.1 NTU QA Dataset
3.1.1 Data Source
3.1.2 Different Types and Difficulty of QA Pairs
3.1.3 Identity and Scenario of User
3.1.4 Evaluation Metrics
Chapter 4 Methodology
4.1 Conversational Question-Answering System Framework
4.2 Retriever
4.2.1 Problem Definition
4.2.2 Preprocessing
4.2.3 Keyword Generation
4.2.4 HyDE
4.2.5 RHyDE
4.2.6 Sparse and Dense Retriever
4.3 Reasoner
4.3.1 Problem Definition
4.3.2 Dynamic Passage Determination
4.3.3 Model Drafting
4.3.4 Follow-up Question Determination
4.3.5 Noise Passage
4.4 ChatNTU
4.4.1 System Introduction
4.4.2 Overall Experiment Testing Method
4.5 RAG and Finetune
Chapter 5 Results and Discussion
5.1 Retriever
5.1.1 Retriever Overall Result
5.1.2 RHyDE Aggregation Method
5.1.3 Ensembling Sparse and Dense Retriever
5.2 Reasoner
5.2.1 System Overall Result
5.2.2 Performance on Different Difficulty
5.2.3 Effect of Noise Passage
5.3 RAG? Or Finetune?
Chapter 6 Conclusion and Future Works
6.1 Conclusion
6.2 Future Works
References
Appendix A — Prompt Used
A.1 Noise Scoring
A.2 Keyword Generation
A.3 HyDE Document Generation
A.4 RHyDE Query Generation
A.5 Dynamic Passage Determination
A.6 Model Drafting
A.7 Follow-up Question Determination
A.8 Bot User
A.9 Local Model
Appendix B — Noise Passage
B.1 Noise Passage in Amis Language
B.2 Noise Passage in Min Dong Language
B.3 Analysis of Noise Passages on Different Languages
B.4 Analysis of Noise Passages on Different Topics | - |
dc.language.iso | en | - |
dc.title | ChatNTU: 檢索增強生成於校園場域的反問機制與資料檢索研究 | zh_TW |
dc.title | ChatNTU: A Study of Follow-up Questioning Mechanism and Information Retrieval on Retrieval-Augmented Generation System of Campus | en |
dc.type | Thesis | - |
dc.date.schoolyear | 112-2 | - |
dc.description.degree | Master | - |
dc.contributor.oralexamcommittee | 陳建錦;蔡宗翰;陳柏琳 | zh_TW |
dc.contributor.oralexamcommittee | Chien-Chin Chen;Tzong-Han Tsai;Berlin Chen | en |
dc.subject.keyword | 問答系統,檢索增強生成,資料檢索,對話式機器閱讀理解,反問機制,大型語言模型 | zh_TW |
dc.subject.keyword | question answering system, retrieval-augmented generation, information retrieval, conversational machine comprehension, follow-up questioning mechanism, large language model | en |
dc.relation.page | 91 | - |
dc.identifier.doi | 10.6342/NTU202401963 | - |
dc.rights.note | Not authorized | - |
dc.date.accepted | 2024-07-19 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 資訊工程學系 | - |
Appears in Collections: | 資訊工程學系
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-112-2.pdf (currently not authorized for public access) | 2.45 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.