Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98193

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳建錦 | zh_TW |
| dc.contributor.advisor | Chien-Chin Chen | en |
| dc.contributor.author | 陳亭佑 | zh_TW |
| dc.contributor.author | Ting-Yu Chen | en |
| dc.date.accessioned | 2025-07-30T16:17:10Z | - |
| dc.date.available | 2025-07-31 | - |
| dc.date.copyright | 2025-07-30 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-07-24 | - |
| dc.identifier.citation | J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
A. Asai, Z. Wu, Y. Wang, A. Sil, and H. Hajishirzi. Self-rag: Learning to retrieve, generate, and critique through self-reflection. 2024.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
N. Carlini, D. Ippolito, M. Jagielski, K. Lee, F. Tramer, and C. Zhang. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations, 2022.
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.
Y. Gao, Y. Xiong, X. Gao, K. Jia, J. Pan, Y. Bi, Y. Dai, J. Sun, H. Wang, and H. Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2(1), 2023.
K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang. Retrieval augmented language model pre-training. In International conference on machine learning, pages 3929–3938. PMLR, 2020.
M. Honnibal, I. Montani, S. Van Landeghem, A. Boyd, et al. spacy: Industrial-strength natural language processing in python. 2020.
G. Izacard, P. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. Dwivedi-Yu, A. Joulin, S. Riedel, and E. Grave. Atlas: Few-shot learning with retrieval augmented language models. Journal of Machine Learning Research, 24(251):1–43, 2023.
S. Jeong, J. Baek, S. Cho, S. J. Hwang, and J. C. Park. Adaptive-rag: Learning to adapt retrieval-augmented large language models through question complexity. arXiv preprint arXiv:2403.14403, 2024.
Z. Jiang, F. F. Xu, L. Gao, Z. Sun, Q. Liu, J. Dwivedi-Yu, Y. Yang, J. Callan, and G. Neubig. Active retrieval augmented generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7969–7992, 2023.
M. Joshi, E. Choi, D. S. Weld, and L. Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
N. Kandpal, H. Deng, A. Roberts, E. Wallace, and C. Raffel. Large language models struggle to learn long-tail knowledge. In International conference on machine learning, pages 15696–15707. PMLR, 2023.
T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019.
K. Lee, M.-W. Chang, and K. Toutanova. Latent retrieval for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300, 2019.
P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in neural information processing systems, 33:9459–9474, 2020.
D. Li, A. S. Rawat, M. Zaheer, X. Wang, M. Lukasik, A. Veit, F. Yu, and S. Kumar. Large language models with controllable working memory. arXiv preprint arXiv:2211.05110, 2022.
X. Ma, Y. Gong, P. He, H. Zhao, and N. Duan. Query rewriting in retrieval-augmented large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5303–5315, 2023.
A. Mallen, A. Asai, V. Zhong, R. Das, D. Khashabi, and H. Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511, 2022.
C. Niu, Y. Wu, J. Zhu, S. Xu, K. Shum, R. Zhong, J. Song, and T. Zhang. Ragtruth: A hallucination corpus for developing trustworthy retrieval-augmented language models. arXiv preprint arXiv:2401.00396, 2023.
F. Petroni, A. Piktus, A. Fan, P. Lewis, M. Yazdani, N. De Cao, J. Thorne, Y. Jernite, V. Karpukhin, J. Maillard, et al. Kilt: a benchmark for knowledge intensive language tasks. arXiv preprint arXiv:2009.02252, 2020.
Y. Razeghi, R. L. Logan IV, M. Gardner, and S. Singh. Impact of pretraining term frequencies on few-shot reasoning. arXiv preprint arXiv:2202.07206, 2022.
N. Reimers and I. Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019.
Z. Shao, Y. Gong, Y. Shen, M. Huang, N. Duan, and W. Chen. Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy. arXiv preprint arXiv:2305.15294, 2023.
W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W.-t. Yih. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652, 2023.
K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston. Retrieval augmentation reduces hallucination in conversation. arXiv preprint arXiv:2104.07567, 2021.
K. Song, X. Tan, T. Qin, J. Lu, and T.-Y. Liu. Mpnet: Masked and permuted pretraining for language understanding. Advances in neural information processing systems, 33:16857–16867, 2020.
W. Su, Y. Tang, Q. Ai, Z. Wu, and Y. Liu. Dragin: dynamic retrieval augmented generation based on the information needs of large language models. arXiv preprint arXiv:2403.10081, 2024.
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Y. Wang, P. Li, M. Sun, and Y. Liu. Self-knowledge guided retrieval augmentation for large language models. arXiv preprint arXiv:2310.05002, 2023.
W. Yu, D. Iter, S. Wang, Y. Xu, M. Ju, S. Sanyal, C. Zhu, M. Zeng, and M. Jiang. Generate rather than retrieve: Large language models are strong context generators. arXiv preprint arXiv:2209.10063, 2022.
Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu, X. Huang, E. Zhao, Y. Zhang, Y. Chen, et al. Siren's song in the ai ocean: A survey on hallucination in large language models. Computational Linguistics, pages 1–45, 2025.
Z. Zhang, M. Fang, and L. Chen. Retrievalqa: Assessing adaptive retrieval-augmented generation for short-form open-domain question answering. arXiv preprint arXiv:2402.16457, 2024.
C. Zhou, G. Neubig, J. Gu, M. Diab, P. Guzman, L. Zettlemoyer, and M. Ghazvininejad. Detecting hallucinated content in conditional neural sequence generation. arXiv preprint arXiv:2011.02593, 2020. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98193 | - |
| dc.description.abstract | 近年來,大型語言模型(LLMs)在自然語言處理任務中取得了顯著的進展,但受限於其固定的參數記憶,難以處理動態且不斷擴展的知識,導致在回答複雜查詢時容易產生事實錯誤或幻覺。檢索增強生成(RAG)通過整合外部知識來源以改善此問題,但傳統 RAG 無差別地對所有查詢進行檢索,造成對簡單查詢的低效處理與計算資源浪費。為此,本研究提出了一個自適應檢索增強生成(ARAG)方法,基於查詢複雜度動態決定是否進行外部知識檢索。該方法從查詢中提取語義特徵、命名實體識別特徵及頁面瀏覽量特徵,並採用堆疊式集成學習(Stacking Ensemble Learning)訓練一個分類器,以預測查詢是否需要進行外部檢索。實驗結果顯示,本方法在 RetrievalQA、TriviaQA 及 NQ-open 資料集上的分類準確率均優於現有基線方法,展示其有效性與穩健性。 | zh_TW |
| dc.description.abstract | In recent years, Large Language Models (LLMs) have achieved remarkable advances in natural language processing tasks, but their fixed parametric memory limits their ability to handle dynamic and continuously expanding knowledge, leading to factual errors or hallucinations when answering complex queries. Retrieval-Augmented Generation (RAG) integrates external knowledge sources to address this limitation, but conventional RAG performs retrieval indiscriminately for every query, resulting in inefficient handling of simple queries and unnecessary computational overhead. To address this, we propose an Adaptive Retrieval-Augmented Generation (ARAG) method that dynamically decides whether to perform external retrieval based on query complexity. The proposed method extracts semantic features, named entity recognition features, and page-view features from the query, and employs stacking ensemble learning to train a classifier that predicts whether retrieval is necessary. Experimental results show that our method achieves higher classification accuracy than existing baseline methods on the RetrievalQA, TriviaQA, and NQ-open datasets, demonstrating its effectiveness and robustness. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-07-30T16:17:10Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-07-30T16:17:10Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Oral Examination Committee Approval ⅰ
Acknowledgements ⅱ
摘要 (Chinese Abstract) ⅲ
Abstract ⅳ
Contents ⅴ
List of Tables ⅶ
List of Figures ⅷ
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 1
1.3 Purpose 2
Chapter 2 Related Work 4
2.1 Retrieval-Augmented Generation 4
2.2 Adaptive Retrieval 6
2.2.1 Generation Process Guided Adaptive Retrieval 6
2.2.2 Query Analysis Guided Adaptive Retrieval 7
Chapter 3 Method 10
3.1 Problem Definition 11
3.2 Feature Extraction 12
3.2.1 Semantic Features 12
3.2.2 NER Features 12
3.2.3 Page View Features 13
3.3 Stacking Ensemble Learning 14
3.3.1 Architecture 14
3.3.2 Base Classifiers 16
3.3.3 Meta Classifier 18
Chapter 4 Experiments 20
4.1 Datasets 20
4.2 Data Labeling 21
4.3 Baselines and Evaluation Metric 23
4.4 Results 24
4.5 Ablation Study 25
4.6 Feature Importance 27
4.7 Effects of Training Data Sizes 28
Chapter 5 Conclusions 30
References 31 | - |
| dc.language.iso | en | - |
| dc.subject | 檢索增強生成 | zh_TW |
| dc.subject | 大型語言模型 | zh_TW |
| dc.subject | 堆疊式集成學習 | zh_TW |
| dc.subject | 自適應檢索 | zh_TW |
| dc.subject | Large Language Models | en |
| dc.subject | Stacking Ensemble Learning | en |
| dc.subject | Retrieval-Augmented Generation | en |
| dc.subject | Adaptive Retrieval | en |
| dc.title | 基於堆疊式集成學習之自適應檢索增強生成方法 | zh_TW |
| dc.title | An Adaptive Retrieval-Augmented Generation Method Based on Stacking Ensemble Learning | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-2 | - |
| dc.description.degree | 碩士 (Master's) | - |
| dc.contributor.oralexamcommittee | 張詠淳;陳孟彰 | zh_TW |
| dc.contributor.oralexamcommittee | Yung-Chun Chang; Meng-Chang Chen | en |
| dc.subject.keyword | 大型語言模型,檢索增強生成,自適應檢索,堆疊式集成學習 | zh_TW |
| dc.subject.keyword | Large Language Models, Retrieval-Augmented Generation, Adaptive Retrieval, Stacking Ensemble Learning | en |
| dc.relation.page | 35 | - |
| dc.identifier.doi | 10.6342/NTU202502079 | - |
| dc.rights.note | 未授權 (Not authorized) | - |
| dc.date.accepted | 2025-07-25 | - |
| dc.contributor.author-college | 管理學院 | - |
| dc.contributor.author-dept | 資訊管理學系 | - |
| dc.date.embargo-lift | N/A | - |
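
The abstract above describes a query-level decision pipeline: extract semantic, NER, and page-view features from a query, then use a stacking ensemble to predict whether external retrieval is needed. The snippet below is a minimal, hypothetical Python sketch of that idea using scikit-learn; the placeholder feature extractor, the synthetic toy data, and the particular base classifiers (random forest, SVM) with a logistic-regression meta classifier are illustrative assumptions, not the configuration reported in the thesis.

```python
# Hypothetical sketch of a retrieval-decision classifier: query features -> stacking
# ensemble -> "retrieve or not". Feature extraction is stubbed; a real pipeline would
# use a sentence encoder (e.g. SBERT/MPNet) for semantic features, an NER tagger
# (e.g. spaCy) for entity counts, and Wikipedia page-view statistics for popularity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def extract_features(query: str) -> np.ndarray:
    """Placeholder feature vector: semantic embedding + NER count + page views."""
    semantic = rng.random(32)                                        # stand-in for a sentence embedding
    ner_count = float(sum(t[:1].isupper() for t in query.split()))   # crude entity-count proxy
    page_views = np.log1p(1000.0)                                    # stand-in for page-view statistics
    return np.concatenate([semantic, [ner_count, page_views]])

# Toy data: binary labels (1 = external retrieval needed, 0 = answerable from memory).
queries = ["Who wrote Hamlet?", "What did the 2024 IMO problem 3 ask?"] * 50
labels = np.array([0, 1] * 50)
X = np.vstack([extract_features(q) for q in queries])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=42)

# Stacking ensemble: base classifiers' out-of-fold predictions feed a meta classifier.
clf = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("svm", SVC(probability=True, random_state=42)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The meta classifier is trained only on the base classifiers' out-of-fold predictions (controlled by `cv=5`), which is what distinguishes stacking from simple voting over the base models.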
| Appears in Collections: | 資訊管理學系 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-2.pdf (Restricted Access) | 2.27 MB | Adobe PDF | |
Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
