NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90096
Title: 步步精心:用於少樣本情境學習的階段性學習範例選擇策略
Let's Select Step by Step (LSSS): A Demonstration Selection Strategy for Few-Shot In-Context Learning
Authors: 楊敦捷
Dun-Jie Yang
Advisor: 許永真
Yung-Jen Hsu
Keywords: Few-shot In-context Learning, Large Language Models, Prompt Engineering, Natural Language Processing
Publication Year: 2023
Degree: Master's
Abstract: Large Language Models (LLMs) have shown a remarkable ability to perform various tasks using few-shot in-context learning from a limited number of demonstration examples, without requiring parameter updates. However, performance is notably inconsistent across different demonstration sets, which in turn affects stability. Inspired by related work highlighting the benefits of exploiting the reasoning ability of LLMs and of selecting diverse examples, we propose Let's Select Step by Step (LSSS), a demonstration selection strategy. We use LLMs to carry out task-specific clustering and explanation generation, followed by an efficient filtering step that selects the best demonstration examples. Experimental results indicate that our method improves both performance and stability on tasks where LLMs typically excel; moreover, its effectiveness grows as more advanced LLMs are employed, yielding gains on tasks that demand more advanced capabilities as well.
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/90096
DOI: 10.6342/NTU202303119
Fulltext Rights: Authorized (restricted to campus access)
Embargo Lift Date: 2027-06-07
Appears in Collections: Department of Computer Science and Information Engineering (資訊工程學系)
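The abstract outlines a three-stage pipeline: LLM-driven task-specific clustering, explanation generation for candidate examples, and an efficient filtering step that picks the final demonstrations. As a rough illustration of that shape only, here is a minimal Python sketch; it does not reproduce the thesis's actual prompts, models, or scoring, and the `llm` callable, the prompt wording, the per-cluster random draws, and the `evaluate` dev-set scorer are all assumptions.

```python
"""Hypothetical sketch of an LSSS-style demonstration-selection pipeline."""

import random
from typing import Callable, Dict, List

Example = Dict[str, str]  # e.g. {"input": ..., "output": ...}


def cluster_examples(llm: Callable[[str], str], pool: List[Example],
                     task: str, k: int) -> List[List[Example]]:
    """Ask the LLM to group candidate examples into k task-specific clusters."""
    prompt = (
        f"Task: {task}\n"
        f"Assign each example below to one of {k} clusters by the skill it "
        "tests. Reply with one cluster index (0-based) per line.\n"
        + "\n".join(f"{i}: {ex['input']}" for i, ex in enumerate(pool))
    )
    labels = [int(line) for line in llm(prompt).splitlines() if line.strip()]
    clusters: List[List[Example]] = [[] for _ in range(k)]
    for ex, label in zip(pool, labels):
        clusters[label % k].append(ex)  # clamp stray labels into range
    return clusters


def explain(llm: Callable[[str], str], ex: Example, task: str) -> str:
    """Generate a step-by-step explanation for one labeled example."""
    return llm(
        f"Task: {task}\nInput: {ex['input']}\nAnswer: {ex['output']}\n"
        "Explain the reasoning step by step."
    )


def select_demonstrations(llm, pool, dev_set, task, k, evaluate, trials=4):
    """Draw one explained example per cluster; keep the draw that scores
    best on a small dev set (a stand-in for the efficient filtering step)."""
    clusters = cluster_examples(llm, pool, task, k)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = [random.choice(c) for c in clusters if c]
        for ex in candidate:
            ex["explanation"] = explain(llm, ex, task)
        score = evaluate(candidate, dev_set)  # e.g. dev-set accuracy
        if score > best_score:
            best, best_score = candidate, score
    return best
```

In practice, `llm` would wrap a chat-completion API and `evaluate` would measure dev-set accuracy with the candidate demonstrations (and their explanations) prepended to the prompt.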

Files in This Item:
ntu-111-2.pdf (Restricted Access), 4.47 MB, Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
