Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98548

| Title: | 有限樣本在語言模型中的高效價值轉換 (Efficient Value Transformation in Language Models Using Limited Training Samples) |
| Authors: | 林佐竺 Zuo-Zhu Lin |
| Advisor: | 黃從仁 Tsung-Ren Huang |
| Keyword: | LLMs, Ideological Bias, Fine-Tuning, Data Selection, Free Association |
| Publication Year: | 2025 |
| Degree: | Master |
| Abstract: | Default ideological biases in large language models (LLMs) threaten information objectivity, yet traditional methods for steering model stances are costly and sensitive to dataset noise, limiting real-world use. This study introduces a novel approach grounded in the cognitive-psychology technique of free association: the model itself generates keywords, which are used to select data and construct the IdeoFA dataset at minimal cost. Experiments show that the IdeoFA dataset effectively improves ideological controllability and generalises across topics, overcoming the drawbacks of traditional methods and offering an immediately deployable solution for precise LLM control. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98548 |
| DOI: | 10.6342/NTU202501741 |
| Fulltext Rights: | Authorized (worldwide public access) |
| Embargo Lift Date: | 2025-08-18 |
| Appears in Collections: | Master's Program in Statistics (統計碩士學位學程) |
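The abstract describes selecting training data via keywords that the model produces through free association. As a minimal illustrative sketch only: the thesis itself does not publish this code, the LLM call is stubbed with a canned response, and all function names, keywords, and texts below are hypothetical.

```python
# Hypothetical sketch of free-association-based data selection.
# In the actual method, an LLM would be prompted to free-associate
# keywords for a topic; here that call is stubbed with canned output.

def free_associate(seed_topic):
    """Stand-in for an LLM prompt such as
    'List the words that come to mind for: <topic>'."""
    canned = {
        "economy": ["tax", "market", "welfare", "regulation"],
    }
    return canned.get(seed_topic, [])

def select_samples(candidates, keywords, min_hits=1):
    """Keep candidate texts mentioning at least `min_hits` keywords."""
    selected = []
    for text in candidates:
        hits = sum(1 for kw in keywords if kw in text.lower())
        if hits >= min_hits:
            selected.append(text)
    return selected

keywords = free_associate("economy")
pool = [
    "The new tax policy reshapes the market.",
    "A recipe for sourdough bread.",
    "Welfare spending and regulation were debated.",
]
dataset = select_samples(pool, keywords)
print(dataset)  # the two economy-related texts are kept
```

The appeal of this scheme, as the abstract argues, is that keyword generation and overlap filtering are far cheaper than human annotation, while still discarding off-topic noise.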
Files in This Item:
| File | Size | Format | Access |
|---|---|---|---|
| ntu-113-2.pdf | 5.17 MB | Adobe PDF | View/Open |
