Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101004

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 廖世偉 | zh_TW |
| dc.contributor.advisor | Shih-Wei Liao | en |
| dc.contributor.author | 陳家潁 | zh_TW |
| dc.contributor.author | Chia-Yin Chen | en |
| dc.date.accessioned | 2025-11-26T16:26:02Z | - |
| dc.date.available | 2025-11-27 | - |
| dc.date.copyright | 2025-11-26 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-09-25 | - |
| dc.identifier.citation | Alec Radford and Karthik Narasimhan. Improving language understanding by generative pre-training. 2018.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.
OpenAI. GPT-4 technical report, 2023.
Google Gemini Team. Gemini: A family of highly capable multimodal models, 2025.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and efficient foundation language models, 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large language models, 2023.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: Program-aided language models, 2023.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models, 2021.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023.
Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. Language agent tree search unifies reasoning acting and planning in language models, 2024.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. Graph of thoughts: Solving elaborate problems with large language models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16):17682–17690, March 2024.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models, 2023.
Priyanka Kargupta, Ishika Agarwal, Tal August, and Jiawei Han. Tree-of-debate: Multi-persona debate trees elicit critical thinking for scientific comparative analysis, 2025.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback, 2023.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents, 2022.
Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning, 2023.
Peter E. Hart, Nils J. Nilsson, and Bertram Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107, 1968.
Cameron B. Browne, Edward Powley, Daniel Whitehouse, Simon M. Lucas, Peter I. Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/101004 | - |
| dc.description.abstract | 近年來,大型語言模型(LLMs)在各種任務上展現出優異的表現。然而,其推理能力仍受到傳統提示方法的限制,例如直接的輸入輸出(I/O)映射或是線性的思維鏈(Chain-of-Thought, CoT)提示。雖然「思維樹」(Tree of Thought, ToT)框架透過探索多條推理路徑來提升模型的推理表現,但因為存在大量重複節點與高昂的 token 使用量,使其在效率上仍有不足。
本論文提出一個改良的框架──「思維格」(Lattice of Thought, LoT),以解決 ToT 中的效率問題。該方法透過辨識並合併語意相同的推理狀態,將原本的樹狀結構壓縮為格狀結構,從而減少重複探索與不必要的 token 消耗。在 Game of 24 任務上的實驗結果顯示,LoT 在維持與 ToT 相當甚至更高的成功率下,平均 token 使用量降低了 29.04%。此外,本研究也對多種提示策略進行了成本、效率與成功率的分析與比較。 結果證明,LoT 提供了一種更有效率且可擴展的推理方式,為大型語言模型在多步驟問題解決上的應用帶來新的可能性。 | zh_TW |
| dc.description.abstract | In recent years, large language models (LLMs) have shown impressive performance across a wide range of tasks. However, their reasoning ability is often limited by traditional prompting methods such as direct input-output (I/O) mapping or sequential Chain-of-Thought (CoT) prompting. While the Tree of Thought (ToT) framework improves reasoning by exploring multiple reasoning paths in a tree structure, it suffers from inefficiency due to redundant node expansion and high token usage.
This thesis proposes a new framework called Lattice of Thought (LoT), which enhances ToT by identifying and merging semantically equivalent reasoning states into a lattice structure. By eliminating repeated paths and overlapping subtrees, LoT reduces unnecessary token consumption and computational cost during exploration. Our experiments on the Game of 24 task show that LoT achieves a 29.04% reduction in token usage while maintaining a comparable or higher success rate than ToT. We also provide a detailed analysis of cost, efficiency, and success rate across various prompting strategies. The results demonstrate that LoT offers a more efficient and scalable approach to structured reasoning in LLMs and opens up opportunities for further improvements in multi-step problem-solving tasks. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-11-26T16:26:02Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-11-26T16:26:02Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Verification Letter from the Oral Examination Committee i
Acknowledgements ii
摘要 iii
Abstract iv
Contents vi
List of Figures ix
List of Tables x
Chapter 1 Introduction 1
1.1 Introduction 1
Chapter 2 Related Work 3
2.1 Large Language Model 3
2.2 Chain of Thought 4
2.3 Tree of Thought (ToT) 5
2.3.1 Motivation and Cognitive Foundations 6
2.3.2 Benefits and Limitations 7
2.3.3 Extensions on Top of ToT 8
2.3.4 Practical Guidelines 8
2.3.5 Summary 9
2.4 Graph of Thought 9
2.5 Other Prompt Engineering Method 11
2.6 Summary 12
Chapter 3 Methodology 13
3.1 Framework 13
3.2 Tree of Thought (ToT) Algorithm 14
3.2.1 Notation and Inputs 14
3.2.2 Algorithm (Breadth-First Search Version) 15
3.2.3 Remarks 15
3.3 Lattice of Thought (LoT) Algorithm 16
3.3.1 Notation and Inputs 16
3.3.2 Algorithm (BFS with Merging) 18
3.3.3 Discussion and Advantages 19
3.4 The Game of 24 19
3.4.1 Redundancy in Tree of Thought 21
Chapter 4 Evaluation 22
4.1 Experimental Setup 22
4.2 Baseline Comparisons with Tree of Thought 23
4.3 Efficiency Analysis 24
4.4 Merge Ratio in LoT 25
4.5 Cost and Efficiency Analysis 25
4.6 Summary 27
Chapter 5 Conclusion 28
5.1 Conclusion 28
5.2 Discussion 29
5.3 Future Work 29
References 31 | - |
| dc.language.iso | en | - |
| dc.subject | 大型語言模型 (LLM) | - |
| dc.subject | 提示工程 | - |
| dc.subject | 格子 | - |
| dc.subject | 符元減少 | - |
| dc.subject | Large Language Model (LLM) | - |
| dc.subject | prompt engineering | - |
| dc.subject | lattice | - |
| dc.subject | token reduction | - |
| dc.title | 思維格:自思維樹演進之結構化推理方法 | zh_TW |
| dc.title | Lattice of Thought: Advancing Structured Reasoning from Tree of Thought | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 114-1 | - |
| dc.description.degree | Master's | - |
| dc.contributor.oralexamcommittee | 李逸元;盧瑞山 | zh_TW |
| dc.contributor.oralexamcommittee | Yi-Yuan Lee;Ruei-Shan Lu | en |
| dc.subject.keyword | 大型語言模型 (LLM),提示工程,格子,符元減少 | zh_TW |
| dc.subject.keyword | Large Language Model (LLM),prompt engineering,lattice,token reduction | en |
| dc.relation.page | 34 | - |
| dc.identifier.doi | 10.6342/NTU202504446 | - |
| dc.rights.note | Authorization granted (access restricted to campus) | - |
| dc.date.accepted | 2025-09-25 | - |
| dc.contributor.author-college | College of Electrical Engineering and Computer Science | - |
| dc.contributor.author-dept | Department of Computer Science and Information Engineering | - |
| dc.date.embargo-lift | 2025-11-27 | - |
| Appears in Collections: | Department of Computer Science and Information Engineering | |
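The lattice construction summarized in the abstract above (breadth-first expansion of reasoning states, with semantically equivalent states merged so that duplicate subtrees are explored only once) can be illustrated with a small, self-contained sketch. The Python code below is an illustrative reconstruction, not the thesis implementation: it treats two Game of 24 states as equivalent when their remaining numbers form the same multiset, and it uses exhaustive arithmetic expansion in place of the LLM-generated proposals and evaluations that the actual LoT framework relies on. The function and variable names (`canonical`, `expand`, `solvable_24`) are hypothetical.

```python
# Sketch of the merging idea behind Lattice of Thought, assuming Game of 24
# states can be keyed by the multiset of numbers still in play. In the real
# framework, candidate steps and state scores would come from LLM prompts.
from fractions import Fraction
from itertools import combinations

def canonical(numbers):
    """Canonical key for a state: the sorted multiset of remaining values."""
    return tuple(sorted(numbers))

def expand(numbers):
    """All states reachable by combining two remaining numbers with +, -, *, /."""
    children = []
    for i, j in combinations(range(len(numbers)), 2):
        a, b = numbers[i], numbers[j]
        rest = [numbers[k] for k in range(len(numbers)) if k not in (i, j)]
        results = {a + b, a - b, b - a, a * b}
        if b != 0:
            results.add(a / b)
        if a != 0:
            results.add(b / a)
        for r in results:
            children.append(rest + [r])
    return children

def solvable_24(start, target=Fraction(24)):
    """Breadth-first search with merging: each canonical state is expanded once."""
    frontier = [[Fraction(n) for n in start]]
    seen = {canonical(frontier[0])}
    expanded = 0
    while frontier:
        next_frontier = []
        for state in frontier:
            if len(state) == 1 and state[0] == target:
                return True, expanded
            expanded += 1
            for child in expand(state):
                key = canonical(child)
                if key not in seen:  # merge duplicates instead of re-expanding them
                    seen.add(key)
                    next_frontier.append(child)
        frontier = next_frontier
    return False, expanded

if __name__ == "__main__":
    ok, n = solvable_24([4, 9, 10, 13])  # solvable: (13 - 9) * (10 - 4) = 24
    print(ok, "states expanded:", n)
```

On the instance 4, 9, 10, 13, the merged search expands each distinct multiset of remaining numbers at most once, whereas an unmerged tree search would revisit many of them along different paths; this is the same redundancy the thesis targets when it reports a 29.04% reduction in token usage relative to ToT.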
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-114-1.pdf (access restricted to NTU campus IP addresses; off-campus users should connect via the VPN service) | 3.26 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
