Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94049

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳銘憲 | zh_TW |
| dc.contributor.advisor | Ming-Syan Chen | en |
| dc.contributor.author | 賴宥辰 | zh_TW |
| dc.contributor.author | Yu-Chen Lai | en |
| dc.date.accessioned | 2024-08-14T16:26:09Z | - |
| dc.date.available | 2024-08-15 | - |
| dc.date.copyright | 2024-08-13 | - |
| dc.date.issued | 2024 | - |
| dc.date.submitted | 2024-08-10 | - |
| dc.identifier.citation | [1] Ahmedhelow. Exploring LLMs' performance on accounting exams. Published in Sage Ai, March 27, 2024.
[2] Marc Eulerich, Aida Sanatizadeh, Hamid Vakilzadeh, and David A. Wood. Is it all hype? ChatGPT's performance and disruptive potential in the accounting and auditing industries. November 17, 2023. Available at SSRN: https://ssrn.com/abstract=4452175 or http://dx.doi.org/10.2139/ssrn.4452175.
[3] Chris Gaetano. We had ChatGPT take the CPA exam — and it failed. Accounting Today, May 08, 2023.
[4] Marc Eulerich, Aida Sanatizadeh, Hamid Vakilzadeh, and David Wood. Can artificial intelligence pass accounting certification exams? ChatGPT: CPA, CMA, CIA, and EA? SSRN Electronic Journal, January 2023.
[5] OpenAI. GPT-3.5 technical report, 2022.
[6] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc., 2020.
[7] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrieval augmented language model pre-training. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3929–3938. PMLR, 13–18 Jul 2020.
[8] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey, 2024.
[9] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023.
[10] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
[11] ekmungai. Python accounting, 2024.
[12] LangChain Contributors. LangChain. https://github.com/hwchase17/langchain, 2024.
[13] Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot learning. In Francis Bach and David Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2152–2161, Lille, France, 07–09 Jul 2015. PMLR.
[14] Michael Wooldridge and Nicholas R. Jennings. Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2):115–152, 1995.
[15] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022.
[16] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864, 2023.
[17] Zhiheng Xi, Yiwen Ding, Wenxiang Chen, Boyang Hong, Honglin Guo, Junzhe Wang, Dingwen Yang, Chenyang Liao, Xin Guo, Wei He, et al. AgentGym: Evolving large language model-based agents across diverse environments. arXiv preprint arXiv:2406.04151, 2024.
[18] OpenAI. OpenAI embedding models. https://platform.openai.com/docs/guides/embeddings, 2024.
[19] Anton Troynikov and Suvansh Sanjeev. Chroma technical report. https://github.com/chroma-core/chroma, https://www.trychroma.com/, 2024. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/94049 | - |
| dc.description.abstract | 人工智慧中大型語言模型(LLM)的使用激增,凸顯了它們在文字處理和生成方面的先進能力。然而,它們在會計和金融等專業領域的熟練程度仍有待檢驗,特別是在註冊會計師(CPA)考試等複雜任務方面。在美國,CPA考試由美國註冊會計師協會(AICPA)監督,包括四個部分:審計和簽證(AUD)、商業環境與概念(BEC)、財務會計和報告(FAR)以及法規(REG)。過往研究表明,包括ChatGPT在內的LLM難以應對CPA考試中複雜的問題解決情境和多樣化的題型,顯示仍需進一步改進才能有效處理此類特定領域的任務。
為了解決CPA考試的挑戰,本研究引入一種稱為檢索增強推理(RAR)的新方法,將平均通過率從0.5提高到0.62。RAR使用任務路由器將問題分為知識密集型和推理密集型兩類。對於知識密集型問題,它使用檢索增強生成(RAG)從外部資料庫中擷取相關資訊,以提高答案準確性。對於推理密集型問題,RAR採用推理行動(ReAct)、代理人(Agent)和思想鏈(CoT)方法,並整合會計Python庫等外部工具,模仿真實考試環境,有效解決複雜問題。 | zh_TW |
| dc.description.abstract | The surge in the use of Large Language Models (LLMs) in artificial intelligence highlights their advanced capabilities in text processing and generation. However, their proficiency in specialized fields, such as accounting and finance, remains under scrutiny, particularly regarding complex tasks like the Certified Public Accountant (CPA) examination. The CPA exam, overseen by the American Institute of CPAs, encompasses four sections: Auditing and Attestation (AUD), Business Environment and Concepts (BEC), Financial Accounting and Reporting (FAR), and Regulation (REG). Research indicates that LLMs, including ChatGPT, struggle with the exam's complex problem-solving scenarios and varied question types, demonstrating the need for further improvement to handle such domain-specific tasks effectively.
To address the challenges of the CPA exam, a new method called Retrieval Augmented Reasoning (RAR) has been introduced, improving the average pass rate from 0.5 to 0.62. RAR employs a task router to classify questions into knowledge-intensive and reasoning-intensive categories. For knowledge-intensive questions, it uses Retrieval Augmented Generation (RAG) to extract relevant information from external databases, enhancing answer accuracy. For reasoning-intensive questions, RAR combines the ReAct, Agent, and Chain of Thought (CoT) approaches and integrates external tools such as an accounting Python library, mimicking the real exam environment to solve complex problems effectively. (A minimal illustrative sketch of this routing pipeline appears after the metadata table below.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-08-14T16:26:09Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2024-08-14T16:26:09Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 口試委員審定書 ........................... i
致謝 ........................... ii
摘要 ........................... iii
Abstract ........................... iv
Contents ........................... v
List of Tables ........................... vii
List of Figures ........................... viii
1 Introduction ........................... 1
2 Problems and Contribution ........................... 3
2.1 Problems ........................... 3
2.2 Contribution ........................... 4
3 Related Works ........................... 6
3.1 Use ChatGPT to take the CPA exam ........................... 6
3.1.1 Retrieval Augmented Generation (RAG) ........................... 7
3.1.2 ReAct ........................... 7
3.1.3 Agent ........................... 7
4 Methodology ........................... 9
4.1 Task router for classification ........................... 10
4.2 RAG for knowledge-intensive questions ........................... 11
4.3 RAR for reasoning-intensive questions ........................... 12
4.3.1 Agent from ReAct ........................... 13
4.3.2 Chain of Thought (CoT) ........................... 14
5 Experiments ........................... 15
5.1 Datasets and experiment setting ........................... 15
5.2 Zero-shot prompting ........................... 15
5.3 Results ........................... 16
5.3.1 Comparison of zero-shot prompting, RAG, and RAR ........................... 16
5.3.2 RAG on knowledge-intensive questions ........................... 18
5.3.3 RAR on reasoning-intensive questions ........................... 19
5.3.4 More analysis ........................... 20
6 Conclusion ........................... 23
7 Future Works ........................... 24
Reference ........................... 25 | - |
| dc.language.iso | en | - |
| dc.subject | 思想鏈 | zh_TW |
| dc.subject | 註冊會計師考試 | zh_TW |
| dc.subject | 檢索增強推理 | zh_TW |
| dc.subject | 檢索增強生成 | zh_TW |
| dc.subject | 推理行動 | zh_TW |
| dc.subject | 代理人 | zh_TW |
| dc.subject | ReAct | en |
| dc.subject | RAR | en |
| dc.subject | RAG | en |
| dc.subject | CoT | en |
| dc.subject | CPA | en |
| dc.title | 利用檢索增強推理解決註冊會計師考試 | zh_TW |
| dc.title | RAR: Tackling Reasoning-Intensive CPA Exams with Retrieval Augmented Reasoning | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 112-2 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 葉彌妍;賴冠廷;吳齊人 | zh_TW |
| dc.contributor.oralexamcommittee | Mi-Yen Yeh;Kuan-Ting Lai;Chi-Jen Wu | en |
| dc.subject.keyword | 註冊會計師考試,檢索增強推理,檢索增強生成,推理行動,代理人,思想鏈 | zh_TW |
| dc.subject.keyword | CPA,RAR,RAG,ReAct,CoT | en |
| dc.relation.page | 27 | - |
| dc.identifier.doi | 10.6342/NTU202402573 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2024-08-13 | - |
| dc.contributor.author-college | 電機資訊學院 | - |
| dc.contributor.author-dept | 電機工程學系 | - |
| dc.date.embargo-lift | 2025-12-31 | - |
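
The abstract above describes a two-branch pipeline: a task router first labels each CPA question as knowledge-intensive or reasoning-intensive; knowledge-intensive questions are answered with RAG over an external database, while reasoning-intensive questions are handled with ReAct/Agent/CoT-style reasoning and external tools. This record contains no source code, so the following is only a minimal sketch of that control flow, assuming an OpenAI chat model, OpenAI embeddings, and a Chroma vector store in line with references [5], [18], and [19]. The prompts, model names, the collection name `cpa_notes`, and helpers such as `route_question` are illustrative assumptions, not the thesis implementation (which additionally uses ReAct-style tool calls to an accounting Python library).

```python
# Hypothetical sketch of an RAR-style pipeline (not the thesis code):
# 1) route a CPA question as knowledge- or reasoning-intensive,
# 2) answer knowledge-intensive questions with retrieval (RAG),
# 3) answer reasoning-intensive questions with a step-by-step (CoT-style) prompt.
import chromadb
from openai import OpenAI

llm = OpenAI()                        # reads OPENAI_API_KEY from the environment
store = chromadb.Client()             # in-memory Chroma instance for this demo
notes = store.create_collection("cpa_notes")  # assumed corpus of CPA study notes


def embed(texts):
    """Embed texts with an OpenAI embedding model (model choice is an assumption)."""
    resp = llm.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]


def ask(prompt):
    """Single chat completion; the model name is an assumption, not from the thesis."""
    resp = llm.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def route_question(question):
    """Task router: let the LLM classify the question type."""
    label = ask(
        "Classify this CPA exam question as 'knowledge' (recall of standards, "
        "definitions, regulations) or 'reasoning' (multi-step computation or "
        f"analysis). Answer with one word.\n\nQuestion: {question}"
    )
    return "knowledge" if "knowledge" in label.lower() else "reasoning"


def answer(question):
    if route_question(question) == "knowledge":
        # RAG branch: retrieve supporting passages, then answer grounded in them.
        hits = notes.query(query_embeddings=embed([question]),
                           n_results=min(3, notes.count()))
        context = "\n".join(hits["documents"][0])
        return ask(f"Use the following notes to answer.\n{context}\n\nQuestion: {question}")
    # Reasoning branch: CoT-style prompting; the thesis additionally uses
    # ReAct-style agents with external tools, which are omitted here.
    return ask(f"Solve step by step, then state the final answer.\n\nQuestion: {question}")


if __name__ == "__main__":
    doc = "AUD covers audit reports and attestation engagements."
    notes.add(ids=["n1"], documents=[doc], embeddings=embed([doc]))
    print(answer("Which CPA exam section covers attestation engagements?"))
```

The sketch only illustrates the route-then-dispatch idea; the evaluation figures quoted in the abstract (average pass rate improving from 0.5 to 0.62) come from the thesis and are not reproduced by this example.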
| Appears in Collections: | 電機工程學系 | |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-112-2.pdf | 1.91 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
