DSpace JSPUI

DSpace preserves and enables easy and open access to all types of digital content, including text, images, moving images, MPEGs, and data sets

NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98186
Title: Scalable Automated Mobile UI Dataset Generation Using a Chain-of-Action-Thought Framework and Multimodal Models for User Action Localization
Authors: Tzu-Wei Kao (高子維)
Advisor: Shih-Wei Liao (廖世偉)
Keyword: Mobile GUI testing, Mobile UI dataset, User action localization, Chain-of-Action-Thought, Vision-language model
Publication Year: 2025
Degree: Master's
Abstract: This work presents a fully automated pipeline for scalable mobile UI dataset generation, driven by a Chain-of-Action-Thought (CoAT) framework. The pipeline uses a vision-language model to simulate realistic user interactions, describing screen content, reasoning through actions, executing commands, and validating outcomes, all without human intervention. The resulting dataset includes raw screenshots, annotated UI elements, action animations, and detailed semantic reasoning traces.
To demonstrate the effectiveness of the generated dataset, we introduce a multimodal model that combines a 3D U-Net for visual understanding with a BERT encoder for processing textual information extracted via OCR. We evaluate this model across different training and testing configurations, using both AITW and our dataset. Results show that our dataset improves action localization performance, particularly for swipe-based interactions. This work contributes a robust solution for user action localization in mobile GUI testing.
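The four-stage interaction loop the abstract describes (screen description, action reasoning, action execution, outcome validation) can be sketched as a simple generation loop. This is a minimal illustrative sketch only, not the thesis's implementation: the `StubVLM` class, its method names, and the string-based "screens" are all hypothetical stand-ins for the real vision-language model and device screenshots.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One annotated record in the generated dataset."""
    screenshot: str   # raw screenshot (here, a stand-in string)
    description: str  # model's description of the screen content
    reasoning: str    # chain-of-action-thought trace
    action: str       # executed action, e.g. "tap" or "swipe"
    validated: bool   # whether the outcome check passed

class StubVLM:
    """Hypothetical stand-in for the vision-language model."""
    def describe(self, screenshot):
        return f"screen showing {screenshot}"
    def reason(self, description):
        return f"given the {description}, swipe to reveal more items"
    def choose_action(self, reasoning):
        return "swipe" if "swipe" in reasoning else "tap"

def validate(before, after):
    # Outcome check: the executed action must have changed the screen.
    return before != after

def generate_dataset(screens):
    """Run the describe -> reason -> act -> validate loop over a
    sequence of screens, producing annotated steps without human input."""
    vlm = StubVLM()
    dataset = []
    for before, after in zip(screens, screens[1:]):
        desc = vlm.describe(before)
        thought = vlm.reason(desc)
        action = vlm.choose_action(thought)
        dataset.append(Step(before, desc, thought, action,
                            validate(before, after)))
    return dataset

steps = generate_dataset(["home", "settings", "settings"])
print([(s.action, s.validated) for s in steps])
# → [('swipe', True), ('swipe', False)]
```

The second step fails validation because the screen did not change, which is how the pipeline can filter out ineffective actions before they enter the dataset.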
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98186
DOI: 10.6342/NTU202502184
Fulltext Rights: Not authorized
Embargo Lift Date: N/A
Appears in Collections: Department of Computer Science and Information Engineering (資訊工程學系)

Files in This Item:
ntu-113-2.pdf (Restricted Access), 1.94 MB, Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
