NTU Theses and Dissertations Repository
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97434
Title: Real-Time Hybrid Appearance-Based and Feature-Based Event-Driven Gaze Tracking (即時事件驅動的混合式外觀與特徵型凝視追蹤方法)
Author: Chen-Yu Wang (王辰淯)
Advisor: Shao-Yi Chien (簡韶逸)
Keywords: Event Camera, Eye Tracking, Near-eye Gaze Tracking
Publication Year: 2025
Degree: Master
Abstract:
Event cameras, with their high temporal resolution, low latency, wide dynamic range, and low power consumption, are particularly well-suited for eye tracking applications that demand responsiveness and robustness under challenging conditions. Unlike conventional frame-based cameras that suffer from motion blur and redundant data capture, event cameras output asynchronous data only when brightness changes occur in the scene, making them ideal for capturing rapid eye movements with minimal delay.
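To make the change-driven sensing model above concrete, here is a minimal toy sketch, not the sensor model or code from this thesis, of how a pair of intensity frames maps to sparse, signed brightness-change events. The function name, contrast threshold, and frame representation are illustrative assumptions.

```python
import numpy as np

def generate_events(prev_frame, curr_frame, threshold=0.2):
    """Toy event-camera model: emit an event wherever the per-pixel
    log intensity changes by more than a contrast threshold.
    Returns an (N, 3) array of (row, col, polarity) triples."""
    eps = 1e-6  # avoid log(0)
    # Event sensors respond approximately to log-intensity change.
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[rows, cols]).astype(np.int64)  # +1 brighter, -1 darker
    return np.stack([rows, cols, polarity], axis=1)

# A small synthetic change yields only two events; static pixels emit nothing.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.9   # local brightening -> positive event
curr[3, 0] = 0.2   # local darkening  -> negative event
print(generate_events(prev, curr))
```

Because static pixels produce no output, data volume and latency scale with scene motion rather than with a fixed frame rate, which is why rapid eye movements can be captured with minimal delay.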
This thesis presents a real-time hybrid gaze tracking system that integrates appearance-based and feature-based methods within an event-driven framework. By leveraging the unique advantages of event cameras, the proposed system enables accurate and robust online gaze estimation suitable for AR/VR and other interactive environments. A key contribution is the introduction of a confidence mechanism in the tracking stage, which improves reliability and precision over existing hybrid approaches. Compared to deep learning-based methods, the proposed system achieves higher accuracy and lower inference latency. Furthermore, experimental results show that the proposed framework outperforms the state-of-the-art method in both accuracy and responsiveness, enabled by a hybrid-initialized segmentation strategy combined with lightweight matching and continuous tracking modules, which eliminates the need for frequent reinitialization. Additionally, the low power consumption of event-based processing supports deployment on resource-constrained or wearable platforms. This work highlights the promise of hybrid and event-driven techniques in advancing gaze tracking, particularly for immersive applications such as AR/VR.
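The confidence mechanism and the avoidance of frequent reinitialization described above can be pictured as a gated tracking loop. The sketch below is a hypothetical reading of that design, not the thesis's published code: all names, the threshold value, and the division of labor between the two paths are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class GazeEstimate:
    x: float            # gaze point, e.g., in normalized screen coordinates
    y: float
    confidence: float   # tracker's self-assessed reliability in [0, 1]

def track_step(events, state,
               track_continuous: Callable,
               segment_and_match: Callable,
               conf_threshold: float = 0.6) -> Tuple[GazeEstimate, object]:
    """Hypothetical confidence-gated step: stay on the lightweight
    continuous tracker while confidence is high, and fall back to the
    heavier segmentation + matching path only when it drops, instead
    of reinitializing the whole pipeline."""
    estimate = track_continuous(events, state)
    if estimate.confidence < conf_threshold:
        state = segment_and_match(events, state)    # refresh eye-region features
        estimate = track_continuous(events, state)  # retry with refreshed state
    return estimate, state
```

In this reading, the confidence gate is what lets the cheap tracking path run nearly all the time, invoking the expensive segmentation path only on drift or occlusion, consistent with the abstract's claim of responsiveness without frequent reinitialization.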
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97434
DOI: 10.6342/NTU202500985
Full-Text Permission: Unauthorized
Electronic Full-Text Release Date: N/A
Appears in Collections: Degree Program of Integrated Circuit Design and Automation

Files in This Item:
File: ntu-113-2.pdf (not authorized for public access)
Size: 32.88 MB
Format: Adobe PDF


All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
