Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/77880
Title: | 適用於頭戴式穿戴裝置且不易受環境光影響基於圖像表徵之眼神追蹤演算法 (Illumination-Robust Appearance-Based Gaze Estimation on Head-Mounted Devices) |
Author: | Po-Jung Chiu (邱柏榕) |
Advisor: | Shao-Yi Chien (簡韶逸) |
Keywords: | eye tracker, eye-movement detection, eye tracking, gaze estimation, appearance-based, infrared-free, illumination-robust |
Publication Year: | 2017 |
Degree: | Master's |
Abstract: | Eye trackers have long served as auxiliary tools in scientific, medical, and human-behavior research. With the recent rise of augmented reality (AR) and virtual reality (VR), they have drawn renewed attention and are now built into head-mounted devices. Gaze information is valuable in AR/VR environments, and many researchers and developers have created interesting applications that use it as an additional user input.
However, the vast majority of commercial eye trackers rely on infrared-based techniques, which raises two potential problems. First, sunlight is broadband and contains the near-infrared (NIR) wavelengths these trackers depend on, so their accuracy degrades outdoors and they may fail entirely under harsh sunlight. Second, there are health concerns: NIR light is invisible to the human eye yet is still absorbed by the retina, and a conventional tracker surrounds the eye with several NIR illuminators. Wearing such an IR-based tracker for long periods therefore keeps the eye under continuous infrared exposure. Both problems must be overcome before eye tracking can be deployed on AR devices intended for all-day wear.
To address these issues, this thesis proposes a gaze-estimation system for head-mounted devices that needs no additional infrared light source. It uses an appearance-based algorithm operating on visible light and is robust to ambient-illumination changes, making it suitable for all-day use. A novel eye-gaze dataset featuring multi-illumination labels is also introduced. Whereas other appearance-based methods perform poorly under such common illumination variations, experimental results show that the proposed illumination-robust feature extraction reduces gaze-estimation error by 38% and 68.2% on synthetic and real data, respectively. |
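The abstract outlines an appearance-based pipeline: normalize the visible-light eye image against illumination changes, extract a feature vector from its appearance, and regress to a 2-D gaze point. The thesis's actual feature extractor is not reproduced in this record, so the sketch below substitutes a generic stand-in under stated assumptions: histogram equalization as the illumination-normalization stage, a coarse intensity grid as the appearance feature, and ridge regression as the gaze mapper. All function names and parameters here are illustrative, not the author's method.

```python
import numpy as np

def equalize(img):
    """Histogram equalization: a common illumination-normalization step
    (hypothetical stand-in for the thesis's feature-extraction stage).
    Assumes a non-constant uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[img] * 255).astype(np.uint8)

def extract_features(img, grid=4):
    """Average the equalized eye image over a grid x grid block layout,
    yielding a coarse appearance-feature vector in [0, 1]."""
    img = equalize(img)
    h, w = img.shape
    blocks = img[:h - h % grid, :w - w % grid].reshape(
        grid, h // grid, grid, w // grid)
    return blocks.mean(axis=(1, 3)).ravel() / 255.0

def fit_gaze_regressor(features, gazes, lam=1e-6):
    """Ridge regression from appearance features to 2-D gaze points.
    features: (n, d) array; gazes: (n, 2) array of screen coordinates."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ gazes)
    return W

def predict_gaze(W, feature):
    """Map one feature vector to an estimated (x, y) gaze point."""
    return np.append(feature, 1.0) @ W
```

Because the features are computed after illumination normalization, a global brightness shift in the input image changes the feature vector far less than it would for raw pixels, which is the intuition behind the illumination-robust claim.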
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/77880 |
DOI: | 10.6342/NTU201702490 |
Full-Text License: | Authorized for a fee |
Appears in Collections: | Graduate Institute of Electronics Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-106-R04943039-1.pdf (currently not authorized for public access) | 21.91 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated by their specific license terms.