Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3672

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳炳宇 | |
| dc.contributor.author | Li-Ming Yang | en |
| dc.contributor.author | 楊立銘 | zh_TW |
| dc.date.accessioned | 2021-05-13T08:35:52Z | - |
| dc.date.available | 2017-02-20 | |
| dc.date.available | 2021-05-13T08:35:52Z | - |
| dc.date.copyright | 2017-02-20 | |
| dc.date.issued | 2016 | |
| dc.date.submitted | 2017-02-16 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/3672 | - |
| dc.description.abstract | 連續型視線輸入解決了以視線輸入時眼球顫動及誤觸操作的問題。過去的研究多將焦點集中在掃視式的視線輸入,透過將筆畫操作拆解為階段直線運動的方式來達到增進操作效率的目的,但為這種機制設計的視覺導引系統難以直接套用在任意的筆畫手勢上。因此我們結合連續型視線追蹤與動態視覺導引的概念,提出一個專為連續型視線輸入設計的漸進式動態視覺導引系統 GazeBeacon,能在操作過程中於視線輸入端點周圍持續地提供即時的視覺回饋與前饋。在前導實驗中我們發現兩個問題。我們利用套用平滑濾波器及輸入點重新取樣的兩個方式減緩眼球顫動對長度計算帶來的影響,並在路徑導引前方增加一視線集中焦點以解決使用者錯誤判讀引導的問題,設計一個基於不同筆畫組成類型探討的實驗來驗證這個做法。最後我們將 GazeBeacon 與傳統快捷查詢表做操作時間、操作辨識率與操作正確率的比較,發現雖然前者讓使用者花費更多時間,但也顯著地降低了在連續型視線輸入介面上的操作失誤率。 | zh_TW |
| dc.description.abstract | Gaze-gesture interaction solves the jitter and Midas Touch problems of gaze-controlled interfaces. Previous work has mainly focused on saccadic gaze gestures, improving efficiency by dividing gestures into straight-line segments. However, the guidance techniques designed for that mechanism cannot be applied directly to arbitrary graffiti-like gestures. Combining the concepts of smooth pursuit and dynamic guides, we propose GazeBeacon, a gradual visual guidance system designed for gaze-gesture interaction. It continuously provides real-time feedback and feedforward graphical cues around the gaze point as the interaction progresses.
Two issues were found in our pilot study: miscalculation and misestimation. We mitigated the miscalculation problem by applying a smoothing filter to the gaze points and resampling the path before length calculation, and solved the misestimation problem by adding a focus point at the end of the guidance path. These methods were verified in a user study based on different gesture primitives. Finally, we compared the completion time, recognition rate, and selection accuracy of GazeBeacon against a traditional crib-sheet guide. The results show that although GazeBeacon requires more execution time, it significantly improves the accuracy of menu selections on gaze-gesture interfaces. | en |
| dc.description.provenance | Made available in DSpace on 2021-05-13T08:35:52Z (GMT). No. of bitstreams: 1 ntu-105-R03725028-1.pdf: 20810971 bytes, checksum: 3a264a606aface849f37fa3bf475012e (MD5) Previous issue date: 2016 | en |
| dc.description.tableofcontents | Chapter 1 Introduction
1.1 Motivation
1.2 Proposed Method
1.3 Contribution
1.4 Organization
Chapter 2 Related Work
2.1 Visual Guidance for Gesture-Based Interaction
2.2 Visual Cues for Gaze Gesture Interface
Chapter 3 Design Space
3.1 Pilot Study: Understanding OctoPocus
3.2 Solving the Miscalculation Problem of Path Length
3.2.1 Gaze Point Smoothing Filter
3.2.2 Resample for Length Calculation
3.3 Solving the Misestimation Problem of Path Guidance
3.3.1 Modifications for Gliding Interaction
3.3.2 Task and Procedure
3.3.3 Participants
3.3.4 Result and Discussion
Chapter 4 GazeBeacon
4.1 GazeBeacon
4.1.1 Guidance Design
4.1.2 Possible Applications
4.2 Evaluation
4.2.1 Task and Procedure
4.2.2 Participants
4.2.3 Result and Discussion
Chapter 5 Conclusion and Future Work
5.1 Conclusion
5.2 Future Work
5.2.1 Continuous vs. Discrete Gaze Gestures
5.2.2 Modifications on the Recognizer
5.2.3 General vs. Specific Guidance Design
Appendix A Mean Paths of the Gesture Input
Bibliography | |
| dc.language.iso | en | |
| dc.subject | 動態視覺導引 | zh_TW |
| dc.subject | 連續型視線輸入 | zh_TW |
| dc.subject | 連續型視線追蹤 | zh_TW |
| dc.subject | Dynamic guide | en |
| dc.subject | Gaze gesture | en |
| dc.subject | Smooth pursuit | en |
| dc.title | 利用漸進式視覺引導以完成連續型視線輸入 | zh_TW |
| dc.title | GazeBeacon: Enabling Smooth Pursuit Gaze Gestures by Gradual Visual Guidance | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 105-1 | |
| dc.description.degree | Master | |
| dc.contributor.oralexamcommittee | 梁容豪,林文杰,朱宏國 | |
| dc.subject.keyword | 連續型視線輸入,連續型視線追蹤,動態視覺導引 | zh_TW |
| dc.subject.keyword | Gaze gesture, Smooth pursuit, Dynamic guide | en |
| dc.relation.page | 46 | |
| dc.identifier.doi | 10.6342/NTU201700648 | |
| dc.rights.note | Authorized for release (open access worldwide) | |
| dc.date.accepted | 2017-02-16 | |
| dc.contributor.author-college | 管理學院 | zh_TW |
| dc.contributor.author-dept | 資訊管理學研究所 | zh_TW |
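The abstract describes mitigating gaze jitter by smoothing the raw gaze samples and resampling the path to evenly spaced points before computing its length. The record does not specify the exact filter or resampler, so the sketch below is hypothetical: a one-pole low-pass filter stands in for the smoothing step, and a $1-recognizer-style uniform resampler for the resampling step.

```python
import math

def smooth(points, alpha=0.3):
    """One-pole low-pass filter over a sequence of (x, y) gaze samples.

    High-frequency jitter adds tiny zig-zags that inflate the measured
    path length; exponential smoothing damps that noise first.
    """
    if not points:
        return []
    out = [points[0]]
    for x, y in points[1:]:
        px, py = out[-1]
        out.append((px + alpha * (x - px), py + alpha * (y - py)))
    return out

def path_length(points):
    """Total arc length of a polyline."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def resample(points, n=32):
    """Resample a polyline to n points evenly spaced along its arc length."""
    interval = path_length(points) / (n - 1)
    pts = list(points)
    out = [pts[0]]
    d_accum = 0.0
    i = 1
    while i < len(pts):
        a, b = pts[i - 1], pts[i]
        d = math.dist(a, b)
        if d > 0 and d_accum + d >= interval:
            # Place a new point at the exact spot where the accumulated
            # distance reaches one interval, then continue from there.
            t = (interval - d_accum) / d
            q = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            out.append(q)
            pts.insert(i, q)
            d_accum = 0.0
        else:
            d_accum += d
        i += 1
    if len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out
```

Measuring the length of `resample(smooth(raw_points))` rather than the raw polyline gives a far more stable progress estimate along a guidance path, since the raw samples oscillate around the true gaze trajectory.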
| Appears in Collections: | 資訊管理學系 |
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-105-1.pdf | 20.32 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
