Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2554
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 葉怡玉(Yei-Yu Yeh) | |
dc.contributor.author | Yi-Hsuan Lee | en |
dc.contributor.author | 李宜軒 | zh_TW |
dc.date.accessioned | 2021-05-13T06:41:57Z | - |
dc.date.available | 2017-06-12 | |
dc.date.available | 2021-05-13T06:41:57Z | - |
dc.date.copyright | 2017-06-12 | |
dc.date.issued | 2017 | |
dc.date.submitted | 2017-05-14 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/2554 | - |
dc.description.abstract | 閱讀第二語言的文章時,讀者常需即時的翻譯,以便理解篇章內容。針對此需求,過去研究試圖以眼球追蹤來反映讀者的心智運作,並由此推算讀者當下的需求,俾利閱讀流暢,增進體驗。然而,過去的即時翻譯研究多以固定閾值做為是否提供協助的準則,並未考慮閱讀行為的個別差異。此外,研究多以主觀報告為評估標準,缺乏客觀的資料佐證。是故本研究模擬視線感知翻譯工具原型,參照心理學於眼動與閱讀的發現,採用眼動控制數學模型中詞彙特性與凝視時間的計算邏輯,分別以停滯時間與回視眼跳為顯示時機的計算方式,並提供中文翻譯與無意義刺激,以觀察因應閱讀偏差而觸發的註解,如何影響個體的眼動表現、認知負荷及閱讀策略。結果顯示系統主動呈現之「中文翻譯」顯著增進閱讀理解、降低工作負荷,卻也同時改變使用者的閱讀策略,包括延長凝視時間、減少略視現象、增加再閱讀的比例等。再者,目標詞的詞頻高低亦與其是否會被凝視、被反向閱讀有關。根據整體與區域性眼動指標,基於個人化的停滯時間所提供的註解對於詞彙處理歷程具明顯助益。本研究歸納視線互動之設計指南供後人參考,而此實驗結果能否應用至真實場域或各式載具上,尚待未來研究進一步釐清。 | zh_TW |
dc.description.abstract | Non-native speakers often need instant translation to comprehend documents in a foreign language. To meet this need, prior research has attempted to map users’ eye movements onto cognitive processes in order to recognize their intentions in real time and improve the reading experience. However, recent research on gaze-based interaction has mostly used a fixed dwell time as the threshold for deciding whether to provide assistance, an approach that ignores individual differences in reading behavior. Moreover, the benefits of gaze-contingent feedback have been assessed mainly through subjective evaluation, so their impact on objective performance remains unexamined. This study developed a gaze-aware instant-translation prototype grounded in the psychology of eye movements in reading, incorporating a mathematical model of eye-movement control that computes fixation durations as a function of lexical difficulty. The system automatically presented Chinese translations or a meaningless X mask, triggered by dwell time or by regressive saccades, to investigate how annotations triggered by deviations in the reading pattern affect a user’s eye movements, cognitive load, and comprehension accuracy. The results showed that proactive translations improved reading comprehension and reduced cognitive load. Instant annotations also changed users’ reading strategies: fixation durations were prolonged, the skipping rate decreased, and the regression rate increased. Furthermore, the frequency of the target words was related to their fixation patterns. According to global and local eye-movement measures, annotations provided on the basis of personalized dwell-time thresholds were significantly helpful for word processing. Considering the methodological issues and the observed results, this thesis proposes design guidelines and recommendations for developing gaze-aware reading applications. Future research could investigate whether personalized gaze-aware annotation can be applied in real-world settings or adapted to different devices. | en |
dc.description.provenance | Made available in DSpace on 2021-05-13T06:41:57Z (GMT). No. of bitstreams: 1 ntu-106-R04227117-1.pdf: 6786542 bytes, checksum: b98b092b0684c4004cc827d41d333c5b (MD5) Previous issue date: 2017 | en |
dc.description.tableofcontents | Chapter 1: Introduction
1.1 Research Motivation
1.2 Research Objectives
Chapter 2: Literature Review
2.1 Characteristics and Limitations of Eye Movements
2.2 Development and Applications of Gaze Interaction
2.3 Gaze-aware Systems
2.4 Eye Movements in Reading
2.5 Mathematical Models of Eye-movement Control
2.6 Pupil Size and Cognitive Load
Chapter 3: Pretest: User Interviews
Chapter 4: Method and Experimental Design
4.1 Experimental Preparation
4.2 Measures and Definitions
Chapter 5: Results
5.1 Use of the Gaze-aware System
5.2 Eye-movement Performance in Passage Reading
Chapter 6: Discussion
6.1 General Discussion
6.2 Design Guidelines
6.3 Limitations and Suggestions
References
Appendices
Appendix 1: Sample Experimental Materials
Appendix 2: Language Experience and Proficiency Questionnaire (LEAP-Q)
Appendix 3: System Usability Scale (SUS)
Appendix 4: Product Reaction Cards
Appendix 5: Summary Tables of Main Effects | |
dc.language.iso | zh-TW | |
dc.title | 視線感知即時翻譯工具的開發與評估 | zh_TW |
dc.title | The Development and Evaluation of a Gaze-aware Real-time Translation Tool | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | 碩士 (Master's) | |
dc.contributor.oralexamcommittee | 汪曼穎(Man-Ying Wang),蔡介立(Jie-Li Tsai),顏妙璇(Miao-Hsuan Yen) | |
dc.subject.keyword | 即時翻譯,注意力感知系統,視線互動,眼球追蹤,眼動誘發回饋, | zh_TW |
dc.subject.keyword | Attention Aware System, eye tracking, gaze-based interaction, gaze-contingent feedback, real-time translation | en |
dc.relation.page | 87 | |
dc.identifier.doi | 10.6342/NTU201700806 | |
dc.rights.note | Authorized for release (open access worldwide) | |
dc.date.accepted | 2017-05-15 | |
dc.contributor.author-college | 理學院 | zh_TW |
dc.contributor.author-dept | 心理學研究所 | zh_TW |
Appears in Collections: | Department of Psychology
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-106-1.pdf | 6.63 MB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.