Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51639

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳彥仰 | |
| dc.contributor.author | Han-Yu Wang | en |
| dc.contributor.author | 王瀚宇 | zh_TW |
| dc.date.accessioned | 2021-06-15T13:42:22Z | - |
| dc.date.available | 2016-02-15 | |
| dc.date.copyright | 2016-02-15 | |
| dc.date.issued | 2015 | |
| dc.date.submitted | 2015-12-29 | |
| dc.identifier.citation | [1] R. Aigner, D. Wigdor, H. Benko, M. Haller, D. Lindbauer, A. Ion, S. Zhao, and J. Koh. Understanding mid-air hand gestures: A study of human preferences in usage of gesture types for HCI. Microsoft Research TechReport MSR-TR-2012-111, 2012.
[2] L. Anthony, Q. Brown, J. Nias, B. Tate, and S. Mohan. Interaction and recognition challenges in interpreting children's touch and gesture input on mobile devices. ITS'12, pages 225–234. ACM.
[3] App Annie.
[4] T. Baba, T. Ushiama, R. Tsuruno, and K. Tomimatsu. Video game that uses skin contact as controller input. SIGGRAPH'07. ACM.
[5] A. Biskupski, A. R. Fender, T. M. Feuchtner, M. Karsten, and J. D. Willaredt. Drunken Ed: A balance game for public large screen displays. CHI EA'14, pages 289–292. ACM.
[6] L. Chan, C.-H. Hsieh, Y.-L. Chen, S. Yang, D.-Y. Huang, R.-H. Liang, and B.-Y. Chen. Cyclops: Wearable and single-piece full-body gesture input device. CHI'15. ACM.
[7] L. Chan, R.-H. Liang, M.-C. Tsai, K.-Y. Cheng, C.-H. Su, M. Y. Chen, W.-H. Cheng, and B.-Y. Chen. FingerPad: Private and subtle interaction using fingertips. UIST'13. ACM.
[8] S. Christian, J. Alves, A. Ferreira, D. Jesus, R. Freitas, and N. Vieira. Volcano salvation: Interaction through gesture and head tracking. CHI EA'14, pages 297–300. ACM.
[9] A. Colaco, A. Kirmani, H. S. Yang, N.-W. Gong, C. Schmandt, and V. K. Goyal. Mime: Compact, low power 3D gesture sensing for interaction with head mounted displays. UIST'13, pages 227–236. ACM.
[10] Dollar N Multistroke Recognizer.
[11] J. Epps, S. Lichman, and M. Wu. A study of hand shape use in tabletop gesture interaction. CHI'06, pages 748–753.
[12] Epson BT-100 Specs.
[13] Essential Facts About The Computer And Video Game Industry.
[14] GameStop.
[15] Google Glass wiki.
[16] D. Grijincu, M. A. Nacenta, and P. O. Kristensson. User-defined interface gestures: Dataset and analysis. ITS'14, pages 25–34. ACM.
[17] S. Gustafson, C. Holz, and P. Baudisch. Imaginary Phone: Learning imaginary interfaces by transferring spatial memory from a familiar device. UIST'11, pages 283–292. ACM.
[18] S. Harada, J. O. Wobbrock, and J. A. Landay. Voice Games: Investigation into the use of non-speech voice input for making computer games more accessible. INTERACT'11, pages 11–29. Springer-Verlag.
[19] C. Harrison, H. Benko, and A. D. Wilson. OmniTouch: Wearable multitouch interaction everywhere. UIST'11, pages 441–450. ACM.
[20] C. Harrison, D. Tan, and D. Morris. Skinput: Appropriating the body as an input surface. CHI'10, pages 453–462. ACM.
[21] C.-Y. Hsu, Y.-C. Tung, H.-Y. Wang, S. Chyou, J.-W. Lin, and M. Y. Chen. Glass Shooter: Exploring first-person shooter game control with Google Glass. ICMI'14, pages 70–71. ACM.
[22] L. Jing, Z. Cheng, Y. Zhou, J. Wang, and T. Huang. Magic Ring: A self-contained gesture input device on finger. MUM'13, pages 39:1–39:4. ACM.
[23] Cohen's kappa - Wikipedia, the free encyclopedia.
[24] M. Karam et al. A taxonomy of gestures in human computer interactions. 2005.
[25] D. Kim, O. Hilliges, S. Izadi, A. D. Butler, J. Chen, I. Oikonomidis, and P. Olivier. Digits: Freehand 3D interactions anywhere using a wrist-worn gloveless sensor. UIST'12, pages 167–176. ACM.
[26] H.-N. Liang, C. Williams, M. Semegen, W. Stuerzlinger, and P. Irani. User-defined surface+motion gestures for 3D manipulation of objects at a distance through a mobile device. APCHI'12, pages 299–308. ACM.
[27] Google Glass Mini Games.
[28] C. S. Montero, J. Alexander, M. T. Marshall, and S. Subramanian. Would you do that?: Understanding social acceptance of gestural interfaces. MobileHCI'10, pages 275–278. ACM.
[29] M. R. Morris. Web on the wall: Insights from a multimodal interaction elicitation study. ITS'12, pages 95–104. ACM.
[30] L. E. Nacke, M. Kalyn, C. Lough, and R. L. Mandryk. Biofeedback game design: Using direct and indirect physiological control to enhance game interaction. CHI'11, pages 103–112. ACM.
[31] M. Nielsen, M. Storring, T. B. Moeslund, and E. Granum. A procedure for developing intuitive and ergonomic gesture interfaces for HCI. In Gesture-Based Communication in Human-Computer Interaction, pages 409–420. Springer, 2004.
[32] T. Piumsomboon, A. Clark, M. Billinghurst, and A. Cockburn. User-defined gestures for augmented reality. CHI'13, pages 955–960, New York, NY, USA. ACM.
[33] D. Pyryeskin, M. Hancock, and J. Hoey. Comparing elicited gestures to designer-created gestures for selection above a multitouch surface. ITS'12, pages 1–10. ACM.
[34] S. Reis. Expanding the magic circle in pervasive casual play. ICEC'12, pages 486–489. Springer-Verlag.
[35] M. Serrano, B. M. Ens, and P. P. Irani. Exploring the use of hand-to-face input for interacting with head-worn displays. CHI'14, pages 3181–3190. ACM.
[36] A. J. Sporka, S. H. Kurniawan, M. Mahmud, and P. Slavik. Non-speech input and speech recognition for real-time control of computer games. ASSETS'06, pages 213–220. ACM.
[37] Steam.
[38] Top 90 Casual Games List. Analyzed at 2014-08-14.
[39] VGChartz.
[40] S. Vickers, H. Istance, and A. Hyrskykari. Performing locomotion tasks in immersive computer games with an adapted eye-tracking interface. ACM Trans. Access. Comput., 5(1):2:1–2:33, Sept. 2013.
[41] J. R. Williamson, S. Brewster, and R. Vennelakanti. Mo!Games: Evaluating mobile gestures in the wild. ICMI'13, pages 173–180. ACM.
[42] J. O. Wobbrock, H. H. Aung, B. Rothrock, and B. A. Myers. Maximizing the guessability of symbolic input. CHI EA'05, pages 1869–1872. ACM.
[43] J. O. Wobbrock, M. R. Morris, and A. D. Wilson. User-defined gestures for surface computing. CHI'09, pages 1083–1092. ACM. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/51639 | - |
| dc.description.abstract | 智慧型眼鏡(如:Google Glass)與傳統的主機或行動遊戲平台不同,具備隨時都可遊玩的特性,有創造無處不在的遊戲體驗的潛質。然而現行的智慧型眼鏡遊戲操作方式侷限在現有的軟硬體感應科技上。為了瞭解使用者真正想要的設計方向,在本篇論文中我們跳脫現行科技限制,探索了使用者在公眾場合中喜歡的智慧型眼鏡遊戲操作方式。
我們找了二十四名受測者執行了一場使用者定義遊戲操作實驗 (User-Defined Game Input Study),實驗範圍囊括了十七種常見的遊戲操作、三種類別的人機互動方式與兩種不同形式的智慧型眼鏡,最後共執行了兩千四百四十八次的遊戲操作嘗試。我們的結果顯示,相較於手持操作器,使用者顯著地較喜歡使用非觸碰式的操作(如:空中手勢)。在非手持的觸碰式操作中,使用者最喜歡的互動操作位置是手掌而不是穿戴式裝置 (51% vs 20%)。除此之外還發現在公眾場合中使用者會考慮社會認可的問題 (Issue of Social Acceptance),所以較喜歡使用不引人注意的操作方式,導致在使用空中手勢時使用者喜歡的操作範圍是在軀體 (Torso) 前方而非面前 (63% vs 37%)。 | zh_TW |
| dc.description.abstract | Smart glasses, such as Google Glass, provide always-available displays not offered by console and mobile gaming devices, and could potentially offer a pervasive gaming experience. However, research on input for games on smart glasses has been constrained by the available sensors to date. To help inform design directions, this paper explores user-defined game input for smart glasses beyond the capabilities of current sensors, and focuses on the interaction in public settings. We conducted a user-defined input study with 24 participants, each performing 17 common game control tasks using 3 classes of interaction and 2 form factors of smart glasses, for a total of 2448 trials. Results show that users significantly preferred non-touch and non-handheld interaction over using handheld input devices, such as in-air gestures. Also, for touch input without handheld devices, users preferred interacting with their palms over wearable devices (51% vs 20%). In addition, users preferred interactions that are less noticeable due to concerns with social acceptance, and preferred in-air gestures in front of the torso rather than in front of the face (63% vs 37%). | en |
| dc.description.provenance | Made available in DSpace on 2021-06-15T13:42:22Z (GMT). No. of bitstreams: 1 ntu-104-R02944002-1.pdf: 3588148 bytes, checksum: e9df0b36e90b66f048dcf72fb665ba75 (MD5) Previous issue date: 2015 | en |
| dc.description.tableofcontents | 口試委員會審定書 ii
誌謝 iii
摘要 iv
Abstract v
1 Introduction 1
2 Related Work 5
2.1 Game Input 5
2.2 Mobile Input Technology 5
2.3 Gestures in HCI 5
2.4 User Elicitation Studies 6
3 Developing a User-Defined Game Input Set 8
3.1 Overview 8
3.2 Interaction Methods 9
3.3 Game Tasks 9
3.4 Form Factor of Glasses 10
3.5 Participants 11
3.6 Environment 11
3.7 Procedure 11
4 Results 13
4.1 Preference Between Interaction Methods 13
4.2 Behavior with Different Form Factor of Glasses 14
4.3 Classification of Game Inputs 14
4.3.1 Taxonomy of Game Input 14
4.3.2 Taxonometric Breakdown of Input Actions in our Data 17
4.4 User-Defined Game Input Set 18
4.4.1 Agreement 18
4.4.2 Properties of the User-Defined Game Input Set 20
4.4.3 Taxonometric Breakdown of User-Defined Game Inputs 22
4.5 Mental Model Observations 22
4.5.1 Social Acceptance and Input Area 22
4.5.2 Bias by Existing Game Input 23
4.5.3 Identical Gestures on Different Surfaces 23
5 Discussion 26
5.1 Implications for Touch Input Technology 26
5.2 Implications for Non-Touch Interaction Technology 26
5.3 Implications for Game Design 27
5.4 Contribution to Non-gaming Scenarios 27
5.5 Limitation and Next Steps 28
6 Conclusion 29
Bibliography 30 | |
| dc.language.iso | en | |
| dc.subject | 公共場合 | zh_TW |
| dc.subject | 人機互動 | zh_TW |
| dc.subject | 智慧型眼鏡 | zh_TW |
| dc.subject | 虛擬實境 | zh_TW |
| dc.subject | Smart Glasses | en |
| dc.subject | Public Space | en |
| dc.subject | Virtual Reality | en |
| dc.subject | HCI | en |
| dc.title | 智慧型眼鏡在公眾場合中的使用者定義遊戲操作 | zh_TW |
| dc.title | User-Defined Game Input for Smart Glasses in Public Space | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 104-1 | |
| dc.description.degree | 碩士 | |
| dc.contributor.oralexamcommittee | 王浩全,陳顥齡,余能豪 | |
| dc.subject.keyword | 人機互動,智慧型眼鏡,虛擬實境,公共場合, | zh_TW |
| dc.subject.keyword | HCI,Smart Glasses,Virtual Reality,Public Space, | en |
| dc.relation.page | 34 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2015-12-29 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
| Appears in Collections: | 資訊網路與多媒體研究所 |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-104-1.pdf (Restricted Access) | 3.5 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
