Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/62514
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳炳宇(Bing-Yu Chen) | |
dc.contributor.author | Kai-Yin Cheng | en |
dc.contributor.author | 鄭鎧尹 | zh_TW |
dc.date.accessioned | 2021-06-16T16:03:37Z | - |
dc.date.available | 2013-07-08 | |
dc.date.copyright | 2013-07-08 | |
dc.date.issued | 2013 | |
dc.date.submitted | 2013-07-01 | |
dc.identifier.citation | [1] Weiser, M. The Computer for the 21st Century. Scientific American 1991, Vol. 265, No. 3, pp. 94–104.
[2] Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B. and Yarin, P. Ambient displays: turning architectural space into an interface between people and digital information. In Proceedings of the International Workshop on Cooperative Buildings (CoBuild), 1998.
[3] Kruger, J. B. A picture of Generation Y. PMA Magazine, January 2009, pp. 32–34.
[4] Rodden, K. and Wood, K. R. How do people manage their digital photographs? In Proceedings of the 2003 ACM CHI, pp. 409–416.
[5] Gemmell, J., Bell, G., Lueder, R., Drucker, S. and Wong, C. MyLifeBits: fulfilling the Memex vision. In Proceedings of the 2001 ACM Multimedia, pp. 235–238.
[6] Apple QuickTime Player, Apple Inc. http://www.apple.com/quicktime/
[7] CyberLink PowerDVD, CyberLink Corporation. http://www.cyberlink.com/multi/products/main_1_ENU.html
[8] Microsoft Windows Media, Microsoft Corporation. http://www.microsoft.com/windows/windowsmedia/default.mspx
[9] RealNetworks RealOne Player. http://www.real.com/
[10] Google, Inc. Google Picasa. http://picasa.google.com/
[11] ACD Systems. ACDSee Pro. http://www.acdsystems.com/
[12] Microsoft Corporation. Microsoft Photo Story. http://www.microsoft.com/windowsxp/using/digitalphotography/photostory/
[13] Chen, J.-C., Chu, W.-T., Kuo, J.-H., Weng, C.-Y. and Wu, J.-L. Tiling slideshow. In Proceedings of the 2006 ACM Multimedia, pp. 25–34.
[14] Rother, C., Bordeaux, L., Hamadi, Y. and Blake, A. AutoCollage. In Proceedings of the 2006 ACM SIGGRAPH, pp. 847–852.
[15] Bederson, B. B. PhotoMesa: a zoomable image browser using quantum treemaps and bubblemaps. In Proceedings of the 2001 ACM Symposium on User Interface Software and Technology, pp. 71–80.
[16] Huynh, D. F., Drucker, S. M., Baudisch, P. and Wong, C. Time Quilt: scaling up zoomable photo browsers for large, unstructured photo collections. In ACM CHI 2005 Extended Abstracts, pp. 1937–1940.
[17] Platt, J. C. AutoAlbum: clustering digital photographs using probabilistic model merging. In Proceedings of the 2000 IEEE Workshop on Content-Based Access of Image and Video Libraries, p. 96.
[18] Platt, J. C., Czerwinski, M. and Field, B. A. PhotoTOC: automatic clustering for browsing personal photographs. In Proceedings of the 2003 IEEE Pacific-Rim Conference on Multimedia, Vol. 1, pp. 6–10.
[19] Microsoft Corporation. Microsoft Windows Vista. http://www.microsoft.com/windows/windows-vista/
[20] Google, Inc. Google Desktop. http://www.google.com/desktop/
[21] IncrediMail Ltd. PhotoJoy. http://www.photojoy.com/
[22] Truong, B. T. and Venkatesh, S. Video abstraction: a systematic review and classification. ACM Transactions on Multimedia Computing, Communications and Applications 2007, Vol. 3, Issue 1, Article No. 3.
[23] Liu, X., Mei, T., Hua, X.-S., Yang, B. and Zhou, H.-Q. Video collage. In Proceedings of the 2007 ACM Multimedia, pp. 461–462.
[24] Chiu, P., Girgensohn, A. and Liu, Q. Stained-glass visualization for highly condensed video summaries. In Proceedings of the 2004 IEEE International Conference on Multimedia and Expo, pp. 2059–2062.
[25] Uchihashi, S., Foote, J., Girgensohn, A. and Boreczky, J. Video Manga: generating semantically meaningful video summaries. In Proceedings of the 1999 ACM Multimedia, pp. 383–392.
[26] Peker, K. A. and Divakaran, A. An extended framework for adaptive playback-based video summarization. In Proceedings of the 2003 SPIE Internet Multimedia Management Systems IV, Vol. 5242, pp. 26–33.
[27] Peker, K. A., Divakaran, A. and Sun, H. Constant pace skimming and temporal sub-sampling of video using motion activity. In Proceedings of the 2001 IEEE International Conference on Image Processing, pp. 414–417.
[28] Lie, W.-N. and Hsu, K.-C. Video summarization based on semantic feature analysis and user preference. In Proceedings of the 2008 IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing, pp. 486–491.
[29] Itti, L., Koch, C. and Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 1998, Vol. 20, Issue 11, pp. 1254–1259.
[30] Ma, Y.-F. and Zhang, H.-J. Contrast-based image attention analysis by using fuzzy growing. In Proceedings of the 2003 ACM Multimedia, pp. 374–381.
[31] Liu, S., Chia, L.-T. and Rajan, D. Attention region selection with information from professional digital camera. In Proceedings of the 2005 ACM Multimedia, pp. 391–394.
[32] Sinnett, S., Costa, A. and Soto-Faraco, S. Manipulating inattentional blindness within and across sensory modalities. Quarterly Journal of Experimental Psychology 2006, Vol. 59, pp. 1425–1442.
[33] Chu, W.-T. and Wu, J.-L. Explicit semantic events detection and development of realistic applications for broadcasting baseball videos. Multimedia Tools and Applications 2008, Vol. 38, Issue 1, pp. 27–50.
[34] Tien, M.-C., Wang, Y.-T., Chou, C.-W., Hsieh, K.-Y., Chu, W.-T. and Wu, J.-L. Event detection in tennis matches based on video data mining. In Proceedings of the 2008 IEEE International Conference on Multimedia and Expo, pp. 1477–1480.
[35] Cheng, W.-H., Chuang, Y.-Y., Lin, Y.-T., Hsieh, C.-C., Fang, S.-Y., Chen, B.-Y. and Wu, J.-L. Semantic analysis for automatic event recognition and segmentation of wedding ceremony videos. IEEE Transactions on Circuits and Systems for Video Technology 2008, Vol. 18, Issue 11, pp. 1639–1650.
[36] Chen, H.-W., Kuo, J.-H., Chu, W.-T. and Wu, J.-L. Action movies segmentation and summarization based on tempo analysis. In Proceedings of the 2004 ACM SIGMM International Workshop on Multimedia Information Retrieval, pp. 251–258.
[37] Sundaram, H. and Chang, S.-F. Video skims: taxonomies and an optimal generation framework. In Proceedings of the 2002 IEEE International Conference on Image Processing, pp. 21–24.
[38] Carvey, A., Gouldstone, J., Vedurumudi, P., Whiton, A. and Ishii, H. Rubber shark as user interface. In ACM CHI 2006 Extended Abstracts, pp. 634–639.
[39] Ljungstrand, P., Redstrom, J. and Holmquist, L. E. WebStickers: using physical tokens to access, manage and share bookmarks to the web. In Proceedings of DARE 2000 on Designing Augmented Reality Environments, pp. 23–31.
[40] Siio, I. InfoBinder: a pointing device for a virtual desktop system. In Proceedings of the 1995 HCI International Conference, Vol. 2, pp. 261–264.
[41] Mistry, P., Kuroki, T. and Chang, C. TaPuMa: tangible public map for information acquirement through the things we carry. In Proceedings of the 2008 International Conference on Ambient Media and Systems, pp. 1–5.
[42] Rosenberg, I. and Perlin, K. The UnMousePad: an interpolating multi-touch force-sensing input pad. In Proceedings of the 2009 ACM SIGGRAPH, Article No. 65.
[43] Microsoft Surface. http://www.microsoft.com/surface/
[44] Han, J. Y. Low-cost multi-touch sensing through frustrated total internal reflection. In Proceedings of the 2005 ACM Symposium on User Interface Software and Technology, pp. 115–118.
[45] Violet Mir:ror. http://en.wikipedia.org/wiki/Mir:ror
[46] Hung, Y.-P., Yang, Y.-S., Chen, Y.-S., Hsieh, I.-B. and Fuh, C.-S. Free-hand pointer by use of an active stereo vision system. In Proceedings of the 1998 IEEE ICPR, Vol. 2, pp. 1244–1246.
[47] Baudel, T. and Beaudouin-Lafon, M. Charade: remote control of objects using free-hand gestures. Communications of the ACM 1993, Vol. 36, Issue 7, pp. 28–35.
[48] Vogel, D. and Balakrishnan, R. Distant freehand pointing and clicking on very large, high resolution displays. In Proceedings of the 2005 ACM Symposium on User Interface Software and Technology, pp. 33–42.
[49] Ishii, K., Zhao, S., Inami, M., Igarashi, T. and Imai, M. Designing laser gesture interface for robot control. In Proceedings of INTERACT 2009, pp. 479–492.
[50] Nickel, K. and Stiefelhagen, R. Pointing gesture recognition based on 3D-tracking of face, hands and head orientation. In Proceedings of ICMI 2003, pp. 140–146.
[51] Mistry, P., Maes, P. and Chang, L. WUW - Wear Ur World: a wearable gestural interface. In ACM CHI 2009 Extended Abstracts, pp. 4111–4116.
[52] Antoniac, P. and Pulli, P. Marisil - mobile user interface framework for virtual enterprise. In Proceedings of ICE 2001, pp. 171–180.
[53] Tamaki, E., Miyaki, T. and Rekimoto, J. Brainy Hand: an ear-worn hand gesture interaction device. In ACM CHI 2009 Extended Abstracts, pp. 4255–4260.
[54] Rekimoto, J. GestureWrist and GesturePad: unobtrusive wearable interaction devices. In Proceedings of ISWC 2001, pp. 21–27.
[55] Saponas, T. S., Tan, D. S., Morris, D., Balakrishnan, R., Turner, J. and Landay, J. A. Enabling always-available input with muscle-computer interfaces. In Proceedings of the 2009 ACM Symposium on User Interface Software and Technology, pp. 167–176.
[56] Harrison, C., Tan, D. and Morris, D. Skinput: appropriating the body as an input surface. In Proceedings of the 2010 ACM CHI, pp. 453–462.
[57] Microsoft Kinect. http://www.xbox.com/kinect
[58] Balabanović, M., Chu, L. L. and Wolff, G. J. Storytelling with digital photographs. In Proceedings of the 2000 ACM CHI, pp. 564–571.
[59] Norman, D. A. Emotional Design: Why We Love (Or Hate) Everyday Things. Basic Books, 2004.
[60] Teevan, J., Cutrell, E., Fisher, D., Drucker, S. M., Ramos, G., Andre, P. and Hu, C. Visual snippets: summarizing web pages for search and revisitation. In Proceedings of the 2009 ACM CHI, pp. 2023–2032.
[61] Willett, W., Heer, J. and Agrawala, M. Scented widgets: improving navigation cues with embedded visualizations. In Proceedings of the 2007 ACM CHI, pp. 51–58.
[62] Hua, X.-S., Lu, L. and Zhang, H.-J. Automatically converting photographic series into video. In Proceedings of the 2004 ACM Multimedia, pp. 708–715.
[63] Chen, J.-C., Chu, W.-T., Kuo, J.-H., Weng, C.-Y. and Wu, J.-L. Tiling slideshow. In Proceedings of the 2006 ACM Multimedia, pp. 25–34.
[64] Rother, C., Bordeaux, L., Hamadi, Y. and Blake, A. AutoCollage. In Proceedings of the 2006 ACM SIGGRAPH, Vol. 25, Issue 3, pp. 847–852.
[65] Weiser, M. The future of ubiquitous computing on campus. Communications of the ACM 1998, Vol. 41, Issue 1, pp. 41–42.
[66] Mark, G., Gudith, D. and Klocke, U. The cost of interrupted work: more speed and stress. In Proceedings of the 2008 ACM CHI, pp. 107–110.
[67] Graham, A., Garcia-Molina, H., Paepcke, A. and Winograd, T. Time as essence for photo browsing through personal digital libraries. In Proceedings of the 2002 ACM/IEEE-CS Joint Conference on Digital Libraries, pp. 326–335.
[68] Lienhart, R. Comparison of automatic shot boundary detection algorithms. In Proceedings of the 1999 SPIE Storage and Retrieval for Image and Video Databases VII, Vol. 3656, pp. 290–301.
[69] Lucas, B. D. and Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 1981 Imaging Understanding Workshop, pp. 121–130.
[70] CyberLink MagicSports, CyberLink Corporation. http://www.cyberlink.com/multi/products/main_75_ENU.html
[71] Rodden, K. and Wood, K. R. How do people manage their digital photographs? In Proceedings of the 2003 ACM CHI, pp. 409–416.
[72] Kirk, D., Sellen, A., Rother, C. and Wood, K. Understanding photowork. In Proceedings of the 2006 ACM CHI, pp. 761–770.
[73] Wagenaar, W. My memory: a study of autobiographical memory over six years. Cognitive Psychology 1986, Vol. 18, pp. 225–252.
[74] Ishii, H. The tangible user interface and its evolution. Communications of the ACM 2008, Vol. 51, Issue 6, pp. 32–36.
[75] Ishii, H. and Ullmer, B. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the 1997 ACM CHI, pp. 234–241.
[76] Fitzmaurice, G. W., Ishii, H. and Buxton, W. A. S. Bricks: laying the foundations for graspable user interfaces. In Proceedings of the 1995 ACM CHI, pp. 442–449.
[77] Streitz, N., Prante, T., Müller-Tomfelde, C., Tandler, P. and Magerkurth, C. Roomware: the second generation. In ACM CHI 2002 Extended Abstracts, pp. 506–507.
[78] Norman, D. A. Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. 1993.
[79] Iqbal, S. T. and Horvitz, E. Disruption and recovery of computing tasks: field study, analysis, and directions. In Proceedings of the 2007 ACM CHI, pp. 677–686.
[80] Kaltenbrunner, M. and Bencina, R. reacTIVision: a computer-vision framework for table-based tangible interaction. In Proceedings of the 2007 International Conference on Tangible and Embedded Interaction, pp. 69–74.
[81] Kirsh, D. The intelligent use of space. Artificial Intelligence 1995, Vol. 73(1-2), pp. 31–68.
[82] Wobbrock, J. O., Morris, M. R. and Wilson, A. D. User-defined gestures for surface computing. In Proceedings of the 2009 ACM CHI, pp. 1083–1092.
[83] Malone, T. W. How do people organize their desks?: implications for the design of office information systems. ACM Transactions on Office Information Systems 1983, Vol. 1(1), pp. 99–112.
[84] Ahmad, F. and Musilek, P. A keystroke and pointer control input interface for wearable computers. In Proceedings of the 2006 IEEE PerCom, pp. 2–11.
[85] Gustafson, S., Bierwirth, D. and Baudisch, P. Imaginary interfaces: spatial interaction with empty hands and without visual feedback. In Proceedings of the 2010 ACM Symposium on User Interface Software and Technology, pp. 3–12.
[86] Su, C.-Y., Chan, L.-Y., Weng, C.-T., Liang, R.-H., Cheng, K.-Y. and Chen, B.-Y. NailDisplay: bringing always-available visual display to fingertips. In Proceedings of the 2013 ACM CHI, pp. 1461–1464.
[87] Mao, J.-Y., Vredenburg, K., Smith, P. W. and Carey, T. The state of user-centered design practice. Communications of the ACM 2005, Vol. 48, No. 3, pp. 105–109.
[88] Wallis, Claudia. The multitasking generation. Time Magazine, March 19, 2006. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/62514 | - |
dc.description.abstract | This thesis explores how users' peripheral senses can be leveraged to interact with multimedia, across three interaction domains: the graphical user interface (GUI), the tangible user interface (TUI), and the kinetic user interface (KUI). In the GUI domain, the SmartPlayer and AmbientMemento projects investigate how interface design can exploit users' peripheral senses for browsing videos and photos. In the TUI domain, iCon and its extension MemoICON are designed so that users can draw on their peripheral senses within a highly familiar office environment, turning everyday objects on the desk into extensions of the office computer and thereby enabling multitasking. In the KUI domain, PUB and its extension SonarWatch investigate how users can operate through their own body sense in eyes-free interaction scenarios, exploring the perceptual limits and interaction patterns that arise when the forearm serves as an eyes-free input surface. From the interface designs in these three domains, we conclude three principles for designing peripheral-sense interaction with multimedia: (1) it suits tasks that are low-precision, low-engagement, and frequently performed; (2) because peripheral-sense capability differs from person to person, a high degree of customization is required; and (3) the user's familiarity with the interface and environment determines how feasible peripheral-sense interaction is. | zh_TW |
dc.description.abstract | This thesis explores how to utilize users' peripheral senses while interacting with multimedia content in three interaction domains: GUI, TUI, and KUI. In GUI, SmartPlayer and AmbientMemento are designed to utilize users' peripheral senses to create new experiences of video and photo browsing. In TUI, iCon and its extended system MemoICON let users employ their peripheral senses to turn everyday objects into instant, alternative controllers within their familiar working environments; users can operate these objects to perform multiple tasks characterized by low precision, low engagement, and high frequency. In KUI, PUB and its extended system SonarWatch let users operate with their forearms as controllers in an eyes-free manner. The human factors are explored in depth, and the extracted design guidelines show that users are limited in the number of discrete points they can discriminate on their forearms; moreover, different users exhibit different tapping patterns, which suggests that a highly customized calibration procedure is required. Summarizing the design guidelines across the three interaction spaces (GUI, TUI, and KUI), we conclude three important guidelines for designing interactions that utilize users' peripheral senses while consuming multimedia: (1) low-precision, low-engagement, high-frequency tasks are suitable for peripheral interactions; (2) a high degree of customization is required for peripheral interaction; and (3) the more familiar the interface, the more benefit can be drawn from the peripheral senses. | en |
dc.description.provenance | Made available in DSpace on 2021-06-16T16:03:37Z (GMT). No. of bitstreams: 1 ntu-102-D97944001-1.pdf: 3622935 bytes, checksum: bec8c96c1d451a8ee93795f21f741934 (MD5) Previous issue date: 2013 | en |
dc.description.tableofcontents | CHAPTER 1 INTRODUCTION 13
1.1 BACKGROUND AND MOTIVATION 13
1.2 RESEARCH OVERVIEW 15
1.2.1 DESIGN FOR GUI 15
1.2.2 DESIGN FOR TUI 16
1.2.3 DESIGN FOR KUI 16
1.3 THESIS ORGANIZATION 17
CHAPTER 2 RELATED WORK 19
2.1 APPLICATIONS FOR MULTIMEDIA BROWSING & ORGANIZATION 19
2.2 TECHNIQUES FOR PERIPHERAL INTERACTION IN GUI, TUI, KUI 22
2.3 SUMMARY 24
CHAPTER 3 DESIGN PERIPHERAL INTERACTION THROUGH GUI 25
3.1 INTRODUCTION 25
3.2 OBSERVATION 27
3.2.1 OBSERVATION FOR VIDEO BROWSING 28
3.2.2 OBSERVATION FOR PHOTO BROWSING 30
3.3 DESIGN CONSIDERATIONS 35
3.3.1 SOLUTION FOR VIDEO BROWSING 36
3.3.2 SOLUTION FOR PHOTO BROWSING 38
3.4 IMPLEMENTATION 42
3.4.1 SMARTPLAYER FOR VIDEO BROWSING 42
3.4.2 AMBIENTMEMENTO FOR PHOTO BROWSING 46
3.5 EVALUATION 48
3.5.1 EVALUATION FOR SMARTPLAYER 49
3.5.2 EVALUATION FOR AMBIENTMEMENTO 58
3.6 SUMMARY 65
CHAPTER 4 DESIGN PERIPHERAL INTERACTION THROUGH TUI 68
4.1 INTRODUCTION 68
4.2 OBSERVATION 69
4.3 DESIGN CONSIDERATIONS 73
4.3.1 GENERAL DESIGN CONCEPT AND GUIDELINES 73
4.3.2 GESTURE AND CONTROL MAPPING 74
4.3.3 TIME-TO-LIVE MECHANISM 76
4.3.4 SCENARIOS 77
4.4 IMPLEMENTATION 78
4.4.1 HARDWARE DESIGN 78
4.5 APPLICATIONS 82
4.5.1 SOFTWARE ARCHITECTURE 82
4.5.2 ICON APPLICATION 83
4.6 EVALUATION 84
4.6.1 PILOT STUDIES 84
4.6.2 USER STUDIES 85
4.7 EXTENDED WORK – MEMOICON 93
4.8 SUMMARY 99
CHAPTER 5 DESIGN PERIPHERAL INTERACTION THROUGH KUI 101
5.1 INTRODUCTION 101
5.2 OBSERVATION 102
5.2.1 USER STUDY 1: EXPLORE THE DIVISION ON THE FOREARM 103
5.2.2 USER STUDY 2: IMPORTANCE OF THE FEEDBACK FROM SKIN 110
5.2.3 TAPPING BEHAVIORS 111
5.3 DESIGN CONSIDERATIONS 112
5.4 IMPLEMENTATION 113
5.5 APPLICATIONS 116
5.5.1 MOBILE EYES-FREE INTERACTION 116
5.5.2 REMOTE DISPLAY INTERACTION 117
5.5.3 SONARWATCH WITH KINECT INTERACTION 118
5.5.4 SONARWATCH WITH NAILDISPLAY INTERACTION 119
5.6 SUMMARY 119
CHAPTER 6 CONCLUSION AND FUTURE WORK 121
6.1 SUMMARY OF THE THESIS 121
6.2 FUTURE DIRECTIONS 123
LIST OF REFERENCES 125 | |
dc.language.iso | en | |
dc.title | 利用邊際感知設計使用者介面於多媒體互動 | zh_TW |
dc.title | Designing User Interfaces for Peripheral Interactions with Multimedia | en |
dc.type | Thesis | |
dc.date.schoolyear | 101-2 | |
dc.description.degree | 博士 (Ph.D.) | |
dc.contributor.oralexamcommittee | 陳彥仰(Yen-Yang Chen),梁容輝(Rung-Huei Liang),許永真(Yung-Jen Hsu),王浩全(Hao-Chuan Wang),洪一平(Yi-Ping Hung) | |
dc.subject.keyword | 邊際感知,圖形化介面,實體可觸性介面,體感介面,多點觸控,穿戴式運算,使用者為中心設計, | zh_TW |
dc.subject.keyword | peripheral sense,graphical user interface,tangible user interface,kinetic user interface,multi-touch,wearable computing,user-centered design, | en |
dc.relation.page | 134 | |
dc.rights.note | 有償授權 (authorized for use with compensation) | |
dc.date.accepted | 2013-07-01 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
Appears in Collections: | 資訊網路與多媒體研究所 (Graduate Institute of Networking and Multimedia)
Files in This Item:
File | Size | Format |
---|---|---|---|
ntu-102-1.pdf (currently not authorized for public access) | 3.54 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.