Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/60895

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 洪一平 | |
| dc.contributor.author | Chuen-Kai Shie | en |
| dc.contributor.author | 謝淳凱 | zh_TW |
| dc.date.accessioned | 2021-06-16T10:35:14Z | - |
| dc.date.available | 2013-08-20 | |
| dc.date.copyright | 2013-08-20 | |
| dc.date.issued | 2013 | |
| dc.date.submitted | 2013-08-14 | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/60895 | - |
| dc.description.abstract | This thesis presents a multi-mode, gesture-controlled Smart TV system based on 3D hand gestures.
In the overall architecture, we propose an operation model that combines a Natural User Interface (NUI) and a Graphic User Interface (GUI), and we design different functions, purposes, and operating methods according to the characteristics of these two interface types. The system adopts nine natural and intuitive gestures, divided by their operating characteristics into five NUI gestures and four GUI gestures. The system is organized as a multi-mode control architecture in which each control mode corresponds to one type of TV operation and has its own gesture functions, and specific gestures switch the system between modes. For recognizing the five NUI gestures, we collected the ways users commonly perform them; based on an analysis of the collected data, we propose a hybrid gesture recognition algorithm that recognizes the gestures while reducing misclassification, and we define the gestures so that users can perform them easily and intuitively. Experimental results show that the system achieves high recognition accuracy with only a small false positive rate. In the GUI modes, this thesis addresses the drift that occurs when performing a click gesture. We propose an algorithm that combines 3D trajectory recognition with a hand finite state machine: we collect and analyze the trajectories the hand tends to follow at different positions in space, use Gaussian models to recognize click trajectories in 3D, and combine the hand finite state machine with the user interface. Compared with the general approach, which considers only the change in depth, our method improves accuracy substantially. | zh_TW |
| dc.description.abstract | This thesis presents a multi-mode remote control method that allows the user to interact with a Smart TV by switching among four modes: Standby Mode, TV Watch Mode, TV Control Mode, and Cursor Mode. The system lets users switch among these control modes through predefined gestures.
Among gesture recognition approaches, we are especially interested in geometric, trajectory-based template matching, which distinguishes gestures by their trajectory patterns. Such approaches typically concentrate on recognizing isolated gesture trajectories; in practice, however, the gesture sequence is a continuous stream of unknown length with unknown start and end points. More importantly, different gestures may contain similar trajectories, which makes them hard to tell apart. This thesis presents a 3D gesture recognition approach designed to discriminate gestures with similar palm trajectories, and we performed experiments to evaluate its accuracy. In Cursor Mode, we propose a freehand click-gesture recognition approach based on the palm trajectory. General approaches recognize a click by detecting a straight press movement; however, users usually do not press perfectly straight, so those approaches may fail to detect the click. We name this issue the "misaligned click problem." Unlike general click recognition approaches, which suffer from this problem, our approach learns the 3D palm trajectories at different locations within the available click region and uses a click-gesture finite state machine to control the click progress. We compared our 3D gesture recognition system against four recognizers: our algorithm with elbow information, Protractor with a 2D xy-projection, Protractor with a 2D xz-projection, and Protractor3D (a 3D trajectory matching approach). Experimental results on a self-collected action database show that our approach achieves higher recognition accuracy and a lower false positive rate. To evaluate the freehand click-gesture recognizer, we tested it on a self-collected click dataset against the general click approach; the results show that our click recognition approach achieves higher accuracy. Illustrative sketches of trajectory template matching and a click-gesture state machine follow this record. Keywords: multi-mode remote control; gesture recognition; Natural User Interface; Graphic User Interface; click gesture recognition; misaligned click problem. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-16T10:35:14Z (GMT). No. of bitstreams: 1 ntu-102-R00944037-1.pdf: 5748284 bytes, checksum: 475dc5649d367a278460ce1736986df7 (MD5) Previous issue date: 2013 | en |
| dc.description.tableofcontents | Oral Examination Committee Certification #
Acknowledgements 1
Chinese Abstract 2
ABSTRACT 4
CONTENTS 6
LIST OF FIGURES 8
Chapter 1 Introduction 13
Chapter 2 Related Work 17
2.1 Commercial TV with Gesture Control 17
2.2 Gesture Recognition 18
2.3 Pointing 19
2.4 Free-Hand Click Gesture 19
Chapter 3 Multi-Mode Remote Control System 21
Chapter 4 3D Gesture Recognition Using Palm Trajectories 27
4.1 Gesture Data Collection 28
4.2 3D Gesture Recognition System 30
4.2.1 Circular Motion Recognizer 34
4.2.2 Hand Waving Recognizer 36
4.2.3 Hand Swipe Gesture Recognizer 39
Chapter 5 Freehand Click-Gesture Recognition 50
5.1 General Freehand Click-Gesture Methods 50
5.2 Our Freehand Click-Gesture Recognition 52
5.2.1 Motivation 53
5.2.2 Click Gesture Analysis using 3D Palm Trajectories 57
5.2.3 Freehand Click-Gesture Recognition using 3D Palm Trajectory 69
5.2.4 Freehand Click-Gesture Segmentation using Finite State Machine 70
Chapter 6 Experiments 74
6.1 Experiments of 3D Gesture Recognition 74
6.2 Experiments of Freehand Click-Gesture Recognition 77
Chapter 7 Conclusion 81
REFERENCE 83 | |
| dc.language.iso | en | |
| dc.subject | 多階層遠端控制 | zh_TW |
| dc.subject | 自然使用者界面 | zh_TW |
| dc.subject | 點擊手勢辨識 | zh_TW |
| dc.subject | 圖形使用者介面 | zh_TW |
| dc.subject | 點擊偏移問題 | zh_TW |
| dc.subject | 手勢辨識 | zh_TW |
| dc.subject | Misaligned click problem | en |
| dc.subject | Gesture recognition | en |
| dc.subject | Natural User Interface | en |
| dc.subject | Graphic User Interface | en |
| dc.subject | Click gesture recognition | en |
| dc.subject | multi-mode remote control | en |
| dc.title | 基於三維手勢之多層式遠端電視控制技術 | zh_TW |
| dc.title | 3D Gesture-Based Multi-Layer Remote Control Technique for Smart TV | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 101-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.coadvisor | 李明穗 | |
| dc.contributor.oralexamcommittee | 黃俊翔,張智星,余孟杰 | |
| dc.subject.keyword | 多階層遠端控制,手勢辨識,自然使用者界面,圖形使用者介面,點擊手勢辨識,點擊偏移問題 | zh_TW |
| dc.subject.keyword | multi-mode remote control, gesture recognition, Natural User Interface, Graphic User Interface, click gesture recognition, misaligned click problem | en |
| dc.relation.page | 88 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2013-08-14 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
| Appears in Collections: | 資訊網路與多媒體研究所 | |
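The abstracts above describe geometric, trajectory-based template matching (in the family of Protractor and Protractor3D) for recognizing palm-trajectory gestures. The following is a minimal Python sketch of that general idea only, not the thesis's actual recognizer: a candidate trajectory is resampled to a fixed number of points, translation- and scale-normalized, and matched to the nearest stored template by cosine similarity. The function names, point count, and acceptance threshold are illustrative assumptions.

```python
import numpy as np

def resample(points, n=32):
    """Resample a 3D palm trajectory (k x 3 array) to n equally spaced points."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    if dist[-1] == 0:                      # degenerate case: no movement at all
        return np.repeat(points[:1], n, axis=0)
    targets = np.linspace(0.0, dist[-1], n)
    return np.column_stack([np.interp(targets, dist, points[:, d]) for d in range(3)])

def normalize(traj):
    """Translate to the centroid and scale to unit norm, so matching ignores position and size."""
    traj = traj - traj.mean(axis=0)
    norm = np.linalg.norm(traj)
    return traj if norm == 0 else traj / norm

def recognize(candidate, templates, threshold=0.85):
    """Return the best-matching gesture label, or None if no template is similar enough.

    `templates` maps gesture names to example trajectories (k x 3 arrays). Similarity
    is the cosine between the flattened, normalized trajectories, in the spirit of
    Protractor-style template matchers.
    """
    v = normalize(resample(candidate)).ravel()
    best_label, best_score = None, threshold
    for label, example in templates.items():
        t = normalize(resample(example)).ravel()
        score = float(np.dot(v, t))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical usage with made-up palm positions (metres, as from a depth sensor):
if __name__ == "__main__":
    swipe_right = [(x, 0.0, 0.0) for x in np.linspace(0, 0.4, 20)]
    swipe_up = [(0.0, y, 0.0) for y in np.linspace(0, 0.4, 20)]
    templates = {"swipe_right": swipe_right, "swipe_up": swipe_up}
    observed = [(x, 0.02 * np.sin(8 * x), 0.0) for x in np.linspace(0, 0.38, 25)]
    print(recognize(observed, templates))   # expected: "swipe_right"
```

A rotation-alignment step (as in Protractor3D) would be needed for rotation invariance; it is omitted here for brevity.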
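The English abstract also says the click recognizer pairs 3D trajectory analysis with a finite state machine that controls click progress, rather than firing on a depth change alone. Below is a minimal, hypothetical sketch of such a gating state machine; the state names, depth thresholds, and the pluggable `is_click_trajectory` classifier are assumptions for illustration, not the thesis's implementation.

```python
from enum import Enum, auto
import numpy as np

class ClickState(Enum):
    IDLE = auto()      # no press in progress
    PRESSING = auto()  # palm moving toward the screen; trajectory being recorded

class ClickFSM:
    """Gate click detection with a small finite state machine.

    Rather than firing whenever the palm depth changes past a threshold (the
    'general approach' the abstract argues against), the press trajectory is
    accumulated in the PRESSING state and a click is reported only if a
    trajectory classifier accepts it (e.g., a model learned from recorded presses).
    """

    def __init__(self, is_click_trajectory, press_depth=0.05, release_depth=0.03):
        self.is_click_trajectory = is_click_trajectory  # callable: list[(x, y, z)] -> bool
        self.press_depth = press_depth       # forward travel (m) that starts a press
        self.release_depth = release_depth   # backward travel (m) that ends a press
        self.state = ClickState.IDLE
        self.ref_z = None                    # resting depth while idle
        self.min_z = None                    # closest depth reached during the press
        self.trajectory = []

    def update(self, palm):
        """Feed one palm sample (x, y, z), z = distance to the sensor in metres.
        Returns True exactly when a click is recognized."""
        z = palm[2]
        if self.state == ClickState.IDLE:
            self.ref_z = z if self.ref_z is None else max(self.ref_z, z)
            if self.ref_z - z >= self.press_depth:        # palm pushed toward the screen
                self.state = ClickState.PRESSING
                self.trajectory = [palm]
                self.min_z = z
            return False
        # PRESSING: keep recording until the palm retracts, then classify the press.
        self.trajectory.append(palm)
        self.min_z = min(self.min_z, z)
        if z - self.min_z >= self.release_depth:          # palm pulled back: press ended
            clicked = self.is_click_trajectory(self.trajectory)
            self.state = ClickState.IDLE
            self.ref_z = z
            self.trajectory = []
            return clicked
        return False

# Hypothetical placeholder classifier: accept a press whose depth travel dominates its
# lateral drift. A real system would substitute a learned model of press trajectories
# (e.g., the Gaussian models mentioned in the abstract) for this geometric rule.
def mostly_forward(traj):
    p = np.asarray(traj, dtype=float)
    span = p.max(axis=0) - p.min(axis=0)           # x, y, z extents of the press
    return span[2] >= 2.0 * max(span[0], span[1])

fsm = ClickFSM(mostly_forward)
```

In use, `fsm.update()` would be called once per tracked frame with the palm position from a depth sensor; the classifier hook is where a learned model of misaligned presses would plug in.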
Files in this item:
| File | Size | Format | |
|---|---|---|---|
| ntu-102-1.pdf (Restricted Access) | 5.61 MB | Adobe PDF | |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
