Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/40757

Full metadata record
| DC 欄位 | 值 | 語言 |
|---|---|---|
| dc.contributor.advisor | 洪一平(Yi-Ping Hung) | |
| dc.contributor.author | Wen-Yan Chang | en |
| dc.contributor.author | 張文彥 | zh_TW |
| dc.date.accessioned | 2021-06-14T16:59:01Z | - |
| dc.date.available | 2010-08-05 | |
| dc.date.copyright | 2008-08-05 | |
| dc.date.issued | 2008 | |
| dc.date.submitted | 2008-07-28 | |
| dc.identifier.citation | [1] S. Agarwal, A. Awan, and D. Roth. Learning to detect objects in images via a sparse, part-based representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(11):1475–1490, 2004.
[2] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174–188, 2002.
[3] V. Athitsos and S. Sclaroff. An appearance-based framework for 3D hand shape classification and camera viewpoint estimation. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 40–45, 2002.
[4] V. Athitsos and S. Sclaroff. Estimating 3D hand pose from a cluttered image. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 432–439, 2003.
[5] S. Avidan. Support vector tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(8):1064–1072, 2004.
[6] H. Barrow, J. Tenenbaum, R. Bolles, and H. Wolf. Parametric correspondence and chamfer matching: Two new techniques for image matching. In International Joint Conference on Artificial Intelligence, pages 659–663, 1977.
[7] M. Bartlett, P. Viola, T. Sejnowski, B. Golomb, J. Larsen, J. Hager, and P. Ekman. Classifying facial action. Advances in Neural Information Processing Systems, 8:823–829, 1996.
[8] M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan. Recognizing facial expression: machine learning and application to spontaneous behavior. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 568–573, 2005.
[9] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(4):509–522, 2002.
[10] M. Bray, E. Koller-Meier, and L. Van Gool. Smart particle filtering for 3D hand tracking. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 675–680, 2004.
[11] Y. Chang, C. Hu, and M. Turk. Probabilistic expression analysis on manifolds. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 520–527, 2004.
[12] H.-T. Chen, H.-W. Chang, and T.-L. Liu. Local discriminant embedding and its variants. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 846–853, 2005.
[13] J.-H. Chen and C.-S. Chen. Object recognition based on image sequences by using inter-feature-line consistencies. Pattern Recognition, 37(8):1713–1722, 2004.
[14] I. Cohen, N. Sebe, A. Garg, L. S. Chen, and T. S. Huang. Facial expression recognition from video sequences: temporal and static modeling. Computer Vision and Image Understanding, 91(1-2):160–187, 2003.
[15] D. Comaniciu, V. Ramesh, and P. Meer. Real-time tracking of non-rigid objects using mean shift. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 142–149, 2000.
[16] D. W. Cunningham, M. Kleiner, C. Wallraven, and H. H. Bulthoff. Manipulating video sequences to determine the components of conversational facial expressions. ACM Transactions on Applied Perception, 2(3):251–269, 2005.
[17] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 886–893, 2005.
[18] J. Deutscher and I. Reid. Articulated body motion capture by stochastic search. International Journal of Computer Vision, 61(2):185–205, 2005.
[19] G. Donato, M. Bartlett, J. Hager, P. Ekman, and T. Sejnowski. Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10):974–989, 1999.
[20] F. Dornaika and F. Davoine. Simultaneous facial action tracking and expression recognition using a particle filter. In IEEE International Conference on Computer Vision, volume 2, pages 1733–1738, 2005.
[21] A. Doucet, S. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3):197–208, 2000.
[22] G. Edwards, T. Cootes, and C. Taylor. Face recognition using active appearance models. In European Conference on Computer Vision, volume 2, pages 581–695, 1998.
[23] P. Ekman and W. Friesen. Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues. Prentice-Hall, 1975.
[24] P. Ekman and W. Friesen. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, 1978.
[25] P. Elinas, R. Sim, and J. Little. sigmaSLAM: Stereo vision SLAM using the Rao-Blackwellised particle filter and a novel mixture proposal distribution. In IEEE International Conference on Robotics and Automation, pages 1564–1570, 2006.
[26] I. Essa and A. Pentland. Coding, analysis, interpretation, and recognition of facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):757–763, 1997.
[27] B. Fasel and J. Luettin. Automatic facial expression analysis: A survey. Pattern Recognition, 36(1):259–275, 2003.
[28] R. Haralick and L. Shapiro. Computer and Robot Vision. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1992.
[29] X. He, D. Cai, S. Yan, and H.-J. Zhang. Neighborhood preserving embedding. In IEEE International Conference on Computer Vision, volume 2, pages 1208–1213, 2005.
[30] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang. Face recognition using Laplacianfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3):328–340, 2005.
[31] G. Hua and Y. Wu. Measurement integration under inconsistency for robust tracking. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 650–657, 2006.
[32] C.-R. Huang, C.-S. Chen, and P.-C. Chung. Contrast context histogram - a discriminating local descriptor for image matching. In International Conference on Pattern Recognition, volume 4, pages 53–56, 2006.
[33] M. Isard and A. Blake. CONDENSATION–conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1):5–28, 1998.
[34] M. Isard and A. Blake. ICONDENSATION: Unifying low-level and high-level tracking in a stochastic framework. In European Conference on Computer Vision, volume 1, pages 893–908, 1998.
[35] T. Jaakkola. Tutorial on variational approximation methods. Advanced Mean Field Methods: Theory and Practice, pages 129–159, 2001.
[36] A. D. Jepson, D. J. Fleet, and T. F. El-Maraghi. Robust online appearance models for visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10):1296–1311, 2003.
[37] Y.-D. Jian, W.-Y. Chang, and C.-S. Chen. Attractor-guided particle filtering for lip contour tracking. In Asian Conference on Computer Vision, volume 1, pages 653–663, 2006.
[38] M. Jones and J. Rehg. Statistical color models with application to skin detection. International Journal of Computer Vision, 46(1):81–96, 2002.
[39] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[40] T. Kanade, J. Cohn, and Y. Tian. Comprehensive database for facial expression analysis. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 46–53, 2000.
[41] Z. Khan, T. Balch, and F. Dellaert. A Rao-Blackwellized particle filter for eigentracking. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 980–986, 2004.
[42] Z. Khan, T. Balch, and F. Dellaert. An MCMC-based particle filter for tracking multiple interacting targets. In European Conference on Computer Vision, volume 4, pages 279–290, 2004.
[43] F. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[44] K.-M. Lam and H. Yan. An analytic-to-holistic approach for face recognition based on a single frontal view. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(7):673–686, 1998.
[45] A. Lanitis, C. J. Taylor, and T. F. Cootes. Automatic interpretation and coding of face images using flexible models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):743–756, 1997.
[46] K. Levi and Y. Weiss. Learning object detection from a small number of examples: the importance of good features. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 53–60, 2004.
[47] B. Li and R. Chellappa. A generic approach to simultaneous tracking and verification in video. IEEE Transactions on Image Processing, 11(5):530–544, 2002.
[48] W.-K. Liao and I. Cohen. Classifying facial gestures in presence of head motion. In IEEE Workshop on Vision for Human-Computer Interaction, 2005.
[49] J. Lien, T. Kanade, J. Cohn, and C.-C. Li. Automated facial expression recognition based on FACS action units. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 390–395, 1998.
[50] J. Lin, Y. Wu, and T. Huang. 3D model-based hand tracking using stochastic direct search method. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 693–698, 2004.
[51] W. Lin and Y. Liu. Tracking dynamic near-regular textures under occlusions and rapid movements. In European Conference on Computer Vision, volume 2, pages 44–55, 2006.
[52] C. Lisetti and D. Rumelhart. Facial expression recognition using a neural network. In International Flairs Conference, pages 328–332, 1998.
[53] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[54] S. Lu, D. Metaxas, D. Samaras, and J. Oliensis. Using multiple cues for hand tracking and model refinement. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 443–450, 2003.
[55] M. Lyons, J. Budynek, and S. Akamatsu. Automatic classification of single facial images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(12):1357–1362, 1999.
[56] K. Okuma, A. Taleghani, N. de Freitas, J. Little, and D. Lowe. A boosted particle filter: Multitarget detection and tracking. In European Conference on Computer Vision, volume 1, pages 28–39, 2004.
[57] C. Padgett and G. Cottrell. Representing face images for emotion classification. Advances in Neural Information Processing Systems, 9:894–900, 1997.
[58] M. Pantic and I. Patras. Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences. IEEE Transactions on Systems, Man and Cybernetics, Part B, 36(2):433–449, 2006.
[59] M. Pantic and L. Rothkrantz. Expert system for automatic analysis of facial expressions. Image and Vision Computing, 18(11):881–905, 2000.
[60] M. Pantic and L. Rothkrantz. Automatic analysis of facial expressions: the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1424–1445, 2000.
[61] I. Patras and M. Pantic. Particle filtering with factorized likelihoods for tracking facial features. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 97–102, 2004.
[62] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[63] P. Perez, C. Hue, J. Vermaak, and M. Gangnet. Color-based probabilistic tracking. In European Conference on Computer Vision, volume 1, pages 661–675, 2002.
[64] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3), 1999.
[65] C. Rasmussen and G. Hager. Probabilistic data association methods for tracking complex visual objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):560–576, 2001.
[66] R. Rosales, V. Athitsos, L. Sigal, and S. Sclaroff. 3D hand pose reconstruction using specialized mappings. In IEEE International Conference on Computer Vision, volume 1, pages 378–385, 2001.
[67] N. Rose. Facial expression classification using Gabor and log-Gabor filters. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 346–350, 2006.
[68] D. Ross, J. Lim, and M. Yang. Adaptive probabilistic visual tracking with incremental subspace update. In European Conference on Computer Vision, volume 2, pages 470–482, 2004.
[69] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[70] Y. Rui and Y. Chen. Better proposal distributions: object tracking using unscented particle filter. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 786–793, 2001.
[71] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice-Hall, Upper Saddle River, NJ, USA, 1995.
[72] A. Schwaninger, C. Wallraven, D. W. Cunningham, and S. D. Chiller-Glaus. Processing of identity and emotion in faces: a psychophysical, physiological and computational perspective. Progress in Brain Research, 156:321–343, 2006.
[73] S. M. Seitz and C. R. Dyer. View morphing. In ACM SIGGRAPH: Conference on Computer Graphics and Interactive Techniques, pages 21–30, 1996.
[74] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. In IEEE International Conference on Computer Vision, volume 2, pages 750–757, 2003.
[75] C. Shan, S. Gong, and P. McOwan. Appearance manifold of facial expression. In IEEE Workshop on Human-Computer Interaction, 2005.
[76] C. Shan, S. Gong, and P. McOwan. Robust facial expression recognition using local binary patterns. In IEEE International Conference on Image Processing, volume 2, pages 370–373, 2005.
[77] L. Sigal, Y. Zhu, D. Comaniciu, and M. Black. Tracking complex objects using graphical object models. In International Workshop on Complex Motion, 2004.
[78] I. Skrypnyk and D. Lowe. Scene modelling, recognition and tracking with invariant image features. In IEEE and ACM International Symposium on Mixed and Augmented Reality, pages 110–119, 2004.
[79] B. Stenger, A. Thayananthan, P. Torr, and R. Cipolla. Filtering using a tree-based estimator. In IEEE International Conference on Computer Vision, volume 2, pages 1063–1070, 2003.
[80] E. Sudderth, A. Ihler, W. Freeman, and A. Willsky. Nonparametric belief propagation. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 605–612, 2003.
[81] E. Sudderth, M. Mandel, W. Freeman, and A. Willsky. Distributed occlusion reasoning for tracking with nonparametric belief propagation. Advances in Neural Information Processing Systems, 17:1369–1376, 2004.
[82] J. Tenenbaum, V. Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[83] J. Triesch and C. von der Malsburg. A system for person-independent hand posture recognition against complex backgrounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(12):1449–1453, 2001.
[84] J. Vermaak, A. Doucet, and P. Perez. Maintaining multimodality through mixture tracking. In IEEE International Conference on Computer Vision, pages 1110–1116, 2003.
[85] F. Wang, C. Zhang, H. Shen, and J. Wang. Semi-supervised classification using linear neighborhood propagation. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 160–167, 2006.
[86] J. Wang, L. Yin, X. Wei, and Y. Sun. 3D facial expression recognition based on primitive surface feature distribution. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 1399–1406, 2006.
[87] Z. Wen and T. Huang. Capturing subtle facial motions in 3D face tracking. In IEEE International Conference on Computer Vision, volume 2, pages 1343–1350, 2003.
[88] T. Wu, C. Lin, and R. Weng. Probability estimates for multi-class classification by pairwise coupling. The Journal of Machine Learning Research, 5:975–1005, 2004.
[89] Y. Wu and T. Huang. Capturing articulated human hand motion: A divide-and-conquer approach. In IEEE International Conference on Computer Vision, pages 606–611, 1999.
[90] Y. Wu, G. Hua, and T. Yu. Tracking articulated body by dynamic Markov network. In IEEE International Conference on Computer Vision, pages 1094–1101, 2003.
[91] Y. Wu, J. Lin, and T. Huang. Analyzing and capturing articulated hand motion in image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(12):1910–1922, 2005.
[92] S. Yan, D. Xu, B. Zhang, and H.-J. Zhang. Graph embedding: a general framework for dimensionality reduction. In IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 830–837, 2005.
[93] C. Yang, R. Duraiswami, and L. Davis. Fast multiple object tracking via a hierarchical particle filter. In IEEE International Conference on Computer Vision, volume 1, pages 212–219, 2005.
[94] M. Yeasin, B. Bullot, and R. Sharma. Recognition of facial expressions and measurement of levels of interest from video. IEEE Transactions on Multimedia, 8(3):500–508, 2006.
[95] Q. You, N. Zheng, S. Du, and Y. Wu. Neighborhood discriminant projection for face recognition. In International Conference on Pattern Recognition, volume 2, pages 532–535, 2006.
[96] T. Yu and Y. Wu. Decentralized multiple target tracking using netted collaborative autonomous trackers. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 939–946, 2005.
[97] T. Yu and Y. Wu. Differential tracking based on spatial-appearance model (SAM). In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 720–727, 2006.
[98] Z. Zhang, M. Lyons, M. Schuster, and S. Akamatsu. Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron. In IEEE International Conference on Automatic Face and Gesture Recognition, pages 454–459, 1998.
[99] G. Zhao and M. Pietikainen. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):915–928, 2007.
[100] W. Zhao, R. Chellappa, P. Phillips, and A. Rosenfeld. Face recognition: A literature survey. ACM Computing Surveys, 35(4):399–458, 2003.
[101] S. K. Zhou, R. Chellappa, and B. Moghaddam. Visual tracking and recognition using appearance-adaptive models in particle filters. IEEE Transactions on Image Processing, 13(11):1491–1506, 2004. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/40757 | - |
| dc.description.abstract | 如何讓電腦藉由觀察使用者的行為與情緒來瞭解其意圖和想法是基礎而重要的課題。在本論文中,我們針對手勢追蹤、人臉五官追蹤和表情辨認等題目進行研究與探討。在手勢追蹤方面,由於每一根手指的關節都有數個運動自由度,這些為數眾多的關節自由度使得該問題成為複雜的高維度追蹤問題。為了能有效地解決此問題,我們提出了以貝氏機率傳遞模型為基礎之「外觀導引式粒子濾波法」。將手勢的外觀資訊引入動態系統中,透過外觀資訊的導引與動態訊息的傳遞我們可以正確地估測出運動的狀態。一般而言,物體是由若干個局部元件所組成的,而這些元件通常也存在某種幾何結構上的關係,為此我們也發展了利用物體局部特徵的空間一致性來改進影像的追蹤問題,並將其應用在人臉與五官的追蹤上。藉由利用局部特徵的關連與合作特性,我們可以有效地改善光線與局部遮蔽對追蹤所造成的影響。除了臉部追蹤,我們也成功地將此方法應用於其他的視覺追蹤問題上。在臉部表情辨認的研究上,不同於常見的整體或是局部的臉部表示法,我們採用了複合式的表示法來描述臉部的特徵,使得臉部的整體變化與局部細微的差異可以同時被觀察到。藉由應用監督式流形學習技術,我們提出了融合演算法來有效地整合這些不同元件上的流形,以突顯出各個元件在不同表情上的影響力。經由廣泛的實驗,我們證明了此方法可以有效地辨別各類表情。 | zh_TW |
| dc.description.abstract | Three important topics in human intention understanding are addressed in this dissertation: articulated hand tracking, face and facial-component tracking, and facial expression recognition. To capture complex hand motion in image sequences, we propose a model-based approach, called appearance-guided particle filtering (AGPF), for high degree-of-freedom tracking. In addition to the initial state, our method assumes that some known attractors in the state space are available to guide the tracking, and it integrates both attractor and motion-transition information in a probability-propagation framework. Experimental results show that our method outperforms approaches that use only sequential motion-transition information or only appearance information. An object usually consists of several components. To handle tracking problems in which an object's components exhibit (strong) spatial coherence, we develop a part-based tracking method. Unlike existing methods that use the spatial-coherence relationship only for particle weight estimation, our method further applies the spatial relationship to state prediction, which considerably improves tracking performance. For facial expression recognition, we propose a hybrid-representation approach in a manifold-learning framework that takes advantage of both holistic and local representations. We demonstrate the effectiveness of this method on the Cohn-Kanade database, where it achieves a high recognition rate. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-14T16:59:01Z (GMT). No. of bitstreams: 1 ntu-97-D91922011-1.pdf: 15116658 bytes, checksum: d297120c646149e7cc1c120cb39f0ec3 (MD5) Previous issue date: 2008 | en |
| dc.description.tableofcontents | Abstract vii
List of Figures xiii
List of Tables xv
1 Introduction 1
1.1 Motivation 1
1.2 Overview 2
1.2.1 Capturing Hand Articulation 2
1.2.2 Tracking Face via Component Collaboration 3
1.2.3 Analyzing Facial Expression 3
1.3 Organization 4
2 Capturing Hand Articulation 7
2.1 Background 8
2.2 Appearance-Guided Particle Filtering 10
2.2.1 Review of Particle Filtering 11
2.2.2 Probability Propagation of AGPF 12
2.2.3 Sequential Monte Carlo Framework of AGPF 14
2.3 Mixture-based AGPF 17
2.4 Likelihood Model and State Estimation 20
2.5 Experimental Results 22
2.6 Discussions 31
3 Tracking Face via Component Collaboration 33
3.1 Background 34
3.2 TBP Particle Filtering 36
3.2.1 Bayesian Probability Propagation 38
3.2.2 Inference of TBP-BN 40
3.2.3 Refinement of the Particle Weights by Spatial Relationship 43
3.3 Dynamic Distribution 46
3.3.1 General Spatial Constraints 47
3.3.2 Variations of the Dynamic Model 49
3.4 Likelihood and Particle Weight Estimation 51
3.4.1 Component Representation and Likelihood Measurement 51
3.4.2 Particle Re-weighting 53
3.5 Experimental Results 55
3.5.1 Implementation 55
3.5.2 Results 56
3.6 Summary 64
4 Analyzing Facial Expression 65
4.1 Background 67
4.1.1 Facial Expression Analysis 67
4.1.2 Supervised Manifold Learning 68
4.2 Expression Analysis Using Fusion Manifolds 69
4.2.1 Facial Components 70
4.2.2 Fusion Algorithm for Embedded Manifolds 71
4.3 Experimental Results 76
4.3.1 Dataset and Preprocessing 76
4.3.2 Algorithms for Comparison 76
4.3.3 Comparisons and Discussions 78
4.4 Summary 84
5 Conclusion 87
Bibliography 89
Publications 97 | |
| dc.language.iso | en | |
| dc.subject | 監督式流形學習 | zh_TW |
| dc.subject | 手勢追蹤 | zh_TW |
| dc.subject | 粒子濾波法 | zh_TW |
| dc.subject | 局部特徵追蹤 | zh_TW |
| dc.subject | 局部元件合作 | zh_TW |
| dc.subject | 表情辨識 | zh_TW |
| dc.subject | supervised manifold learning | en |
| dc.subject | Articulated hand tracking | en |
| dc.subject | component collaboration | en |
| dc.subject | tracking by parts | en |
| dc.subject | facial expression recognition | en |
| dc.subject | particle filtering | en |
| dc.title | 電腦視覺技術於手勢追蹤與表情辨識之研究 | zh_TW |
| dc.title | Computer Vision Techniques for Articulated Hand Tracking and Facial Expression Recognition | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 96-2 | |
| dc.description.degree | 博士 | |
| dc.contributor.coadvisor | 陳祝嵩(Chu-Song Chen) | |
| dc.contributor.oralexamcommittee | 陳世旺(Sei-Wang Chen),鍾國亮(Kuo-Liang Chung),王聖智(Sheng-Jyh Wang),賴尚宏(Shang-Hong Lai) | |
| dc.subject.keyword | 手勢追蹤,粒子濾波法,局部特徵追蹤,局部元件合作,表情辨識,監督式流形學習, | zh_TW |
| dc.subject.keyword | Articulated hand tracking,particle filtering,tracking by parts,component collaboration,facial expression recognition,supervised manifold learning, | en |
| dc.relation.page | 95 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2008-07-30 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
| Appears in Collections: | Department of Computer Science and Information Engineering | |
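The sequential Monte Carlo machinery underlying the abstract's appearance-guided particle filtering can be illustrated with a plain bootstrap (sampling-importance-resampling) particle filter, the baseline reviewed in references [2] and [33]. The sketch below is a minimal generic example, not the thesis's AGPF: the attractor guidance and appearance likelihood are replaced by a toy one-dimensional Gaussian likelihood, and all names (`particle_filter_step`, the target value `5.0`) are illustrative assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, transition, likelihood, rng):
    """One bootstrap SIR step: resample, propagate, re-weight."""
    n = len(particles)
    # 1. Resample particle indices in proportion to their current weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. Propagate every particle through the motion-transition model.
    particles = transition(particles, rng)
    # 3. Re-weight by the observation likelihood and normalize.
    weights = likelihood(particles)
    weights = weights / weights.sum()
    return particles, weights

# Toy example: a static 1-D state observed near 5.0 with Gaussian noise.
rng = np.random.default_rng(0)
n = 500
particles = rng.normal(0.0, 2.0, size=n)          # diffuse prior
weights = np.full(n, 1.0 / n)
transition = lambda p, rng: p + rng.normal(0.0, 0.3, size=p.shape)
likelihood = lambda p: np.exp(-0.5 * (p - 5.0) ** 2)

for _ in range(30):
    particles, weights = particle_filter_step(
        particles, weights, transition, likelihood, rng)

estimate = float(np.sum(weights * particles))      # posterior mean
```

In the thesis's AGPF variant, step 2 would additionally draw a fraction of particles from proposals centered on the known appearance attractors (Chapter 2 of the table of contents), rather than relying on the motion-transition model alone.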
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-97-1.pdf (Restricted Access) | 14.76 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.