Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/59145
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 陳定立(Ting-Li Chen) | |
dc.contributor.author | Yi-Heng Sun | en |
dc.contributor.author | 孫以恆 | zh_TW |
dc.date.accessioned | 2021-06-16T09:16:43Z | - |
dc.date.available | 2022-07-20 | |
dc.date.copyright | 2017-07-20 | |
dc.date.issued | 2017 | |
dc.date.submitted | 2017-07-13 | |
dc.identifier.citation | [1] Belhumeur, P. N., Hespanha, J. P., & Kriegman, D. J. (1997). Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 711-720.
[2] Brereton, R. G., & Lloyd, G. R. (2014). Partial least squares discriminant analysis: taking the magic away. Journal of Chemometrics, 28(4), 213-225.
[3] Chen, L. F., Liao, H. Y. M., Ko, M. T., Lin, J. C., & Yu, G. J. (2000). A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognition, 33(10), 1713-1726.
[4] Cook, R. D. (2009). Regression Graphics: Ideas for Studying Regressions Through Graphics (Vol. 482). John Wiley & Sons.
[5] Dai, D. Q., & Yuen, P. C. (2003). Regularized discriminant analysis and its application to face recognition. Pattern Recognition, 36(3), 845-847.
[6] Dai, D. Q., & Yuen, P. C. (2007). Face recognition by regularized discriminant analysis. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37(4), 1080-1085.
[7] Ding, S., & Cook, R. D. (2015). Tensor sliced inverse regression. Journal of Multivariate Analysis, 133, 216-231.
[8] Duda, R. O., Hart, P. E., & Stork, D. G. (2012). Pattern Classification. John Wiley & Sons.
[9] Duda, R. O., & Hart, P. E. (1973). Pattern Classification and Scene Analysis. John Wiley & Sons.
[10] Friedman, J. H. (1989). Regularized discriminant analysis. Journal of the American Statistical Association, 84(405), 165-175.
[11] Fukunaga, K. (2013). Introduction to Statistical Pattern Recognition. Academic Press.
[12] Hall, P., & Li, K. C. (1993). On almost linearity of low dimensional projections from high dimensional data. The Annals of Statistics, 867-889.
[13] Hastie, T., Tibshirani, R., & Friedman, J. (2002). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer.
[14] Helland, I. S. (1988). On the structure of partial least squares regression. Communications in Statistics - Simulation and Computation, 17(2), 581-607.
[15] Helland, I. S. (1990). Partial least squares regression and statistical models. Scandinavian Journal of Statistics, 97-114.
[16] Helland, I. S. (2000). Model reduction for prediction in regression models. Scandinavian Journal of Statistics, 27(1), 1-20.
[17] Huang, R., Liu, Q., Lu, H., & Ma, S. (2002). Solving the small sample size problem of LDA. In Pattern Recognition, 2002. Proceedings. 16th International Conference on (Vol. 3, pp. 29-32). IEEE.
[18] Johnson, R. A., & Wichern, D. W. (2014). Applied Multivariate Statistical Analysis (Vol. 4). New Jersey: Prentice-Hall.
[19] Kong, H., Teoh, E. K., Wang, J. G., & Venkateswarlu, R. (2005, March). Two-dimensional Fisher discriminant analysis: forget about small sample size problem [face recognition applications]. In Acoustics, Speech, and Signal Processing, 2005. Proceedings (ICASSP '05). IEEE International Conference on (Vol. 2, pp. ii-761). IEEE.
[20] Kong, H., Wang, L., Teoh, E. K., Wang, J. G., & Venkateswarlu, R. (2005, June). A framework of 2D Fisher discriminant analysis: application to face recognition with small number of training samples. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on (Vol. 2, pp. 1083-1088). IEEE.
[21] Li, L., Cook, R. D., & Tsai, C. L. (2007). Partial inverse regression. Biometrika, 94(3), 615-625.
[22] Li, M., & Yuan, B. (2005). 2D-LDA: A statistical linear discriminant analysis for image matrix. Pattern Recognition Letters, 26(5), 527-532.
[23] Li, K. C. (1991). Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414), 316-327.
[24] Li, K. C. (2000). High dimensional data analysis via the SIR/PHD approach.
[25] Liu, C., & Wechsler, H. (1998). Enhanced Fisher linear discriminant models for face recognition. In Pattern Recognition, 1998. Proceedings. Fourteenth International Conference on (Vol. 2, pp. 1368-1372). IEEE.
[26] Liu, C., & Wechsler, H. (2002). Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition. IEEE Transactions on Image Processing, 11(4), 467-476.
[27] Lu, H., Plataniotis, K. N., & Venetsanopoulos, A. N. (2008). MPCA: Multilinear principal component analysis of tensor objects. IEEE Transactions on Neural Networks, 19(1), 18-39.
[28] Lu, H., Plataniotis, K. N., & Venetsanopoulos, A. N. (2011). A survey of multilinear subspace learning for tensor data. Pattern Recognition, 44(7), 1540-1551.
[29] McLachlan, G. (2004). Discriminant Analysis and Statistical Pattern Recognition (Vol. 544). John Wiley & Sons.
[30] Noushath, S., Kumar, G. H., & Shivakumara, P. (2006). (2D)2 LDA: An efficient approach for face recognition. Pattern Recognition, 39(7), 1396-1400.
[31] Turk, M., & Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71-86.
[32] Rao, C. R. (1948). The utilization of multiple measurements in problems of biological classification. Journal of the Royal Statistical Society. Series B (Methodological), 10(2), 159-203.
[33] Rao, C. R., & Mitra, S. K. (1972). Generalized inverse of a matrix and its applications. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Theory of Statistics. The Regents of the University of California.
[34] Sharma, A., & Paliwal, K. K. (2015). Linear discriminant analysis for the small sample size problem: an overview. International Journal of Machine Learning and Cybernetics, 6(3), 443-454.
[35] Sharma, A., & Paliwal, K. K. (2015). A deterministic approach to regularized linear discriminant analysis. Neurocomputing, 151, 207-214.
[36] Turk, M. A., & Pentland, A. P. (1991, June). Face recognition using eigenfaces. In Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91., IEEE Computer Society Conference on (pp. 586-591). IEEE.
[37] Yan, S., Xu, D., Yang, Q., Zhang, L., Tang, X., & Zhang, H. J. (2007). Multilinear discriminant analysis for face recognition. IEEE Transactions on Image Processing, 16(1), 212-220.
[38] Yang, J., Zhang, D., Frangi, A. F., & Yang, J. Y. (2004). Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(1), 131-137.
[39] Ye, J., Janardan, R., & Li, Q. (2004, August). GPCA: an efficient dimension reduction scheme for image compression and retrieval. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 354-363). ACM.
[40] Ye, J. (2004). Generalized low rank approximations of matrices. In Proceedings of the Twenty-First International Conference on Machine Learning (p. 112). ACM.
[41] Ye, J. (2007). Least squares linear discriminant analysis. In Proceedings of the 24th International Conference on Machine Learning (pp. 1087-1093). ACM.
[42] Ye, J., Janardan, R., & Li, Q. (2005). Two-dimensional linear discriminant analysis. In Advances in Neural Information Processing Systems (pp. 1569-1576).
[43] Yu, H., & Yang, J. (2001). A direct LDA algorithm for high-dimensional data—with application to face recognition. Pattern Recognition, 34(10), 2067-2070.
[44] Zhang, D., & Zhou, Z. H. (2005). (2D)2 PCA: Two-directional two-dimensional PCA for efficient face representation and recognition. Neurocomputing, 69(1), 224-231. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/59145 | - |
dc.description.abstract | Face recognition has long been an important area of machine learning, and feature extraction is one of its central problems. Principal component analysis (PCA) and linear discriminant analysis (LDA) are two of the most important feature extraction methods in this area, but in practice both have drawbacks or limitations. In this thesis, we first discuss the primary limitation of LDA and review several well-known algorithms that address it, and then present two new ideas for feature extraction. One is a new way to extract features from data; the other is a two-step supervised dimension reduction method. First, we provide a method for finding the LDA subspace, based on partial least squares regression and sliced inverse regression. For categorical data, with the response variable suitably coded, sliced inverse regression can be used to find the linear discriminant subspace (Li, 2000); however, when the dimension of the data exceeds the sample size, it still suffers from the small sample size problem. For that case, Li et al. (2007) proposed partial inverse regression to estimate the column space of the estimator. We use this estimated column space as the linear discriminant subspace (the features) on which we classify the data. The other contribution is our proposed two-step supervised dimension reduction: before searching for the linear discriminant subspace, we suggest first using multilinear principal component analysis (MPCA) to find a tensor subspace that reduces the dimension of the data, and then applying various supervised dimension reduction algorithms to find the LDA subspace. In addition, we provide a method for selecting the dimension of the tensor subspace so that two-step supervised dimension reduction can effectively improve classification accuracy. | zh_TW |
dc.description.abstract | Face recognition has been viewed as an important part of the human perception system for many years, and a great deal of effort has gone into this field. PCA and LDA (linear discriminant analysis) are two of the most widely used feature extraction techniques in this field; however, both suffer from drawbacks or limitations. In this thesis, we first discuss the primary limitation of LDA, the small sample size (SSS) problem, and review some well-known algorithms for overcoming it. We then present two approaches. One is a novel approach to finding a subspace in which the data can be classified accurately, based on the ideas of partial least squares regression (Helland, 1988, 1990, 2000) and sliced inverse regression (Li, 1991). Li et al. (2007) used partial least squares regression to overcome the SSS problem encountered by sliced inverse regression; we use the column space spanned by their estimator as the discriminative subspace. The other is a two-step supervised dimension reduction strategy, based on MPCA and linear discriminant analysis as developed in prior work. We also provide a strategy for determining how many dimensions to keep in the first step so as to improve recognition accuracy. | en |
dc.description.provenance | Made available in DSpace on 2021-06-16T09:16:43Z (GMT). No. of bitstreams: 1 ntu-106-R03246021-1.pdf: 669585 bytes, checksum: 881c753c2b101c86d740d5f06b1f0da8 (MD5) Previous issue date: 2017 | en |
dc.description.tableofcontents | 1 Introduction to face recognition 1
2 Introduction to unsupervised dimension reduction and supervised dimension reduction with application to face recognition 7
2.1 An overview of unsupervised dimension reduction 9
2.2 An overview of supervised dimension reduction and SSS problem 12
3 Another interpretation of partial inverse regression on linear discriminant analysis 29
3.1 Brief review of NIPALS PLS2 algorithm: Partial least square regression 29
3.2 Relation between linear regression and linear discriminant analysis 33
3.3 Partial least square discriminant 36
3.4 An improvement of partial least square discriminant: Partial inverse regression 37
3.5 Simulation results of NIPALS PLS2 and Partial inverse regression and comparison with other methods 41
4 Two-step supervised dimension reduction 43
4.1 Motivation and introduction to two-step supervised dimension reduction 43
4.2 Dimension selection for two-step supervised dimension reduction 46
4.3 Other ways to select the dimension reduction parameter 50
4.4 Other dimension reduction algorithms to reduce the dimension as the first step 51
4.4.1 Unsupervised dimension reduction 52
4.4.2 Supervised dimension reduction 53
4.5 Simulation results 58
4.5.1 Basic information of our real face data 59
4.5.2 Numerical results of real face data 59
5 Discussion and conclusion 63
6 Reference 65 | |
dc.language.iso | en | |
dc.title | 兩階段監督式降維及其在人臉辨識上的應用 | zh_TW |
dc.title | Two-step supervised dimension reduction with application
to face recognition | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | Master | |
dc.contributor.oralexamcommittee | 陳宏(Hung Chen),陳素雲(Su-Yun Huang),洪弘(Hung Hung) | |
dc.subject.keyword | 多重線性主成分分析,線性判別分析,模式識別, | zh_TW |
dc.subject.keyword | Multilinear principal component analysis,Linear discriminant analysis,Pattern recognition, | en |
dc.relation.page | 68 | |
dc.identifier.doi | 10.6342/NTU201701553 | |
dc.rights.note | Authorized with compensation (paid license) | |
dc.date.accepted | 2017-07-13 | |
dc.contributor.author-college | 理學院 | zh_TW |
dc.contributor.author-dept | 應用數學科學研究所 | zh_TW |
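The two-step strategy described in the abstract (an unsupervised reduction to make the within-class scatter matrix nonsingular, followed by LDA on the reduced features) can be illustrated with a minimal sketch. This is not the thesis's algorithm: ordinary PCA stands in for MPCA, the data are synthetic, and the scikit-learn API and all variable names are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for face data: 30 samples of 8x8 "images" in 3 classes,
# flattened to p = 64 features, so p exceeds the per-class sample size
# (the small-sample-size regime the thesis targets).
n_per_class, n_classes, p = 10, 3, 64
X = rng.normal(size=(n_per_class * n_classes, p))
y = np.repeat(np.arange(n_classes), n_per_class)
for c in range(n_classes):
    X[y == c] += 3.0 * rng.normal(size=p)  # separate the class means

# Step 1: unsupervised reduction (PCA here, standing in for MPCA) to a
# dimension small enough that LDA's scatter matrices are nonsingular.
# Step 2: LDA on the reduced features finds the discriminative subspace.
two_step = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
two_step.fit(X, y)
print(two_step.score(X, y))
```

Without step 1, LDA on the raw 64-dimensional data with 30 samples would hit exactly the singular within-class scatter matrix that the SSS literature reviewed in Chapter 2 addresses.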
Appears in Collections: | 應用數學科學研究所
Files in This Item:
File | Size | Format |
---|---|---|
ntu-106-1.pdf (currently not authorized for public access) | 653.89 kB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.