NTU Theses and Dissertations Repository

Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/32281
Full metadata record
DC field: value [language]
dc.contributor.advisor: 陳正剛 (Argon Chen)
dc.contributor.author: Chen-Wei Lien
dc.contributor.author: 李振維 [zh_TW]
dc.date.accessioned: 2021-06-13T03:40:32Z
dc.date.available: 2009-07-28
dc.date.copyright: 2006-07-28
dc.date.issued: 2006
dc.date.submitted: 2006-07-26
dc.identifier.citation:
[1] R.A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7: 179-188, 1936.
[2] R.O. Duda and P.E. Hart. Pattern Classification and Scene Analysis. John Wiley & Sons, New York, 1973.
[3] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41-48. IEEE, 1999.
[4] B. Schölkopf, A.J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10: 1299-1319, 1998.
[5] B. Schölkopf, C.J.C. Burges, and A.J. Smola, editors. Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge, MA, 1999.
[6] S. Mika, G. Rätsch, and K.-R. Müller. A mathematical programming approach to the kernel Fisher algorithm. In Advances in Neural Information Processing Systems 13, 2001.
[7] J. Xu, X. Zhang, and Y. Li. Kernel MSE algorithm: a unified framework for KFD, LS-SVM and KRR. In Proceedings of the 2001 International Joint Conference on Neural Networks (IJCNN'01), pages 1486-1491, Washington, DC, July 2001.
[8] G. Strang. Linear Algebra and Its Applications. Thomson Learning, 3rd edition, 1988.
[9] S. Mika, A.J. Smola, and B. Schölkopf. An improved training algorithm for kernel Fisher discriminants. In Proceedings of AISTATS 2001. Morgan Kaufmann, 2001.
[10] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 3rd edition, 1996.
[11] J. Zhang and K.-K. Ma. Kernel Fisher discriminant for texture classification. 2004.
[12] University of Minnesota, Computer Science & Engineering. Dataset page, accessed 12 Jun. 2006. http://www-users.cs.umn.edu/~hpark/data.html
[13] C.-W. Hsu, C.-C. Chang, and C.-J. Lin. A practical guide to support vector classification. Available at http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
[14] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
[15] UCI Machine Learning Repository. http://www.ics.uci.edu/~mlearn/MLRepository.html
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/32281
dc.description.abstract: In general, there are two kinds of linear classification methods: minimum squared error (MSE) and the Fisher linear discriminant (FLD). Because linear methods are not sufficient for data with nonlinear patterns, the nonlinear methods KMSE and KFD were developed from MSE and FLD, respectively. Both transform instances from the original attribute space into a high-dimensional feature space, where the linear methods are then applied. The objective of FLD and KFD is to find the directions onto which the projections of the training instances, the discriminant scores, provide the maximal separability among classes. However, FLD is known to be inefficient for datasets with a large number of attributes, and KFD for datasets with a large number of instances. To improve computing efficiency, we use MSE for linear classification problems. For multi-class problems, however, MSE, like the support vector machine (SVM), must resort to the one-against-one or one-against-the-rest approach; both are inefficient compared to FLD and KFD, where a single model discriminates all classes simultaneously. We therefore develop a multi-class MSE that uses the Sherman-Woodbury formula to improve computational efficiency and handles multiple classes simultaneously through a class-labeling scheme, with the candidate class-labeling schemes determined by the Gram-Schmidt process. Its nonlinear counterpart, the multi-class KMSE, follows directly. A simulated example shows how the proposed method works and visualizes the meaning of the class-labeling scheme, and two real-world datasets are used to compare the proposed method with conventional methods. [en]
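The pipeline described in the abstract can be sketched briefly. The following is a minimal illustration only, not the thesis's implementation (Appendix B contains the author's C# code); the function names, the ridge parameter lam, and the use of QR factorization as a numerical stand-in for the Gram-Schmidt process are assumptions made here. Each class is encoded as an orthonormal target vector, one least-squares model is fitted for all classes at once, and the Sherman-Woodbury identity, (A + UCV)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}, is applied in its push-through form (lam I_d + X^T X)^{-1} X^T = X^T (lam I_n + X X^T)^{-1}, so that only an n x n system is solved when instances are fewer than attributes.

import numpy as np

def gram_schmidt_targets(n_classes, dim, seed=0):
    # Orthonormal class-target vectors (one row per class); QR factorization
    # performs the Gram-Schmidt orthonormalization numerically. Requires dim >= n_classes.
    raw = np.random.default_rng(seed).standard_normal((dim, n_classes))
    q, _ = np.linalg.qr(raw)
    return q.T                                   # shape: (n_classes, dim)

def fit_multiclass_mse(X, y, targets, lam=1e-3):
    # One ridge-regularized MSE model for all classes: min ||X W - Y||^2 + lam ||W||^2.
    n, d = X.shape
    Y = targets[y]                               # target row for each training instance
    if n < d:
        # Sherman-Woodbury (push-through) form: invert an n x n matrix instead of d x d.
        return X.T @ np.linalg.solve(lam * np.eye(n) + X @ X.T, Y)
    return np.linalg.solve(lam * np.eye(d) + X.T @ X, X.T @ Y)

def predict(X, W, targets):
    # Classify each instance by the nearest class-target vector in discriminant-score space.
    scores = X @ W
    d2 = ((scores[:, None, :] - targets[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Toy usage (hypothetical data): three blobs in 5 attributes, 3-dimensional target space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 5)) + 4.0 * np.eye(5)[k] for k in range(3)])
y = np.repeat(np.arange(3), 30)
T = gram_schmidt_targets(n_classes=3, dim=3)
W = fit_multiclass_mse(X, y, T)
print("training accuracy:", (predict(X, W, T) == y).mean())

In the kernel (multi-class KMSE) setting, X would be replaced by an n x n kernel matrix over the training instances so that the system solved again scales with the number of instances; that step is omitted from this sketch.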
dc.description.provenance: Made available in DSpace on 2021-06-13T03:40:32Z (GMT). No. of bitstreams: 1. ntu-95-R93546018-1.pdf: 1538452 bytes, checksum: 72f212c0c1bf946981d2b4dc3b86c79f (MD5). Previous issue date: 2006. [en]
dc.description.tableofcontents:
Abstract i
論文摘要 (Chinese Abstract) ii
Contents iii
Contents of Figures iv
Contents of Tables v
Chapter 1: Introduction 1
1.1 Background 1
1.2 Current Linear Approaches for Classification 2
1.2.1 Fisher Linear Discriminants 2
1.2.2 Minimum Squared Error Approach 4
1.3 Kernel Fisher Discriminants 7
1.4 Problems of Current Linear and Nonlinear Classification Approaches and Research Objectives 11
1.5 Thesis Organization 11
Chapter 2: Multi-Class Kernel MSE with Sherman-Woodbury Formula 13
2.1 Multi-class Minimum Squared Error Approach 13
2.2 Multi-class Kernel Minimum Squared Error Approach 20
2.3 Determination of the Optimal Class-Labeling Scheme 24
2.3.1 Gram-Schmidt Process 27
2.4 Sherman-Woodbury Formula 30
2.5 Illustration with a Simulated Example 31
Chapter 3: Case Study 38
3.1 Medline Text Dataset 38
3.2 Hayes-Roth Dataset 46
Chapter 4: Conclusions and Suggestions on Future Research 51
References 52
Appendix A: Principal Component Analysis 54
Appendix B: C# Code 57
dc.language.iso: en
dc.title: Effective Multi-class Kernel MSE Classifier with Sherman-Woodbury Formula [en]
dc.type: Thesis
dc.date.schoolyear: 94-2
dc.description.degree: Master (碩士)
dc.contributor.oralexamcommittee: 黃乾綱 (Chien-Kang Huang), 陳倩瑜 (Chien-Yu Chen), 范治民 (Chih-Min Fan)
dc.subject.keyword: classification method, Fisher linear discriminant (FLD), kernel Fisher discriminant (KFD), minimum squared error (MSE), kernel minimum squared error (KMSE), efficiency [en]
dc.relation.page: 71
dc.rights.note: Licensed for a fee (有償授權)
dc.date.accepted: 2006-07-26
dc.contributor.author-college: College of Engineering (工學院) [zh_TW]
dc.contributor.author-dept: Institute of Industrial Engineering (工業工程學研究所) [zh_TW]
Appears in collections: Institute of Industrial Engineering (工業工程學研究所)

Files in this item:
ntu-95-1.pdf (1.5 MB, Adobe PDF) - public access not currently authorized


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.
