Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/31254
Full metadata record
DC Field | Value | Language
dc.contributor.advisor: 陳宏 (Hung Chen)
dc.contributor.author: Hsin-Hsiung Huang [en]
dc.contributor.author: 黃信雄 [zh_TW]
dc.date.accessioned: 2021-06-13T02:38:40Z
dc.date.available: 2008-01-24
dc.date.copyright: 2007-01-24
dc.date.issued: 2006
dc.date.submitted: 2007-01-15
dc.identifier.citation:
[1] Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716-723.
[2] Balkema, A.A. and de Haan, L. (1972). On R. von Mises' condition for the domain of attraction of exp(-e^{-x}). Annals of Mathematical Statistics, 43, 1352-1354.
[3] Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: a Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society, Series B, 57, 289-300.
[4] David, H.A. and Nagaraja, H.N. (2003). Order Statistics. Third Edition. Wiley Interscience.
[5] de Haan, L. (1970). On regular variation and its application to the weak convergence of sample extremes. Thesis, University of Amsterdam, Mathematical Centre tract, 32, 296, 299, 301.
[6] Donoho, D. and Johnstone, I. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81, 425-455.
[7] Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R. (2004). Least angle regression (with discussion). Annals of Statistics, 32, 407-499.
[8] Gnedenko, B. (1943). Sur la distribution limite du terme maximum d'une série aléatoire. Annals of Mathematics, 44, 423-453.
[9] Hoerl, A.E. and Kennard, R.W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12, 55-67.
[10] Kolmogorov, A.N. (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung. Berlin: Springer-Verlag. English translation (1950): Foundations of the Theory of Probability. New York: Chelsea. 2nd ed. (1974) in German and Russian.
[11] Li, K.C. (2005). Likelihood of false positives in hypotheses with strongest evidence from multiple testing: the p-value memoryless conversion approach. Technical report.
[12] Li, K.C. (1985). From Stein’s unbiased risk estimates to the method of generalized cross validation. Annals of Statistics, 13, 1352-1377.
[13] Mallows, C.L. (1973). Some comments on Cp. Technometrics. 15, 661-675.
[14] Murray, W., Gill, P. and Wright, M. (1981). Practical Optimization. New York: Academic Press.
[15] Pyke, R. (1965). Spacings. Journal of the Royal Statistical Society, Series B 27, 395-436.
[16] Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6, 461-464.
[17] Stein, C.M. (1981). Estimation of the Mean of a Multivariate Normal Distribution. Annals of Statistics 9, 1135-1151.
[18] Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B 58, 267-288.
[19] von Mises, R. (1936). La distribution de la plus grande de n valeurs. Revue Mathématique de l'Union Interbalkanique, 1, 141-160.
[20] Woodroofe, M. (1982). On Model Selection and the Arc Sine Laws. Annals of Statistics, 10, 1182-1194.
[21] Zhang, P. (1992). On the Distributional Properties of Model Selection Criteria. Journal of the American Statistical Association, 87, 732-737.
[22] Zou, H., Hastie, T. and Tibshirani, R. (2004). On the “Degrees of Freedom” of the Lasso. Technical Report.
[23] Zou, H. (2006). The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association 101, 1418-1429.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/31254
dc.description.abstract: When the number of predictors in a linear regression model is very large, regularization is a common way to reduce the complexity of the selected regression model. The Lasso (Tibshirani, 1996) is regarded as a regularization method that achieves parsimony in the parameters of the selected model. When the predictors of the linear regression model are orthonormal and the number of predictors is of the same order as the sample size, this thesis studies the operating characteristics of selecting important predictors with the Lasso and Cp. The characteristics considered include the number of selected predictors and the proportion of truly relevant predictors among those selected. These conclusions also apply when the Lasso and Cp are used as a multiple hypothesis testing procedure. [zh_TW]
dc.description.abstract: When the number of predictors in a linear regression model is large, regularization is a commonly used method to reduce the complexity of the fitted model. The LASSO (Tibshirani, 1996) has been advocated as a useful regularization method for achieving sparsity, or parsimony, of the resulting fitted model. In this thesis, we study the operating characteristics of the LASSO coupled with Mallows' Cp for identifying the orthonormal predictor variables of a linear regression when the number of predictors and the number of observations are of the same magnitude. The characteristics include the number of predictors chosen and the proportion of correctly identified predictors. This result can be useful in multiple testing. [en]
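To make the setting in the abstract concrete, the following is a minimal sketch, not the thesis's actual code; the values of n, m, the signal strength, and the penalty are illustrative assumptions. With an orthonormal design, X_{nm} = I_m, the Lasso estimate is componentwise soft-thresholding of y, and taking the degrees of freedom to be the number of nonzero coefficients (Zou, Hastie and Tibshirani, 2004) gives a Cp-style rule for choosing the threshold. The script reports the two operating characteristics discussed above: the number of selected predictors and the proportion of truly active predictors among them.

# Minimal sketch (illustrative assumptions only, not the thesis's code).
import numpy as np

rng = np.random.default_rng(0)
n = m = 500                       # illustrative: as many predictors as observations
beta = np.zeros(m)
beta[:10] = 4.0                   # sparse true model: 10 active predictors
sigma = 1.0

# With X = I_m, the vector z = X'y is just y itself.
y = beta + rng.normal(scale=sigma, size=n)

def soft_threshold(z, lam):
    # Lasso solution under an orthonormal design: componentwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def cp_criterion(z, lam, sigma2, penalty=2.0):
    # Cp-style criterion: RSS + penalty * sigma^2 * df, with df = number of nonzeros.
    b = soft_threshold(z, lam)
    rss = float(np.sum((z - b) ** 2))
    df = int(np.count_nonzero(b))
    return rss + penalty * sigma2 * df

# Choose the threshold on a grid by minimizing the Cp-style criterion.
grid = np.linspace(0.0, 6.0, 301)
lam_hat = min(grid, key=lambda lam: cp_criterion(y, lam, sigma ** 2))
beta_hat = soft_threshold(y, lam_hat)

# Operating characteristics: how many predictors are selected, and how many
# of the selected predictors are truly active.
selected = set(np.flatnonzero(beta_hat).tolist())
truly_active = set(range(10))
n_selected = len(selected)
prop_correct = len(selected & truly_active) / max(n_selected, 1)
print(f"threshold = {lam_hat:.2f}, selected = {n_selected}, "
      f"proportion truly active among selected = {prop_correct:.2f}")

Setting beta to all zeros in the same sketch reproduces a null-model setting of the kind examined in the simulation studies listed in the table of contents below.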
dc.description.provenance: Made available in DSpace on 2021-06-13T02:38:40Z (GMT). No. of bitstreams: 1
ntu-95-R93221018-1.pdf: 704517 bytes, checksum: 43065b633b146c11d8adae526340da10 (MD5)
Previous issue date: 2006 [en]
dc.description.tableofcontents: Table of Contents
Thesis committee approval certificate ... i
Acknowledgements ... ii
Chinese abstract ... iii
English abstract ... iv
Chapter 1  Introduction ... 1
Chapter 2  Lasso ... 3
  Section 1  Multiple Hypothesis Testing ... 4
  Section 2  Regression ... 9
Chapter 3  Random walk induced by Mallows' Cp ... 11
Chapter 4  Estimate of the degrees of freedom ... 16
Chapter 5  Null and Sparse Models ... 23
Chapter 6  Simulation Studies When n = m and X_{nm} = I_m ... 25
  Section 1  Study 1: Null Model ... 26
  Section 2  Study 2: The spacings determined by 2exp(1)-2 ... 27
  Section 3  Study 3: Sparse Model with Cp of Penalty 2 ... 28
  Section 4  Study 4: Sparse Model with Cp of Penalty 4 ... 31
  Section 5  Study 5: Effect on Penalty 4 and 2 under Abundant Models ... 33
Chapter 7  Simulation Studies When n ≈ 5m and X_{nm}^T X_{nm} = I_m ... 36
  Section 1  Study 6: Null Model ... 36
  Section 2  Study 7: Effect on Penalty 4 and 2 under Sparse Model ... 37
  Section 3  Study 8: Effect on Penalty 4 and 2 under Abundant Models ... 39
Chapter 8  Conclusions and Discussions ... 41
References ... 44
dc.language.iso: en
dc.title: 使用Lasso-Cp選取線性模型解釋變數之探討 [zh_TW]
dc.title: Study on the Lasso Method for Variable Selection in Linear Regression Model with Mallows' Cp [en]
dc.type: Thesis
dc.date.schoolyear: 95-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 李克昭 (Ker-Chau Li), 陳素雲 (Su-Yun Huang), 江金倉
dc.subject.keyword: 最小角度回歸 (least angle regression) [zh_TW]
dc.subject.keyword: Least angle regression, Forward selection [en]
dc.relation.page: 46
dc.rights.note: 有償授權 (paid authorization)
dc.date.accepted: 2007-01-15
dc.contributor.author-college: 理學院 (College of Science) [zh_TW]
dc.contributor.author-dept: 數學研究所 (Graduate Institute of Mathematics) [zh_TW]
Appears in Collections: 數學系 (Department of Mathematics)

Files in this item:
File: ntu-95-1.pdf | Size: 688 kB | Format: Adobe PDF | Access: restricted (not authorized for public access)


Except where the copyright terms are otherwise stated, all items in this repository are protected by copyright, with all rights reserved.
