Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96428
Full metadata record (DC field: value [language])

dc.contributor.advisor: 楊鈞澔 [zh_TW]
dc.contributor.advisor: Chun-Hao Yang [en]
dc.contributor.author: 劉冠毅 [zh_TW]
dc.contributor.author: Guan-Yi Liu [en]
dc.date.accessioned: 2025-02-13T16:25:36Z
dc.date.available: 2025-02-14
dc.date.copyright: 2025-02-13
dc.date.issued: 2025
dc.date.submitted: 2025-02-07
dc.identifier.citation:
Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M. A., Al-Amidie, M., and Farhan, L. (2021). Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8:1–74.

Andrews, D. W. and Lu, B. (2001). Consistent model and moment selection procedures for GMM estimation with application to dynamic panel data models. Journal of Econometrics, 101(1):123–164.

Boyd, S. and Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.

Candes, E. and Tao, T. (2007). The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics, 35(6):2313–2351.

Chatterjee, A. and Lahiri, S. N. (2011). Bootstrapping lasso estimators. Journal of the American Statistical Association, 106(494):608–625.

Chen, Y. and Li, J. (2021). Recurrent neural networks algorithms and applications. In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), pages 38–43. IEEE.

Efron, B. (1992). Bootstrap methods: another look at the jackknife. In Breakthroughs in statistics: Methodology and distribution, pages 569–593. Springer.

Feng, J. and Simon, N. (2019). Sparse-input neural networks for high-dimensional nonparametric regression and classification.

Hastie, T., Tibshirani, R., and Friedman, J. (2009). The elements of statistical learning: data mining, inference and prediction. Springer, 2nd edition.

He, K., Xu, H., and Kang, J. (2019). A selective overview of feature screening methods with applications to neuroimaging data. WIREs Computational Statistics, 11(2):e1454.

Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Lahiri, S. N. (2010). Asymptotic properties of the residual bootstrap for lasso estimators. Proceedings of the American Mathematical Society, 138(12):4497–4509.

Lemhadri, I., Ruan, F., Abraham, L., and Tibshirani, R. (2021). Lassonet: A neural network with feature sparsity. Journal of Machine Learning Research, 22(127):1–29.

Li, S. (2020). Debiasing the debiased lasso with bootstrap. Electronic Journal of Statistics, 14(1):2298–2337.

Liu, B., Zhang, Q., Xue, L., Song, P. X. K., and Kang, J. (2024). Robust high-dimensional regression with coefficient thresholding and its application to imaging data analysis. Journal of the American Statistical Association, 119(545):715–729.

Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the Lasso. The Annals of Statistics, 34(3):1436–1462.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B: Statistical Methodology, 58(1):267–288.

van de Geer, S., Bühlmann, P., Ritov, Y., and Dezeure, R. (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics, 42(3):1166–1202.

Xia, L., Nan, B., and Li, Y. (2023). Debiased lasso for generalized linear models with a diverging number of covariates. Biometrics, 79(1):344–357.

Zhang, C.-H. and Zhang, S. S. (2014). Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(1):217–242.

Zhao, P. and Yu, B. (2006). On model selection consistency of lasso. Journal of Machine Learning Research, 7:2541–2563.

Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429.

Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B: Statistical Methodology, 67(2):301–320.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96428
dc.description.abstract [zh_TW]: LassoNet 是一種 Lasso 的擴展模型,其結合了前饋神經網絡以捕捉非線性關係,並同時保留其稀疏解。然而,它跟 Lasso 一樣有偏誤問題,特別是在高維廣義線性模型中。本文將 LassoNet 擴展到高維廣義線性模型中,使其能夠在維持稀疏性的同時建模複雜的數據結構。然而因為該擴展模型的估計結果仍然與 Lasso 一樣存在偏誤問題,所以我們引入了 van de Geer et al. (2014) 提出的去偏誤架構,並通過基於自助法的校正方法作進一步改進。我們的改進提供了一個具有良好預測性和解釋性的去偏 LassoNet 估計式。我們提出的方法拓展了 LassoNet 的應用範圍,並為分析高維數據中預測變量與響應變量之間的非線性和稀疏關係提供了一個可靠的框架。
dc.description.abstract [en]: LassoNet, an extension of Lasso, incorporates feed-forward neural networks (FFNs) to capture nonlinear relationships while retaining sparse solutions. However, it inherits Lasso's bias issues, particularly in high-dimensional generalized linear models (GLMs). In this thesis, we extend LassoNet to GLMs, enabling it to model complex data structures while maintaining sparsity. Because the estimates produced by this extended model still exhibit bias similar to Lasso's, we address the issue by incorporating the debiasing framework introduced by van de Geer et al. (2014) and further enhancing it with a bootstrap-based correction. These refinements yield a debiased LassoNet estimator that is both predictive and interpretable. The proposed method broadens the applicability of LassoNet, providing a reliable framework for analyzing high-dimensional data with nonlinear, sparse relationships between predictors and response.
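
To make the two-stage correction described in the abstract concrete: in the linear-model case, the debiasing framework of van de Geer et al. (2014) forms a one-step corrected estimator b̂ = β̂ + Θ̂ Xᵀ(y − Xβ̂)/n, where β̂ is the penalized estimate and Θ̂ is a node-wise-lasso approximation of the inverse Gram matrix. The sketch below illustrates only the complementary bootstrap bias-correction idea, in the spirit of Chatterjee and Lahiri (2011), with plain Lasso standing in for the thesis's LassoNet-GLM estimator; the simulated model, tuning values, and helper names are illustrative assumptions, not the thesis's actual implementation.

```python
# Minimal sketch: residual-bootstrap bias correction for a lasso-type
# estimator. Plain Lasso stands in for the LassoNet-GLM model of the
# thesis; all settings here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy sparse linear model: only the first 5 of 200 coefficients are nonzero.
n, p = 100, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0
y = X @ beta + rng.standard_normal(n)

def fit_sparse(X, y, alpha=0.1):
    """Stand-in for the sparse base estimator (LassoNet-GLM in the thesis)."""
    return Lasso(alpha=alpha, max_iter=10_000).fit(X, y).coef_

beta_hat = fit_sparse(X, y)

# Residual bootstrap: resample centered residuals, regenerate responses
# from the fitted model, and refit the estimator on each pseudo-sample.
resid = y - X @ beta_hat
resid -= resid.mean()
B = 200
boot = np.empty((B, p))
for b in range(B):
    y_b = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    boot[b] = fit_sparse(X, y_b)

# Bootstrap bias estimate: mean of the refits minus the original fit.
# Subtracting it yields the bias-corrected coefficients.
bias_hat = boot.mean(axis=0) - beta_hat
beta_corrected = beta_hat - bias_hat

print("mean |bias| on true support:", np.abs(bias_hat[:5]).mean())
print("corrected coefs (support):  ", np.round(beta_corrected[:5], 2))
```

Shrinkage pulls the active coefficients below their true value of 2.0, so the bootstrap bias estimate is typically negative on the support and the correction pushes the estimates back toward the truth; in the GLM setting of the thesis, the refit step would use the appropriate likelihood rather than squared error.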
dc.description.provenance [en]: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-13T16:25:36Z. No. of bitstreams: 0
dc.description.provenance [en]: Made available in DSpace on 2025-02-13T16:25:36Z (GMT). No. of bitstreams: 0
dc.description.tableofcontents:
摘要 i
Abstract iii
Contents v
List of Figures vii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Proposed method 4
1.3 Organization of the thesis 4
Chapter 2 Preliminary 5
2.1 Lasso 5
2.2 Fully-Connected Neural Network 6
Chapter 3 Method 13
3.1 LassoNet 13
3.2 Debiasing Lasso Estimators 14
3.2.1 Node-wise Lasso regression 14
3.2.2 Extensions of node-wise Lasso regression 16
3.3 Debiasing the LassoNet 17
3.3.1 Merging LassoNet and GLM 18
3.3.2 Bootstrapping the Bias of LassoNet 18
Chapter 4 Simulation 21
4.1 Evaluation of estimation bias 22
4.2 Evaluating variable selection performance 24
Chapter 5 Conclusion 27
References 29
dc.language.iso: en
dc.subject: 變數選擇 [zh_TW]
dc.subject: LassoNet [zh_TW]
dc.subject: 高維度模型 [zh_TW]
dc.subject: 去偏誤 [zh_TW]
dc.subject: 神經網路 [zh_TW]
dc.subject: LassoNet [en]
dc.subject: Neural Network [en]
dc.subject: Bias Reduction [en]
dc.subject: High-dimensional Models [en]
dc.subject: Variable Selection [en]
dc.title: 使用自助法技術對 LassoNet 模型去偏誤 [zh_TW]
dc.title: Bias Reduction in LassoNet Models Using Bootstrap Techniques [en]
dc.type: Thesis
dc.date.schoolyear: 113-1
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 陳裕庭;張升懋 [zh_TW]
dc.contributor.oralexamcommittee: Yu-Ting Chen;Sheng-Mao Chang [en]
dc.subject.keyword: 神經網路, 去偏誤, 高維度模型, 變數選擇, LassoNet [zh_TW]
dc.subject.keyword: Neural Network, Bias Reduction, High-dimensional Models, Variable Selection, LassoNet [en]
dc.relation.page: 31
dc.identifier.doi: 10.6342/NTU202500447
dc.rights.note: 同意授權(全球公開) (Authorization granted; open access worldwide)
dc.date.accepted: 2025-02-07
dc.contributor.author-college: 理學院 (College of Science)
dc.contributor.author-dept: 統計與數據科學研究所 (Institute of Statistics and Data Science)
dc.date.embargo-lift: 2025-02-14
Appears in collections: 統計與數據科學研究所 (Institute of Statistics and Data Science)

Files in this item:
File: ntu-113-1.pdf | Size: 360.79 kB | Format: Adobe PDF


All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
