Please use this identifier to cite or link to this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96428

| Title: | Bias Reduction in LassoNet Models Using Bootstrap Techniques |
| Authors: | 劉冠毅 Guan-Yi Liu |
| Advisor: | 楊鈞澔 Chun-Hao Yang |
| Keyword: | Neural Network, Bias Reduction, High-dimensional Models, Variable Selection, LassoNet |
| Publication Year: | 2025 |
| Degree: | Master's |
| Abstract: | LassoNet, an extension of the Lasso, incorporates feed-forward neural networks (FFNs) to capture nonlinear relationships while retaining sparse solutions. However, it inherits the Lasso's bias, particularly in high-dimensional generalized linear models (GLMs). In this thesis, we extend LassoNet to high-dimensional GLMs, enabling it to model complex data structures while maintaining sparsity. Because the estimates produced by this extended model still exhibit Lasso-like bias, we address the issue by incorporating the debiasing framework of van de Geer et al. (2014) and further enhancing it with a bootstrap-based correction (both steps are sketched below the record). These refinements yield a debiased LassoNet estimator that is both predictive and interpretable. The proposed method broadens the applicability of LassoNet, providing a reliable framework for analyzing high-dimensional data with nonlinear, sparse relationships between the predictors and the response. |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96428 |
| DOI: | 10.6342/NTU202500447 |
| Fulltext Rights: | Authorized (worldwide open access) |
| Embargo Lift Date: | 2025-02-14 |
| Appears in Collections: | Institute of Statistics and Data Science |
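
A minimal sketch of the two correction steps named in the abstract, written here for the linear-model form of the de-sparsified Lasso of van de Geer et al. (2014); the notation (the approximate inverse Gram matrix, the number of bootstrap replicates) and the exact form of the correction the thesis applies to LassoNet in GLMs are assumptions for illustration, not taken from the thesis itself.

```latex
% De-sparsified (debiased) Lasso of van de Geer et al. (2014), linear-model form;
% for GLMs the Gram matrix is replaced by the empirical Fisher information.
% \hat{\Theta} is an approximate inverse of \hat{\Sigma} = X^{\top}X/n,
% e.g. obtained from nodewise Lasso regressions.
\[
  \hat{b} \;=\; \hat{\beta} \;+\; \hat{\Theta}\,\frac{X^{\top}\bigl(Y - X\hat{\beta}\bigr)}{n}.
\]

% A standard bootstrap bias correction (illustrative form, assumed here): with B
% bootstrap replicates \hat{b}^{*(1)}, \ldots, \hat{b}^{*(B)} of the estimator,
\[
  \hat{b}_{\mathrm{bc}} \;=\; 2\,\hat{b} \;-\; \frac{1}{B}\sum_{m=1}^{B} \hat{b}^{*(m)}.
\]
```

The bootstrap step subtracts an estimate of the estimator's own bias (the average deviation of the replicates from the original fit), which is the generic mechanism a bootstrap-based correction of a debiased estimator would rely on.
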
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-1.pdf | 360.79 kB | Adobe PDF |
