Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96428

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 楊鈞澔 | zh_TW |
| dc.contributor.advisor | Chun-Hao Yang | en |
| dc.contributor.author | 劉冠毅 | zh_TW |
| dc.contributor.author | Guan-Yi Liu | en |
| dc.date.accessioned | 2025-02-13T16:25:36Z | - |
| dc.date.available | 2025-02-14 | - |
| dc.date.copyright | 2025-02-13 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-02-07 | - |
| dc.identifier.citation | Alzubaidi, L., Zhang, J., Humaidi, A. J., Al-Dujaili, A., Duan, Y., Al-Shamma, O., Santamaría, J., Fadhel, M. A., Al-Amidie, M., and Farhan, L. (2021). Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8:1–74.
Andrews, D. W. and Lu, B. (2001). Consistent model and moment selection procedures for GMM estimation with application to dynamic panel data models. Journal of Econometrics, 101(1):123–164.
Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.
Candes, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics, 35(6):2313–2351.
Chatterjee, A. and Lahiri, S. N. (2011). Bootstrapping lasso estimators. Journal of the American Statistical Association, 106(494):608–625.
Chen, Y. and Li, J. (2021). Recurrent neural networks algorithms and applications. In 2021 2nd International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), pages 38–43. IEEE.
Efron, B. (1992). Bootstrap methods: Another look at the jackknife. In Breakthroughs in Statistics: Methodology and Distribution, pages 569–593. Springer.
Feng, J. and Simon, N. (2019). Sparse-input neural networks for high-dimensional nonparametric regression and classification.
Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2nd edition.
He, K., Xu, H., and Kang, J. (2019). A selective overview of feature screening methods with applications to neuroimaging data. WIREs Computational Statistics, 11(2):e1454.
Kingma, D. P. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Lahiri, S. (2010). Asymptotic properties of the residual bootstrap for lasso estimators. Proceedings of the American Mathematical Society, 138(12):4497–4509.
Lemhadri, I., Ruan, F., Abraham, L., and Tibshirani, R. (2021). LassoNet: A neural network with feature sparsity. Journal of Machine Learning Research, 22(127):1–29.
Li, S. (2020). Debiasing the debiased lasso with bootstrap. Electronic Journal of Statistics, 14(1):2298–2337.
Liu, B., Zhang, Q., Xue, L., Song, P. X. K., and Kang, J. (2024). Robust high-dimensional regression with coefficient thresholding and its application to imaging data analysis. Journal of the American Statistical Association, 119(545):715–729.
Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the Lasso. The Annals of Statistics, 34(3):1436–1462.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B: Statistical Methodology, 58(1):267–288.
van de Geer, S., Bühlmann, P., Ritov, Y., and Dezeure, R. (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics, 42(3):1166–1202.
Xia, L., Nan, B., and Li, Y. (2023). Debiased lasso for generalized linear models with a diverging number of covariates. Biometrics, 79(1):344–357.
Zhang, C.-H. and Zhang, S. S. (2014). Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society Series B: Statistical Methodology, 76(1):217–242.
Zhao, P. and Yu, B. (2006). On model selection consistency of lasso. The Journal of Machine Learning Research, 7:2541–2563.
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B: Statistical Methodology, 67(2):301–320. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96428 | - |
| dc.description.abstract | LassoNet is an extension of Lasso that incorporates feed-forward neural networks to capture nonlinear relationships while retaining sparse solutions. However, like Lasso, it suffers from bias, particularly in high-dimensional generalized linear models. In this thesis, we extend LassoNet to high-dimensional GLMs, enabling it to model complex data structures while maintaining sparsity. Because the estimates from this extended model still exhibit the same bias as Lasso, we incorporate the debiasing framework proposed by van de Geer et al. (2014) and refine it further with a bootstrap-based correction. These refinements yield a debiased LassoNet estimator with good predictive power and interpretability. The proposed method broadens the scope of LassoNet and provides a reliable framework for analyzing nonlinear, sparse relationships between predictors and the response in high-dimensional data. | zh_TW |
| dc.description.abstract | LassoNet, an extension of Lasso, incorporates feed-forward neural networks (FFNs) to capture nonlinear relationships while retaining sparse solutions. However, it inherits Lasso's bias issues, particularly in high-dimensional GLMs. In this thesis, we extend LassoNet to GLMs, enabling it to model complex data structures while maintaining sparsity. As the estimates produced by this extended model still exhibit Lasso-like bias, we address the issue by incorporating the debiasing framework introduced by van de Geer et al. (2014) and further enhancing it with a bootstrap-based correction. These refinements yield a debiased LassoNet estimator that is both predictive and interpretable. The proposed method broadens the applicability of LassoNet, providing a reliable framework for analyzing high-dimensional data with nonlinear and sparse relationships between predictors and the response. (A minimal illustrative sketch of the bootstrap correction follows this record.) | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-13T16:25:36Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-02-13T16:25:36Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | 摘要 i
Abstract iii
Contents v
List of Figures vii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Proposed method 4
1.3 Organization of the thesis 4
Chapter 2 Preliminary 5
2.1 Lasso 5
2.2 Fully-Connected Neural Network 6
Chapter 3 Method 13
3.1 LassoNet 13
3.2 Debiasing Lasso Estimators 14
3.2.1 Node-wise Lasso regression 14
3.2.2 Extensions of node-wise Lasso regression 16
3.3 Debiasing the LassoNet 17
3.3.1 Merge LassoNet and GLM 18
3.3.2 Bootstrapping the Bias of LassoNet 18
Chapter 4 Simulation 21
4.1 Evaluation of estimation bias 22
4.2 Evaluating variable selection performance 24
Chapter 5 Conclusion 27
References 29 | - |
| dc.language.iso | en | - |
| dc.subject | Variable Selection | zh_TW |
| dc.subject | LassoNet | zh_TW |
| dc.subject | High-dimensional Models | zh_TW |
| dc.subject | Bias Reduction | zh_TW |
| dc.subject | Neural Network | zh_TW |
| dc.subject | LassoNet | en |
| dc.subject | Neural Network | en |
| dc.subject | Bias Reduction | en |
| dc.subject | High-dimensional Models | en |
| dc.subject | Variable Selection | en |
| dc.title | Bias Reduction in LassoNet Models Using Bootstrap Techniques | zh_TW |
| dc.title | Bias Reduction in LassoNet Models Using Bootstrap Techniques | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 113-1 | - |
| dc.description.degree | Master | - |
| dc.contributor.oralexamcommittee | 陳裕庭;張升懋 | zh_TW |
| dc.contributor.oralexamcommittee | Yu-Ting Chen;Sheng-Mao Chang | en |
| dc.subject.keyword | Neural Network, Bias Reduction, High-dimensional Models, Variable Selection, LassoNet | zh_TW |
| dc.subject.keyword | Neural Network, Bias Reduction, High-dimensional Models, Variable Selection, LassoNet | en |
| dc.relation.page | 31 | - |
| dc.identifier.doi | 10.6342/NTU202500447 | - |
| dc.rights.note | Authorized (open access worldwide) | - |
| dc.date.accepted | 2025-02-07 | - |
| dc.contributor.author-college | College of Science | - |
| dc.contributor.author-dept | Institute of Statistics and Data Science | - |
| dc.date.embargo-lift | 2025-02-14 | - |
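Below is a minimal, self-contained sketch of the bootstrap bias-correction idea described in the abstract above. It is not the thesis's implementation: an L1-penalized logistic regression fitted with scikit-learn stands in for the LassoNet-GLM estimator, and the simulation settings (`n`, `p`, the sparsity level, and the number of bootstrap replicates `B`) are illustrative assumptions.

```python
# Illustrative sketch only: a Lasso-penalized logistic regression (a GLM)
# stands in for the LassoNet-GLM estimator; all settings are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulate a sparse high-dimensional logistic model.
n, p, s = 200, 50, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
prob = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, prob)

def lasso_glm_fit(X, y, C=0.5):
    """Fit an L1-penalized logistic regression and return its coefficients."""
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C,
                               fit_intercept=False)
    model.fit(X, y)
    return model.coef_.ravel()

beta_hat = lasso_glm_fit(X, y)

# Parametric bootstrap: regenerate responses from the fitted model, refit,
# and use the average deviation of the refitted estimates from beta_hat
# as an estimate of the penalization-induced (shrinkage) bias.
B = 200
boot = np.empty((B, p))
p_hat = 1.0 / (1.0 + np.exp(-X @ beta_hat))
for b in range(B):
    y_b = rng.binomial(1, p_hat)
    boot[b] = lasso_glm_fit(X, y_b)
bias_est = boot.mean(axis=0) - beta_hat

# Bias-corrected estimator: subtract the bootstrap bias estimate.
beta_debiased = beta_hat - bias_est

print("mean abs error, penalized :", np.abs(beta_hat - beta_true).mean())
print("mean abs error, corrected :", np.abs(beta_debiased - beta_true).mean())
```

The same recipe carries over to any penalized estimator, including the extended LassoNet: regenerate responses from the fitted model, refit, average the refitted coefficients to estimate the shrinkage bias, and subtract that estimate. In the thesis, this bootstrap correction supplements the node-wise debiasing framework of van de Geer et al. (2014).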

Appears in Collections: Institute of Statistics and Data Science
Files in This Item:
| File | Size | Format | |
|---|---|---|---|
| ntu-113-1.pdf | 360.79 kB | Adobe PDF | View/Open |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.