Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/67958
Full metadata record
DC 欄位 | 值 | 語言 |
---|---|---|
dc.contributor.advisor | 張晏誠(Yen-Cheng Chang) | |
dc.contributor.author | Han Chiu | en |
dc.contributor.author | 裘涵 | zh_TW |
dc.date.accessioned | 2021-06-17T02:00:35Z | - |
dc.date.available | 2020-07-27 | |
dc.date.copyright | 2017-07-27 | |
dc.date.issued | 2017 | |
dc.date.submitted | 2017-07-19 | |
dc.identifier.citation | Chen, Y., Wiesel, A., & Hero, A. O. (2009). Shrinkage estimation of high dimensional covariance matrices. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), 2937-2940.
Chen, B., Huang, S. F., & Pan, G. (2015). High dimensional mean-variance optimization through factor analysis. Journal of Multivariate Analysis, 133, 140-159.
Berk, K. N. (1974). Consistent autoregressive spectral estimates. The Annals of Statistics, 2, 489-502.
Markowitz, H. (1952). Portfolio selection. The Journal of Finance, 7(1), 77-91.
McNamara, J. R. (1998). Portfolio selection using stochastic dominance criteria. Decision Sciences, 29(4), 785-801.
Laloux, L., Cizeau, P., Bouchaud, J. P., & Potters, M. (1999). Noise dressing of financial correlation matrices. Physical Review Letters, 83(7), 1467.
Levina, E., Rothman, A., & Zhu, J. (2008). Sparse estimation of large covariance matrices via a nested Lasso penalty. The Annals of Applied Statistics, 2(1), 245-263.
Ing, C. K., Chiou, H. T., & Guo, M. (2016). Estimation of inverse autocovariance matrices for long memory processes. Bernoulli, 22(3), 1301-1330.
Ing, C. K., & Lai, T. L. (2011). A stepwise regression method and consistent model selection for high-dimensional sparse linear models. Statistica Sinica, 21, 1473-1513.
Bickel, P. J., & Levina, E. (2008). Regularized estimation of large covariance matrices. The Annals of Statistics, 36(1), 199-227.
Lam, C., & Fan, J. (2009). Sparsistency and rates of convergence in large covariance matrix estimation. The Annals of Statistics, 37(6B), 4254-4278.
El Karoui, N. (2010). High-dimensionality effects in the Markowitz problem and other quadratic programs with linear constraints: Risk underestimation. The Annals of Statistics, 38(6), 3487-3566.
Banerjee, O., El Ghaoui, L., & d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9, 485-516.
Friedman, J., Hastie, T., & Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3), 432-441.
Rothman, A. J., Bickel, P. J., Levina, E., & Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2, 494-515.
Fan, J., Fan, Y., & Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. Journal of Econometrics, 147(1), 186-197.
Fan, J., Liao, Y., & Mincheva, M. (2011). High dimensional covariance matrix estimation in approximate factor models. The Annals of Statistics, 39(6), 3320-3356.
Cai, T., & Liu, W. (2011). Adaptive thresholding for sparse covariance matrix estimation. Journal of the American Statistical Association, 106(494), 672-684.
Wu, W. B., & Pourahmadi, M. (2003). Nonparametric estimation of large covariance matrices of longitudinal data. Biometrika, 90(4), 831-844.
Ing, C. K., Wang, H. Y., & Kuang, H. C. (2016). Estimation of large precision matrix for high dimensional mean-variance optimization. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/67958 | - |
dc.description.abstract | In the mean-variance portfolio optimization problem, we often need to estimate the inverse of the covariance matrix in order to compute the optimal portfolio weights. When the number of assets is large and the sample size is small, inverting the covariance matrix becomes difficult and computationally involved. A common approach to estimating a high-dimensional covariance matrix imposes a sparsity assumption on the sample covariance so that the high-dimensional matrix becomes invertible. This study proposes a statistical framework that converts high-dimensional covariance matrix estimation into a regression coefficient estimation problem via the modified Cholesky decomposition, and applies the orthogonal greedy algorithm (OGA) for model selection in the resulting high-dimensional regressions. Simulation studies compare the discrepancy between the estimators and the population covariance matrix. In addition, the empirical study shows that under reasonable parameter assumptions, the OGA estimator outperforms the adaptive thresholding and linear shrinkage methods. | zh_TW |
dc.description.abstract | The classical mean-variance portfolio optimization requires the estimation of an inverse covariance matrix. This is a challenging task given the large number of assets in the market and, at the same time, the limited historical data available. Commonly used methods for estimating a large covariance matrix exploit sparsity in the sample covariance matrix. In this study, I propose a statistical framework for estimating high-dimensional variance-covariance matrices under small sample sizes via the modified Cholesky decomposition combined with the orthogonal greedy algorithm (OGA). This framework transforms covariance matrix estimation into a regression coefficient estimation problem, where the OGA serves as a fast stepwise regression method for high-dimensional model selection and coefficient estimation. I perform simulation studies to measure the difference between the estimators and the population covariance matrix. Moreover, empirical results show that the OGA estimators outperform the adaptive thresholding and linear shrinkage approaches under reasonable parameter assumptions. | en |
dc.description.provenance | Made available in DSpace on 2021-06-17T02:00:35Z (GMT). No. of bitstreams: 1 ntu-106-R04H41011-1.pdf: 1643761 bytes, checksum: cf1ab75578b25cc5e6ce27dafc21b658 (MD5) Previous issue date: 2017 | en |
dc.description.tableofcontents | Thesis Committee Approval
Acknowledgements
Abstract (Chinese)
Abstract (English)
Contents
List of Figures
List of Tables
1 Introduction
2 Literature Review
3 Model and Estimation
3.1 Main Model
3.2 Factor Analysis Based Estimator
3.3 Estimation of Large Error Covariance Matrix
3.3.1 Modified Cholesky Decomposition
3.3.2 Orthogonal Greedy Algorithm (OGA)
4 Alternative Methods
4.1 Adaptive Thresholding Estimation
4.2 Shrinkage Estimation
5 Simulation
5.1 Factor Model Structure Data and Simulations
5.2 Simulation Results
6 Empirical Study
6.1 Data and Descriptive Statistics
6.2 Methodology
6.3 Empirical Results
6.3.1 Portfolio Performance (2009)
6.3.2 Portfolio Performance (2009-2016)
7 Conclusion
References | |
dc.language.iso | en | |
dc.title | Covariance Matrix Estimation for High-Dimensional Mean-Variance Optimization: Evidence from Taiwan | zh_TW |
dc.title | Variance-Covariance Matrix Estimation for High Dimensional Mean-Variance Optimization: Evidence from Taiwan | en |
dc.type | Thesis | |
dc.date.schoolyear | 105-2 | |
dc.description.degree | Master | |
dc.contributor.coadvisor | 葉小蓁(Hsiaw-Chan Yeh) | |
dc.contributor.oralexamcommittee | 鄭宏文(Hung-Wen Cheng) | |
dc.subject.keyword | Factor analysis, covariance matrix, modified Cholesky decomposition, orthogonal greedy algorithm, mean-variance optimization | zh_TW |
dc.subject.keyword | Factor analysis, variance-covariance matrix, modified Cholesky decomposition, orthogonal greedy algorithm, mean-variance optimization | en |
dc.relation.page | 33 | |
dc.identifier.doi | 10.6342/NTU201700667 | |
dc.rights.note | Paid authorization | |
dc.date.accepted | 2017-07-19 | |
dc.contributor.author-college | Center for General Education | zh_TW |
dc.contributor.author-dept | Master Program in Statistics | zh_TW |
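The abstract describes turning covariance estimation into sequential regressions: the modified Cholesky decomposition writes Sigma^(-1) = T' D^(-1) T, where row j of the unit lower-triangular T holds the (negated) coefficients from regressing variable j on its predecessors, and the OGA greedily selects which predecessors enter each regression. The following is a minimal NumPy sketch of that idea under simplifying assumptions (a fixed iteration cap `m` per regression, sample-based variance estimates); it is an illustration of the general technique, not the thesis's actual implementation.

```python
import numpy as np

def oga_select(X, y, m):
    """Orthogonal greedy algorithm (simplified sketch): greedily pick up to m
    columns of X, orthogonalizing the residual against each selection."""
    n, p = X.shape
    residual = y.astype(float).copy()
    selected, Q = [], []          # chosen column indices; orthonormal basis
    for _ in range(min(m, p)):
        # pick the column most correlated with the current residual
        scores = np.abs(X.T @ residual) / (np.linalg.norm(X, axis=0) + 1e-12)
        scores[selected] = -np.inf
        j = int(np.argmax(scores))
        selected.append(j)
        v = X[:, j].astype(float)
        for q in Q:                        # Gram-Schmidt vs. earlier picks
            v -= (q @ v) * q
        nv = np.linalg.norm(v)
        if nv < 1e-10:                     # column already spanned; stop
            break
        q = v / nv
        Q.append(q)
        residual -= (q @ residual) * q     # project residual off new direction
    return selected

def modified_cholesky_precision(X, m=5):
    """Estimate a precision matrix via the modified Cholesky decomposition:
    regress each variable on its predecessors over an OGA-selected support.
    Returns Omega ~ Sigma^{-1} = T' D^{-1} T."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0] @ Xc[:, 0] / n         # innovation variance of variable 0
    for j in range(1, p):
        Z, y = Xc[:, :j], Xc[:, j]
        sel = oga_select(Z, y, m)
        phi, *_ = np.linalg.lstsq(Z[:, sel], y, rcond=None)
        T[j, sel] = -phi                   # negated regression coefficients
        resid = y - Z[:, sel] @ phi
        d[j] = resid @ resid / n           # innovation variance of variable j
    return T.T @ np.diag(1.0 / d) @ T
```

Because T is unit lower triangular and D has positive diagonal entries, the resulting estimate is symmetric and positive definite by construction, so it can be plugged directly into the mean-variance weight formula without a further matrix inversion check.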
Appears in Collections: | Master Program in Statistics |
Files in This Item:
File | Size | Format | |
---|---|---|---|
ntu-106-1.pdf (currently not authorized for public access) | 1.61 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.