Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70681
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: 盧信銘 (Hsin-Min Lu)
dc.contributor.author: Wei-Chun Liao [en]
dc.contributor.author: 廖維君 [zh_TW]
dc.date.accessioned: 2021-06-17T04:34:45Z
dc.date.available: 2023-08-16
dc.date.copyright: 2018-08-16
dc.date.issued: 2018
dc.date.submitted: 2018-08-09
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/70681
dc.description.abstract: Gaussian process regression (GPR) is a machine learning method that offers good predictive performance and is easy to implement, but its high computational time and space complexity makes it hard to apply to large datasets in practice. This study proposes an approximation method based on the gradient boosting algorithm; experimental results show that it achieves good approximation quality with lower training time and memory cost. [zh_TW]
dc.description.abstract: Gaussian process regression (GPR) is an important model in the field of machine learning. The GPR model is flexible, robust, and easy to implement, but it suffers from expensive computational cost: O(n^3) training time, O(n^2) training memory, and O(n) testing time, where n is the number of observations in the training data. In this work, we develop a fast approximation method to reduce the time and space complexity. The proposed method builds on the design of the gradient boosting algorithm. We conduct experiments on real-world datasets and demonstrate that the proposed method achieves prediction performance comparable to the standard GPR model and several state-of-the-art regression methods. [en]
dc.description.provenance: Made available in DSpace on 2021-06-17T04:34:45Z (GMT). No. of bitstreams: 1. ntu-107-R05725020-1.pdf: 1741983 bytes, checksum: ed03cd5dbaf394042d9510f03d1ef27f (MD5). Previous issue date: 2018 [en]
dc.description.tableofcontents:
Oral Examination Committee Certification i
Acknowledgements ii
Chinese Abstract iii
ABSTRACT iv
Contents v
List of Tables vii
List of Figures viii
Chapter 1. Introduction 1
Chapter 2. Literature Review 3
2.1. Gaussian Processes for Machine Learning 3
2.1.1. Gaussian Process Regression 4
2.1.2. Gaussian Process Classification 5
2.1.3. Approximation Methods 6
2.2. Gradient Boosting Algorithms 8
2.2.1. Boosting 8
2.2.2. Gradient Boosting 8
2.2.3. Applications of Gradient Boosting Machine 10
2.2.4. Gradient Boosting Machine Implementation 11
Chapter 3. Design of Gradient Boosting Gaussian Process Regression 12
3.1. Gradient Boosting Gaussian Process Regression 12
3.2. GBGPR for Time Series Forecasting 16
Chapter 4. Dataset 19
Chapter 5. Experimental Results and Discussion 21
5.1. Regression 21
5.1.1. Experimental Results 21
5.1.2. Comparison with Baseline Algorithms 26
5.1.3. Interpretation of Results 27
5.2. Time Series Forecasting 29
Chapter 6. Conclusion and Future Work 31
References 32
dc.language.iso: en
dc.subject: 機器學習 (machine learning) [zh_TW]
dc.subject: 高斯過程迴歸 (Gaussian process regression) [zh_TW]
dc.subject: 梯度提升 (gradient boosting) [zh_TW]
dc.subject: 估計 (approximation) [zh_TW]
dc.subject: Machine learning [en]
dc.subject: Approximations [en]
dc.subject: Gradient Boosting [en]
dc.subject: Gaussian Process Regression [en]
dc.title: 基於高斯過程迴歸的梯度提升演算法 [zh_TW]
dc.title: A Gradient Boosting Algorithm Based on Gaussian Process Regression [en]
dc.type: Thesis
dc.date.schoolyear: 106-2
dc.description.degree: 碩士 (Master's)
dc.contributor.oralexamcommittee: 余峻瑜 (Jiun-Yu Yu), 洪為璽 (Wei-Hsi Hung)
dc.subject.keyword: 機器學習, 高斯過程迴歸, 梯度提升, 估計 [zh_TW]
dc.subject.keyword: Machine learning, Gaussian Process Regression, Gradient Boosting, Approximations [en]
dc.relation.page: 34
dc.identifier.doi: 10.6342/NTU201802714
dc.rights.note: 有償授權 (authorized for a fee)
dc.date.accepted: 2018-08-10
dc.contributor.author-college: 管理學院 (College of Management) [zh_TW]
dc.contributor.author-dept: 資訊管理學研究所 (Graduate Institute of Information Management) [zh_TW]
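The abstract above describes replacing a single exact GPR fit, whose O(n^3) training cost is prohibitive for large n, with a gradient-boosting-style sequence of cheaper fits. The following sketch is only a rough illustration of that idea under assumed details (random subsampling at each stage, scikit-learn's GaussianProcessRegressor, and the hypothetical names boosted_gpr_fit / boosted_gpr_predict); it is not the thesis's actual GBGPR algorithm.

    # Minimal sketch (assumptions noted above, not the thesis's GBGPR implementation):
    # each boosting stage fits a Gaussian process on a small random subsample of the
    # current residuals, so no single fit pays the full O(n^3) cost of exact GPR.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def boosted_gpr_fit(X, y, n_stages=20, subsample=500, learning_rate=0.1, seed=0):
        rng = np.random.default_rng(seed)
        baseline = y.mean()                      # constant initial prediction
        residual = y - baseline
        models = []
        for _ in range(n_stages):
            m = min(subsample, len(X))
            idx = rng.choice(len(X), size=m, replace=False)
            gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2)
            gp.fit(X[idx], residual[idx])        # O(m^3) per stage instead of O(n^3)
            residual -= learning_rate * gp.predict(X)
            models.append(gp)
        return baseline, models

    def boosted_gpr_predict(X_new, baseline, models, learning_rate=0.1):
        pred = np.full(len(X_new), baseline)
        for gp in models:
            pred += learning_rate * gp.predict(X_new)
        return pred

Under these assumptions, with n = 10,000 and a subsample of 500, each stage's cubic cost is roughly (n/m)^3 = 8,000 times smaller than a single exact GPR fit, at the price of running several stages.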
Appears in Collections: 資訊管理學系 (Department of Information Management)

Files in This Item:
File: ntu-107-1.pdf (restricted; not authorized for public access)
Size: 1.7 MB
Format: Adobe PDF


Items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
