Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96634
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 呂育道 | zh_TW |
dc.contributor.advisor | Yuh-Dauh Lyuu | en |
dc.contributor.author | 洪郡辰 | zh_TW |
dc.contributor.author | Chun-Chen Hung | en |
dc.date.accessioned | 2025-02-20T16:18:08Z | - |
dc.date.available | 2025-02-21 | - |
dc.date.copyright | 2025-02-20 | - |
dc.date.issued | 2025 | - |
dc.date.submitted | 2025-01-20 | - |
dc.identifier.citation | Y. Bengio, P. Frasconi, and P. Simard. The problem of learning long-term dependencies in recurrent networks. In IEEE International Conference on Neural Networks, volume 3, pages 1183–1188, San Francisco, 1993.
Binance. binance/binance-public-data: Details on how to get Binance public data. https://github.com/binance/binance-public-data, 2024. (accessed April 14, 2024).
T. Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31:307–327, 1986.
G. E. P. Box and G. M. Jenkins. Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco, 1976.
L. Breiman. Bagging predictors. Machine Learning, 24:123–140, 1996.
L. Breiman. Random forests. Machine Learning, 45:5–32, 2001.
V. Buterin. Ethereum: A next-generation smart contract and decentralized application platform. https://ethereum.org/en/whitepaper/, 2013.
V. Buterin. Tether: Fiat currencies on the Bitcoin blockchain. https://tether.to/en/whitepaper/, 2014.
CoinMarketCap. Bitcoin. https://coinmarketcap.com/currencies/bitcoin/.
C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In J. Burstein, C. Doran, and T. Solorio, editors, Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 4171–4186, Minneapolis, Minnesota, June 2019.
T. G. Dietterich. Ensemble methods in machine learning. In Multiple Classifier Systems, pages 1–15, Berlin, Heidelberg, 2000.
H. Drucker, D. Wu, and V. Vapnik. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5):1048–1054, 1999.
Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In International Conference on Machine Learning, pages 148–156, San Francisco, 1996.
A. Gillioz, J. Casas, E. Mugellini, and O. A. Khaled. Overview of the transformer-based models for NLP tasks. In 15th Conference on Computer Science and Information Systems, pages 179–183, Sofia, Bulgaria, 2020.
G. Giudici, A. Milne, and D. Vinogradov. Cryptocurrencies: Market analysis and perspectives. Journal of Industrial and Business Economics, 47(1):1–18, 2020.
J. Han, M. Kamber, and J. Pei. Data Mining: Concepts and Techniques. Morgan Kaufmann, San Francisco, 2011.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
R. Hyndman and G. Athanasopoulos. Forecasting: Principles and Practice. OTexts, Melbourne, 2021.
M. A. Istiake Sunny, M. M. S. Maswood, and A. G. Alharbi. Deep learning-based stock price prediction using LSTM and bi-directional LSTM model. In 2nd Novel Intelligent and Leading Emerging Sciences Conference, pages 87–92, Giza, Egypt, 2020.
A. Jain, J. Mao, and K. Mohiuddin. Artificial neural networks: A tutorial. Computer, 29(3):31–44, 1996.
M. Jiang, J. Liu, L. Zhang, and C. Liu. An improved stacking framework for stock index prediction by leveraging tree-based ensemble models and deep learning algorithms. Physica A: Statistical Mechanics and its Applications, 541:122272, 2020.
T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In 10th European Conference on Machine Learning, pages 137–142, Berlin, Heidelberg, 1998.
S. Liao, L. Xie, Y. Du, S. Chen, H. Wan, and H. Xu. Stock trend prediction based on dynamic hypergraph spatio-temporal network. Applied Soft Computing, 154(C), 2024.
P. Malhotra, L. Vig, G. Shroff, and P. Agarwal. Long short term memory networks for anomaly detection in time series. In 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pages 89–94, Bruges, Belgium, 2015.
I. D. Mienye and Y. Sun. A survey of ensemble learning: Concepts, algorithms, applications, and prospects. IEEE Access, 10:99129–99149, 2022.
S. M. Mirjebreili, A. Solouki, H. Soltanalizadeh, and M. Sabokrou. Multi-task transformer for stock market trend prediction. In 12th International Conference on Computer and Knowledge Engineering, pages 101–105, 2022.
A. Moghar and M. Hamiche. Stock market prediction using LSTM recurrent neural network. Procedia Computer Science, 170:1168–1173, 2020.
S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf, 2008.
S. T. A. Niaki and S. Hoseinzade. Forecasting S&P 500 index using artificial neural networks and design of experiments. Journal of Industrial Engineering International, 9:1–9, 2013.
J. S. Park, H. Sung Cho, J. Sung Lee, K. I. Chung, J. M. Kim, and D. J. Kim. Forecasting daily stock trends using random forest optimization. In International Conference on Information and Communication Technology Convergence, pages 1152–1155, Jeju, Korea (South), 2019.
S. K. Raipitam, S. Kumar, T. Dhanani, S. Bilgaiyan, and M. K. Gourisaria. Comparative study on stock market prediction using generic CNN-LSTM and ensemble learning. In International Conference on Network, Multimedia and Information Technology, pages 1–6, Bengaluru, India, 2023.
R. Ren, D. D. Wu, and T. Liu. Forecasting stock market movement direction using sentiment analysis and support vector machine. IEEE Systems Journal, 13(1):760–770, 2019.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Neural Information Processing Systems, pages 802–810, Cambridge, MA, USA, 2015.
J. Singh and P. Tripathi. Sentiment analysis of Twitter data by making use of SVM, random forest and decision tree algorithm. In 10th IEEE International Conference on Communication Systems and Network Technologies, pages 193–198, Bhopal, India, 2021.
E. V. A. Sylvester, P. Bentzen, I. Bradbury, M. Clément, J. Pearce, J. B. Horne, and R. G. Beiko. Applications of random forest feature selection for fine-scale genetic population assignment. Evolutionary Applications, 11:153–165, 2017.
W. Tang, X. Xu, and W. Su. A trading strategy based on LSTM. In 4th International Conference on Applied Machine Learning (ICAML), pages 420–424, 2022.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Neural Information Processing Systems, pages 6000–6010, Red Hook, NY, USA, 2017.
Z. Wang, Z. Hu, F. Li, S.-B. Ho, and E. Cambria. Learning-based stock trending prediction by incorporating technical indicators and social media sentiment. Cognitive Computation, 15(3):1092–1102, May 2023.
D. H. Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, 1992.
J. Xu, W. Wang, H. Wang, and J. Guo. Multi-model ensemble with rich spatial information for object detection. Pattern Recognition, 99:107098, 2020.
Y. Xu and S. B. Cohen. Stock movement prediction from tweets and historical prices. In I. Gurevych and Y. Miyao, editors, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1970–1979, Melbourne, Australia, 2018.
S. Yadav and S. Shukla. Analysis of k-fold cross-validation over hold-out validation on colossal datasets for quality classification. In IEEE 6th International Conference on Advanced Computing, pages 78–83, 2016.
Q. Zhang, C. Qin, Y. Zhang, F. Bao, C. Zhang, and P. Liu. Transformer-based attention network for stock movement prediction. Expert Systems with Applications, 202:117239, 2022.
徐賢翰. 自注意力模型於比特幣短期價格走勢預測之應用 (Application of self-attention models to short-term Bitcoin price trend prediction). Master's thesis, National Taiwan University, 2023. https://hdl.handle.net/11296/6vtau6. | - |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/96634 | - |
dc.description.abstract | 近年來,加密貨幣市場蓬勃發展,比特幣(Bitcoin)作為其中的代表性資產,吸引了大量投資者和研究者的關注。隨著市場的波動性和不確定性增加,準確預測比特幣價格走勢變得愈加重要。然而,大多數現有研究都聚焦在傳統金融交易市場的數據,研究往往聚焦在使用日交易數據進行預測,對於快速變動的加密貨幣市場這可能無法及時提供有益於制定交易策略的訊息。
本論文著重在使用集成學習(ensemble learning)策略提升單一模型在比特幣短期價格走勢預測的準確性,透過堆疊法(stacking)讓集成模型能綜合多個單一模型的優勢,驗證集成模型提升各別單一模型在準確率(accuracy)及精確度(precision)上的表現,同時探討集成模型在召回率(recall)及F1分數(F1 score)上的表現。本論文使用兩個分別基於訓練集收盤價漲跌比例及測試集收盤價漲跌比例的隨機選擇模型(random prediction model),使用測試集收盤價漲跌比例代表看到部分未來漲跌資訊。實驗結果顯示,在透過滾動窗口驗證法(rolling window validation)及多個窗口驗證後,所有單一模型在準確率及精確度上皆優於兩個隨機選擇模型,代表單一模型具備優於隨機選擇的漲跌預測能力。集成模型在結合多個單一模型後,在準確率及精確度上皆在最多個窗口下表現最佳,且集成模型同時在平均準確率及平均精確度表現皆最好:在平均準確率上比單一模型中表現最佳的 LSTM 高,且在平均精確度上比 LSTM 高約1.7%;在平均精確度上與所有單一模型中表現最佳的 SVM 幾乎相同,然而在平均準確率上比 SVM 高約1.2%。集成模型在準確率及精確度上的表現證明了集成學習能夠提升單一模型在比特幣短期價格走勢預測上的準確性,得到一個同時在準確率及精確度上表現皆最好的模型。

為了更全面地觀察模型表現,我們計算召回率及F1分數。召回率反映模型在所有真正為正例的樣本中,能夠找出多少正例。F1分數則提供一個全面評估模型正例預測表現的評估指標,使我們能透過單一評估指標觀察模型正例預測表現。因為召回率只反映模型在所有正例中預測成功的比率,而F1分數則是同時考慮了召回率及精確度,所以這兩個評估指標反映模型在準確性以外的表現。

在召回率及F1分數上集成模型並未取得最高的表現,在平均召回率及平均F1分數上均為第三高,比平均召回率表現最好的 LSTM 低約15%,也比平均F1分數表現最好的 LSTM 低約6%。在四個單一模型中,平均召回率及平均F1分數最高與最低的模型相差很大,因此集成模型在整合所有單一模型的預測結果後,在這兩個評估指標上會受到表現高及表現低的模型影響,使這兩個評估指標的表現介於最高表現與最低表現之間。 | zh_TW |
dc.description.abstract | In recent years, the cryptocurrency market has grown rapidly, with Bitcoin as its representative asset attracting considerable attention from investors and researchers. As market volatility and uncertainty increase, accurately predicting Bitcoin price trends becomes increasingly important. However, most existing research focuses on data from traditional financial markets and often relies on daily trading data for prediction, which may not provide timely information for trading strategies in the rapidly changing cryptocurrency market.
This thesis focuses on improving single models' performance in predicting short-term Bitcoin price trends with an ensemble learning strategy. Through stacking, the ensemble model combines the strengths of multiple single models to improve accuracy and precision; the effects of ensemble learning on recall and F1 score are also examined. Two random baseline models serve as references: one predicts upward movements with probability equal to their proportion in the training set, and the other uses the proportion in the testing set, which amounts to seeing partial future information. Experimental results under rolling window validation show that every single model outperforms both baselines in accuracy and precision. The ensemble model achieves the highest accuracy and precision over more windows than any single model, and it attains the best average accuracy and average precision: its average accuracy exceeds that of LSTM, the best single model in accuracy, and its average precision is about 1.7% higher than LSTM's; its average precision is nearly identical to that of SVM, the best single model in precision, while its average accuracy is about 1.2% higher than SVM's. Thus, ensemble learning improves prediction accuracy and precision. The ensemble model does not, however, achieve the highest average recall or average F1 score, ranking third on both: its average recall is about 15% lower and its average F1 score about 6% lower than those of LSTM, which leads both metrics. | en |
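The evaluation pipeline the abstract describes (rolling window validation, a random baseline drawn from the proportion of upward closes, a stacking meta-learner over base-model predictions, and accuracy/precision/recall/F1) can be sketched in plain Python. This is a minimal illustration, not the thesis's actual code: all function names, the choice of a logistic-regression meta-learner, and its hyperparameters are assumptions for the sketch.

```python
import math
import random

def rolling_windows(n_samples, train_size, test_size):
    """Yield (train_indices, test_indices) pairs that roll forward
    through a time series, as in rolling window validation."""
    start = 0
    while start + train_size + test_size <= n_samples:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += test_size

def random_baseline(train_labels, n_test, seed=0):
    """Predict 'up' (1) with probability equal to the proportion of
    upward moves in the training labels -- the random baseline."""
    p_up = sum(train_labels) / len(train_labels)
    rng = random.Random(seed)
    return [1 if rng.random() < p_up else 0 for _ in range(n_test)]

def metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary up/down labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

def train_meta(base_train_preds, y_train, lr=0.5, epochs=500):
    """Stacking step: fit a logistic-regression meta-learner on the base
    models' predictions (one list of 0/1 predictions per base model)."""
    w = [0.0] * len(base_train_preds)
    b = 0.0
    for _ in range(epochs):
        for i, yi in enumerate(y_train):
            x = [preds[i] for preds in base_train_preds]
            z = b + sum(wj * xj for wj, xj in zip(w, x))
            g = 1.0 / (1.0 + math.exp(-z)) - yi  # gradient of the log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, x)]
            b -= lr * g
    return w, b

def meta_predict(model, base_test_preds):
    """Combine the base models' test-set predictions via the meta-learner
    (sigmoid(z) >= 0.5 is equivalent to z >= 0)."""
    w, b = model
    out = []
    for i in range(len(base_test_preds[0])):
        z = b + sum(wj * preds[i] for wj, preds in zip(w, base_test_preds))
        out.append(1 if z >= 0 else 0)
    return out
```

In a full pipeline the base-prediction lists would come from the four single models (SVM, random forest, LSTM, self-attention) retrained on each rolling window, and the metrics would be averaged over all windows.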
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-02-20T16:18:08Z No. of bitstreams: 0 | en |
dc.description.provenance | Made available in DSpace on 2025-02-20T16:18:08Z (GMT). No. of bitstreams: 0 | en |
dc.description.tableofcontents | 摘要 (Abstract in Chinese)
Abstract
Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Introduction
1.2 Thesis Organization
Chapter 2 Background
2.1 Literature Review
2.2 Ensemble Learning
2.3 Prediction Models
2.3.1 Traditional Machine Learning
2.3.1.1 Support Vector Machine
2.3.1.2 Random Forest
2.3.2 Neural Networks
2.3.2.1 Feedforward Neural Networks
2.3.2.2 Recurrent Neural Networks
2.3.2.3 Long Short-Term Memory
2.3.2.4 Self-Attention Models
Chapter 3 Methodology
3.1 Data Sources and Processing
3.2 Training and Validation Data
3.2.1 Normalization
3.2.2 Rolling Window Validation
3.3 Experimental Design
3.4 Model Architecture
3.4.1 Hyperparameter Settings
3.4.2 Support Vector Machine
3.4.3 Random Forest
3.4.4 Long Short-Term Memory
3.4.5 Self-Attention Model
3.4.6 Ensemble Model
Chapter 4 Experimental Results
4.1 Evaluation Metrics
4.2 Experimental Results
Chapter 5 Conclusions and Suggestions
5.1 Conclusions
5.2 Future Work
References | - |
dc.language.iso | zh_TW | - |
dc.title | 集成學習於比特幣短期價格走勢預測 | zh_TW |
dc.title | Ensemble Learning for Short-term Bitcoin Price Trend Prediction | en |
dc.type | Thesis | - |
dc.date.schoolyear | 113-1 | - |
dc.description.degree | Master's | - |
dc.contributor.oralexamcommittee | 張經略;陸裕豪 | zh_TW |
dc.contributor.oralexamcommittee | Ching-Lueh Chang;U-Hou Lok | en |
dc.subject.keyword | 集成學習, 比特幣, 短期價格走勢預測, 時間序列 | zh_TW |
dc.subject.keyword | Ensemble Learning, Bitcoin, Short-Term Price Trend Prediction, Time-Series Analysis | en |
dc.relation.page | 44 | - |
dc.identifier.doi | 10.6342/NTU202500188 | - |
dc.rights.note | Not authorized | - |
dc.date.accepted | 2025-01-20 | - |
dc.contributor.author-college | 電機資訊學院 | - |
dc.contributor.author-dept | 資訊網路與多媒體研究所 | - |
dc.date.embargo-lift | N/A | - |
Appears in Collections: | 資訊網路與多媒體研究所 |
Files in this item:
File | Size | Format | |
---|---|---|---|
ntu-113-1.pdf (currently not authorized for public access) | 1.73 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.