Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/15933
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 葉丙成(Ping-Cheng Yeh) | |
dc.contributor.author | Tzung-Ying Lin | en |
dc.contributor.author | 林宗穎 | zh_TW |
dc.date.accessioned | 2021-06-07T17:55:45Z | - |
dc.date.copyright | 2020-08-13 | |
dc.date.issued | 2020 | |
dc.date.submitted | 2020-08-07 | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/15933 | - |
dc.description.abstract | 股票市場趨勢預測是許多學者在研究的議題,有許多人依據技術分析、基本面分析、籌碼面分析以及消息面分析來構建交易策略,也因為近幾年電腦硬體設備的進步使得運算速度的提升,越來越多的研究者使用機器學習來做金融相關交易策略的研究。
本研究希望建置一個交易頻率不要太過頻繁而且能夠依據市場動態調整持股的交易策略,而使用月營收策略來研究,每當月營收公布後,該月的11日買,次月的10日賣。使用的訓練特徵有:股價技術指標、成交量技術指標、基本面指標、風險指標以及月營收指標,使用的標籤為該月11日至次月10日股價漲跌幅的排名。使用九種不同的機器學習演算法進行訓練:Random Forest、AdaBoost、GBDT、LightGBM、SVR、DNN、ResNet、CNN、LSTM+Attention。將九個訓練出來的模型合成為集成模型系統,提升預測的準確度,回測時分別計算做多、做空以及多空皆做,且使用不同數量的股票分析模型績效。以獲利的觀點來看是做多前1檔表現最好,年化獲利率可以到達401.48%、每月交易勝率為90.90%。 最後探討月營收指標在集成模型系統中的重要性,以及用集成模型系統和一般月營收策略、市售策略、元大台灣50 ETF的績效做比較。 | zh_TW |
dc.description.abstract | Stock market trend prediction is a topic of long-standing academic interest. Many investors build trading strategies based on technical analysis, fundamental analysis, chip analysis (the holdings of institutional and major traders), and news analysis. Moreover, because advances in computer hardware have greatly increased computing speed in recent years, more and more researchers apply machine learning to financial trading strategies.
Aiming to build a trading strategy that does not trade too frequently yet can adjust its holdings to market dynamics, this study adopts a monthly revenue strategy: after each monthly revenue announcement, stocks are bought on the 11th of the current month and sold on the 10th of the next month. The training features include stock price technical indicators, trading volume technical indicators, fundamental indicators, risk indicators, and monthly revenue indicators; the label is the ranking of each stock's price change from the 11th of the current month to the 10th of the next. Nine machine learning algorithms are trained: Random Forest, AdaBoost, GBDT, LightGBM, SVR, DNN, ResNet, CNN, and LSTM+Attention. The nine trained models are then combined into an ensemble model system to improve prediction accuracy. In backtesting, this study evaluates long-only, short-only, and long-short portfolios with different numbers of stocks. From a profit perspective, going long the single top-ranked stock performs best, with an annualized return of 401.48% and a monthly trading win rate of 90.90%. Finally, this study examines the importance of the monthly revenue indicators within the ensemble model system and compares its performance against a plain monthly revenue strategy, commercially available strategies, and the Yuanta Taiwan Top 50 ETF. | en |
dc.description.provenance | Made available in DSpace on 2021-06-07T17:55:45Z (GMT). No. of bitstreams: 1 U0001-3107202020525500.pdf: 6757426 bytes, checksum: 59c6e30a83e4c6b9e8ede6e86cdba05f (MD5) Previous issue date: 2020 | en |
dc.description.tableofcontents | Thesis Committee Certification # Acknowledgements i Chinese Abstract iii ABSTRACT iv CONTENTS vi LIST OF FIGURES x LIST OF TABLES xv Chapter 1 Introduction 1 1.1 Research Background 1 1.2 Research Motivation 2 1.3 Research Scope and Limitations 3 1.3.1 Research Scope 3 1.3.2 Research Limitations 3 1.4 Research Objectives 4 1.5 Thesis Organization 5 Chapter 2 Stock Market Background and Literature Review 7 2.1 Fundamentals of Taiwan Stock Market Prediction 7 2.1.1 Introduction to the Taiwan Stock Market 7 2.1.2 Market Trends 12 2.1.3 Random Walk Hypothesis 15 2.1.4 Trading Strategies 15 2.1.5 Risk Analysis 17 2.1.6 Backtesting 20 2.2 Related Research and Literature 20 2.2.1 Stock Market Prediction with Trading Strategies 20 2.2.2 Stock Market Prediction with Machine Learning 24 Chapter 3 Machine Learning Models 29 3.1 Ensemble Learning Models 30 3.1.1 Random Forest (RF) 32 3.1.2 Adaptive Boosting (AdaBoost) 34 3.1.3 Gradient Boosting Decision Tree (GBDT) 36 3.1.4 LightGBM (Light Gradient Boosting Machine) 37 3.2 Support Vector Machines (SVM) 38 3.2.1 Support Vector Regression (SVR) 39 3.3 Deep Learning Models 40 3.3.1 Deep Neural Network (DNN) 41 3.3.2 Convolutional Neural Network (CNN) 44 3.3.3 Long Short-Term Memory (LSTM) 45 3.3.4 Deep Residual Network (ResNet) 48 3.3.5 Attention Mechanism 49 Chapter 4 Methodology 51 4.1 Experimental Design 51 4.2 System Architecture 53 4.3 Experimental Data 54 4.4 Environment Setup 55 4.4.1 Hardware 55 4.4.2 Operating System and Development Tools 55 4.4.3 Trading Environment Settings 56 4.5 Data Preprocessing 57 4.5.1 Adjusted Stock Prices 58 4.5.2 Dataset Feature Extraction 60 4.5.3 Data Normalization 65 4.5.4 Data Splitting 66 Chapter 5 Stock Trends 68 5.1 Monthly Revenue Strategy with Machine Learning Models 68 5.1.1 Monthly Trend Labels 68 5.1.2 Monthly Revenue Strategy 69 5.2 Experiments 70 5.2.1 Hyperparameter Settings and Training of Each Model 70 5.2.2 Performance Evaluation of Each Model 70 5.2.3 Performance Analysis of the Ensemble Model System 79 Chapter 6 Experimental Results and Analysis 86 6.1 Effectiveness of the Monthly Revenue Indicators 86 6.1.1 Feature Importance 86 6.1.2 Feature Selection 87 6.1.3 Overall Model Performance Evaluation 88 6.2 Comparison with the Plain Monthly Revenue Strategy 94 6.2.1 Plain Monthly Revenue Strategy 94 6.2.2 Overall Model Performance Evaluation 94 6.3 Comparison with Commercial Strategies 100 6.3.1 Commercial Strategies 100 6.3.2 Overall Model Performance Evaluation 101 Chapter 7 Conclusions and Future Work 104 7.1 Conclusions 104 7.2 Suggestions for Future Research 105 REFERENCE 107 Appendix A 115 Appendix B 120 Appendix C 132 | |
dc.language.iso | zh-TW | |
dc.title | 台灣股票市場趨勢預測月營收策略機器學習系統 | zh_TW |
dc.title | Machine Learning System of Monthly Revenue Strategy for Forecasting Taiwan Stock Market Trends | en |
dc.type | Thesis | |
dc.date.schoolyear | 108-2 | |
dc.description.degree | Master's | |
dc.contributor.oralexamcommittee | 呂育道(Yuh-Dauh Lyuu), Jyh-Shing Roger Jang (jang@csie.ntu.edu.tw) | |
dc.subject.keyword | 市場趨勢,月營收策略,台灣股票市場,機器學習,深度學習, | zh_TW |
dc.subject.keyword | market trend,monthly revenue strategy,Taiwan stock market,machine learning,deep learning, | en |
dc.relation.page | 149 | |
dc.identifier.doi | 10.6342/NTU202002185 | |
dc.rights.note | Not authorized | |
dc.date.accepted | 2020-08-10 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 電信工程學研究所 | zh_TW |
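The abstract describes combining the per-stock predictions of nine regressors into an ensemble model system and ranking stocks to choose long and short candidates. The record contains no code, so the following is only a minimal sketch of that ensemble-ranking idea, assuming a simple average of model scores; the tickers, score values, and three-model setup are hypothetical, not the thesis's actual configuration.

```python
# Minimal sketch of the ensemble-ranking idea from the abstract:
# several regressors each score every stock, the per-model scores are
# averaged, and stocks are ranked best-first to pick long (top) and
# short (bottom) candidates. All data below are illustrative only.
from statistics import mean

def ensemble_rank(predictions: dict[str, list[float]]) -> list[str]:
    """predictions maps ticker -> list of per-model predicted scores.
    Returns tickers sorted best-first by the mean model score."""
    avg = {ticker: mean(scores) for ticker, scores in predictions.items()}
    return sorted(avg, key=avg.get, reverse=True)

# Hypothetical scores from three models for three TWSE tickers.
preds = {
    "2330": [0.9, 0.8, 0.85],
    "2317": [0.2, 0.4, 0.3],
    "2454": [0.6, 0.7, 0.5],
}
ranking = ensemble_rank(preds)
long_pick = ranking[0]    # go long the top-ranked stock
short_pick = ranking[-1]  # go short the bottom-ranked stock
print(long_pick, short_pick)  # → 2330 2317
```

Averaging outputs is only one way to combine regressors; the thesis may weight or aggregate its nine models differently.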
Appears in Collections: | Graduate Institute of Communication Engineering |
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-3107202020525500.pdf Currently not authorized for public access | 6.6 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
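For reference, the trading window the abstract describes (buy on the 11th of the month once the monthly revenue is announced, sell on the 10th of the following month) can be sketched as simple calendar arithmetic. The helper names and prices below are invented for illustration, and a real backtest would roll these calendar dates to the nearest trading day.

```python
# Sketch of the monthly-revenue-strategy holding window from the abstract.
from datetime import date

def holding_window(year: int, month: int) -> tuple[date, date]:
    """Buy on the 11th of `month` and sell on the 10th of the next
    month. Calendar dates only; trading-day adjustment is omitted."""
    buy = date(year, month, 11)
    sell_year, sell_month = (year + 1, 1) if month == 12 else (year, month + 1)
    sell = date(sell_year, sell_month, 10)
    return buy, sell

def holding_return(buy_price: float, sell_price: float) -> float:
    """Simple holding-period return. Per the abstract, the training
    label is the cross-sectional *ranking* of these price changes
    across stocks, not the raw return itself."""
    return sell_price / buy_price - 1

buy, sell = holding_window(2019, 12)   # window straddling a year end
print(buy, sell)                       # → 2019-12-11 2020-01-10
```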