NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97280
Full metadata record (DC field: value [language])
dc.contributor.advisor: 呂育道 [zh_TW]
dc.contributor.advisor: Yuh-Dauh Lyuu [en]
dc.contributor.author: 周柏諺 [zh_TW]
dc.contributor.author: Po-Yen Chou [en]
dc.date.accessioned: 2025-04-02T16:16:21Z
dc.date.available: 2025-04-03
dc.date.copyright: 2025-04-02
dc.date.issued: 2025
dc.date.submitted: 2025-01-20
dc.identifier.citation: E. K. Ampomah, Z. Qin and G. Nyame. “Evaluation of tree-based ensemble machine learning models in predicting stock price direction of movement,” Information, vol. 11, issue 6, 2020, p. 332.
J. Benediktsson. “TA-Lib.” https://github.com/TA-Lib/ta-lib-python (accessed July 2024).
R. Bommasani, et al. “On the opportunities and risks of foundation models,” arXiv: 2108.07258, 2021.
N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov and S. Zagoruyko. “End-to-end object detection with Transformers,” European Conference on Computer Vision, Glasgow, UK, August 2020, pp. 213–229.
S. Cen and C. G. Lim. “Multi-task learning of the PatchTCN-TST model for short-term multi-load energy forecasting considering indoor environments in a smart building,” IEEE Access, vol. 12, 2024, pp. 19553–19568.
Y. Cui, M. Jia, T. Y. Lin and S. Belongie. “Class-balanced loss based on effective number of samples,” Conference on Computer Vision and Pattern Recognition, Long Beach, USA, June 2019, pp. 9268–9277.
J. Devlin, M. W. Chang, K. Lee and K. Toutanova. “BERT: Pre-training of deep bidirectional Transformers for language understanding,” Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, USA, June 2019, pp. 4171–4186.
M. Grandini, E. Bagli and G. Visani. “Metrics for multi-class classification: An overview,” arXiv: 2008.05756, 2020.
L. Han, H. J. Ye and D. C. Zhan. “The capacity and robustness trade-off: Revisiting the channel independent strategy for multivariate time series forecasting,” IEEE Transactions on Knowledge and Data Engineering, 2024, pp. 1–14.
X. Huang, J. Tang and Y. Shen. “Long time series of ocean wave prediction based on PatchTST model,” Ocean Engineering, vol. 301, 2024, pp. 117572.
K. Kingphai and Y. Moshfeghi. “On time series cross-validation for deep learning classification model of mental workload levels based on EEG signals,” International Conference on Machine Learning, Optimization, and Data Science, Certosa di Pontignano, Italy, September 2022, pp. 402–416.
R. Kohavi. “A study of cross-validation and bootstrap for accuracy estimation and model selection,” International Joint Conference on Artificial Intelligence, San Francisco, USA, August 1995, pp. 1137–1143.
A. W. Li and G. S. Bastos. “Stock market forecasting using deep learning and technical analysis: A systematic review,” IEEE Access, vol. 8, 2020, pp. 185232–185242.
Y. Lin, Y. Yan, J. Xu, Y. Liao and F. Ma. “Forecasting stock index price using the CEEMDAN-LSTM model,” North American Journal of Economics and Finance, vol. 57, 2021, pp. 101421.
Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin and B. Guo. “Swin Transformer: Hierarchical vision Transformer using shifted windows,” International Conference on Computer Vision, October 2021, pp. 10012–10022.
X. Liu, H. F. Yu, I. S. Dhillon and C. J. Hsieh. “Learning to encode position for Transformer with continuous dynamical model,” International Conference on Machine Learning, Vienna, Austria, July 2020, pp. 6327–6335.
I. E. Livieris, E. Pintelas and P. Pintelas. “A CNN-LSTM model for gold price time-series forecasting,” Neural Computing and Applications, vol. 32, 2020, pp. 17351–17360.
S. M. Lundberg and S. I. Lee. “A unified approach to interpreting model predictions,” International Conference on Neural Information Processing Systems, Long Beach, USA, December 2017, pp. 4768–4777.
M. Hiransha, E. A. Gopalakrishnan, V. K. Menon and K. P. Soman. “NSE stock market prediction using deep-learning models,” Procedia Computer Science, vol. 132, 2018, pp. 1351–1362.
M. Nabipour, P. Nayyeri, H. Jabani, S. S. and A. Mosavi. “Predicting stock market trends using machine learning and deep learning algorithms via continuous and binary data: A comparative analysis,” IEEE Access, vol. 8, 2020, pp. 150199–150212.
R. T. F. Nazário, J. L. e Silva, V. A. Sobreiro and H. Kimura. “A literature review of technical analysis on stock markets,” The Quarterly Review of Economics and Finance, vol. 66, 2017, pp. 115–126.
Y. Nie, N. H. Nguyen, P. Sinthong and J. Kalagnanam. “A time series is worth 64 words: Long-term forecasting with Transformers,” International Conference on Learning Representations, Kigali, Rwanda, May 2023, pp. 1–24.
J. Opitz and S. Burst. “Macro F1 and macro F1,” arXiv: 1911.03347, 2019.
F. B. Oriani and G. P. Coelho. “Evaluating the impact of technical indicators on stock forecasting,” IEEE Symposium Series on Computational Intelligence, Athens, Greece, December 2016, pp. 1–8.
H. Pabuccu, S. Ongan and A. Ongan. “Forecasting the movements of bitcoin prices: An application of machine learning algorithms,” arXiv: 2303.04642, 2023.
J. Patel, S. Shah, P. Thakkar and K Kotecha. “Predicting stock and stock price index movement using trend deterministic data preparation and machine learning techniques,” Expert Systems with Applications, vol. 42, issue 1, 2015, pp. 259–268.
J. Patel, S. Shah, P. Thakkar and K Kotecha. “Predicting stock market index using fusion of machine learning techniques,” Expert Systems with Applications, vol. 42, issue 4, 2015, pp. 2162–2172.
R. Sawhney, S. Agarwal, A. Wadhwa and R. R. Shah. “Spatiotemporal hypergraph convolution network for stock movement forecasting,” IEEE International Conference on Data Mining, Sorrento, Italy, November 2020, pp. 482–491.
S. Selvin, R. Vinayakumar, E. A. Gopalakrishnan, V. K. Menon and K. P. Soman. “Stock price prediction using LSTM, RNN and CNN-sliding window model,” International Conference on Advances in Computing, Communications and Informatics, Udupi, India, September 2017, pp. 1643–1647.
Y. Shynkevich, T.M. McGinnity, S. A. Coleman, A. Belatreche and Y. Li. “Forecasting price movements using technical indicators: Investigating the impact of varying input window length,” Neurocomputing, vol. 264, 2017, pp. 71–88.
D. Singh and B. Singh. “Investigating the impact of data normalization on classification performance,” Applied Soft Computing, vol. 97, 2020, pp. 105524.
J. Sola and J. Sevilla. “Importance of input data normalization for the application of neural networks to complex industrial problems,” IEEE Transactions on Nuclear Science, vol. 44, issue 3, 1997, pp. 1464–1468.
Y. Sui, M. Yin, Y. Xie, H. Phan, S. Zonouz and B. Yuan. “CHIP: Channel independence-based pruning for compact neural networks,” International Conference on Neural Information Processing Systems, December 2021, pp. 24604–24616.
D. Svozil, V. Kvasnicka and J. Pospichal. “Introduction to multi-layer feed-forward neural networks,” Chemometrics and Intelligent Laboratory Systems, vol. 39, issue 1, 1997, pp. 43–62.
L. J. Tashman. “Out-of-sample tests of forecasting accuracy: An analysis and review,” International Journal of Forecasting, vol. 16, issue 4, 2000, pp. 437–450.
C. F. Tsai and Y. C. Hsiao. “Combining multiple feature selection methods for stock prediction: Union, intersection, and multi-intersection approaches,” Decision Support Systems, vol. 50, issue 1, 2010, pp. 258–269.
A. Tsantekidis, N. Passalis, A. Tefas, J. Kanniainen, M. Gabbouj and A. Iosifidis. “Forecasting stock prices from the limit order book using convolutional neural networks,” IEEE Conference on Business Informatics, Thessaloniki, Greece, July 2017, pp. 7–12.
M. Valipour, M. E. Banihabib and S. M. R. Behbahani. “Comparison of the ARMA, ARIMA and the autoregressive artificial neural network models in forecasting the monthly inflow of Dez dam reservoir,” Journal of Hydrology, vol. 476, 2013, pp. 433–441.
M. R. Vargas, C. E. M. dos Anjos, G. L. G. Bichara and A. G. Evsukoff. “Deep learning for stock market prediction using technical indicators and financial news articles,” International Joint Conference on Neural Networks, Rio de Janeiro, Brazil, July 2018, pp. 1–8.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin. “Attention is all you need,” International Conference on Neural Information Processing Systems, Long Beach, USA, December 2017, pp. 6000–6010.
S. Yadav and S. Shukla. “Analysis of k-fold cross-validation over hold-out validation on colossal datasets for quality classification,” International Conference on Advanced Computing, Bhimavaram, India, February 2016, pp. 78–83.
Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov and Q. V. Le. “XLNet: Generalized autoregressive pretraining for language understanding,” International Conference on Neural Information Processing Systems, Vancouver, Canada, December 2019, pp. 5753–5763.
C. F. Yeh, J. Mahadeokar, K. Kalgaonkar, Y. Wang, D. Le, M. Jain, K. Schubert, C. Fuegen and M. L. Seltzer. “Transformer-transducer: End-to-end speech recognition with self-attention,” arXiv: 1910.12977, 2019.
K. K. Yun, S. W. Yoon and D. Won. “Prediction of stock price direction using a hybrid GA-XGBoost algorithm with a three-stage feature engineering process,” Expert Systems with Applications, vol. 186, 2021, pp. 115716.
A. Zeng, M. Chen, L. Zhang and Q. Xu. “Are Transformers effective for time series forecasting?” AAAI Conference on Artificial Intelligence, Washington, USA, February 2023, pp. 11121–11128.
G. Zerveas, S. Jayaraman, D. Patel, A. Bhamidipaty and C. Eickhoff. “A Transformer-based framework for multivariate time series representation learning,” ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, USA, August 2021, pp. 2114–2124.
Y. Zheng, Q. Liu, E. Chen, Y. Ge and J. L. Zhao. “Time series classification using multi-channels deep convolutional neural networks,” International Conference on Web-Age Information Management, Macau, China, June 2014, pp. 298–310.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/97280
dc.description.abstract [zh_TW]: 一直以來股票市場都是投資人所關注的熱門領域,使用深度學習技術進行時間序列資料的研究也逐漸成為趨勢,由於自注意力的架構在自然語言及圖像處理等領域的任務中有著不錯的表現,因此許多研究也試圖利用此架構來提高時間序列任務的預測準確度。然而,對於高波動、高雜訊、及非線性的金融資料而言,深度學習模型往往需要大量的歷史資料來學習,因此較不利於使用原始的自注意力架構。此外,在深度學習模型中,僅使用連續型的技術指標資料往往難以獲得理想的股價走勢預測結果。因此本論文引入了適合多變量長輸入時間序列資料的模型架構並結合資料離散化的方式,來預測股價的上漲、持平、及下跌。實驗結果顯示,對於各項深度學習常見的衡量指標,都能利用這樣的方法來取得顯著的提升,證明此論文所使用的方法對於預測未來股價走勢的有效性。
dc.description.abstract [en]: The stock market has long been a popular arena for investors, and deep learning techniques have become increasingly prevalent in analyzing time series data. Given the strong performance of the self-attention architecture in natural language and image processing, numerous studies have leveraged it to improve the prediction accuracy of time series models. However, financial data are highly volatile, noisy, and nonlinear, and deep learning models typically require large amounts of historical data to learn from them, which makes the original self-attention architecture less suitable. Furthermore, feeding deep learning models only continuous technical indicators often yields suboptimal predictions of stock price movements. This thesis therefore introduces a model architecture designed for multivariate time series with long input sequences and combines it with data discretization to classify future stock prices as rising, flat, or falling. The experimental results show that this approach yields significant improvements on common deep learning evaluation metrics, demonstrating its effectiveness for forecasting stock price trends.
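The two ingredients the abstract combines, discretizing continuous price data into rise/flat/fall labels and splitting each variable into patches processed one channel at a time (as in PatchTST), can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the 5-day horizon, the ±0.5% flat band, and the patch length/stride of 16/8 are assumed values chosen for the example.

```python
import numpy as np

def discretize_trend(close, horizon=5, band=0.005):
    """Label each day as down (0), flat (1), or up (2) by the future return.

    `horizon` and `band` are illustrative assumptions, not thesis settings.
    """
    future_ret = close[horizon:] / close[:-horizon] - 1.0
    labels = np.full(future_ret.shape, 1, dtype=int)  # flat by default
    labels[future_ret > band] = 2                     # up
    labels[future_ret < -band] = 0                    # down
    return labels

def patch_channel(series, patch_len=16, stride=8):
    """Split one univariate channel into overlapping patches.

    Under channel independence, a multivariate input is patched one
    column (channel) at a time rather than mixing variables per step.
    """
    n = (len(series) - patch_len) // stride + 1
    return np.stack([series[i * stride : i * stride + patch_len] for i in range(n)])

# Synthetic price path standing in for one technical-indicator channel.
prices = np.cumsum(np.random.default_rng(0).normal(0.0, 1.0, 128)) + 100.0
labels = discretize_trend(prices)
patches = patch_channel(prices)
print(labels[:10], patches.shape)  # patches.shape == (15, 16)
```

Each patch then plays the role of one "word" in the Transformer input sequence, which shortens the attention sequence length for long lookback windows.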
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-04-02T16:16:21Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2025-04-02T16:16:21Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Abstract (Chinese) i
Abstract (English) ii
Table of Contents iii
List of Figures iv
List of Tables v
Chapter 1 Introduction 1
1.1 Motivation and Overview 1
1.2 Thesis Organization 2
Chapter 2 Background 3
2.1 Models 3
2.1.1 Feed-Forward Neural Networks 3
2.1.2 Self-Attention 3
2.1.3 Multi-Head Attention 4
2.1.4 Positional Encoding 5
2.1.5 Transformer 6
2.1.6 Patching 8
2.1.7 Channel Independence 8
2.1.8 PatchTST 9
2.2 Technical Indicators 10
2.2.1 Continuous Data 11
2.2.2 Discrete Data 14
Chapter 3 Methodology 18
3.1 Experimental Design 18
3.2 Data and Preprocessing 18
3.3 Prediction Targets 21
Chapter 4 Experimental Results 23
4.1 Evaluation Metrics 23
4.2 Results of Experiment 1 25
4.3 Results of Experiment 2 27
Chapter 5 Conclusion and Future Work 29
5.1 Conclusion 29
5.2 Future Research 29
References 30
dc.language.iso: zh_TW
dc.subject: 時間序列 [zh_TW]
dc.subject: 技術指標 [zh_TW]
dc.subject: 台灣證券市場 [zh_TW]
dc.subject: 類神經網路 [zh_TW]
dc.subject: 股價走勢預測 [zh_TW]
dc.subject: Technical indicators [en]
dc.subject: Stock price forecast [en]
dc.subject: Neural Network [en]
dc.subject: Taiwan stock market [en]
dc.subject: Time series [en]
dc.title: 結合資料離散化與通道獨立於時間序列資料之趨勢預測 [zh_TW]
dc.title: Integrating Discretization of Data and Channel Independence for Trend Forecasting on Time Series Data [en]
dc.type: Thesis
dc.date.schoolyear: 113-2
dc.description.degree: 碩士 (Master) 
dc.contributor.oralexamcommittee: 張經略;陸裕豪 [zh_TW]
dc.contributor.oralexamcommittee: Ching-Lueh Chang;U-Hou Lok [en]
dc.subject.keyword: 股價走勢預測, 類神經網路, 台灣證券市場, 時間序列, 技術指標 [zh_TW]
dc.subject.keyword: Stock price forecast, Neural Network, Taiwan stock market, Time series, Technical indicators [en]
dc.relation.page: 35
dc.identifier.doi: 10.6342/NTU202500180
dc.rights.note: 未授權 (Not authorized)
dc.date.accepted: 2025-01-20
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊工程學系 (Department of Computer Science and Information Engineering)
dc.date.embargo-lift: N/A
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
ntu-113-2.pdf | 1.2 MB | Adobe PDF | Restricted access (not authorized for public access)


All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
