Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98746

| Title: | AMTF-MLP: Adaptive Multi-Scale Time-Frequency MLP for Time Series Forecasting (基於適應性多尺度時頻域多層感知器於時間序列預測之架構設計) |
| Author: | Ting-Ching Tai (戴廷磬) |
| Advisor: | Sheng-De Wang (王勝德) |
| Keywords: | Multivariate Time Series Forecasting, MLP-based Models, Frequency-Domain Analysis, Adaptive Feature Fusion, Multi-Scale Temporal Modeling, Long-Term Forecasting, Short-Term Forecasting |
| Year of Publication: | 2025 |
| Degree: | Master's |
| Abstract: | Accurately predicting multivariate time series is essential for industrial applications, yet it poses significant challenges due to short-term transients, long-term dependencies, and hidden periodicities, alongside stringent computational-efficiency requirements. To address these issues, we introduce AMTF-MLP, an innovative pure Multi-Layer Perceptron (MLP) architecture that integrates multi-scale time-domain mixers and frequency-domain spectral learning, unified through adaptive fusion. AMTF-MLP processes the signal in parallel branches: a time-domain branch with hierarchical patch mixers to capture local and global temporal patterns, and a frequency-domain branch with a spectral MLP to model periodicities. Extensive experiments on diverse public benchmarks validate the model's effectiveness and versatility. In long-term forecasting, it delivers highly competitive performance, reducing Mean Squared Error (MSE) by 9.3% to 20.4% compared to prominent Transformer models such as iTransformer and PatchTST. In high-frequency short-term scenarios, it achieves leading results on the PEMS datasets, reducing MSE by 24.3% and 40.4% against strong competitors AMD and iTransformer, respectively. The model's strong cross-domain performance is matched by its computational efficiency: among high-performance MLP peers, it achieves the best memory efficiency, consuming 1.8× less memory than AMD while training 1.5× faster. Ablation studies confirm that each component of the design is critical to the model's efficacy. With its linear complexity and strong empirical results, AMTF-MLP offers a powerful and practical solution for real-world forecasting systems. (An illustrative sketch of the dual-branch design follows the record below.) |
| URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/98746 |
| DOI: | 10.6342/NTU202503526 |
| Full-Text Authorization: | Authorization granted (restricted to on-campus access) |
| Electronic Full-Text Release Date: | 2025-08-19 |
| Appears in Collections: | Department of Electrical Engineering |
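
The abstract describes a dual-branch design: a time-domain branch with hierarchical patch mixers, a frequency-domain branch with a spectral MLP, and an adaptive fusion of the two. Below is a minimal, hypothetical PyTorch sketch of that idea only; the module names, dimensions, and sigmoid-gated fusion are assumptions for illustration, not the thesis's actual AMTF-MLP implementation (which is not included in this record).

```python
# Hypothetical sketch of the dual-branch idea described in the abstract.
# All names, dimensions, and fusion details are assumptions; the thesis's
# actual AMTF-MLP implementation is not available in this record.
import torch
import torch.nn as nn


class DualBranchSketch(nn.Module):
    def __init__(self, seq_len: int, pred_len: int, hidden: int = 128):
        super().__init__()
        # Time-domain branch: a plain MLP over the lookback window,
        # standing in for the thesis's hierarchical patch mixers.
        self.time_branch = nn.Sequential(
            nn.Linear(seq_len, hidden), nn.GELU(), nn.Linear(hidden, pred_len)
        )
        # Frequency-domain branch: an MLP over rFFT magnitudes,
        # standing in for the thesis's spectral MLP.
        n_freq = seq_len // 2 + 1
        self.freq_branch = nn.Sequential(
            nn.Linear(n_freq, hidden), nn.GELU(), nn.Linear(hidden, pred_len)
        )
        # Adaptive fusion (assumed): a learned per-step gate mixing branches.
        self.gate = nn.Linear(2 * pred_len, pred_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, seq_len); each channel is processed independently.
        t_out = self.time_branch(x)                  # (B, C, pred_len)
        f_in = torch.fft.rfft(x, dim=-1).abs()       # (B, C, seq_len // 2 + 1)
        f_out = self.freq_branch(f_in)               # (B, C, pred_len)
        g = torch.sigmoid(self.gate(torch.cat([t_out, f_out], dim=-1)))
        return g * t_out + (1 - g) * f_out           # adaptive blend


# Example: 96-step lookback, 24-step forecast, 7 variables.
model = DualBranchSketch(seq_len=96, pred_len=24)
y = model(torch.randn(8, 7, 96))
print(y.shape)  # torch.Size([8, 7, 24])
```

A learned sigmoid gate is one simple way to realize "adaptive fusion"; the fusion mechanism, patch hierarchy, and spectral processing in the thesis may differ substantially.
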
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-113-2.pdf (access restricted to NTU campus IPs; please use the VPN service for off-campus access) | 4.92 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
