Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95612
Title: | Robust Ensemble Forecasting and Deep Reinforcement Learning for Energy Management of Islanded Microgrids |
Author: | Yun-Chia Hsu (許芸嘉) |
Advisor: | Chia-Yen Lee (李家岩) |
Keywords: | energy management, islanded microgrid, deep reinforcement learning, ensemble modeling, renewables forecasting, adaptive robust optimization |
Publication Year: | 2024 |
Degree: | Master's |
Abstract: | Microgrids (MGs) are localized energy systems that integrate diverse energy sources and electrical loads. An energy management (EM) system can enhance resource utilization, minimize waste, and boost overall efficiency. This study focuses on EM within an islanded MG comprising a solar photovoltaic (PV) system, a wind power system, a diesel generator (DG) set, and an energy storage system (ESS). Notably, the model formulation accounts for the warm-up required by dispatchable units such as the DG set at start-up and their gradual ramping during power regulation, aspects often overlooked in prior studies. Because renewable generation and load are highly variable and driven by many external factors, which undermines the robustness of EM decisions, this study proposes an EM framework combining ensemble forecasting models and deep reinforcement learning (DRL): an improved adaptive robust optimization (ARO) ensemble approach is used for forecasting, and three DRL algorithms are implemented, namely deep Q-network (DQN), advantage actor-critic (A2C), and proximal policy optimization (PPO). An hourly dataset collected from Penghu Island, Taiwan, is used to validate the methodology. The improved ARO ensemble approach outperforms other machine-learning ensembles in forecasting energy load and wind power but is less effective for solar power. The overall results show that DRL models with forecasts outperform those without in both effectiveness and robustness, yielding a 3.4-fold increase in average reward and an 11% reduction in standard deviation in the experiments. Additionally, the DQN algorithm, with finer granularity in the decision space, achieves the highest reward. |
URI: | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/95612 |
DOI: | 10.6342/NTU202403682 |
Full-text Access: | Not authorized |
Appears in Collections: | Department of Information Management |
Files in This Item:
File | Size | Format
---|---|---
ntu-112-2.pdf (currently not authorized for public access) | 13.45 MB | Adobe PDF
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
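
The abstract above describes a framework in which ensemble forecasts are fed to a DRL dispatch agent. As an illustrative aid only, the following Python sketch shows one plausible shape of that idea: a plain weighted average stands in for the improved ARO ensemble, the forecasts are appended to a hypothetical microgrid state, and an epsilon-greedy DQN-style rule picks a discrete dispatch action. All names, numbers, and the placeholder `q_network` are assumptions for illustration, not the thesis implementation (the full text is not publicly available).

```python
# Illustrative sketch only -- NOT the thesis implementation. The state layout,
# action grid, and `q_network` are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# --- 1. Ensemble forecasting step (a weighted average stands in for the
#        improved ARO ensemble described in the abstract) ---
base_preds = {            # hour-ahead predictions from three base learners (kW), made up
    "load": np.array([480.0, 505.0, 495.0]),
    "wind": np.array([120.0, 135.0, 110.0]),
    "pv":   np.array([0.0, 5.0, 2.0]),
}
weights = np.array([0.5, 0.3, 0.2])          # assumed ensemble weights, summing to 1
forecast = {k: float(weights @ v) for k, v in base_preds.items()}

# --- 2. Forecast-augmented state for the DRL agent ---
soc = 0.62                                   # battery state of charge (assumed)
dg_on = 1.0                                  # diesel generator currently running
state = np.array([soc, dg_on,
                  forecast["load"], forecast["wind"], forecast["pv"]])

# --- 3. Discrete action space: DG setpoint (kW) and ESS charge/discharge (kW) ---
dg_setpoints = np.arange(0, 401, 100)        # 0..400 kW in 100 kW steps
ess_powers = np.array([-100.0, 0.0, 100.0])  # negative: charge, positive: discharge
actions = [(dg, ess) for dg in dg_setpoints for ess in ess_powers]

def q_network(s: np.ndarray) -> np.ndarray:
    """Placeholder for a trained DQN; returns random Q-values here."""
    return rng.normal(size=len(actions))

# --- 4. Epsilon-greedy dispatch decision, as a DQN agent would make it ---
eps = 0.05
if rng.random() < eps:
    a_idx = int(rng.integers(len(actions)))
else:
    a_idx = int(np.argmax(q_network(state)))
print("state:", state, "-> dispatch (DG kW, ESS kW):", actions[a_idx])
```

In the thesis, the Q-function would be a trained network and the environment would enforce the DG warm-up and ramping behavior noted in the abstract; those details are deliberately omitted here.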