NTU Theses and Dissertations Repository › College of Bio-Resources and Agriculture › Department of Bioenvironmental Systems Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85880
Full metadata record (DC field: value [language])
dc.contributor.advisor: 胡明哲 (Ming-Che Hu)
dc.contributor.author: Hsiang-Hao Chan [en]
dc.contributor.author: 詹祥皓 [zh_TW]
dc.date.accessioned: 2023-03-19T23:27:28Z
dc.date.copyright: 2022-09-26
dc.date.issued: 2022
dc.date.submitted: 2022-09-23
dc.identifier.citation:
Abolpour, B., Javan, M. and Karamouz, M. 2007. Water allocation improvement in river basin using Adaptive Neural Fuzzy Reinforcement Learning approach. Applied Soft Computing 7(1), 265-285. https://doi.org/10.1016/j.asoc.2005.02.007.
Alibabaei, K., Gaspar, P.D., Assunção, E., Alirezazadeh, S. and Lima, T.M. 2022. Irrigation optimization with a deep reinforcement learning model: Case study on a site in Portugal. Agricultural Water Management 263. https://doi.org/10.1016/j.agwat.2022.107480.
Ashu, A.B. and Lee, S.-I. 2021. Simulation-Optimization Model for Conjunctive Management of Surface Water and Groundwater for Agricultural Use. Water 13(23), 3444. https://doi.org/10.3390/w13233444.
Bakker, M., Post, V., Langevin, C.D., Hughes, J.D., White, J.T., Starn, J.J. and Fienen, M.N. 2016. Scripting MODFLOW Model Development Using Python and FloPy. Groundwater 54(5), 733-739. https://doi.org/10.1111/gwat.12413.
Barlow, P.M., Ahlfeld, D.P. and Dickerman, D.C. 2003. Conjunctive-management models for sustained yield of stream-aquifer systems. Journal of Water Resources Planning and Management 129(1), 35-48. https://doi.org/10.1061/(ASCE)0733-9496(2003)129:1(35).
Bhattacharya, B., Lobbrecht, A. and Solomatine, D. 2002. Control of water levels of regional water systems using reinforcement learning. Proc. 5th Int. Conference on Hydroinformatics, Cardiff, UK.
Bhattacharya, B., Lobbrecht, A. and Solomatine, D. 2003. Neural networks and reinforcement learning in control of water systems. Journal of Water Resources Planning and Management 129(6), 458-465. https://doi.org/10.1061/(ASCE)0733-9496(2003)129:6(458).
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J. and Zaremba, W. 2016. OpenAI Gym. arXiv preprint arXiv:1606.01540. https://doi.org/10.48550/arXiv.1606.01540.
Castelletti, A., Galelli, S., Restelli, M. and Soncini-Sessa, R. 2010. Tree-based reinforcement learning for optimal water reservoir operation. Water Resources Research 46(9). https://doi.org/10.1029/2009wr008898.
Castelletti, A., Pianosi, F. and Restelli, M. 2013. A multiobjective reinforcement learning approach to water resources systems operation: Pareto frontier approximation in a single run. Water Resources Research 49(6), 3476-3486. https://doi.org/10.1002/wrcr.20295.
Chakraei, I., Safavi, H.R., Dandy, G.C. and Golmohammadi, M.H. 2021. Integrated Simulation-Optimization Framework for Water Allocation Based on Sustainability of Surface Water and Groundwater Resources. Journal of Water Resources Planning and Management 147(3), 05021001. https://doi.org/10.1061/(asce)wr.1943-5452.0001339.
Chang, F.-J., Wang, Y.-C. and Tsai, W.-P. 2016. Modelling intelligent water resources allocation for multi-users. Water Resources Management 30(4), 1395-1413. https://doi.org/10.1007/s11269-016-1229-6.
Chen, C.-W., Wei, C.-C., Liu, H.-J. and Hsu, N.-S. 2014. Application of Neural Networks and Optimization Model in Conjunctive Use of Surface Water and Groundwater. Water Resources Management 28(10), 2813-2832. https://doi.org/10.1007/s11269-014-0639-6.
Chen, M., Cui, Y., Wang, X., Xie, H., Liu, F., Luo, T., Zheng, S. and Luo, Y. 2021. A reinforcement learning approach to irrigation decision-making for rice using weather forecasts. Agricultural Water Management 250. https://doi.org/10.1016/j.agwat.2021.106838.
Chen, Y.W., Chang, L.C., Huang, C.W. and Chu, H.J. 2013. Applying Genetic Algorithm and Neural Network to the Conjunctive Use of Surface and Subsurface Water. Water Resources Management 27(14), 4731-4757. https://doi.org/10.1007/s11269-013-0418-9.
Friedman, E. and Fontaine, F. 2018. Generalizing across multi-objective reward functions in deep reinforcement learning. arXiv preprint arXiv:1809.06364. https://doi.org/10.48550/arXiv.1809.06364.
Hajgató, G., Paál, G. and Gyires-Tóth, B. 2020. Deep reinforcement learning for real-time optimization of pumps in water distribution systems. Journal of Water Resources Planning and Management 146(11), 04020079. https://doi.org/10.1061/(ASCE)WR.1943-5452.0001287.
Harbaugh, A.W. 2005. MODFLOW-2005: the U.S. Geological Survey modular ground-water model -- the ground-water flow process. U.S. Geological Survey Techniques and Methods 6-A16. http://pubs.er.usgs.gov/publication/tm6A16.
Karamouz, M., Kerachian, R. and Zahraie, B. 2004. Monthly water resources and irrigation planning: case study of conjunctive use of surface and groundwater resources. Journal of Irrigation and Drainage Engineering 130(5), 391-402. https://doi.org/10.1061/(ASCE)0733-9437(2004)130:5(391).
Kayhomayoon, Z., Milan, S.G., Arya Azar, N., Bettinger, P., Babaian, F. and Jaafari, A. 2022. A Simulation-Optimization Modeling Approach for Conjunctive Water Use Management in a Semi-Arid Region of Iran. Sustainability 14(5), 2691. https://doi.org/10.3390/su14052691.
Lee, J.H. and Labadie, J.W. 2007. Stochastic optimization of multireservoir systems via reinforcement learning. Water Resources Research 43(11). https://doi.org/10.1029/2006WR005627.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D. and Riedmiller, M. 2013. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. https://doi.org/10.48550/arXiv.1312.5602.
Mossalam, H., Assael, Y.M., Roijers, D.M. and Whiteson, S. 2016. Multi-objective deep reinforcement learning. arXiv preprint arXiv:1610.02707. https://doi.org/10.48550/arXiv.1610.02707.
Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus, M. and Dormann, N. 2021. Stable-Baselines3: Reliable Reinforcement Learning Implementations. Journal of Machine Learning Research.
Safavi, H.R., Darzi, F. and Mariño, M.A. 2010. Simulation-optimization modeling of conjunctive use of surface water and groundwater. Water Resources Management 24(10), 1965-1988. https://doi.org/10.1007/s11269-009-9533-z.
Safavi, H.R. and Enteshari, S. 2016. Conjunctive use of surface and ground water resources using the ant system optimization. Agricultural Water Management 173, 23-34. https://doi.org/10.1016/j.agwat.2016.05.001.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A. and Klimov, O. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. https://doi.org/10.48550/arXiv.1707.06347.
Seo, S., Mahinthakumar, G., Sankarasubramanian, A. and Kumar, M. 2018. Conjunctive management of surface water and groundwater resources under drought conditions using a fully coupled hydrological model. Journal of Water Resources Planning and Management 144(9), 04018060. https://doi.org/10.1061/(ASCE)WR.1943-5452.0000978.
Sepahvand, R., Safavi, H.R. and Rezaei, F. 2019. Multi-objective planning for conjunctive use of surface and ground water resources using genetic programming. Water Resources Management 33(6), 2123-2137. https://doi.org/10.1007/s11269-019-02229-4.
Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T. and Hassabis, D. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484-489. https://doi.org/10.1038/nature16961.
Singh, A. 2014. Simulation–optimization modeling for conjunctive water use management. Agricultural Water Management 141, 23-29. https://doi.org/10.1016/j.agwat.2014.04.003.
Singh, A., Panda, S.N., Saxena, C., Verma, C., Uzokwe, V.N., Krause, P. and Gupta, S. 2016. Optimization modeling for conjunctive use planning of surface water and groundwater for irrigation. Journal of Irrigation and Drainage Engineering 142(3), 04015060. https://doi.org/10.1061/(ASCE)IR.1943-4774.0000977.
Soleimani, S., Bozorg-Haddad, O., Boroomandnia, A. and Loáiciga, H.A. 2021. A review of conjunctive GW-SW management by simulation–optimization tools. AQUA—Water Infrastructure, Ecosystems and Society 70(3), 239-256. https://doi.org/10.2166/aqua.2021.106.
Sun, L., Yang, Y., Hu, J., Porter, D., Marek, T. and Hillyer, C. 2017. Reinforcement Learning Control for Water-Efficient Agricultural Irrigation. 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC). https://doi.org/10.1109/ispa/iucc.2017.00203.
Sutton, R.S. and Barto, A.G. 2018. Reinforcement Learning: An Introduction. MIT Press.
Wang, X., Nair, T., Li, H., Wong, Y.S.R., Kelkar, N., Vaidyanathan, S., Nayak, R., An, B., Krishnaswamy, J. and Tambe, M. 2020. Efficient Reservoir Management through Deep Reinforcement Learning. arXiv preprint arXiv:2012.03822. https://doi.org/10.48550/arXiv.2012.03822.
Wu, X., Chen, Z., Wen, Q., Wang, Z. and Hu, L. 2021. Optimal allocation model of unconventional water resources based on reinforcement learning. Journal of Hydroelectric Engineering 40(7), 23-31. https://doi.org/10.11660/slfdxb.20210703.
Yang, C.-C., Chang, L.-C., Chen, C.-S. and Yeh, M.-S. 2009. Multi-objective planning for conjunctive use of surface and subsurface water using genetic algorithm and dynamics programming. Water Resources Management 23(3), 417-437. https://doi.org/10.1007/s11269-008-9281-5.
Yang, Y., Hu, J., Porter, D., Marek, T., Heflin, K. and Kong, H. 2020. Deep Reinforcement Learning-Based Irrigation Scheduling. Transactions of the ASABE 63(3), 549-556. https://doi.org/10.13031/trans.13633.
黃柏傑 1999. 桃園地區地下水資源之評估與應用 [Evaluation and application of groundwater resources in the Taoyuan area]. Master's thesis, Graduate Institute of Agricultural Engineering, National Taiwan University, Taipei. https://hdl.handle.net/11296/hd7hmx.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/85880
dc.description.abstract [zh_TW; translated from Chinese]: In recent years, water demands from major sectors (municipal, industrial, and so on) have kept increasing, while extreme weather events have made surface water supplies unstable; the conjunctive use of surface water and groundwater resources has therefore become an important research topic in water resources management. This allocation approach utilizes surface water and groundwater sustainably to satisfy water demands in every period, while complying with constraints on surface water availability and on the drawdown caused by groundwater pumping. Previous studies mostly solved such sequential decision-making problems with models that couple simulation with various optimization algorithms (simulation-optimization). This study instead introduces deep reinforcement learning to search for the optimal water allocation policy. Reinforcement learning is a subfield of machine learning known for handling complex decision-making tasks, and it has achieved remarkable results in many domains. This study integrates hydrologic simulation models of the surface water and groundwater systems into a reinforcement learning framework: the surface water and groundwater models are wrapped as an environment with which the machine interacts on its own, and through repeated interaction with this environment, using the positive and negative feedback obtained along the way, a policy function approximated by a neural network is trained, thereby optimizing decisions and identifying the best conjunctive water use policy. The groundwater model uses MODFLOW to simulate the changes in groundwater level induced by pumping. The conjunctive-use policies trained by reinforcement learning under several different reward designs are compared, and their performance is evaluated under three designed inflow scenarios (normal, dry, and extreme). The results show that the reinforcement learning agent continuously improves its water use policy through the experience gained from interacting with the environment constructed in this study, and gradually learns to make appropriate choices intelligently under different conditions.
dc.description.abstract [en]: Conjunctive use of surface water and groundwater resources has become an important topic in water management, owing to increasing water demands across multiple sectors and the unstable availability of surface water under extreme weather events. It is an allocation approach that sustainably utilizes both surface water and groundwater to satisfy water demands in every period, subject to constraints on surface water availability and on groundwater drawdown limits. Previous studies addressed such sequential decision-making problems mostly with simulation-optimization models. In this study, deep reinforcement learning (DRL), a subfield of machine learning known for handling complex decision-making tasks that has achieved notable success across various domains, is instead introduced to seek the optimal allocation policy. Hydrologic simulations of both the surface water and groundwater systems are integrated into the RL-based optimization framework: a custom environment containing the surface water and groundwater models is established for the RL agent to interact with, and the positive and negative feedback obtained throughout the agent-environment interaction is used to optimize conjunctive water use policies. MODFLOW is applied to simulate the changes in groundwater level caused by pumping. Policies trained with different reward-function designs are compared, and their performance is evaluated under three designed scenarios: a normal year, a dry year, and an extreme year. Results show that the RL agent improves its policy through interactive experience with the constructed environment and gradually learns to make sound decisions under different situations.
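The abstract describes wrapping the surface water and groundwater models as an environment with a state/action/reward loop for the RL agent. As a purely illustrative sketch, not the thesis's actual model, the skeleton below shows what such a Gym-style interface can look like; a toy one-bucket water balance stands in for the MODFLOW simulation, and every class name, coefficient, and the fixed policy here is invented for the example.

```python
# Illustrative sketch of a Gym-style conjunctive-use environment.
# The drawdown update is a toy stand-in for a MODFLOW run; all
# numbers and names are hypothetical, not taken from the thesis.

class ConjunctiveUseEnv:
    """Toy environment: each period, choose how much demand to meet by pumping."""

    def __init__(self, inflows, demand=10.0, max_drawdown=5.0):
        self.inflows = inflows            # surface-water inflow per period
        self.demand = demand              # demand to satisfy each period
        self.max_drawdown = max_drawdown  # drawdown limit in the reward
        self.reset()

    def reset(self):
        self.t = 0
        self.drawdown = 0.0               # cumulative pumping-induced drawdown
        return self._state()

    def _state(self):
        return (self.t, self.inflows[self.t], self.drawdown)

    def step(self, pump_fraction):
        """Action: fraction of demand supplied by groundwater pumping (0..1)."""
        pump = pump_fraction * self.demand
        surface = min(self.demand - pump, self.inflows[self.t])
        supplied = surface + pump
        # Stand-in for MODFLOW: drawdown grows with pumping, partially recovers.
        self.drawdown = max(0.0, 0.8 * self.drawdown + 0.3 * pump)
        # Reward: penalize unmet demand and drawdown-limit violations.
        shortage = self.demand - supplied
        violation = max(0.0, self.drawdown - self.max_drawdown)
        reward = -shortage - 10.0 * violation
        self.t += 1
        done = self.t >= len(self.inflows)
        return (None if done else self._state()), reward, done

# Roll out a naive fixed policy (always pump half of demand).
env = ConjunctiveUseEnv(inflows=[8.0, 4.0, 12.0])
env.reset()
total, done = 0.0, False
while not done:
    _, r, done = env.step(0.5)
    total += r
print(round(total, 2))  # prints -1.0 (one unit of shortage in the dry period)
```

In the study itself, the step would presumably invoke a MODFLOW simulation (e.g. via FloPy) to update groundwater levels, and the fixed 0.5 policy would be replaced by a neural-network policy trained with an algorithm such as PPO.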
dc.description.provenance [en]: Made available in DSpace on 2023-03-19T23:27:28Z (GMT). No. of bitstreams: 1. U0001-2009202218065200.pdf: 7421344 bytes, checksum: 5328b0ce4f32d08944d5b21a96a39887 (MD5). Previous issue date: 2022.
dc.description.tableofcontents:
摘要 (Chinese abstract) i
Abstract ii
Contents iv
List of Figures vii
List of Tables x
Chapter 1 Introduction 1
  1.1 Backgrounds and motivations 1
  1.2 Organization 6
Chapter 2 Methodology 8
  2.1 Reinforcement learning 8
    2.1.1 Markov decision process 8
    2.1.2 Policy function and value function 12
    2.1.3 Vanilla policy gradient method 14
    2.1.4 Proximal Policy Optimization algorithm 18
  2.2 MODFLOW 20
  2.3 Optimization for conjunctive water use management 24
Chapter 3 Model Formulation 29
  3.1 Study area and designed case 29
    3.1.1 Surface water system 31
    3.1.2 Groundwater system 32
  3.2 Building a custom RL environment 35
    3.2.1 State space and action space 36
    3.2.2 Reward function 38
    3.2.3 Transition process in the RL environment 40
  3.3 Training setup for the RL agent 41
Chapter 4 Results and Discussion 45
  4.1 Training curve 45
  4.2 Evaluations for different scenarios 46
    4.2.1 Comparison with a simple practical policy 47
    4.2.2 Multi-objective weight comparison for reward scalarization 55
    4.2.3 Comparison with policy trained with designed extra bonus 61
Chapter 5 Conclusions and Recommendations 65
  5.1 Conclusions 65
  5.2 Recommendations 66
References 68
Appendix 72
dc.language.iso: en
dc.subject: 水資源管理 (water resources management) [zh_TW]
dc.subject: 深度強化學習 (deep reinforcement learning) [zh_TW]
dc.subject: 聯合運用 (conjunctive use) [zh_TW]
dc.subject: MODFLOW [zh_TW]
dc.subject: Conjunctive use [en]
dc.subject: Deep reinforcement learning [en]
dc.subject: MODFLOW [en]
dc.subject: Water resource management [en]
dc.title: 強化學習方法應用於地面地下水之聯合運用 [zh_TW]
dc.title: Deep Reinforcement Learning for Conjunctive Use of Surface Water and Groundwater [en]
dc.type: Thesis
dc.date.schoolyear: 110-2
dc.description.degree: 碩士 (Master's)
dc.contributor.coadvisor: 蔡瑞彬 (Jun-Pin Tsai)
dc.contributor.oralexamcommittee: 余化龍 (Hwa-Lung Yu), 許少瑜 (Shao-Yiu Hsu), 何率慈 (Shuay-Tsyr Ho)
dc.subject.keyword: 深度強化學習, 聯合運用, 水資源管理, MODFLOW [zh_TW]
dc.subject.keyword: Deep reinforcement learning, Conjunctive use, Water resource management, MODFLOW [en]
dc.relation.page: 86
dc.identifier.doi: 10.6342/NTU202203666
dc.rights.note: 同意授權 (authorization granted; open access worldwide)
dc.date.accepted: 2022-09-25
dc.contributor.author-college: 生物資源暨農學院 (College of Bio-Resources and Agriculture) [zh_TW]
dc.contributor.author-dept: 生物環境系統工程學研究所 (Graduate Institute of Bioenvironmental Systems Engineering) [zh_TW]
dc.date.embargo-lift: 2022-09-26
Appears in collections: Department of Bioenvironmental Systems Engineering (生物環境系統工程學系)

Files in this item:
U0001-2009202218065200.pdf (7.25 MB, Adobe PDF)


Unless their own copyright terms state otherwise, items in this system are protected by copyright, with all rights reserved.
