NTU Theses and Dissertations Repository › College of Electrical Engineering and Computer Science › Department of Computer Science and Information Engineering
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88092
Full metadata record
(DC field: value [language])
dc.contributor.advisor: 徐宏民 [zh_TW]
dc.contributor.advisor: Winston Hsu [en]
dc.contributor.author: 林柏劭 [zh_TW]
dc.contributor.author: Po-Shao Lin [en]
dc.date.accessioned: 2023-08-08T16:15:43Z
dc.date.available: 2023-11-09
dc.date.copyright: 2023-08-08
dc.date.issued: 2023
dc.date.submitted: 2023-07-13
dc.identifier.citation:
[1] J. Andreas, D. Klein, and S. Levine. Modular multitask reinforcement learning with policy sketches. In International Conference on Machine Learning, pages 166–175. PMLR, 2017.
[2] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. Advances in Neural Information Processing Systems, 30, 2017.
[3] D. Borsa, T. Graepel, and J. Shawe-Taylor. Learning shared representations in multitask reinforcement learning. arXiv preprint arXiv:1603.02041, 2016.
[4] D. Calandriello, A. Lazaric, and M. Restelli. Sparse multi-task reinforcement learning. Advances in Neural Information Processing Systems, 27, 2014.
[5] R. Caruana. Multitask learning. Machine Learning, 28:41–75, 1997.
[6] C. D'Eramo, D. Tateo, A. Bonarini, M. Restelli, J. Peters, et al. Sharing knowledge in multi-task deep reinforcement learning. In 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia, April 26–30, 2020, pages 1–11. OpenReview.net, 2020.
[7] C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine. Learning modular neural network policies for multi-task and multi-robot transfer. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2169–2176. IEEE, 2017.
[8] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pages 1861–1870. PMLR, 2018.
[9] J. Meng and F. Zhu. Seek for commonalities: Shared features extraction for multi-task reinforcement learning via adversarial training. Expert Systems with Applications, 224:119975, 2023.
[10] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[11] L. Pinto and A. Gupta. Learning to push by grasping: Using multiple tasks for effective learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2161–2168. IEEE, 2017.
[12] S. Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017.
[13] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
[14] O. Sener and V. Koltun. Multi-task learning as multi-objective optimization. Advances in Neural Information Processing Systems, 31, 2018.
[15] S. Sodhani, A. Zhang, and J. Pineau. Multi-task reinforcement learning with context-based representations. In International Conference on Machine Learning, pages 9767–9779. PMLR, 2021.
[16] L. Sun, H. Zhang, W. Xu, and M. Tomizuka. PaCo: Parameter-compositional multi-task reinforcement learning. arXiv preprint arXiv:2210.11653, 2022.
[17] F. Tanaka and M. Yamamura. Multitask reinforcement learning on the distribution of MDPs. In Proceedings 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, volume 3, pages 1108–1112. IEEE, 2003.
[18] N. Vithayathil Varghese and Q. H. Mahmoud. A survey of multi-task deep reinforcement learning. Electronics, 9(9):1363, 2020.
[19] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas. Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224, 2016.
[20] A. Wilson, A. Fern, S. Ray, and P. Tadepalli. Multi-task reinforcement learning: A hierarchical Bayesian approach. In Proceedings of the 24th International Conference on Machine Learning, pages 1015–1022, 2007.
[21] R. Yang, H. Xu, Y. Wu, and X. Wang. Multi-task reinforcement learning with soft modularization. Advances in Neural Information Processing Systems, 33:4767–4777, 2020.
[22] T. Yu, S. Kumar, A. Gupta, S. Levine, K. Hausman, and C. Finn. Gradient surgery for multi-task learning. Advances in Neural Information Processing Systems, 33:5824–5836, 2020.
[23] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, pages 1094–1100. PMLR, 2020.
[24] D. Zha, K.-H. Lai, K. Zhou, and X. Hu. Experience replay optimization. arXiv preprint arXiv:1906.08387, 2019.
[25] Y. Zhang and Q. Yang. An overview of multi-task learning. National Science Review, 5(1):30–43, 2018.
[26] Y. Zhang and Q. Yang. A survey on multi-task learning. IEEE Transactions on Knowledge and Data Engineering, 34(12):5586–5609, 2021.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/88092
dc.description.abstract: 多任務強化學習已成為一個具有挑戰性的問題,旨在減少強化學習的計算成本並利用任務之間的共享特徵來提高個別任務的性能。然而,一個關鍵挑戰在於確定應該在任務之間共享哪些特徵以及如何保留區分每個任務的獨特特徵。這個挑戰常常導致任務性能不平衡的問題,其中某些任務可能主導學習過程,而其他任務則被忽視。在本文中,我們提出了一種新方法,稱為共享-獨特特徵和任務感知的優先經驗重放,以提高訓練穩定性並有效利用共享和獨特特徵。我們引入了一種簡單而有效的任務特定嵌入方法,以保留每個任務的獨特特徵,以減輕任務性能不平衡的潛在問題。此外,我們將任務感知設置引入到優先經驗重放算法中,以適應多任務訓練並增強訓練的穩定性。我們的方法在 Meta-World 的資料測試集中實現了最先進的平均成功率,同時在所有任務上保持穩定的性能,避免了任務性能不平衡的問題。結果證明了我們的方法在應對多任務強化學習挑戰方面的有效性。 [zh_TW]
dc.description.abstract: Multi-task reinforcement learning (MTRL) has emerged as a challenging problem: it aims to reduce the computational cost of reinforcement learning and to leverage shared features among tasks to improve the performance of individual tasks.
However, a key challenge lies in determining which features should be shared across tasks and how to preserve the unique features that differentiate each task. This challenge often leads to the problem of task performance imbalance, where certain tasks may dominate the learning process while others are neglected.
In this paper, we propose a novel approach called shared-unique features along with task-aware prioritized experience replay to improve training stability and leverage shared and unique features effectively.
We incorporate simple yet effective task-specific embeddings to preserve the unique features of each task and mitigate the potential problem of task performance imbalance.
Additionally, we introduce task-aware settings to the prioritized experience replay (PER) algorithm to accommodate multi-task training and enhance training stability.
Our approach achieves state-of-the-art average success rates on the Meta-World benchmark while maintaining stable performance across all tasks, avoiding task performance imbalance issues. The results demonstrate the effectiveness of our method in addressing the challenges of MTRL. [en]
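The abstract describes a task-specific embedding that preserves each task's unique features alongside a shared representation. A minimal sketch of that general idea, purely illustrative (this is not the thesis's actual architecture; all names and dimensions below are invented for the example):

```python
import numpy as np

# Toy "shared-unique features" sketch: a shared trunk encodes the observation
# for all tasks, and a learned per-task embedding is concatenated so each task
# keeps features that differentiate it from the others.
rng = np.random.default_rng(0)

NUM_TASKS, OBS_DIM, SHARED_DIM, EMB_DIM = 10, 12, 32, 8  # invented sizes

W_shared = rng.normal(size=(OBS_DIM, SHARED_DIM)) * 0.1        # shared across tasks
task_embeddings = rng.normal(size=(NUM_TASKS, EMB_DIM)) * 0.1  # unique per task

def shared_unique_features(obs, task_id):
    shared = np.tanh(obs @ W_shared)         # shared features
    unique = task_embeddings[task_id]        # task-unique embedding
    return np.concatenate([shared, unique])  # input to the policy/critic head

obs = rng.normal(size=OBS_DIM)
feat = shared_unique_features(obs, task_id=3)
print(feat.shape)  # (40,)
```

In a real MTRL agent both `W_shared` and the task embeddings would be trained end-to-end; the sketch only shows the concatenation of shared and task-unique parts.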
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2023-08-08T16:15:43Z. No. of bitstreams: 0 [en]
dc.description.provenance: Made available in DSpace on 2023-08-08T16:15:43Z (GMT). No. of bitstreams: 0 [en]
dc.description.tableofcontents:
Verification Letter from the Oral Examination Committee i
Acknowledgements iii
摘要 v
Abstract vii
Contents ix
List of Figures xi
List of Tables xiii
Denotation xv
Chapter 1 Introduction 1
Chapter 2 Related Work 5
2.1 Multi-Task Learning 5
2.2 Multi-Task Reinforcement Learning 5
2.3 Off-Policy Reinforcement Learning and Experience Replay 6
Chapter 3 Method 9
3.1 Problem Definition 9
3.2 Shared-Unique Features 9
3.3 Task-Aware Prioritized Experience Replay 11
Chapter 4 Experimental Results 13
4.1 Setup 13
4.2 Results on Benchmark 14
4.3 Task Imbalance Performance Problem 15
4.4 Ablation Study 17
Chapter 5 Conclusion and Limitations 19
References 21
Appendix A — Experiment Details 25
A.1 Comparison of each task's performance in baselines 25
A.2 Results on MT50 27
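Section 3.3 of the table of contents covers task-aware prioritized experience replay. One plausible reading of "task-aware" PER, sketched here only for illustration (this is an assumption about the general technique, not the thesis's algorithm; the class and parameter names are invented):

```python
import random
from collections import defaultdict

# Toy task-aware PER sketch: keep a separate priority list per task and sample
# tasks uniformly first, so a single high-TD-error task cannot crowd the rest
# out of the replay batch the way one global priority queue could.
class TaskAwarePER:
    def __init__(self, alpha=0.6):
        self.alpha = alpha                 # priority exponent, as in standard PER
        self.buffers = defaultdict(list)   # task_id -> [(priority, transition)]

    def add(self, task_id, transition, td_error):
        priority = (abs(td_error) + 1e-6) ** self.alpha
        self.buffers[task_id].append((priority, transition))

    def sample(self, batch_size):
        tasks = list(self.buffers)
        batch = []
        for _ in range(batch_size):
            tid = random.choice(tasks)                  # uniform over tasks
            prios, trans = zip(*self.buffers[tid])
            batch.append(random.choices(trans, weights=prios, k=1)[0])
        return batch

buf = TaskAwarePER()
for task_id in range(3):
    for step in range(5):
        buf.add(task_id, transition=(task_id, step), td_error=0.1 * (step + 1))
print(len(buf.sample(8)))  # 8
```

A full implementation would also apply importance-sampling corrections and refresh priorities after each gradient step, as in standard PER [13].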
dc.language.iso: en
dc.subject: 多任務強化學習 [zh_TW]
dc.subject: 共享-獨特特徵 [zh_TW]
dc.subject: 經驗回放 [zh_TW]
dc.subject: 機器人學 [zh_TW]
dc.subject: 深度學習 [zh_TW]
dc.subject: Experience Replay [en]
dc.subject: Shared-Unique Features [en]
dc.subject: Robotics [en]
dc.subject: Multi-Task Reinforcement Learning [en]
dc.subject: Deep Learning [en]
dc.title: 具有共享及獨特特徵和優先權經驗回放的多任務強化學習 [zh_TW]
dc.title: Multi-Task Reinforcement Learning with Shared-Unique Features and Task-Aware Prioritized Experience Replay [en]
dc.type: Thesis
dc.date.schoolyear: 111-2
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 陳文進;葉梅珍;陳奕廷 [zh_TW]
dc.contributor.oralexamcommittee: Wen-Chin Chen;Mei-Chen Yeh;Yi-Ting Chen [en]
dc.subject.keyword: 多任務強化學習,經驗回放,共享-獨特特徵,機器人學,深度學習 [zh_TW]
dc.subject.keyword: Multi-Task Reinforcement Learning,Experience Replay,Shared-Unique Features,Robotics,Deep Learning [en]
dc.relation.page: 27
dc.identifier.doi: 10.6342/NTU202301033
dc.rights.note: 同意授權(限校園內公開) (access granted, restricted to campus)
dc.date.accepted: 2023-07-14
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 資訊工程學系 (Department of Computer Science and Information Engineering)
dc.date.embargo-lift: 2028-07-11
Appears in Collections: 資訊工程學系 (Department of Computer Science and Information Engineering)

Files in This Item:
File | Size | Format
ntu-111-2.pdf (restricted; not authorized for public access) | 4.26 MB | Adobe PDF


All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.
