NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92313
Full metadata record:
dc.contributor.advisor: 楊家驤 (zh_TW)
dc.contributor.advisor: Chia-Hsiang Yang (en)
dc.contributor.author: 陳世豪 (zh_TW)
dc.contributor.author: Shih-Hao Chen (en)
dc.date.accessioned: 2024-03-21T16:34:19Z
dc.date.available: 2024-03-22
dc.date.copyright: 2024-03-21
dc.date.issued: 2024
dc.date.submitted: 2024-01-23
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92313
dc.description.abstract: 強化學習已被廣泛應用於各種領域,尤其在智慧機器人領域已展現優異成果。本論文提出了一強化學習處理器,藉由完整的估測功能與訓練推論的平行處理,達成高效強化學習運算。本論文藉由演算法和硬體的共同設計優化,使強化學習的計算複雜度降低90%。並利用強化學習資料在空間和時間上的相關性,提出一種資料編碼方法以降低位元遮罩的資料大小達65%。處理器的硬體架構完整支援資料稀疏性與估測,並同時維持高硬體利用率。該處理器並整合訓練與推論的平行處理,以進一步降低執行時間。總體而言,本論文提出的處理器使執行強化學習演算法的延遲時間減少了89%。該處理器以40nm CMOS製程設計與製造,晶片達到209TOPS/W能量效率以及2341GOPS/mm²的面積效率。與過往文獻中的最佳設計相比,本晶片的能源效率和面積效率分別提高了7.1倍與7.3倍。 (zh_TW)
dc.description.abstract: Reinforcement learning has shown remarkable performance in the fields of autonomous vehicles and intelligent robots. This thesis presents a reinforcement learning processor with full speculation exploitation and parallel processing of inference and training. Through algorithm-hardware co-design, the computational complexity is reduced by 90%. A data encoding scheme that leverages both spatial and temporal data correlations is proposed to reduce the bitmask size by 65%. The architecture performs operations with full support for sparsity and speculation while maintaining high hardware utilization. The chip also supports parallel processing of inference and training to reduce latency. Overall, the proposed processor achieves an 89% reduction in processing time. Fabricated in 40-nm CMOS, the proposed reinforcement learning processor delivers an energy efficiency of 209 TOPS/W and an area efficiency of 2341 GOPS/mm². This work achieves 7.1× and 7.3× higher energy efficiency and area efficiency, respectively, than the state of the art. (en) [An illustrative sketch of the bitmask encoding idea follows this record.]
dc.description.provenance: Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-03-21T16:34:19Z. No. of bitstreams: 0 (en)
dc.description.provenance: Made available in DSpace on 2024-03-21T16:34:19Z (GMT). No. of bitstreams: 0 (en)
dc.description.tableofcontents:
Oral Examination Committee Certification
Chinese Abstract
Abstract
Contents
List of Figures
List of Tables
1 Introduction
2 Background and Related Work
2.1 Reinforcement Learning Algorithms
2.2 Sparsity and Speculation Exploitation
2.3 Memory Access in Reinforcement Learning Processing
3 Algorithm-Hardware Co-design
3.1 Binary Direct Feedback Alignment for Error Calculation
3.2 Full Speculation Exploitation
3.3 Data Encoding Scheme
3.3.1 Block Floating-Point Arithmetic
3.3.2 Hierarchical Bitmask Encoding
3.3.3 Difference Bitmask Encoding
3.4 Summary of Proposed Training Scheme
4 System Architecture
4.1 Parallel Processing of Inference and Training
4.2 Processing Core
4.2.1 Index Pairing Unit and Data Fetching Unit
4.2.2 Multipliers and Dynamic Accumulator
4.3 Speculation Core
4.4 Memory Access Minimization
4.4.1 Inter-core Data Casting
4.4.2 Block-wise Non-zero Data Ordering
4.4.3 Intra-core Data Casting
5 Experimental Verification
5.1 Chip Implementation
5.2 Performance Evaluation
5.3 Performance Comparison
6 Conclusion
References
dc.language.iso: en
dc.subject: 強化學習 (zh_TW)
dc.subject: 估測功能利用 (zh_TW)
dc.subject: 稀疏度利用 (zh_TW)
dc.subject: 平行處理 (zh_TW)
dc.subject: 數位積體電路 (zh_TW)
dc.subject: Speculation Exploitation (en)
dc.subject: Reinforcement Learning (en)
dc.subject: Digital Integrated Circuits (en)
dc.subject: Parallel Processing (en)
dc.subject: Sparsity Exploitation (en)
dc.title: 應用於強化學習之高能效神經網路處理器晶片 (zh_TW)
dc.title: An Energy-Efficient Neural Network Processor for Reinforcement Learning (en)
dc.type: Thesis
dc.date.schoolyear: 112-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 翁詠祿; 張錫嘉 (zh_TW)
dc.contributor.oralexamcommittee: Yeong-Luh Ueng; Hsie-Chia Chang (en)
dc.subject.keyword: 強化學習, 估測功能利用, 稀疏度利用, 平行處理, 數位積體電路 (zh_TW)
dc.subject.keyword: Reinforcement Learning, Speculation Exploitation, Sparsity Exploitation, Parallel Processing, Digital Integrated Circuits (en)
dc.relation.page: 32
dc.identifier.doi: 10.6342/NTU202400132
dc.rights.note: 未授權 (Not authorized)
dc.date.accepted: 2024-01-25
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science)
dc.contributor.author-dept: 電子工程學研究所 (Graduate Institute of Electronics Engineering)
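
Note on the encoding scheme: the abstract above describes a bitmask encoding that exploits spatial and temporal correlation in reinforcement-learning data, and the table of contents names two ingredients, Hierarchical Bitmask Encoding (3.3.2) and Difference Bitmask Encoding (3.3.3). The thesis's exact formats are not reproduced in this record, so the Python sketch below is only a plausible illustration of the general idea, under stated assumptions: sparsity patterns at consecutive time steps are highly correlated, so XOR-ing consecutive bitmasks yields a much sparser difference mask, which in turn compresses well with a two-level layout (one coarse bit per block, plus fine bits only for non-empty blocks). All helper names (bitmask, hierarchical_encode, difference_encode) are hypothetical.

import numpy as np

def bitmask(x):
    # 1 where the tensor entry is nonzero, 0 elsewhere
    return (np.asarray(x) != 0).astype(np.uint8)

def hierarchical_encode(mask, block=8):
    # Assumed two-level layout (not the thesis's exact format):
    # one coarse bit per block, plus fine bits only for non-empty blocks.
    mask = mask.ravel()
    mask = np.pad(mask, (0, (-mask.size) % block))
    blocks = mask.reshape(-1, block)
    coarse = blocks.any(axis=1)
    fine = blocks[coarse]  # fine bits kept only where coarse == 1
    return coarse.astype(np.uint8), fine

def difference_encode(prev_mask, curr_mask, block=8):
    # Temporal correlation: XOR against the previous step's bitmask
    # exposes only the few positions whose zero/nonzero state changed.
    diff = np.bitwise_xor(prev_mask, curr_mask)
    return hierarchical_encode(diff, block)

# Toy example: two consecutive, highly correlated sparse activations.
rng = np.random.default_rng(0)
a_t = (rng.random(64) < 0.2) * rng.random(64)   # ~80% zeros
a_t1 = a_t.copy()
flips = rng.choice(64, size=4, replace=False)   # few positions change state
a_t1[flips] = np.where(a_t1[flips] == 0, 0.5, 0.0)

coarse, fine = difference_encode(bitmask(a_t), bitmask(a_t1))
raw_bits = a_t1.size                 # dense bitmask: 64 bits
enc_bits = coarse.size + fine.size   # coarse bits + retained fine blocks
print(f"raw bitmask: {raw_bits} bits, difference-encoded: {enc_bits} bits")

A decoder would XOR the decoded difference mask back onto the previous step's bitmask to recover the current one. The 65% bitmask-size reduction reported in the abstract presumably stems from this kind of redundancy, though the actual scheme may differ.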
Appears in collections: 電子工程學研究所 (Graduate Institute of Electronics Engineering)

Files in this item:
File: ntu-112-1.pdf (not authorized for public access)
Size: 3.83 MB
Format: Adobe PDF