Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49886
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 楊佳玲(Chia-Lin Yang) | |
dc.contributor.author | Wei-Ting Lin | en |
dc.contributor.author | 林蔚廷 | zh_TW |
dc.date.accessioned | 2021-06-15T12:25:44Z | - |
dc.date.issued | 2021 | |
dc.date.submitted | 2021-04-12 | |
dc.identifier.citation | A. Shafiee et al., "ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars," in ISCA, 2016, pp. 14–26. P. Chi et al., "PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory," in ISCA, 2016, pp. 27–39. W. H. Chen et al., "A 65nm 1Mb nonvolatile computing-in-memory ReRAM macro with sub-16ns multiply-and-accumulate for binary DNN AI edge processors," in ISSCC, 2018, pp. 494–496. F. Su et al., "A 462GOPS/J RRAM-based nonvolatile intelligent processor for energy harvesting IoE system featuring nonvolatile logics and processing-in-memory," in VLSI-TSA, 2017, pp. T260–T261. P. Y. Chen et al., "NeuroSim: A circuit-level macro model for benchmarking neuro-inspired architectures in online learning," IEEE TCAD, pp. 1–1, 2018. L. Xia et al., "MNSIM: Simulation platform for memristor-based neuromorphic computing system," IEEE TCAD, pp. 1–1, 2017. M. Hu et al., "Dot-product engine for neuromorphic computing: Programming 1T1M crossbar to accelerate matrix-vector multiplication," in DAC, 2016, pp. 1–6. R. Balasubramonian, A. B. Kahng, N. Muralimanohar, A. Shafiee, and V. Srinivas, "CACTI 7: New tools for interconnect exploration in innovative off-chip memories," ACM TACO, vol. 14, no. 2, Jun. 2017. M. Saberi et al., "Analysis of power consumption and linearity in capacitive digital-to-analog converters used in successive approximation ADCs," pp. 1736–1748, 2011. Y. LeCun et al., "Gradient-based learning applied to document recognition," Proceedings of the IEEE, pp. 2278–2324, 1998. L. Wolf et al., "Face recognition in unconstrained videos with matched background similarity," 2011, pp. 529–534. Y. Sun et al., "Deep learning face representation from predicting 10,000 classes," 2014, pp. 1891–1898. A. Krizhevsky, "Learning multiple layers of features from tiny images," University of Toronto, 2012. J. Deng et al., "ImageNet: A large-scale hierarchical image database," 2009, pp. 248–255. A. Krizhevsky et al., "ImageNet classification with deep convolutional neural networks," Neural Information Processing Systems, 2012. P. Sermanet et al., "OverFeat: Integrated recognition, localization and detection using convolutional networks," 2014. Y. N. Wu et al., "An architecture-level energy and area estimator for processing-in-memory accelerator designs," 2020, pp. 116–118. I. Chakraborty et al., "GenieX: A generalized approach to emulating non-ideality in memristive xbars using neural networks," 2020. M. K. F. Lee et al., "A system-level simulator for RRAM-based neuromorphic computing chips," 2019. | |
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/49886 | - |
dc.description.abstract | 可變電阻式記憶體之神經網路加速器利用記憶體式運算技術計算類神經演算法,記憶體式運算技術能使得記憶體不僅能夠儲存神經網路之權重,還能夠實現向量與矩陣乘法,因此可以提升系統的能源效率。由於不同類神經網路的權重配置、排程以及硬體設置皆會影響加速器的效能和耗能,為了設計高效率或低耗能的可變電阻式記憶體之神經網路加速器,我們會需要一套模擬框架分析不同設計對系統的效能與耗能的影響。 此篇論文中,我們提出了一個可變電阻式記憶體之神經網路加速器模擬框架,此框架可根據使用者選擇的權重配置方法、排程方式與硬體配置當作輸入參數,模擬加速器的效能與耗能,此模擬框架由數個模組組成,使用者除了使用預設選項當模擬器的輸入外,還能夠改寫模擬框架的模組,來彈性地達成不同的權重配置、排程。 我們使用多個不同的卷積神經網路為討論對象,以證實此模擬框架可提供使用者設計觀點,幫助設計可變電阻式記憶體之神經網路加速器。 | zh_TW |
dc.description.abstract | A ReRAM-based Neural Network (NN) accelerator uses in-memory computing to execute NN algorithms. In-memory computing lets the memory not only store the NN's weights but also perform matrix-vector multiplication, which improves the energy efficiency of the system. The mapping of the NN's weights, the scheduling policy, and the hardware configuration all affect the accelerator's performance and energy consumption, so designing a high-performance, low-energy ReRAM-based NN accelerator requires a flexible simulation platform. In this thesis, we propose a simulation framework for ReRAM-based NN accelerators. Users select the mapping policy, scheduling policy, and hardware configuration as input parameters to simulate the accelerator's performance and energy consumption. The framework consists of multiple modules; besides using the default input options, users can rewrite a module of the framework to flexibly implement different weight mappings and schedules. We use several convolutional neural networks as case studies to show that the framework provides insights that help users design ReRAM-based NN accelerators. | en |
dc.description.provenance | Made available in DSpace on 2021-06-15T12:25:44Z (GMT). No. of bitstreams: 1 U0001-1104202122171300.pdf: 5102656 bytes, checksum: 57cfa5c233d46de9fca4e497b9f1ee6a (MD5) Previous issue date: 2021 | en |
dc.description.tableofcontents | 摘要 ii Abstract iii 誌謝 iv Contents v 1 Introduction 1 2 Background 3 2.1 Convolution Neural Network 3 2.2 ReRAM crossbar array 5 2.3 Perform vector-matrix multiplication in CNN with ReRAM crossbar array 6 3 Architecture 7 3.1 ReRAM-based deep learning accelerator architecture 7 3.2 Data Flow 9 4 Performance and Energy Related Deployment Strategies 11 4.1 Tiling 12 4.2 Mapping 13 4.3 Scheduling 14 5 Simulation Framework 16 5.1 Computation Order Generation Module 17 5.2 Performance and Energy Simulation Module 20 6 Experimental Results 22 6.1 Validation 22 6.2 Case studies for performance and energy efficiency 23 6.2.1 Tiling 24 6.2.2 Mapping 26 6.2.3 Scheduling 28 6.2.4 Impact of the eDRAM buffer capacity 29 6.3 The architectural states of different tiling/mapping/scheduling policies 32 7 Related Work 35 8 Conclusion 37 Reference 38 | |
dc.language.iso | en | |
dc.title | 可變電阻式記憶體類神經網路加速器: 效能與耗能模擬框架 | zh_TW |
dc.title | ReRAM-based Neural Network Accelerator: Performance and Energy Consumption Simulation Framework | en |
dc.type | Thesis | |
dc.date.schoolyear | 109-2 | |
dc.description.degree | 碩士 (Master's) | |
dc.contributor.oralexamcommittee | 鄭湘筠(Hsiang-Yun Cheng),張原豪(Yuan-Hao Chang) | |
dc.subject.keyword | 神經網路, 可變電阻式記憶體, 加速器架構, 模擬器 | zh_TW |
dc.subject.keyword | Neural network, ReRAM, Accelerator architecture, Simulator | en |
dc.relation.page | 39 | |
dc.identifier.doi | 10.6342/NTU202100826 | |
dc.rights.note | 有償授權 (licensed for a fee) | |
dc.date.accepted | 2021-04-13 | |
dc.contributor.author-college | 電機資訊學院 | zh_TW |
dc.contributor.author-dept | 資訊網路與多媒體研究所 | zh_TW |
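The abstract above describes a modular simulator whose inputs are a weight-mapping policy, a scheduling policy, and a hardware configuration, with user-replaceable modules. A minimal sketch of such a pluggable interface might look like the following; all class, method, and parameter names here are illustrative assumptions, not the thesis's actual API, and the mapping shown is just the simplest tiling of a weight matrix onto fixed-size crossbars.

```python
from dataclasses import dataclass


@dataclass
class HardwareConfig:
    # Assumed illustrative parameters; the thesis studies, e.g.,
    # the impact of eDRAM buffer capacity (Sec. 6.2.4).
    crossbar_size: int = 128   # ReRAM cells per crossbar row/column
    edram_buffer_kb: int = 64  # on-chip eDRAM buffer capacity


def ceil_div(n: int, d: int) -> int:
    """Ceiling division without floating point."""
    return -(-n // d)


class DefaultMapping:
    """Maps a layer's weight matrix onto crossbar tiles."""

    def map(self, layer_shape, hw):
        rows, cols = layer_shape
        # Number of crossbars needed to tile a (rows x cols) weight matrix.
        return ceil_div(rows, hw.crossbar_size) * ceil_div(cols, hw.crossbar_size)


class Simulator:
    def __init__(self, mapping=None, hw=None):
        # Modules are pluggable: users may pass their own mapping policy
        # object instead of rewriting the framework.
        self.mapping = mapping or DefaultMapping()
        self.hw = hw or HardwareConfig()

    def crossbars_for(self, layer_shape):
        return self.mapping.map(layer_shape, self.hw)


print(Simulator().crossbars_for((512, 256)))  # prints 8 (4 x 2 tiles)
```

Swapping in a custom `DefaultMapping` replacement, or a `HardwareConfig` with a different `crossbar_size`, changes the tile count without touching the simulator core, which mirrors the modular design the abstract describes.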
Appears in Collections: | 資訊網路與多媒體研究所
Files in This Item:
File | Size | Format | |
---|---|---|---|
U0001-1104202122171300.pdf (currently not authorized for public access) | 4.98 MB | Adobe PDF |
All items in the system are protected by copyright, with all rights reserved, unless otherwise indicated.