Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73457

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 郭大維(Tei-Wei Kuo) | |
| dc.contributor.author | Yao-Wen Kang | en |
| dc.contributor.author | 康耀文 | zh_TW |
| dc.date.accessioned | 2021-06-17T07:36:00Z | - |
| dc.date.available | 2024-05-10 | |
| dc.date.copyright | 2019-05-10 | |
| dc.date.issued | 2019 | |
| dc.date.submitted | 2019-04-22 | |
| dc.identifier.citation | [1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[2] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013.
[3] Ping Chi, Shuangchen Li, Cong Xu, Tao Zhang, Jishen Zhao, Yongpan Liu, Yu Wang, and Yuan Xie. PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. In ACM SIGARCH Computer Architecture News, volume 44, pages 27–39. IEEE Press, 2016.
[4] Stephen W. Keckler, William J. Dally, Brucek Khailany, Michael Garland, and David Glasco. GPUs and the future of parallel computing. IEEE Micro, (5):7–17, 2011.
[5] H.-S. Philip Wong, Heng-Yuan Lee, Shimeng Yu, Yu-Sheng Chen, Yi Wu, Pang-Shiu Chen, Byoungil Lee, Frederick T. Chen, and Ming-Jinn Tsai. Metal-oxide RRAM. Proceedings of the IEEE, 100(6):1951–1970, 2012.
[6] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[7] Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu, Sen Song, et al. Going deeper with embedded FPGA platform for convolutional neural network. In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pages 26–35. ACM, 2016.
[8] Clément Farabet, Berin Martini, Polina Akselrod, Selçuk Talay, Yann LeCun, and Eugenio Culurciello. Hardware accelerated convolutional neural networks for synthetic vision systems. In Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on, pages 257–260. IEEE, 2010.
[9] Ali Shafiee, Anirban Nag, Naveen Muralimanohar, Rajeev Balasubramonian, John Paul Strachan, Miao Hu, R. Stanley Williams, and Vivek Srikumar. ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars. ACM SIGARCH Computer Architecture News, 44(3):14–26, 2016.
[10] Linghao Song, Xuehai Qian, Hai Li, and Yiran Chen. PipeLayer: A pipelined ReRAM-based accelerator for deep learning. In High Performance Computer Architecture (HPCA), 2017 IEEE International Symposium on, pages 541–552. IEEE, 2017.
[11] Tianqi Tang, Lixue Xia, Boxun Li, Yu Wang, and Huazhong Yang. Binary convolutional neural network on RRAM. In Design Automation Conference (ASP-DAC), 2017 22nd Asia and South Pacific, pages 782–787. IEEE, 2017.
[12] Miao Hu, John Paul Strachan, Zhiyong Li, Emmanuelle M. Grafals, Noraica Davila, Catherine Graves, Sity Lam, Ning Ge, Jianhua Joshua Yang, and R. Stanley Williams. Dot-product engine for neuromorphic computing: programming 1T1M crossbar to accelerate matrix-vector multiplication. In Proceedings of the 53rd Annual Design Automation Conference, page 19. ACM, 2016.
[13] K. C. Hsu, F. M. Lee, Y. Y. Lin, E. K. Lai, J. Y. Wu, D. Y. Lee, M. H. Lee, H. L. Lung, K. Y. Hsieh, and C. Y. Lu. A study of array resistance distribution and a novel operation algorithm for WOx ReRAM memory. In Proc. Int. Conf. SSDM, pages 1168–1169, 2015.
[14] M. Ueki, K. Takeuchi, T. Yamamoto, A. Tanabe, N. Ikarashi, M. Saitoh, T. Nagumo, H. Sunamura, M. Narihiro, K. Uejima, et al. Low-power embedded ReRAM technology for IoT applications. In 2015 Symposium on VLSI Technology (VLSI Technology), pages T108–T109. IEEE, 2015.
[15] Xiaoyong Xue, Wenxiang Jian, Jianguo Yang, Fanjie Xiao, Gang Chen, Shuliu Xu, Yufeng Xie, Yinyin Lin, Ryan Huang, Qingtian Zou, et al. A 0.13 µm 8 Mb logic-based CuxSiyO ReRAM with self-adaptive operation for yield enhancement and power reduction. IEEE Journal of Solid-State Circuits, 48(5):1315–1322, 2013.
[16] Shimeng Yu, Ximeng Guan, and H.-S. Philip Wong. On the switching parameter variation of metal oxide RRAM, Part II: Model corroboration and device design strategy. IEEE Transactions on Electron Devices, 59(4):1183–1188, 2012.
[17] Jaeyun Yi, Hyejung Choi, Seunghwan Lee, Jaeyeon Lee, Donghee Son, Sangkeum Lee, Sangmin Hwang, Seokpyo Song, Jinwon Park, Sookjoo Kim, et al. Highly reliable and fast nonvolatile hybrid switching ReRAM memory using thin Al2O3 demonstrated at 54 nm memory array. In VLSI Technology (VLSIT), 2011 Symposium on, pages 48–49. IEEE, 2011.
[18] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 675–678. ACM, 2014.
[19] Yann LeCun, Corinna Cortes, and C. J. Burges. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010. | |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/73457 | - |
| dc.description.abstract | 龐大的深度類神經網路運算導致密集的記憶體存取,因而限制了馮諾伊曼架構之效能。為了彌平這樣的效能落差,以記憶體作為運算主體的架構被廣泛提倡;在這些研究之中,縱橫式可變電阻記憶體加速器為一大主要解決方案。然而,可變電阻記憶體之寫入偏差會導致此類加速器有嚴重的準確度問題。為了改善此準確度問題,我們提出了自適應資料表示法,用以降低因可變電阻記憶體寫入偏差所導致的錯誤。我們基於真實的可變電阻記憶體晶片數據進行了一系列模擬實驗,結果顯示我們提出之方法可讓 MNIST 之辨識準確率提升 20%,CIFAR10 之辨識準確率提升 40%。 | zh_TW |
| dc.description.abstract | Current deep neural network computations incur intensive memory accesses and thus limit the performance of the Von Neumann architecture. To bridge this performance gap, Processing-In-Memory (PIM) architectures are widely advocated, and crossbar accelerators built on Resistive Random-Access Memory (ReRAM) are one of the intensively studied solutions. However, due to the programming variation of ReRAM, crossbar accelerators suffer from a serious accuracy issue. To improve the accuracy, we propose an adaptive data representation strategy that minimizes the analog variation errors caused by the programming variation of ReRAM. The proposed strategy was evaluated through a series of intensive experiments based on data collected from real ReRAM chips. The results show that it improves accuracy by around 20% on MNIST, which is close to the ideal case, and by around 40% on CIFAR10. | en |
| dc.description.provenance | Made available in DSpace on 2021-06-17T07:36:00Z (GMT). No. of bitstreams: 1 ntu-108-R06922036-1.pdf: 7384673 bytes, checksum: 66bf497cc01148f47dbb2563f130fd25 (MD5) Previous issue date: 2019 | en |
| dc.description.tableofcontents | 口試委員審定書 i
中文摘要 ii
Abstract iii
Contents iv
List of Figures v
1 Introduction 1
2 Background and Motivation 4
2.1 Background 4
2.2 Motivation 6
3 Adaptive Data Representation Strategy 12
3.1 Overview 12
3.2 Adaptive Input Subcycling Policy (AISP) 12
3.3 Weight Rounding Policy (WRP) 14
4 Performance Evaluation 18
4.1 Performance Metrics and Evaluation Setup 18
4.2 Evaluation Results 19
4.2.1 Normalized Average MAC Deviation 19
4.2.2 MNIST Inference Accuracy 21
4.2.3 CIFAR10 Inference Accuracy 23
5 Conclusion 26
Bibliography 27 | |
| dc.language.iso | en | |
| dc.subject | 類神經網路 | zh_TW |
| dc.subject | 可變電阻記憶體 | zh_TW |
| dc.subject | 記憶體運算 | zh_TW |
| dc.subject | 縱橫式 | zh_TW |
| dc.subject | Crossbar | en |
| dc.subject | Neural Network | en |
| dc.subject | Resistive Random-Access Memory (ReRAM) | en |
| dc.subject | Processing-In-Memory (PIM) | en |
| dc.title | 以基於縱橫式可變電阻記憶體之類神經網路加速器的自適應資料表示法降低類比計算誤差 | zh_TW |
| dc.title | Adaptive Data Representation to Decrease Analog Variation Error of ReRAM Crossbar Accelerator for Neural Networks | en |
| dc.type | Thesis | |
| dc.date.schoolyear | 107-2 | |
| dc.description.degree | 碩士 | |
| dc.contributor.coadvisor | 張原豪(Yuan-Hao Chang) | |
| dc.contributor.oralexamcommittee | 楊佳玲(Chia-Lin Yang),洪士灝(Shih-Hao Hung),施吉昇(Chi-Sheng Shih) | |
| dc.subject.keyword | 類神經網路,可變電阻記憶體,記憶體運算,縱橫式 | zh_TW |
| dc.subject.keyword | Neural Network, Resistive Random-Access Memory (ReRAM), Processing-In-Memory (PIM), Crossbar | en |
| dc.relation.page | 29 | |
| dc.identifier.doi | 10.6342/NTU201900719 | |
| dc.rights.note | 有償授權 | |
| dc.date.accepted | 2019-04-23 | |
| dc.contributor.author-college | 電機資訊學院 | zh_TW |
| dc.contributor.author-dept | 資訊工程學研究所 | zh_TW |
| Appears in Collections: | 資訊工程學系 (Department of Computer Science and Information Engineering) |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-108-1.pdf (Restricted Access) | 7.21 MB | Adobe PDF |
All items in this repository are protected by copyright, with all rights reserved, unless otherwise indicated.
