Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71407
Full metadata record (each row below: DC field: value [language])
dc.contributor.advisor: 簡韶逸 (Shao-Yi Chien)
dc.contributor.author: Heng Lee [en]
dc.contributor.author: 李亨 [zh_TW]
dc.date.accessioned: 2021-06-17T06:00:12Z
dc.date.available: 2019-02-19
dc.date.copyright: 2019-02-19
dc.date.issued: 2019
dc.date.submitted: 2019-02-12
dc.identifier.citation:
S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," arXiv preprint arXiv:1510.00149, 2015.
A. Zhou, A. Yao, Y. Guo, L. Xu, and Y. Chen, "Incremental network quantization: Towards lossless CNNs with low-precision weights," arXiv preprint arXiv:1702.03044, 2017.
Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks," IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127–138, 2017.
S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, "EIE: Efficient inference engine on compressed deep neural network," in Proc. 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE, 2016, pp. 243–254.
A. Ardakani, C. Condo, and W. J. Gross, "Sparsely-connected neural networks: Towards efficient VLSI implementation of deep neural networks," arXiv preprint arXiv:1611.01427, 2016.
J. Cheng, J. Wu, C. Leng, Y. Wang, and Q. Hu, "Quantized CNN: A unified approach to accelerate and compress convolutional networks," IEEE Transactions on Neural Networks and Learning Systems, 2017.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440.
C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in Proc. European Conference on Computer Vision (ECCV). Springer, 2014, pp. 184–199.
J. Kim, J. Kwon Lee, and K. Mu Lee, "Accurate image super-resolution using very deep convolutional networks," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1646–1654.
K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
S. Wang, D. Zhou, X. Han, and T. Yoshimura, "Chain-NN: An energy-efficient 1D chain architecture for accelerating deep convolutional neural networks," in Proc. 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017, pp. 1032–1037.
A. Ardakani, C. Condo, M. Ahmadi, and W. J. Gross, "An architecture to accelerate convolution in deep neural networks," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 65, no. 4, pp. 1349–1362, 2018.
Y. Guo, A. Yao, and Y. Chen, "Dynamic network surgery for efficient DNNs," in Proc. Advances in Neural Information Processing Systems, 2016, pp. 1379–1387.
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, "Quantized neural networks: Training neural networks with low precision weights and activations," Journal of Machine Learning Research, vol. 18, pp. 187:1–187:30, 2017.
Y. Gong, L. Liu, M. Yang, and L. Bourdev, "Compressing deep convolutional networks using vector quantization," arXiv preprint arXiv:1412.6115, 2014.
A. Polino, R. Pascanu, and D. Alistarh, "Model compression via distillation and quantization," arXiv preprint arXiv:1802.05668, 2018.
T.-J. Yang, Y.-H. Chen, and V. Sze, "Designing energy-efficient convolutional neural networks using energy-aware pruning," arXiv preprint arXiv:1611.05128, 2016.
H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, "Pruning filters for efficient convnets," arXiv preprint arXiv:1608.08710, 2016.
Y. He, X. Zhang, and J. Sun, "Channel pruning for accelerating very deep neural networks," in Proc. 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, 2017, pp. 1398–1406.
G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
Z. Cai, X. He, J. Sun, and N. Vasconcelos, "Deep learning with low precision by half-wave Gaussian quantization," in Proc. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017, pp. 5406–5414.
M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," in Proc. European Conference on Computer Vision (ECCV). Springer, 2016, pp. 525–542.
J. Albericio, P. Judd, T. Hetherington, T. Aamodt, N. E. Jerger, and A. Moshovos, "Cnvlutin: Ineffectual-neuron-free deep neural network computing," in ACM SIGARCH Computer Architecture News, vol. 44, no. 3. IEEE Press, 2016, pp. 1–13.
T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, "DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning," ACM SIGPLAN Notices, vol. 49, no. 4, pp. 269–284, 2014.
K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
dc.identifier.uri: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/71407
dc.description.abstract: Deep neural networks have demonstrated impressive performance on many edge computer vision tasks, so the demand for DNN accelerators on mobile phones and Internet-of-Things devices keeps growing. However, the enormous energy consumption and storage requirements make hardware design increasingly difficult. In this thesis we therefore propose a neural network accelerator based on a compression technique, vector quantization, that reduces the model size and the amount of computation at the same time. In addition, we design a specialized processing element and dataflows: the former provides different SRAM bank configurations, and the latter lets the accelerator support different convolution kernel sizes while maintaining high utilization even when the input or output dimension is very small. Compared with the current state-of-the-art neural network accelerator, the proposed accelerator reduces DRAM accesses by 3.94 times and reduces latency by 1.2 times for single-batch inference. [zh_TW]
dc.description.abstract: Deep neural networks (DNNs) have demonstrated impressive performance in many edge computer vision tasks, driving increasing demand for DNN accelerators on mobile and Internet-of-Things (IoT) devices. However, the massive power consumption and storage requirements make the hardware design challenging. In this thesis, we introduce a DNN accelerator based on a model compression technique, vector quantization (VQ), which reduces the network model size and the computation cost simultaneously. Moreover, a specialized processing element (PE) is designed with various SRAM bank configurations as well as dataflows, so that it can support different codebook and kernel sizes and keep high utilization even with small input or output channel counts. Compared to the state of the art, the proposed accelerator architecture achieves a 3.94 times reduction in memory access and a 1.2 times reduction in latency for batch-one inference. (A minimal sketch of the VQ idea follows this metadata record.) [en]
dc.description.provenance: Made available in DSpace on 2021-06-17T06:00:12Z (GMT). No. of bitstreams: 1
ntu-108-R05943046-1.pdf: 1872199 bytes, checksum: 0ed15119be538310f7e76d9b69c8173e (MD5)
Previous issue date: 2019 [en]
dc.description.tableofcontents: Abstract ... i
List of Figures ... v
List of Tables ... vii
1 Introduction ... 1
1.1 Motivation ... 1
1.2 Challenges ... 1
1.3 Keynote ... 2
1.3.1 Model compression ... 2
1.3.2 Batch size ... 3
1.3.3 Dataflow ... 3
1.4 Contribution ... 4
1.5 Thesis Organization ... 4
2 Background Knowledge and Related Work ... 5
2.1 Model Compression Method ... 6
2.1.1 Pruning ... 6
2.1.2 Quantization ... 7
2.2 Neural Network Accelerators ... 8
2.2.1 Convolutional Layer ... 9
2.2.2 Co-design with Algorithm ... 10
2.3 Vector Quantization ... 12
3 Proposed Architecture ... 17
3.1 Overview ... 17
3.2 Processing Element - Baseline ... 19
3.2.1 Precompute Stage ... 20
3.2.2 Dispatch Stage ... 22
3.2.3 Accumulation Stage ... 22
3.2.4 SRAM in PE ... 22
3.3 Processing Element - Improved ... 24
3.3.1 Dispatch Stage ... 24
3.3.2 Precompute Stage ... 24
3.3.3 Accumulation Stage ... 25
3.4 On-Chip SRAM ... 25
3.4.1 Input SRAM ... 25
3.4.2 Output SRAM ... 26
3.5 Dataflow ... 27
3.5.1 Weight Stationary ... 27
3.5.2 Row Stationary-like ... 28
4 Implementation and Experimental Results ... 31
4.1 Implementation ... 31
4.1.1 Processing Element ... 31
4.1.2 Proposed Architecture ... 31
4.2 Experimental Results ... 32
5 Conclusions ... 37
Reference ... 39
dc.language.iso: zh-TW
dc.subject: 神經網路加速器 (neural network accelerator) [zh_TW]
dc.subject: 模型壓縮 (model compression) [zh_TW]
dc.subject: 向量量化 (vector quantization) [zh_TW]
dc.subject: model compression [en]
dc.subject: neural network accelerator [en]
dc.subject: vector quantization [en]
dc.title: 基於向量量化之卷積神經網路處理器架構設計 (Convolutional Neural Network Accelerator with Vector Quantization) [zh_TW]
dc.title: Convolutional Neural Network Accelerator with Vector Quantization [en]
dc.type: Thesis
dc.date.schoolyear: 107-1
dc.description.degree: 碩士 (Master)
dc.contributor.oralexamcommittee: 劉宗德 (Tsung-Te Liu), 盧奕璋 (Yi-Chang Lu), 賴伯承 (Bo-Cheng Lai)
dc.subject.keyword: 神經網路加速器, 向量量化, 模型壓縮 (neural network accelerator, vector quantization, model compression) [zh_TW]
dc.subject.keyword: neural network accelerator, vector quantization, model compression [en]
dc.relation.page: 39
dc.identifier.doi: 10.6342/NTU201900441
dc.rights.note: 有償授權 (paid authorization required)
dc.date.accepted: 2019-02-12
dc.contributor.author-college: 電機資訊學院 (College of Electrical Engineering and Computer Science) [zh_TW]
dc.contributor.author-dept: 電子工程學研究所 (Graduate Institute of Electronics Engineering) [zh_TW]
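
To make the vector quantization (VQ) idea described in the abstract concrete, here is a minimal NumPy sketch. It is not the thesis's implementation: the codebook size, sub-vector length, the plain k-means clustering, and all function names are illustrative assumptions. It shows the two points the abstract relies on: weights are stored as a small codebook plus per-sub-vector codeword indices, and a dot product can be evaluated by precomputing the input-codeword products once and then accumulating them by index lookup, which roughly corresponds to the precompute and accumulation stages listed in the table of contents.

# Minimal, self-contained sketch of vector-quantized weights (illustrative only).
import numpy as np

def build_codebook(weights, num_codewords=16, subvec_len=4, iters=20, seed=0):
    """Cluster weight sub-vectors into a codebook with plain k-means.

    weights: (num_outputs, input_len), input_len divisible by subvec_len.
    Returns the codebook (num_codewords, subvec_len) and, per output row,
    the codeword index of each of its sub-vectors.
    """
    rng = np.random.default_rng(seed)
    subvecs = weights.reshape(-1, subvec_len)
    codebook = subvecs[rng.choice(len(subvecs), num_codewords, replace=False)].copy()
    for _ in range(iters):
        # assignment step: nearest codeword by squared Euclidean distance
        dists = ((subvecs[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        assign = dists.argmin(axis=1)
        # update step: each codeword becomes the mean of its assigned sub-vectors
        for k in range(num_codewords):
            members = subvecs[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, assign.reshape(weights.shape[0], -1)

def vq_matvec(x, codebook, indices):
    """Compute weights @ x using only the codebook and the stored indices.

    The input is split into the same sub-vectors; the products between each
    input sub-vector and every codeword are precomputed once ('precompute'),
    then each output gathers and sums them by index lookup ('accumulation').
    """
    subvec_len = codebook.shape[1]
    x_sub = x.reshape(-1, subvec_len)        # (num_sub, subvec_len)
    partial = x_sub @ codebook.T             # (num_sub, num_codewords), computed once
    rows = np.arange(x_sub.shape[0])
    # indices: (num_outputs, num_sub); pick one precomputed product per sub-position
    return np.array([partial[rows, idx].sum() for idx in indices])

# Tiny usage example: the VQ result matches a dense matvec with the reconstructed weights.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.standard_normal((8, 32))          # 8 outputs, 32 inputs
    x = rng.standard_normal(32)
    codebook, idx = build_codebook(W, num_codewords=16, subvec_len=4)
    W_hat = codebook[idx].reshape(W.shape)    # lossy, reconstructed weights
    print(np.allclose(vq_matvec(x, codebook, idx), W_hat @ x))  # True

Because each weight sub-vector is replaced by a log2(K)-bit index into a K-entry codebook, the stored model shrinks, and because the precomputed input-codeword products are shared by every output channel, the multiplication count drops as well; this is the property a VQ-based accelerator can exploit in hardware.
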
Appears in collections: 電子工程學研究所 (Graduate Institute of Electronics Engineering)

Files in this item:
File: ntu-108-1.pdf (restricted access; not authorized for public release)
Size: 1.83 MB
Format: Adobe PDF