NTU Theses and Dissertations Repository
Please use this Handle URI to cite this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92085
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | 楊佳玲 | zh_TW
dc.contributor.advisor | Chia-Lin Yang | en
dc.contributor.author | 魏旻良 | zh_TW
dc.contributor.author | Ming-Liang Wei | en
dc.date.accessioned | 2024-03-05T16:12:59Z | -
dc.date.available | 2024-03-06 | -
dc.date.copyright | 2024-03-05 | -
dc.date.issued | 2024 | -
dc.date.submitted | 2024-02-05 | -
dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/92085 | -
dc.description.abstract | 作為人工智能領域的一項重大進展,脈衝神經網絡以其獨特且受大腦啟發的解題、推理和推斷方法,展現了巨大潛力。這種潛力促使眾多組織和企業投入於算法和硬件設計的開發,目標是應對並克服未來的技術挑戰。然而,現有的硬件和算法解決方案在仿真硬件的成本效益及深度神經模型的推理效率方面存在局限。本研究提出了一種旨在實現高成本效益和高效運作的硬件設計與算法改進。在硬體層面,該硬件使用計算型非揮發性記憶體,其特點是無需權重移動、低比特成本和提高的面積效率,這些特性共同提升了處理能力和成本效益。在算法層面,本研究開發了專門為脈衝神經網絡加速器設計的神經模型。為了實現此硬件與演算法的共同優化,該研究著重於三個部分:(1) 由於記憶體單元的不完美而導致的可靠性降低;(2) 非揮發性記憶體的大粒度與高延遲造成效能損失;以及 (3) 為實現準確的計算結果而需多次脈衝處理次數。針對圖像識別與解題應用,我們對各種記憶裝置進行了可靠性分析。此外,我們提出了架構設計以減緩處理速度損失,並提出將神經模型轉化為二進制神經網絡,以微小的準確度犧牲,換得顯著提升分類推理的能效。我們的研究結果指出,在校準具有小開關比和高信噪比的單元時,會有巨大的電容成本,為此應選用電晶體為主的記憶體。此外,我們的設計在MAX-CUT、數獨和LASSO任務上明顯優於以往的數位型SNN處理器,分別實現了3.1倍、1.8倍和2.2倍的加速。最後,與4位元SNN相比,我們整合的容錯二進制神經網絡不僅將電容大小減半,能源消耗減少了57%,同時也大幅降低了兩個數量級的延遲,並保持了分類的準確性。 | zh_TW
dc.description.abstract | As a significant advancement in the field of artificial intelligence, spiking neural networks showcase a unique, brain-inspired approach to computational problem-solving, reasoning, and inference. This potential drives organizations and industries to invest in algorithm and hardware development, aiming for a substantial leap in addressing imminent technological challenges. However, existing hardware and algorithmic solutions lack cost-effectiveness in emulation hardware and efficiency in the inference of deep neural models. This thesis presents a hardware design and algorithmic improvements aimed at cost-effective and efficient operation. From a hardware perspective, the design utilizes computational non-volatile memory, characterized by operation without weight movement, lower bit costs, and improved area efficiency, which collectively enhance processing throughput at a reduced cost compared to prior architectures. On the algorithmic front, this study develops a neural model specially tailored for a spiking neural network accelerator. To fully implement this inference system, three critical challenges must be addressed: (1) the decline in reliability due to imperfections in memory cells, (2) the considerable size and extended latency of non-volatile memory macros, and (3) the requirement for multiple spiking processing cycles to achieve accurate computational outcomes. In response, we conduct a reliability analysis of various memory devices for image classification and optimization problem-solving. Additionally, we explore architectural designs that counteract losses in processing throughput, especially when dealing with sparse inputs or a limited input degree in network model inference. We also propose transforming neural models into binary neural networks to substantially improve processing speed and energy efficiency with a minor sacrifice in accuracy for image classification. Our results highlight the substantial capacitance expense incurred when calibrating cells with a small ON-OFF ratio and a high signal-to-noise ratio; transistor-based memory is therefore suggested. The proposed design significantly outperforms previous digital SNN processors, achieving processing speeds that are 3.1x, 1.8x, and 2.2x faster for MAX-CUT, SUDOKU, and LASSO tasks, respectively. Finally, compared to 4-bit SNNs, integrating error-resilient binary neural networks (ER-BNN) into the probabilistic inference machine not only cuts the capacitor size by 50% and reduces energy consumption by 57%, but also decreases latency by two orders of magnitude, all while preserving classification accuracy. | en
dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2024-03-05T16:12:59Z; No. of bitstreams: 0 | en
dc.description.provenance | Made available in DSpace on 2024-03-05T16:12:59Z (GMT). No. of bitstreams: 0 | en
dc.description.tableofcontents | (outline below) | -
Doctoral Dissertation Acceptance Certificate
致謝 (Acknowledgements)
摘要 (Chinese Abstract)
Abstract
Contents
List of Figures
List of Tables
Denotation
Chapter 1 Introduction
1.1 Inference of Spiking Neural Network
1.2 Reliability Analysis for Non-volatile Synaptic Devices
1.3 Architecture Design for Efficient Inference
1.4 Algorithmic Enhancement for Image Recognition
1.5 Organization of the Dissertation
Chapter 2 Related Work
2.1 Reliability Analysis of Processing in Memory
2.2 Spiking Neural Network Architecture
2.3 Binarized Spiking Neural Network
Chapter 3 Impact of Non-volatile Memory Devices on Spiking Neural Network
3.1 Background and Motivation
3.1.1 Probabilistic Inference with Spiking Neurons
3.1.2 SNN Model and IF Circuit
3.1.3 Precision Modulation for Image Recognition
3.1.3.1 Neural Network with Quantized Input
3.1.3.2 Precision Modulation
3.1.4 Stochastic Annealing for Optimization Solver
3.1.4.1 Constraint Satisfaction Problem
3.1.4.2 SNN Annealing Machine
3.1.5 Unreliability Issues in SNNs
3.1.5.1 Impact of Memory Cells
3.1.5.2 Impact of Circuit Constraints
3.1.5.3 Overall Scope of Reliability Analysis
3.2 Design Tradeoff
3.2.0.1 Impact of Capacitance
3.2.0.2 Impact of Cell Current on Cmem
3.2.0.3 Current Variation
3.2.0.4 Leak Current from IOFF
3.3 Reliability Evaluation for Image Classification
3.3.1 Evaluation Setup
3.3.1.1 Simulation Framework
3.3.1.2 Collection of Various Technologies
3.3.2 Experiment Result
3.3.2.1 ON-OFF Ratio
3.3.2.2 ON-state Current
3.3.2.3 Normalized Standard Deviation
3.3.3 Discussion
3.4 Reliability Evaluation for Annealing Purpose
3.4.1 Evaluation Setup
3.4.1.1 Collection of CSP Models
3.4.1.2 SNN Model Setup
3.4.1.3 Energy Model Setup
3.4.1.4 Collection of Memory Devices
3.4.2 Experiment Result
3.4.2.1 Impact of Current Variation
3.4.2.2 Cost of Calibrating Current Variation
3.4.2.3 Impact of Leak Current
3.4.2.4 Impact of Temperature
Chapter 4 A Neuromorphic Spiking Signal Processor with Advanced Memory Technology
4.1 Background
4.1.1 SNN Annealing Machine
4.1.1.1 Recurrent-structured SNN
4.1.1.2 Formulation of Optimization Problems
4.1.1.3 Solving Optimization Problems by SNN
4.1.2 Spike Signal Processor
4.1.2.1 Tick-batching
4.1.2.2 Digital Synapse Operation
4.1.2.3 Synapse Operation in 3D-NOR Flash
4.2 Motivation
4.2.1 Opportunity of 3D-NOR Flash Synaptic Array for Solving Optimization Problems
4.2.1.1 Computing-dominated Case
4.2.1.2 Switching-dominated Case
4.2.2 Challenge of Applying 3D-NOR Flash as Synaptic Array
4.2.2.1 Sparse Input Spike
4.2.2.2 Small Input Degree
4.2.2.3 Ineffective Pruning
4.3 Proposed Architecture: Neureka
4.3.1 Time Coarsening Unit
4.3.2 Reduction of Input Granularity of Synaptic Array
4.3.3 Skipping Polarized Neurons for Optimization Applications
4.4 Evaluation Setup
4.4.1 Latency Evaluation
4.4.2 Evaluation Across Diverse Applications
4.4.3 Implementation of Heuristic Methods
4.5 Evaluation Result
4.5.1 End-to-End Comparison
4.5.2 Effect of Skipping and Coarsening
4.5.3 Discussion of Neural Firing Rate Scaling
4.5.4 Comparison with CPU and GPU
Chapter 5 Efficient and Error-resilient Spiking Neural Networks through Binarization
5.1 Background and Motivation
5.1.1 Spiking Neural Networks (SNNs)
5.1.2 Existing Challenges in SNNs
5.1.3 Binarized Neural Networks (BNNs)
5.1.4 Binarized Spiking Neural Networks (BSNNs)
5.2 System Model
5.2.1 Binarized Neural Networks (BNNs)
5.2.2 Spiking Neural Networks (SNNs)
5.3 Impacts of Binarization on SNNs
5.3.1 Impact of Binarization in SNNs on Membrane Capacitor
5.3.2 Impact of Binarization in SNNs on Latency
5.3.3 Impact of Binarization in SNNs on Energy
5.3.4 Errors by Cmem Reduction and Countermeasures
5.4 Evaluation
5.4.1 Experiment Setup
5.4.2 Experiment Results
Chapter 6 Conclusions
6.1 Memory Device Reliability Analysis
6.1.1 Reliability on Image Classification
6.1.2 Reliability on Solving Optimization Problems
6.2 Architecture Design Optimized for Solving Optimization Problems
6.3 Binary Spiking Neural Network
6.4 Future Works
References
dc.language.iso | en | -
dc.subject | 脈衝神經網路 | zh_TW
dc.subject | 快閃記憶體 | zh_TW
dc.subject | 記憶體內運算 | zh_TW
dc.subject | 神經態運算 | zh_TW
dc.subject | 可靠度分析 | zh_TW
dc.subject | Neuromorphic Computing | en
dc.subject | Spiking Neural Network | en
dc.subject | Flash Memory | en
dc.subject | Reliability Analysis | en
dc.subject | Computing in Memory | en
dc.title | 優化脈衝神經網絡推理:架構設計與算法增強的協同方法 | zh_TW
dc.title | Optimizing Spiking Neural Network Inference: A Synergistic Approach to Architecture Design and Algorithmic Enhancement | en
dc.type | Thesis | -
dc.date.schoolyear | 112-1 | -
dc.description.degree | 博士 | -
dc.contributor.oralexamcommittee | 鄭湘筠;王克中;盧奕璋;王成淵 | zh_TW
dc.contributor.oralexamcommittee | Hsiang-Yun Cheng;Keh-Chung Wang;Yi-Chang Lu;Cheng-Yuan Wang | en
dc.subject.keyword | 脈衝神經網路,快閃記憶體,記憶體內運算,神經態運算,可靠度分析 | zh_TW
dc.subject.keyword | Spiking Neural Network,Flash Memory,Computing in Memory,Neuromorphic Computing,Reliability Analysis | en
dc.relation.page | 134 | -
dc.identifier.doi | 10.6342/NTU202400540 | -
dc.rights.note | 同意授權(全球公開) | -
dc.date.accepted | 2024-02-11 | -
dc.contributor.author-college | 電機資訊學院 | -
dc.contributor.author-dept | 電子工程學研究所 | -
Appears in Collections: Graduate Institute of Electronics Engineering (電子工程學研究所)

Files in This Item:
File | Size | Format
ntu-112-1.pdf | 6.78 MB | Adobe PDF