
Student: Yeh, Chen-Fu (葉宸甫)
Thesis Title: Integer Quadratic Integrate-and-Fire Neuron Model
Advisor: Lo, Chung-Chuan (羅中泉)
Committee Members: Tang, Kea-Tiong (鄭桂忠); Chen, You-Yin (陳右穎)
Degree: Master
Department: College of Life Sciences and Medicine, Institute of Systems Neuroscience
Year of Publication: 2022
Academic Year of Graduation: 110
Language: English
Pages: 34
Keywords (Chinese): neuromorphic computing, machine learning, neural network quantization
Keywords (English): Neuromorphic computing, machine learning, network quantization
Abstract (Chinese): Simulating a spiking neural network typically requires solving a large number of differential equations, which poses a major challenge for modern computers when they simulate a bio-inspired visual system that processes image-derived optical flow in real time. To address this problem, we developed the Integer Quadratic Integrate-and-Fire (IQIF) model as a new neuron model. IQIF produces the same firing patterns as the classical QIF neuron model while reducing the floating-point operations common in simulations to integer operations; at the cost of a restricted dynamic range for the membrane potential and synaptic current, the model offers a simulation scheme with lower memory and logic-gate requirements. IQIF is therefore a biologically plausible model that can be implemented on edge-computing platforms, enabling low-power, low-cost neuroscience research and machine-learning applications.


Abstract (English): Simulating a spiking neural network involves solving a large number of differential equations. This becomes a real challenge for modern computer systems when one simulates a visual system that processes optical-flow image signals in real time. To address the problem, we design a novel neuron model, the Integer Quadratic Integrate-and-Fire (IQIF) neuron. IQIF reproduces spiking behavior similar to that of the classical Quadratic Integrate-and-Fire (QIF) neuron, but reduces all variables from the commonly used floating-point representation to integers, providing simulations that require significantly less memory and fewer logic gates at the cost of limited dynamic ranges for the membrane potential and synaptic current. IQIF is thus a biologically plausible model that can be deployed on edge devices to achieve low-power, low-cost neuroscience research and machine learning.
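The abstract's central idea, evolving a quadratic neuron using only integer arithmetic, can be illustrated with a minimal C sketch. This is not the thesis's actual IQIF definition: the scaling shift K, the threshold, reset, and clamp values, the constant input, and the names iqif_like_neuron and iqif_like_step are all assumptions chosen for demonstration. It shows how a quadratic drive, saturation to a limited dynamic range, and spike-and-reset can be computed without any floating-point operations.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative integer-only QIF-style neuron (hypothetical parameters,
     * not the thesis's IQIF definition). All state and arithmetic are integer. */
    #define K        6       /* right shift scaling the quadratic term by 2^-K */
    #define V_THRESH 1000    /* spike threshold, integer units */
    #define V_RESET  -100    /* post-spike reset potential */
    #define V_MIN    -2048   /* lower clamp: limited dynamic range */
    #define V_MAX     2047   /* upper clamp */

    typedef struct {
        int32_t v;           /* membrane potential, integer units */
    } iqif_like_neuron;

    /* One time step: quadratic drive plus input current, integers only.
     * Returns 1 if the neuron spikes on this step, 0 otherwise. */
    static int iqif_like_step(iqif_like_neuron *n, int32_t input)
    {
        int32_t dv = (n->v * n->v) >> K;   /* quadratic term, scaled down */
        n->v += dv + input;

        /* Saturate to the limited dynamic range the abstract mentions. */
        if (n->v > V_MAX) n->v = V_MAX;
        if (n->v < V_MIN) n->v = V_MIN;

        if (n->v >= V_THRESH) {            /* spike and reset */
            n->v = V_RESET;
            return 1;
        }
        return 0;
    }

    int main(void)
    {
        iqif_like_neuron n = { .v = V_RESET };
        for (int t = 0; t < 50; t++)
            if (iqif_like_step(&n, 40))    /* constant input current */
                printf("spike at step %d\n", t);
        return 0;
    }

With these placeholder constants the neuron fires regularly under a constant positive input, mirroring the QIF behavior described above while touching no floating-point unit; the clamp to [V_MIN, V_MAX] is the "limited dynamic range" traded for the smaller memory and gate budget.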

Table of Contents
Chapter 1 Introduction
    1.1 Background
    1.2 SNN
        1.2.1 Bio-plausible traits of SNN
        1.2.2 Learning algorithms for SNN
    1.3 Neuromorphic computing
        1.3.1 Neuromorphic Software
        1.3.2 Neuromorphic hardware
    1.4 Goal of This Study
Chapter 2 IQIF Model and Test Method
    2.1 Network Model
        2.1.1 Neuron Model
        2.1.2 Synapse Model
    2.2 Test Method
Chapter 3 Result
    3.1 IQIF Spiking Behaviors
    3.2 Synapse exponential decay approximation
    3.3 IQIF as Activation Function in Fully Connected ANNs
    3.4 IQIF Speed
Chapter 4 Discussion
    4.1 Effect of Number of Bits
    4.2 Speed Difference Between Integer and Floating-Point Operations
    4.3 Signature of Membrane Potential Types
    4.4 Effect of Quadratic Term
    4.5 Effect of Exponential Synapse

