
Graduate Student: 謝弘毅 (Hsieh, Hung-Yi)
Thesis Title: A Scalable, 6.8 μW Probabilistic Spiking Neural Network Chip with Long-Term Synaptic Memory and On-Chip Learning Ability
Advisor: 鄭桂忠 (Tang, Kea-Tiong)
Committee Members: 吳重雨, 陳永耀, 李順裕, 陳新, 李夢麟, 劉奕汶
Degree: Doctor of Philosophy
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Publication Year: 2014
Graduation Academic Year: 102 (AY 2013–2014)
Language: English
Pages: 115
Keywords: analog VLSI, learning chip, probabilistic spiking neural network, bio-inspired olfaction, neural network
    Portable and implantable applications often need to perform signal processing, learning, and classification under tight area and power constraints. These constraints make analog learning circuits, which combine machine-learning algorithms with analog circuit design, an attractive option for such problems. However, learning algorithms implemented in analog circuits to date still fall short of their software-executed counterparts in learning ability. To broaden the applicability of learning chips, this work aims to implement a neural-network algorithm in analog circuitry such that the chip operates at low power while retaining learning ability comparable to that of the software implementation. The target applications are low-power, low-to-medium-speed tasks such as physiological-signal recognition and electronic-nose odor recognition.
    This research comprises three parts. The first is a bio-inspired olfactory spiking neural network chip, designed for electronic-nose applications with reference to the mammalian olfactory system. The chip operates at a low power of 3.6 μW, classifies at about 5 Hz, and achieves roughly 87.6% accuracy on fruit-odor data acquired with the commercial electronic nose Cyranose 320. Although 5 Hz suffices for electronic noses, it is inadequate for physiological-signal recognition; moreover, the chip is not scalable, so the network size cannot be adjusted for different applications. To improve learning ability and classification speed and to achieve scalability, this work then designed a hardware-compatible probabilistic spiking neural network (PSNN) algorithm. Software simulations show that the algorithm is not only well suited to analog implementation but also insensitive to parameter variation, synaptic noise, and weight resolution; these properties make it likely that the chip implementation retains learning ability similar to that of the software version. To verify this, the algorithm was finally implemented in analog circuits in a 0.18 μm CMOS process, with circuit techniques that further reduce chip area, power consumption, and the impact of process variation on learning performance. Measurements confirm that the chip consumes only 6.8 μW, classifies at 3 kHz, achieves 92% accuracy on the same electronic-nose data and 100% accuracy on ECG data, and can learn about 80 random patterns while maintaining an AUC of 0.8. Beyond single-chip operation, the chip is scalable: when two chips are connected to form a larger network, the AUC on the same 80-random-pattern dataset reaches 0.9 after learning. The measured results not only meet the specifications but also show very low power consumption and the best learning ability compared with the existing literature.
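    The full text of the thesis is not available from this record, so the PSNN's exact formulation cannot be reproduced here. As a minimal sketch of the general idea named in the title — a spiking network whose synapses behave probabilistically — the fragment below models a leaky integrate-and-fire neuron whose synapses transmit each presynaptic spike stochastically, a common software formulation of probabilistic synaptic transmission. All names (`stochastic_lif_step`, `p_transmit`) and constants are illustrative assumptions, not the thesis's circuit or algorithm.

```python
import numpy as np

def stochastic_lif_step(v, spikes_in, weights, p_transmit, rng,
                        v_th=1.0, leak=0.9):
    """One time step of a leaky integrate-and-fire neuron driven by
    probabilistic synapses: synapse i transmits a presynaptic spike
    independently with probability p_transmit[i]."""
    transmitted = spikes_in & (rng.random(len(weights)) < p_transmit)
    v = leak * v + np.dot(weights, transmitted)  # leaky integration
    fired = v >= v_th
    if fired:
        v = 0.0  # reset membrane potential after an output spike
    return v, fired

# Drive one neuron with three noisy input channels for 100 steps.
rng = np.random.default_rng(42)
weights = np.array([0.6, 0.4, 0.5])
p_transmit = np.array([0.9, 0.5, 0.7])
v, n_out = 0.0, 0
for _ in range(100):
    spikes_in = rng.random(3) < 0.3  # Bernoulli input spikes
    v, fired = stochastic_lif_step(v, spikes_in, weights, p_transmit, rng)
    n_out += int(fired)
```

    In a learning variant of such a network, the transmission probabilities or weights would be adapted from the spike statistics; here they are fixed, since the thesis's learning rule is not described in this record.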


    Abstract (Chinese) / Abstract (English) / Acknowledgement / Contents / List of Figures / List of Tables
    Chapter 1 Introduction
        1.1 Motivation and Goals
        1.2 Contribution to Knowledge
    Chapter 2 Literature Review
        2.1 Artificial Neural Networks
        2.2 Spiking Neural Networks
        2.3 Analog Hardware Neural Networks
            2.3.1 Artificial Neural Networks in Hardware
            2.3.2 Spiking Neural Networks in Hardware
            2.3.3 Scalability
    Chapter 3 Bio-Inspired SNN
        3.1 Circuit Architecture
            3.1.1 Mitral Cell
            3.1.2 Cortical Cell Group
            3.1.3 Synapse
        3.2 Experimental Results
    Chapter 4 Remaining Issues and Solutions
    Chapter 5 The PSNN Algorithm
        5.1 Network
            5.1.1 Encoding
            5.1.2 Neuron
            5.1.3 Synapse
        5.2 Experimental Results
            5.2.1 Artificial Data
            5.2.2 Benchmark Datasets
            5.2.3 Data in the Related Fields
    Chapter 6 The PSNN Chip
        6.1 Circuit Structure
            6.1.1 ClockGen, Decoder, and N1
            6.1.2 NoiseGen
            6.1.3 Neuron
            6.1.4 Synapse
        6.2 Analysis of the Hardware Error
        6.3 Multichip Operation
        6.4 Experimental Results of the PSNN Chip
            6.4.1 Measurement Environment
            6.4.2 Artificial Data
            6.4.3 Data in the Related Fields
            6.4.4 Benchmark Datasets
    Chapter 7 Conclusion and Future Works
    Reference


    Full text: not authorized for public release (campus network, off-campus network, and the National Central Library Taiwan thesis system).