
Graduate student: 吳宜達 (WU, YI-DA)
Thesis title: 應用於即時生醫訊號辨識之擴散網路晶片系統研發
The Diffusion Network on-a-Chip for Recognising Biomedical Signals in Real-Time
Advisor: 陳新 (Chen, Hsin)
Oral examination committee: 呂忠津, 鄭桂忠, 蔡嘉明, 黃聖傑
Degree: Doctor
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of publication: 2012
Graduation academic year: 100 (2011-2012)
Language: Chinese
Number of pages: 148
Chinese keywords: 擴散網路、類比積體電路 (diffusion network, analog integrated circuits)
Foreign-language keywords: Diffusion Network, Analog Integrated Circuits
In many implantable biomedical chip systems, a back-end intelligent chip system is indispensable for sensing multi-dimensional, time-varying neural electrophysiological signals in real time. For example, by processing multi-channel neural signals on chip, a brain-machine interface can avoid transmitting all of the raw data wirelessly, or it can translate the neural electrophysiological signals into electrical commands that drive end-effector electromechanical aids such as prosthetic limbs.

The diffusion network proposed by Movellan is a stochastic neural network. Trained with the Monte Carlo expectation-maximisation algorithm, a diffusion network can learn continuous signals drawn from a particular probability distribution and can further generalise over the variability of biomedical signals, which is why it is adopted here to recognise continuous biomedical signals. We realise the network as an analogue chip system whose dynamics are implemented by charging and discharging capacitors, and a 1.5 V supply is used to reduce power consumption. To keep the low-voltage design from directly compressing the signals' dynamic range, a log-domain transformation is applied so that the transistors operate in the subthreshold region. When the state variable is defined as a current, it maps exponentially onto a node voltage; this exponential relationship rewrites the original equation in a log-domain form in which the node voltage becomes the new state variable, i.e. the original state variable is logarithmically compressed onto that voltage. The log-domain transformation does not alter the intrinsic behaviour of the diffusion network, and the state variable can still span a range of several tens of times on chip. With the variables' dynamic range thus preserved, the circuit can operate from a limited supply voltage. Circuit non-idealities caused by semiconductor device characteristics, and their effects on the output signals, are further quantified through simulation. Finally, the measurement results confirm that the diffusion network can successfully learn and recognise the target biomedical signals.
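
To make the log-domain mapping described above concrete, the following is a minimal sketch, assuming a generic Langevin-type node equation and the standard subthreshold MOS exponential law; the drift f_j, the scale current I_0, the slope factor n and the thermal voltage U_T are illustrative symbols rather than the thesis' exact notation.

    \begin{align}
      dx_j &= f_j(x)\,dt + \sigma\,dB_j                       && \text{(generic node dynamics)}\\
      x_j  &= I_0\, e^{v_j/(n U_T)}                            && \text{(subthreshold exponential law)}\\
      C\,dv_j &= \frac{n U_T C}{x_j}\, dx_j
               = \frac{n U_T C}{x_j}\Big[f_j\big(x(v)\big)\,dt + \sigma\,dB_j\Big]
                                                               && \text{(log-domain form)}
    \end{align}

Because v_j = n U_T ln(x_j / I_0), a state current spanning more than a decade is compressed into a node-voltage swing of only a few hundred millivolts, which is what allows operation from a 1.5 V supply without sacrificing dynamic range; a rigorous derivation would also carry the Itô correction for the noise term, which this sketch omits.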


    1 Introduction
    2 Literature Review
    3 Diffusion Network
    4 Circuit Implementations
    5 Circuit Non-Idealities
    6 Experimental Results
    7 Conclusions
    Appendix A The Chip Configuration and Measurement Set-up
    References

    [1] C.-H. Chien, “A stochastic system on a chip basing on the diffusion network,” Master’s thesis, National Tsing Hua Univ., Oct. 2008.
    [2] G. Iddan, G. Meron, A. Glukhovsky, and P. Swain, “Wireless capsule endoscopy,” Nature, vol. 405, no. 6785, p. 417, July 2000.
    [3] T. W. Berger, M. Baudry, R. D. Brinton, J.-S. Liaw, V. Z. Marmarelis, A. Y. Park, B. J. Sheu, and A. R. Tanguay, Jr., “Brain-implantable biomimetic electronics as the next era in neural prosthetics,” Proc. IEEE, vol. 89, no. 7, pp. 993–1012, July 2001.
    [4] K. Wise, D. Anderson, J. Hetke, D. Kipke, and K. Najafi, “Wireless implantable microsystems: high-density electronic interfaces to the nervous system,” Proc. IEEE, vol. 92, no. 1, pp. 76–97, 2004.
    [5] G. Hinton and T. Sejnowski, “Learning and relearning in Boltzmann machines,” in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. Cambridge, MA: MIT Press, 1986, pp. 282–317.
    [6] G. Hinton, T. Sejnowski, and D. Ackley, “Boltzmann machines: Constraint satisfaction networks that learn,” Cognitive Science, vol. 9, pp. 147–169, 1984.
    [7] J. Stevens, “Reverse engineering the brain,” Byte, vol. 10, no. 4, pp. 287–299, 1985.
    [8] R. Prager, T. Harrison, and F. Fallside, “Boltzmann machines for speech recognition,” Computer Speech & Language, vol. 1, no. 1, pp. 3–27, 1986.
    [9] T. Sejnowski and C. Rosenberg, “Parallel networks that learn to pronounce English text,” Complex Systems, vol. 1, no. 1, pp. 145–168, 1987.
    [10] J. Y. Potvin, “The traveling salesman problem: a neural network perspective,” ORSA Journal on Computing, vol. 5, no. 4, pp. 328–348, 1993.
    [11] E. Aarts and J. Korst, “Boltzmann machines for travelling salesman problems,” European Journal of Operational Research, vol. 39, no. 1, pp. 79–95, 1989.
    [12] P. Smolensky, “Information processing in dynamical systems: foundations of harmony theory,” in Parallel distributed processing: explorations in the microstructure of cognition, vol. 1. Cambridge, MA, USA: MIT Press, 1986, pp. 194–281.
    [13] G. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural Computation, vol. 14, no. 8, pp. 1771–1800, 2002.
    [14] C.-C. Lu, “A scalable and programmable continuous restricted Boltzmann machine in VLSI,” Ph.D. dissertation, National Tsing Hua Univ., July 2010.
    [15] J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proceedings of the National Academy of Sciences, vol. 81, no. 10, pp. 3088–3092, 1984.
    [16] Y. Hsu, T. Chiu, and H. Chen, “Real-time recognition of continuous-time biomedical signals using the diffusion network,” in Proc. of the Int. Joint Conf. on Neural Networks (IJCNN). IEEE, 2008, pp. 2628–2633.
    [17] C. H. Chien, C. C. Lu, and H. Chen, “Mapping the diffusion network into a stochastic system in very large scale integration,” in Proc. of the Int. Joint Conf. on Neural Networks (IJCNN), 2010, pp. 1–7.
    [18] D. Specht, “Probabilistic neural networks and the polynomial adaline as complementary techniques for classification,” IEEE Trans. Neural Networks, vol. 1, no. 1, pp. 111–121, 1990.
    [19] N. Aibe, M. Yasunaga, I. Yoshihara, and J. H. Kim, “A probabilistic neural network hardware system using a learning-parameter parallel architecture,” in Proc. of the Int. Joint Conf. on Neural Networks (IJCNN), 2002, pp. 2270–2275.
    [20] R. Genov, S. Chakrabartty, and G. Cauwenberghs, “Silicon support vector machine with on-line learning,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 17, no. 3, pp. 385–404, 2003.
    [21] R. Genov and G. Cauwenberghs, “Kerneltron: Support vector machine in silicon,” IEEE Trans. Neural Networks, vol. 14, no. 5, pp. 1426–1434, 2003.
    [22] T. Clarkson, Y. Guan, J. Taylor, and D. Gorse, “Generalization in probabilistic ram nets,” IEEE Trans. Neural Networks, vol. 4, no. 2, pp. 360–363, 1993.
    [23] T. Clarkson, C. Ng, and Y. Guan, “The pRAM: An adaptive VLSI chip,” IEEE Trans. Neural Networks, vol. 4, no. 3, pp. 408–412, 1993.
    [24] H. Chen and A. Murray, “Continuous restricted Boltzmann machine with an implementable training algorithm,” in Vision, Image and Signal Processing, IEE Proceedings-, vol. 150, no. 3. IET, 2003, pp. 153–158.
    [25] H. Chen, P. Fleury, and A. Murray, “Continuous-valued probabilistic behavior in a VLSI generative model,” IEEE Trans. Neural Networks, vol. 17, no. 3, pp. 755–770, 2006.
    [26] J. R. Movellan, P. Mineiro, and R. J. Williams, “A Monte Carlo EM approach for partially observable diffusion processes: Theory and applications to neural networks,” Neural Computation, vol. 14, pp. 1507–1544, July 2002.
    [27] J. R. Movellan, “A learning theorem for networks at detailed stochastic equilibrium,” Neural Computation, vol. 10, pp. 1157–1178, July 1998.
    [28] B. K. Oksendal, Stochastic differential equations: an introduction with applications. Springer Verlag, 2003.
    [29] H. Poor, An introduction to signal detection and estimation. Springer, 1994.
    [30] P. Mineiro, J. Movellan, and R. Williams, “Learning path distributions using nonequilibrium diffusion networks,” Advances in Neural Information Processing Systems, pp. 598–604, 1998.
    [31] P. Kloeden and E. Platen, Numerical solution of stochastic differential equations. Springer, 1992, vol. 23.
    [32] Y.-S. Hsu, “Biomedical signal recognition using diffusion networks,” Master’s thesis, National Tsing Hua Univ., July 2007.
    [33] D. Frey, “Log domain filters,” in Design of high frequency integrated analogue filters, Y. Sun, Ed. The Institution of Engineering and Technology, 2002, ch. 4, pp. 81–126.
    [34] R. W. Adams, “Filtering in the log domain,” in 63rd AES Conf., vol. 1470, May 1979.
    [35] D. R. Frey, “Exponential state space filters: A generic current mode design strategy,” IEEE Trans. Circuits Syst. I, vol. 43, pp. 34–42, Jan. 1996.
    [36] T. Serrano-Gotarredona and B. Linares-Barranco, “Log-domain implementation of complex dynamics reaction-diffusion neural networks,” IEEE Trans. Neural Networks, vol. 14, pp. 1337–1355, Sept. 2003.
    [37] E. Vittoz and J. Fellrath, “CMOS analog integrated circuits based on weak inversion operation,” IEEE J. Solid-State Circuits, vol. 12, pp. 224–231, June 1977.
    [38] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbruck, and R. Douglas, Analog VLSI: Circuits and Principles. The MIT Press, 2002.
    [39] J. Mulder, W. Serdijn, A. van der Woerd, and A. van Roermund, “An instantaneous and syllabic companding translinear filter,” IEEE Trans. Circuits Syst. I, vol. 45, no. 2, pp. 150–154, 1998.
    [40] H. C. Yang and D. J. Allstot, “An active-feedback cascode current source,” IEEE Trans. Circuits Syst., vol. 37, no. 5, pp. 644–646, May 1990.
    [41] S. J. Upadhyaya, “Noise generators,” in Wiley Encyclopedia of Electrical and Electronics Engineering, J. Webster, Ed. Wiley, 2000.
    [42] J. Alspector, J. Gannett, S. Haber, M. Parker, and R. Chu, “A VLSI-efficient technique for generating multiple uncorrelated noise sources and its application to stochastic neural networks,” IEEE Trans. Circuits Syst., vol. 38, no. 1, pp. 109–123, 1991.
    [43] G. Cauwenberghs, “Delta-sigma cellular automata for analog VLSI random vector generation,” IEEE Trans. Circuits Syst. II, vol. 46, no. 3, pp. 240–250, Mar. 1999.
    [44] J. Huang, Z. Liu, M. Jeng, K. Hui, M. Chan, P. Ko, and C. Hu, “BSIM3 Version 2.0 User’s Manual,” University of California, Berkeley, CA, Mar. 1994.
    [45] J. Connelly and P. Choi, Macromodeling with SPICE. Prentice-Hall, Inc., 1992.
    [46] A. Vladimirescu, The SPICE book. John Wiley & Sons, Inc., 1994.
    [47] C.-H. Chuang, “The measurement and improved design of the diffusion-network systems on-chip,” Master’s thesis, National Tsing Hua Univ., June 2010.
    [48] L. O. Chua, T. Roska, T. Kozek, and A. Zarandy, “CNN universal chips crank up the computing power,” IEEE Circuits Devices Mag., vol. 12, no. 4, pp. 18–28, July 1996.
    [49] T.-M. Kao, “Exploring the feasibility of training diffusion network with on-chip circuitry,” Master’s thesis, National Tsing Hua Univ., July 2010.
    [50] C. Diorio, D. Hsu, and M. Figueroa, “Adaptive CMOS: from biological inspiration to systems-on-a-chip,” Proc. IEEE, vol. 90, no. 3, pp. 345–357, 2002.
    [51] G. Serrano, P. Smith, H. Lo, R. Chawla, T. Hall, C. Twigg, and P. Hasler, “Automatic rapid programming of large arrays of floating-gate elements,” in IEEE Int. Symp. on Circuits and Syst. (ISCAS), vol. 1. IEEE, 2004, pp. 373–376.
    [52] A. Annema, B. Nauta, R. van Langevelde, and H. Tuinhout, “Analog circuits in ultra-deep-submicron CMOS,” IEEE J. Solid-State Circuits, vol. 40, no. 1, pp. 132–143, 2005.

    Full-text availability: not authorized for public release (campus or off-campus network).