| Author | 高子銘 Kao, Tzu-Ming |
|---|---|
| Thesis Title | 探究擴散網路之學習演算法實現於類比積體電路之可行性 / Exploring the Feasibility of Training Diffusion Networks with On-Chip Circuitry |
| Advisor | 陳新 Chen, Hsin |
| Oral Defense Committee | |
| Degree | 碩士 Master |
| Department | 電機資訊學院 - 電子工程研究所 Institute of Electronics Engineering |
| Publication Year | 2010 |
| Academic Year of Graduation | 98 |
| Language | 中文 Chinese |
| Pages | 88 |
| Keywords (Chinese) | 擴散網路、積分器電路、學習演算法、最佳化方法 |
| Keywords (English) | diffusion network, integrator circuit, learning algorithm, optimization method |
People have long been curious about how living organisms work. Mathematicians therefore formulated artificial neural network algorithms to mimic the way neurons in an organism interconnect and exchange signals; a trained network yields parameters that represent the learned signal, which can then be used for signal recognition or classification. This thesis explores the feasibility of implementing the training algorithm of the Diffusion Network [1] in analog integrated circuits.

The first step is to modify and simplify the Diffusion Network's training algorithm: the modification makes the algorithm easier to realize as an analog integrated circuit, while the simplification makes the circuit itself easier to design. The core of the training theory is the application of Monte Carlo Expectation Maximization to obtain the Diffusion Network parameters that represent the signal to be learned; the algorithm is then further refined with optimization methods. The modified training algorithm is verified in simulation software (MATLAB). By training on different data sets, we search for the required range and resolution of the Diffusion Network parameters, in order to understand how much precision a successful training run demands; this preparatory simulation work bears directly on the circuit specifications. The thesis then examines the feasibility of a circuit implementation, and finally simulates the Diffusion Network's training process in the Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW).
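The inner loop of Monte Carlo Expectation Maximization repeatedly simulates the network's stochastic differential equation to draw sample paths. As a rough illustration only, the following Python sketch integrates a simplified diffusion-network-style SDE with the Euler–Maruyama method; the drift form, parameter names, and values here are assumptions for illustration, not the exact model used in the thesis.

```python
import numpy as np

def simulate_diffusion_network(W, lam, sigma, x0, dt=1e-3, n_steps=1000, seed=None):
    """Euler-Maruyama simulation of an assumed, simplified SDE:
        dx = (W . tanh(x) - lam * x) dt + sigma dB
    This drift is a stand-in for the Diffusion Network dynamics of
    Movellan et al. [1], kept minimal for illustration."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        drift = W @ np.tanh(x) - lam * x
        noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        x = x + drift * dt + noise
        path.append(x.copy())
    return np.array(path)

# Example: a hypothetical 2-unit network with antisymmetric coupling.
W = np.array([[0.0, 1.0], [-1.0, 0.0]])
path = simulate_diffusion_network(W, lam=0.5, sigma=0.1, x0=[1.0, 0.0], seed=0)
print(path.shape)  # (1001, 2)
```

In the actual Monte Carlo EM procedure, many such sample paths would be drawn per iteration to estimate the expectations needed for the parameter update.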
People have always been interested in how neurons operate in living organisms, and mathematicians have therefore built neural network algorithms using mathematical methods. A neural network algorithm mimics the way neurons in an organism interconnect and pass messages. Using a neural network, we obtain parameters that represent the learned signal and can be applied to recognize and classify different signals. The theme of this thesis is exploring the feasibility of training a Diffusion Network with on-chip circuitry.

First, the training algorithm of the Diffusion Network is modified and simplified: the modification allows an easier implementation in an analog integrated circuit, while the simplification makes the circuit design itself easier. The most important part of the training theory is obtaining the parameters that represent the training signal via Monte Carlo Expectation Maximization. The training algorithm is then refined with optimization methods and verified in mathematical simulation software (MATLAB). Searching for the required range and resolution of the parameters across different training signals shows how much parameter accuracy the Diffusion Network needs in order to train successfully; this simulation work is closely tied to the circuit specifications. The study then explores the feasibility of the circuit implementation, and finally simulates the Diffusion Network's training process in the Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW).
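One way to probe how much parameter resolution an analog implementation must provide is to quantize the trained parameters to a given number of levels and measure how far the network's trajectory drifts from the full-precision one. The sketch below is a hypothetical version of such a sweep; the noiseless drift equation, weight range, and step counts are assumptions for illustration, not values from the thesis.

```python
import numpy as np

def quantize(w, bits, w_range):
    """Snap values to a uniform grid of 2**bits levels over [-w_range, w_range],
    mimicking the finite precision of an on-chip analog weight."""
    step = 2.0 * w_range / (2 ** bits - 1)
    return np.clip(np.round(w / step) * step, -w_range, w_range)

def trajectory(W, x, dt=1e-2, steps=200):
    """Noiseless Euler integration of the assumed drift W . tanh(x) - 0.5 * x."""
    out = []
    for _ in range(steps):
        x = x + (W @ np.tanh(x) - 0.5 * x) * dt
        out.append(x.copy())
    return np.array(out)

rng = np.random.default_rng(1)
W = rng.uniform(-1.0, 1.0, size=(4, 4))   # hypothetical trained weights
x0 = rng.uniform(-1.0, 1.0, size=4)

ref = trajectory(W, x0)                    # full-precision reference
errs = {}
for bits in (4, 6, 8):
    errs[bits] = np.max(np.abs(trajectory(quantize(W, bits, 1.0), x0) - ref))
    print(f"{bits}-bit weights: max trajectory deviation = {errs[bits]:.4f}")
```

The bit width at which the deviation becomes acceptable is one way to translate such simulation results into a precision specification for the analog circuit.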
[1] J. R. Movellan, P. Mineiro, and R. J. Williams, "A Monte Carlo EM approach for partially observable diffusion processes: Theory and applications to neural networks," Neural Computation, vol. 14, no. 7, pp. 1507-1544, 2002.
[2] H. Chen and A. F. Murray, "Continuous restricted Boltzmann machine with an implementable training algorithm," IEE Proceedings - Vision, Image and Signal Processing, vol. 150, no. 3, pp. 153-158, 2003.
[3] Y.-S. Hsu, "Biomedical Signal Recognition Using Diffusion Networks," Master's thesis, National Tsing Hua University, Taiwan, 2007.
[4] E. K. P. Chong and S. H. Żak, An Introduction to Optimization, 2nd ed., Wiley, 2001.
[5] G. Cauwenberghs, "An analog VLSI recurrent neural network learning a continuous-time trajectory," IEEE Transactions on Neural Networks, vol. 7, pp. 346-361, 1996.
[6] A. J. Montalvo, R. S. Gyurcsik, and J. J. Paulos, "An analog VLSI neural network with on-chip perturbation learning," IEEE Journal of Solid-State Circuits, vol. 32, pp. 535-543, 1997.
[7] G. Cauwenberghs and M. Bayoumi, Learning on Silicon: Adaptive VLSI Neural Systems, Kluwer Academic Publishers, 1999.
[8] M. Gravati, M. Valle, G. Ferri, N. Guerrini, and L. Reyes, "A novel current-mode very low power analog CMOS four quadrant multiplier," in Proceedings of ESSCIRC, Grenoble, France, 2005.
[9] G. Cauwenberghs, "An analog VLSI recurrent neural network learning a continuous-time trajectory," IEEE Transactions on Neural Networks, vol. 7, pp. 346-361, 1996.
[10] M. Valle, "Analog VLSI implementation of artificial neural networks with supervised on-chip learning," Analog Integrated Circuits and Signal Processing, vol. 33, pp. 263-287, 2002.
[11] C.-H. Chien, "A Stochastic System on a Chip Basing on the Diffusion Networks," Master's thesis, National Tsing Hua University, Taiwan, 2008.
[12] Y. Tsividis, "Externally linear integrators," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 45, pp. 1181-1187, 1998.
[13] G. Groenewold, "Optimal dynamic range integrators," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 39, pp. 614-627, 1992.
[14] J. Moreira and M. Silva, "Limits to the dynamic range of low-power continuous-time integrators," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 48, pp. 805-817, 2001.
[15] E. El-Masry and J. Wu, "Fully differential class-AB log-domain integrator," Analog Integrated Circuits and Signal Processing, vol. 25, pp. 35-46, 2000.
[16] I. Akita, K. Wada, and Y. Tadokoro, "A 0.6-V dynamic biasing filter with 89-dB dynamic range in 0.18-μm CMOS," IEEE Journal of Solid-State Circuits, vol. 44, pp. 2790-2799, 2009.
[17] I. Akita, K. Wada, and Y. Tadokoro, "Simplified low-voltage CMOS syllabic companding log domain filter," IEEE International Symposium on Circuits and Systems (ISCAS), pp. 2244-2247, May 2007.
[18] S. L. Smith and E. Sánchez-Sinencio, "Low voltage integrators for high-frequency CMOS filters using current mode techniques," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 43, no. 1, pp. 39-48, Jan. 1996.
[18] S. L. Smith, and E. Sánchez-Sinencio, “Low voltage integrators for high-frequency CMOS filters using current mode techniques,” IEEE Trans. on Circuits Syst. II: Analog and Digital Signal Processing, vol. 43, no. 1, Jan. 1996, pp. 39 – 48