
Graduate Student: Huang, Chien-Ming
Thesis Title: Research on Recognition Methods for Electronic Nose Gas Signals Based on the Continuous Restricted Boltzmann Machine
English Title: Research in Recognition Method Based on Continuous Restricted Boltzmann Machine
Advisor: Chen, Hsin
Committee Members: Liu, Yi-Wen; Yang, Chia-Hsiang
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2014
Academic Year of Graduation: 102 (ROC calendar)
Language: Chinese
Number of Pages: 102
Keywords (Chinese): Continuous Restricted Boltzmann Machine; electronic nose; pneumonia strain recognition
Keywords (English): Continuous Restricted Boltzmann Machine, eNose, Pneumoniae strains, Recognition
Abstract (Chinese):
In recent years, biomedical applications of electronic nose systems have received growing attention. The application in this thesis is the use of an electronic nose sensor array for pneumonia gas sensing: by sensing the gas exhaled by a patient, the system judges whether the patient is infected with pneumonia. However, current sensors are still not sensitive enough to the metabolic gases of the pneumonia bacteria, so the features in the gas data obtained by the electronic nose are not distinct and the classes overlap considerably. This thesis therefore proposes to learn the pneumonia gas data with a probabilistic model, in the hope of analyzing the overlapping data further. The Continuous Restricted Boltzmann Machine (CRBM) is such a probabilistic model and can be used for clustering, classification, and reconstruction. When used for clustering, the CRBM learns to re-project the data into a higher- or lower-dimensional space in which the data become easier to classify. When an additional neuron, a label neuron, is added, the CRBM acts as a complete classifier. When the CRBM is used for reconstruction, the probability distribution of the reconstructed points can be derived from its energy function and used as the basis of a Bayesian classifier.
In addition, this thesis builds a hardware platform based on the previously proposed analog CRBM chip architecture, in order to verify that the CRBM training method can be realized on it. If this succeeds, the platform can be further combined with a front-end sensor array to form a sensor system with learning capability. Because the previously designed analog CRBM chip does not include an on-chip training mechanism, external links (a data acquisition card and an FPGA card) must be used to form a loop through which data and parameters circulate during training; this is the Chip-in-a-Loop training mechanism.
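To make the model described above more concrete, the following minimal Python sketch implements a CRBM-style stochastic neuron and a one-step contrastive-divergence parameter update. It assumes the usual CRBM formulation from the literature (a sigmoid activation with additive Gaussian noise and a per-neuron gain parameter, trained by minimising contrastive divergence, after Chen and Murray); it is an illustration only, not the exact model or code used in this thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def crbm_activation(x, a, noise_std=0.2, low=-1.0, high=1.0):
        """Continuous stochastic neuron: sigmoid of gain * (input + Gaussian noise),
        rescaled to the interval [low, high]. (Assumed form of the CRBM neuron.)"""
        noisy = a * (x + noise_std * rng.standard_normal(np.shape(x)))
        return low + (high - low) / (1.0 + np.exp(-noisy))

    def cd1_step(v0, W, a_h, a_v, lr_w=0.01, lr_a=0.01):
        """One contrastive-divergence-style update (CD-1) for a small CRBM.
        v0: batch of visible vectors, shape (n_samples, n_visible)."""
        h0 = crbm_activation(v0 @ W, a_h)          # hidden states driven by the data
        v1 = crbm_activation(h0 @ W.T, a_v)        # one-step reconstruction of the visibles
        h1 = crbm_activation(v1 @ W, a_h)          # hidden states driven by the reconstruction
        # Weight update: data-driven minus reconstruction-driven correlations.
        dW = (v0.T @ h0 - v1.T @ h1) / len(v0)
        # Gain (noise-control) update: difference of mean squared hidden activations.
        da_h = (np.mean(h0**2, axis=0) - np.mean(h1**2, axis=0)) / (a_h**2)
        return W + lr_w * dW, a_h + lr_a * da_h

    # Tiny usage example on random data (4 visible units, 3 hidden units).
    W = 0.1 * rng.standard_normal((4, 3))
    a_h, a_v = np.ones(3), np.ones(4)
    data = rng.standard_normal((32, 4))
    for _ in range(100):
        W, a_h = cd1_step(data, W, a_h, a_v)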


Abstract (English):
In recent years, biomedical applications of electronic nose sensor systems have attracted attention; this thesis focuses on the recognition of pneumonia data collected from patients. However, the sensitivity of the sensor array is not high enough, so the captured data of different classes overlap to some extent. To analyze these data further, this thesis proposes methods for classifying them with a probabilistic model, the Continuous Restricted Boltzmann Machine (CRBM). The CRBM is a generative probabilistic model that can cluster and classify data and can reconstruct the distribution of its training data. There are therefore three possible ways to classify the pneumonia data with a CRBM. First, used for clustering, the CRBM can re-project the data into a higher- or lower-dimensional space in which they are classified more easily. Second, used as a classifier, the CRBM learns the class of each training sample through an additional label neuron. Finally, used as a generative model, the CRBM regenerates the distribution of the training data according to its energy function, so the probability density over the data space can be estimated and then used by a Bayesian classifier.
In addition, this thesis proposes a setup to test the third analog CRBM chip. Since no training mechanism was built into this chip, a data acquisition (DAQ) system and an FPGA card are used to implement the CRBM training algorithm; this is the so-called Chip-in-a-Loop training. The performance of this training mechanism is evaluated.
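The Chip-in-a-Loop arrangement can be pictured with the Python sketch below: a software stand-in for the analog chip only evaluates neuron states for whatever parameters are currently loaded onto it, while the host (standing in for the DAQ card and FPGA) computes a contrastive-divergence-style update off-chip and writes the parameters back, closing the loop. The class and function names are hypothetical illustrations, not the interface of the actual measurement setup described in this thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    class FakeChip:
        """Software stand-in for the analog CRBM chip: it only evaluates neuron
        states for the parameters currently loaded on it (hypothetical interface)."""
        def __init__(self, n_visible, n_hidden):
            self.W = 0.1 * rng.standard_normal((n_visible, n_hidden))
        def load_parameters(self, W):
            # In hardware: the DAQ/FPGA link writes updated weights onto the chip.
            self.W = W.copy()
        def sample(self, v0):
            # In hardware: the chip computes stochastic states and the DAQ reads them back.
            act = lambda x: np.tanh(x + 0.2 * rng.standard_normal(x.shape))
            h0 = act(v0 @ self.W)
            v1 = act(h0 @ self.W.T)
            h1 = act(v1 @ self.W)
            return h0, v1, h1

    def chip_in_a_loop(chip, data, epochs=50, lr=0.01):
        """Host/FPGA side of the loop: read states from the chip, compute the
        parameter update off-chip, and write the parameters back."""
        W = chip.W.copy()
        for _ in range(epochs):
            h0, v1, h1 = chip.sample(data)
            W += lr * (data.T @ h0 - v1.T @ h1) / len(data)
            chip.load_parameters(W)        # close the loop: updated weights go back on-chip
        return W

    chip = FakeChip(n_visible=4, n_hidden=3)
    trained_W = chip_in_a_loop(chip, rng.standard_normal((32, 4)))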

Table of Contents:
Chapter 1 Introduction
  1.1 Research Motivation
  1.2 Contributions of the Thesis
  1.3 Outline of the Thesis
Chapter 2 Literature Review
  2.1 Introduction to Electronic Nose Gas Data
  2.2 Introduction to the Continuous Restricted Boltzmann Machine (CRBM)
    2.2.1 CRBM Network Architecture
    2.2.2 Reconstruction with the CRBM
    2.2.3 CRBM Training Method
    2.2.4 Dimensionality Reduction with the CRBM
  2.3 Methods for Verifying Linear Separability
    2.3.1 Introduction to Linear Programming and the Simplex Method
    2.3.2 Verifying Linear Separability with Linear Programming
    2.3.3 Differences between Support Vector Machines and the Proposed Method
  2.4 Gaussian Mixture Models
    2.4.1 Introduction to the Gaussian Model
    2.4.2 Introduction to the Gaussian Mixture Model
    2.4.3 Gaussian Mixture Model Classifier
  2.5 Electronic Nose Gas Recognition Methods Used in the Literature
    2.5.1 Nearest-Neighbor Method
    2.5.2 Back-Propagation Multilayer Perceptron
    2.5.3 Probabilistic Neural Network
    2.5.4 Summary
  2.6 Classification Methods Proposed in This Thesis
  2.7 Conclusion
Chapter 3 Simulations of Dimensionality-Reduction Clustering with the CRBM
  3.1 Description of Electronic Nose Gas Data and Quantification of Clustering Performance
  3.2 Simulation Results on Fruit Odor Data
  3.3 Simulation Results on Counterfeit Liquor Data
  3.4 Simulation Results on Pneumonia Gas Data
  3.5 Difficulties and Improvements in Handling Pneumonia Gas Data by Dimensionality Reduction
Chapter 4 CRBM-Based Classification Methods
  4.1 Classification by Training a Label Neuron to Be Reconstructed by the CRBM
    4.1.1 Model Overview and Classification Method
    4.1.2 Label-Neuron Classification Results - Artificial Data I
    4.1.3 Label-Neuron Classification Results - Artificial Data II
    4.1.4 Label-Neuron Classification Results - Artificial Data III
    4.1.5 Limitations of the CRBM Classifier and Classification Difficulties
  4.2 High-Dimensional Projection with Trained CRBM Parameters and Classification by Linear Programming
    4.2.1 Overview of the Classification Method
    4.2.2 Introduction to Cover's Theorem
    4.2.3 Linear Programming Classifier
    4.2.4 Comparison of Linear Programming and Linear Discriminant Analysis
    4.2.5 Pneumonia Strain Classification Results
    4.2.6 Discussion of Classification Results and Overfitting
  4.3 Conclusion
Chapter 5 Probabilistic Model Classifiers
  5.1 Introduction to the Bayesian Classifier
  5.2 Construction of Gaussian Mixture Models
  5.3 Building a Probability Model of 1-D Pneumonia Gas Data with a Gaussian Mixture Model
  5.4 Building a Probability Model of 32-D Pneumonia Gas Data with a Gaussian Mixture Model
  5.5 Discussion of the CRBM Energy Function and Reconstruction Distribution
Chapter 6 Hardware Measurement and Verification
  6.1 System Architecture Overview
  6.2 Measurement Setup and Timing Control Diagrams
  6.3 LabVIEW Program Interface and Measurement Results
  6.4 Summary
References


Full-Text Availability: Not authorized for public release (campus network)
Full-Text Availability: Not authorized for public release (off-campus network)
