
Author: Lin, Yu-Ru (林伃茹)
Title (Chinese): 以深度自動編碼器基於圖像辨識對田納西伊士曼製程進行故障分類
Title (English): Image Based Fault Classification for Tennessee Eastman Process using Deep Auto-Encoder
Advisor: Jang, Shi-Shang (鄭西顯)
Committee Members: Wong, David Shan-Hill (汪上曉); Chen, Cheng-Liang (陳誠亮)
Degree: Master
Department: Department of Chemical Engineering, College of Engineering
Year of Publication: 2017
Graduation Academic Year: 105 (ROC calendar)
Language: Chinese
Number of Pages: 54
Keywords (Chinese): deep learning, process classification, deep auto-encoder, new classification
Keywords (English): Deep learning, Fault classification, Deep Auto-Encoder, New classification
Abstract (Chinese):
    An artificial neural network (ANN) mimics the operation of a biological neural network, using mathematical functions to determine whether each neuron is activated. The auto-encoder (AE) is one of the most important unsupervised learning methods in artificial neural networks; it aims to learn a hierarchy of feature representations from the input data, is commonly used for dimensionality reduction, and has recently been widely applied to learning generative models of data. Its structure is similar to that of a multi-layer perceptron (MLP), but it has the ability to reconstruct its original input.
    In this study, the first part addresses the recognition and classification of handwritten digits from the MNIST database (derived from data of the U.S. National Institute of Standards and Technology), and the second part classifies various faults using data from the Tennessee Eastman process (TEP), a classic chemical engineering case. This thesis provides a standard model that establishes process fault patterns and their detailed information, replacing the operating experience of on-site plant personnel and preventing staff turnover from degrading the plant's ability to control the process or to diagnose faults. In addition, this study investigates the ability of the deep auto-encoder (DAE) to learn new classes, converts the data into images, and uses the confusion matrix to judge model quality.
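    As a concrete illustration of the architecture described above, the following is a minimal sketch (not the author's code) of a deep auto-encoder in Python with Keras; the layer sizes, optimizer, and placeholder data are assumptions for illustration only.

```python
# Minimal deep auto-encoder sketch: an MLP-like network trained, without labels,
# to reconstruct its own input. Sizes and hyper-parameters are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784          # e.g. a flattened 28x28 MNIST image
code_dim = 32            # low-dimensional feature representation (assumed size)

inputs = keras.Input(shape=(input_dim,))
# Encoder: learns a hierarchy of features while reducing dimensionality.
h = layers.Dense(256, activation="relu")(inputs)
code = layers.Dense(code_dim, activation="relu")(h)
# Decoder: reconstructs the original input from the learned code.
h = layers.Dense(256, activation="relu")(code)
outputs = layers.Dense(input_dim, activation="sigmoid")(h)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised training: the reconstruction target is the input itself.
x_train = np.random.rand(1000, input_dim).astype("float32")  # placeholder data
autoencoder.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)

# The trained encoder alone maps new samples into the learned feature space.
encoder = keras.Model(inputs, code)
features = encoder.predict(x_train[:10])
```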


    Abstract (English):
    An artificial neural network (ANN) mimics the operation of the biological neural network system, using mathematical functions to determine whether an artificial neuron is activated. The auto-encoder is one of the most important unsupervised learning methods in artificial neural networks; it aims to learn a hierarchy of feature representations from the input data. It is often used for dimensionality reduction and is widely used to learn generative models of the data. Its structure is similar to a multi-layer perceptron, but it has the ability to reconstruct its original input.
    In this study, handwritten digit classification on the MNIST database and fault diagnosis in the Tennessee Eastman process are discussed and analyzed. We provide detailed information on how to determine the fault pattern of the process from a standard model rather than from the experience of plant personnel. In addition, this study discusses how new classes can be learned by the deep auto-encoder and presents the diagnostic results as images.
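    The supervised step that follows the feature learning, and the confusion-matrix evaluation mentioned in the abstract above, can be sketched as below; this is an assumption-laden illustration (a softmax classifier on random placeholder features with hypothetical fault labels), not the thesis implementation or its results.

```python
# Softmax classifier on learned features, judged by its confusion matrix.
# The data are random placeholders standing in for encoded TEP samples.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import confusion_matrix

n_features, n_classes = 32, 4      # assumed code size and number of fault classes
x = np.random.rand(800, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=800)

# A single softmax layer classifies the encoded features into fault classes.
clf = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(n_classes, activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
clf.fit(x, y, epochs=5, batch_size=64, verbose=0)

# Confusion matrix: rows are true fault classes, columns are predicted classes;
# the diagonal counts correct classifications.
y_pred = clf.predict(x).argmax(axis=1)
print(confusion_matrix(y, y_pred))
```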

    Table of Contents
    Acknowledgements
    Abstract (Chinese)
    Abstract (English)
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
      1.1  Background and Motivation
      1.2  Literature Review
        1.2.1  Regression Methods and Artificial Neural Networks
        1.2.2  Deep Learning
        1.2.3  Auto-Encoder
        1.2.4  Tennessee Eastman Process
      1.3  Research Objectives
      1.4  Research Framework
    Chapter 2  Theory and Model Description
      2.1  Artificial Neural Networks
        2.1.1  Processing Unit Model
        2.1.2  Neural Network Learning Process
        2.1.3  Backpropagation Algorithm
      2.2  Deep Neural Networks
      2.3  Deep Auto-Encoder
        2.3.1  Auto-Encoder
        2.3.2  Sparse Auto-Encoder
        2.3.3  Cost Function
        2.3.4  Softmax Classifier
      2.4  Definitions of Statistical Formulas
    Chapter 3  Results and Discussion
      3.1  Handwritten Digit Classification
        3.1.1  System Description
        3.1.2  Validating the Classification Performance of the Deep Auto-Encoder
        3.1.3  Testing the Deep Auto-Encoder's Ability to Learn New Classes
      3.2  Tennessee Eastman Process
        3.2.1  System Description
        3.2.2  Preprocessing of the Simulation Data
        3.2.3  Validating the Classification Performance of the Deep Auto-Encoder
        3.2.4  Testing the Deep Auto-Encoder's Ability to Learn New Classes
    Chapter 4  Conclusions
    Nomenclature
    References

