| Graduate Student | 何晉豪 HO, Ching-Hao |
| --- | --- |
| Thesis Title | 物理信息神經網路與深度運算子神經網路於非線性動態系統狀態觀察之應用 (States Observation of Nonlinear Dynamic Systems Using Physics Informed Neural Networks and Deep Operator Networks) |
| Advisors | 汪上曉 Wong, David Shan-Hill; 姚遠 Yao, Yuan |
| Oral Examination Committee | 陳榮輝; 康嘉麟 |
| Degree | Master (碩士) |
| Department | College of Engineering, Department of Chemical Engineering (工學院 化學工程學系) |
| Year of Publication | 2022 |
| Academic Year of Graduation | 110 |
| Language | Chinese |
| Number of Pages | 46 |
| Keywords (Chinese) | 狀態觀察, 參數回歸, 物理信息神經網路, 深度運算子神經網路 |
| Keywords (English) | state observation, parameter regression, physics-informed neural networks, deep operator networks |
In a chemical process, variables that can be obtained directly by measurement are observable variables, while variables that cannot be measured directly are state variables. Recovering the state variables through state observation gives us additional information about the process.

The goal of this research is to model a nonlinear ethane cracking system with physics-informed neural networks (PINNs), which embed known physical information into the model, and with deep operator networks (DeepONets), which train neural networks to act as operators, and then to use these models for dynamic prediction, parameter regression, and state observation of the system.

Both the physics-informed neural network and the deep operator network predict well in dynamic prediction and parameter regression. For state observation, however, the physics-informed neural network performs poorly on multiple-to-multiple prediction of the derivative values, whereas the deep operator network not only observes the state variables accurately but also uses the observed state variables to predict the integrated values accurately, with R² above 0.9 for every prediction. With this model we successfully perform state observation and dynamic prediction at the same time.
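To make the two modelling ideas in the abstract concrete, the following is a minimal sketch of a physics-informed neural network for a toy ODE system. It is not the thesis code: PyTorch, the first-order reaction A → B standing in for the ethane-cracking kinetics, the network sizes, and the unit loss weights are all assumptions chosen for illustration. The sketch only shows the two ingredients the abstract relies on, a data loss on the measured (observable) species and an ODE-residual loss that injects the known physics, with the rate constant left trainable so the same model also performs parameter regression.

```python
# Minimal PINN sketch (PyTorch). Hypothetical first-order reaction A -> B
# stands in for the ethane-cracking kinetics; not the thesis implementation.
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),          # outputs [C_A(t), C_B(t)]
        )
        # Unknown rate constant as a trainable parameter (parameter regression).
        self.log_k = nn.Parameter(torch.tensor(0.0))

    def forward(self, t):
        return self.net(t)

def losses(model, t_data, cA_data, t_col):
    # Data loss: only the observable variable C_A is measured.
    cA_pred = model(t_data)[:, 0:1]
    data_loss = torch.mean((cA_pred - cA_data) ** 2)

    # Physics loss: residual of dC/dt = f(C, k) at collocation points.
    t_col = t_col.requires_grad_(True)
    C = model(t_col)
    cA, cB = C[:, 0:1], C[:, 1:2]
    dcA = torch.autograd.grad(cA, t_col, torch.ones_like(cA), create_graph=True)[0]
    dcB = torch.autograd.grad(cB, t_col, torch.ones_like(cB), create_graph=True)[0]
    k = torch.exp(model.log_k)
    res_A = dcA + k * cA        # dC_A/dt = -k C_A
    res_B = dcB - k * cA        # dC_B/dt = +k C_A  (C_B is the unmeasured state)
    phys_loss = torch.mean(res_A ** 2) + torch.mean(res_B ** 2)
    return data_loss, phys_loss

# Toy training loop on synthetic measurements of the observable species.
torch.manual_seed(0)
t_data = torch.linspace(0, 2, 50).reshape(-1, 1)
cA_data = torch.exp(-1.5 * t_data)               # synthetic data, true k = 1.5
t_col = torch.linspace(0, 2, 200).reshape(-1, 1)

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    d, p = losses(model, t_data, cA_data, t_col)
    (d + p).backward()
    opt.step()

print("recovered k:", torch.exp(model.log_k).item())  # should approach 1.5
```

In the same hypothetical spirit, a minimal deep operator network (DeepONet) sketch: a branch net encodes an input function sampled at fixed sensor points (for example, a measured trajectory of an observable variable) and a trunk net encodes the query time; their inner product approximates the operator output, such as an unmeasured state variable at that time. The sensor count, latent width, and layer sizes below are illustrative assumptions, not the architecture used in the thesis.

```python
# Minimal DeepONet sketch (PyTorch): branch net for the sampled input function,
# trunk net for the query coordinate, dot product as the operator output.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m_sensors=20, p=40, hidden=64):
        super().__init__()
        self.branch = nn.Sequential(           # encodes u(t_1), ..., u(t_m)
            nn.Linear(m_sensors, hidden), nn.Tanh(),
            nn.Linear(hidden, p),
        )
        self.trunk = nn.Sequential(            # encodes the query time t
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, p),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, t_query):
        b = self.branch(u_sensors)             # (batch, p)
        tr = self.trunk(t_query)               # (batch, p)
        return (b * tr).sum(dim=1, keepdim=True) + self.bias

# Example shapes: 8 input functions sampled at 20 sensors, one query time each.
net = DeepONet()
u = torch.randn(8, 20)
t = torch.rand(8, 1)
print(net(u, t).shape)    # torch.Size([8, 1])
```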