
Student: Tang, Yi-Xuan (唐翊瑄)
Title: Physics informed neural network for solving nonlocal boundary layer problems (以物理訊息神經網路來解非局部邊界層問題)
Advisors: Lee, Chiun-Chang (李俊璋); Lin, Te-Sheng (林得勝)
Committee members: 陳人豪; 曾昱豪
Degree: Master
Department: Institute of Computational and Modeling Science, College of Science
Year of publication: 2024
Academic year of graduation: 112
Language: English
Number of pages: 34
Chinese keywords: 物理信息神經網絡, 非局部邊界層問題
Foreign-language keywords: PINNs, nonlocal boundary layer problem
Access counts: 43 views, 0 downloads
  • Abstract (translated from Chinese): In this thesis, we construct a neural network model to predict solutions of a nonlinear diffusion equation containing a singular perturbation parameter. Starting from a simple linear equation, we gradually extend to implicit equations whose boundary terms contain the unknown solution. We train the model as a physics-informed neural network and compare the results with exact solutions. We find that using Chebyshev points as training points solves the equations more efficiently. Finally, we extend this method to nonlinear equations for which no exact solution is available.


    In this thesis, we construct a neural network model to solve a singularly perturbed nonlinear diffusion equation. We start by solving linear equations and gradually extend our methodology to problems with implicit boundary conditions, that is, boundary conditions that contain the unknown solution. We solve these problems using a physics-informed neural network and compare the results with exact solutions. We find that using Chebyshev nodes as the training points significantly improves training efficiency. Finally, we extend the approach to nonlinear equations for which exact solutions are not known.
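    As a minimal illustration of the training-point choice described above (a sketch, not the thesis code), the following generates Chebyshev–Gauss–Lobatto nodes on an interval. Their clustering near the endpoints is what makes them well suited as collocation points for problems with boundary layers; the function name and signature are hypothetical.

    ```python
    import numpy as np

    def chebyshev_nodes(n, a=-1.0, b=1.0):
        """Return n+1 Chebyshev-Gauss-Lobatto nodes mapped to [a, b].

        The nodes cluster near the endpoints, which helps a PINN
        resolve sharp boundary layers with fewer training points.
        """
        k = np.arange(n + 1)
        x = np.cos(np.pi * k / n)                # nodes on [-1, 1], decreasing
        return 0.5 * (a + b) + 0.5 * (b - a) * x[::-1]  # map to [a, b], increasing

    nodes = chebyshev_nodes(10, 0.0, 1.0)
    # the spacing between adjacent nodes is smallest near the endpoints
    ```

    Compared with a uniform grid of the same size, the near-boundary spacing shrinks like O(1/n^2) rather than O(1/n), which is the efficiency gain the abstract refers to.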

    摘要 -----i
    Abstract -----ii
    Acknowledgements
    1 Introduction -----1
    2 Nonlocal boundary layer problems -----3
    2.1 Preliminaries and the main difficulty -----3
    2.2 Linear equation with implicit boundary conditions -----5
    2.3 Nonlinear equation with implicit boundary conditions -----7
    3 Physics informed neural networks -----9
    3.1 Introduction of Physics informed neural networks -----9
    3.2 Neural network and Loss function -----11
    3.2.1 Linear equation with Dirichlet boundary conditions -----11
    3.2.2 Linear equation with implicit boundary conditions -----12
    3.2.3 Nonlinear equation with implicit boundary conditions -----14
    4 Numerical experiments -----17
    4.1 Numerical experiments -----17
    4.1.1 Example 1 -----17
    4.1.2 Example 2 -----22
    4.1.3 Example 3 -----25
    5 Conclusion -----31
    References -----33

    [1] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, 2019.
    [2] S. Cuomo, V. S. Di Cola, F. Giampaolo, G. Rozza, M. Raissi, and F. Piccialli, “Scientific machine learning through physics-informed neural networks: Where we are and what’s next,” Journal of Scientific Computing, vol. 92, no. 3, p. 88, 2022.
    [3] J. Sirignano and K. Spiliopoulos, “DGM: A deep learning algorithm for solving partial differential equations,” Journal of Computational Physics, vol. 375, pp. 1339–1364, 2018.
    [4] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis, and P. Perdikaris, “Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data,” Journal of Computational Physics, vol. 394, pp. 56–81, 2019.
    [5] C.-C. Lee, “Uniqueness and asymptotics of singularly perturbed equations involving implicit boundary conditions,” Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas, vol. 117, no. 1, p. 51, 2023.
    [6] C.-C. Lee, S.-H. Ma, and M. Mizuno, “On the uniqueness of linear convection–diffusion equations with integral boundary conditions,” Comptes Rendus. Mathématique, vol. 361, pp. 191–206, 2023.
    [7] C.-C. Lee, Z.-A. Wang, and W. Yang, “Boundary-layer profile of a singularly perturbed non-local semi-linear problem arising in chemotaxis,” Nonlinearity, vol. 33, p. 5111, 2020.
    [8] O. I. Abiodun, A. Jantan, A. E. Omolara, K. V. Dada, N. A. Mohamed, and H. Arshad, “State-of-the-art in artificial neural network applications: A survey,” Heliyon, vol. 4, no. 11, 2018.
    [9] D. F. Specht, “A general regression neural network,” IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 568–576, 1991.
    [10] V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, “Efficient processing of deep neural networks: A tutorial and survey,” Proceedings of the IEEE, vol. 105, no. 12, pp. 2295–2329, 2017.
    [11] L. Alzubaidi, J. Zhang, A. J. Humaidi, A. Al-Dujaili, Y. Duan, O. Al-Shamma, J. Santamaría, M. A. Fadhel, M. Al-Amidie, and L. Farhan, “Review of deep learning: concepts, CNN architectures, challenges, applications, future directions,” Journal of Big Data, vol. 8, pp. 1–74, 2021.
    [12] L. Medsker and L. C. Jain, “Recurrent neural networks: design and applications,” 1999.
    [13] A. D. Rasamoelina, F. Adjailia, and P. Sinčák, “A review of activation function for artificial neural network,” in 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), pp. 281–286, IEEE, 2020.
    [14] A. F. Agarap, “Deep learning using rectified linear units (ReLU),” arXiv preprint arXiv:1803.08375, 2018.
    [15] S. Narayan, “The generalized sigmoid activation function: Competitive supervised learning,” Information Sciences, vol. 99, no. 1–2, pp. 69–82, 1997.
    [16] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
    [17] D. C. Liu and J. Nocedal, “On the limited memory BFGS method for large scale optimization,” Mathematical Programming, vol. 45, no. 1, pp. 503–528, 1989.
    [18] P. Moritz, R. Nishihara, and M. Jordan, “A linearly-convergent stochastic L-BFGS algorithm,” in Artificial Intelligence and Statistics, pp. 249–258, PMLR, 2016.
