
Author: 林亮宇 (Lin, Liang-Yu)
Title: 利用神經網路求解Poisson方程之格林函數
       (Solving the Green's Function of the Poisson Equation by Neural Networks)
Advisor: 朱家杰 (Chu, Chia-Chieh)
Committee members: 蔡志強 (Tsai, Je-Chiang); 薛名成 (Shiue, Ming-Cheng)
Degree: Master
Department: Department of Mathematics, College of Science
Year of publication: 2024
Academic year of graduation: 112 (2023-2024)
Language: English
Number of pages: 26
Keywords (Chinese): 格林函數、深度學習、物理信息神經網絡
Keywords (English): Green's function, deep learning, physics-informed neural networks
Abstract:
Partial differential equations have applications in a wide range of fields, so solving them is an important problem. In recent decades, the rapid development of computer technology has allowed numerical methods that once existed only in theory to flourish, and deep learning is one of them. In this thesis we combine tools from deep learning and numerical analysis to solve a classical differential equation, the Poisson equation. A standard technique for solving the Poisson equation is the Green's function: once the Green's function corresponding to the equation is known, every problem of the same type can be solved easily, so the task shifts to finding that Green's function.
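To make the role of the Green's function concrete, here is the standard textbook formulation for the homogeneous Dirichlet problem, included for reference (the domain \(\Omega\) and the notation are generic, not taken from the thesis). For \(-\Delta u = f\) in \(\Omega\) with \(u = 0\) on \(\partial\Omega\), the Green's function \(G(x, y)\) solves, for each fixed source point \(y \in \Omega\),

\[
-\Delta_x G(x, y) = \delta(x - y) \quad \text{in } \Omega, \qquad G(x, y) = 0 \quad \text{for } x \in \partial\Omega,
\]

and every right-hand side \(f\) is then handled by a single integration:

\[
u(x) = \int_\Omega G(x, y)\, f(y)\, \mathrm{d}y.
\]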
In this thesis we use artificial neural networks to learn and approximate the target function. The core idea derives from the physics-informed neural networks (PINNs) of Raissi et al. [12]. By approximating the Dirac delta function, or by exploiting its defining properties, we bypass obstacles that a plain physics-informed neural network cannot overcome, and we demonstrate the feasibility of our method.
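As an illustration of the delta-approximation idea, the following sketch trains a small PINN to approximate the Green's function of the one-dimensional problem -u''(x) = δ(x - s) on (0, 1) with u(0) = u(1) = 0, replacing the Dirac delta by a narrow Gaussian. This is a minimal reconstruction of the general approach, not the thesis code: the network size, the Gaussian width eps, the sampling counts, and the equal loss weights are all illustrative assumptions.

    # Minimal PINN sketch for G(x, s) of -u'' = delta(x - s) on (0, 1),
    # u(0) = u(1) = 0, with the delta smoothed into a narrow Gaussian.
    import torch

    torch.manual_seed(0)

    # Feedforward network taking (x, s) and returning an approximation of G(x, s).
    model = torch.nn.Sequential(
        torch.nn.Linear(2, 50), torch.nn.Tanh(),
        torch.nn.Linear(50, 50), torch.nn.Tanh(),
        torch.nn.Linear(50, 1),
    )

    def G(x, s):
        return model(torch.cat([x, s], dim=1))

    def delta_eps(r, eps=0.05):
        # Narrow Gaussian standing in for the Dirac delta.
        return torch.exp(-r ** 2 / (2 * eps ** 2)) / (eps * (2 * torch.pi) ** 0.5)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(5000):
        # Interior collocation points (x, s) sampled uniformly in (0, 1)^2.
        x = torch.rand(256, 1, requires_grad=True)
        s = torch.rand(256, 1)
        g = G(x, s)
        # Automatic differentiation supplies G_x and G_xx for the PDE residual.
        g_x = torch.autograd.grad(g, x, torch.ones_like(g), create_graph=True)[0]
        g_xx = torch.autograd.grad(g_x, x, torch.ones_like(g_x), create_graph=True)[0]
        pde_loss = ((-g_xx - delta_eps(x - s)) ** 2).mean()
        # Homogeneous Dirichlet boundary conditions: G(0, s) = G(1, s) = 0.
        sb = torch.rand(64, 1)
        bc_loss = (G(torch.zeros_like(sb), sb) ** 2).mean() + \
                  (G(torch.ones_like(sb), sb) ** 2).mean()
        loss = pde_loss + bc_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

Once trained, a solution of -u'' = f for any right-hand side f follows from a quadrature of the learned kernel, e.g. u(x) ≈ (1/n) Σ_j G(x, y_j) f(y_j) over uniform nodes y_j in (0, 1), which is the practical payoff of learning G once rather than retraining a PINN for every f.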

Table of Contents:
Abstract (Chinese)
Abstract
Contents
List of Figures
List of Tables
1 Introduction
2 Background
  2.1 Feedforward neural networks
  2.2 Physics-informed neural networks
3 Method
  3.1 Method 1: The approximation of the Dirac delta function
  3.2 Method 2: Discontinuity Capturing Shallow Neural Network
4 Numerical results
  4.1 Model parameters setting
  4.2 The Poisson equation
    4.2.1 Effect of the symmetric property
    4.2.2 Effect of the deployment of training data
    4.2.3 Test on Method 1
  4.3 The variable coefficient Poisson equation
5 Conclusion
Bibliography

Bibliography:
[1] Andrew R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.
[2] T. Chen and H. Chen. Approximations of continuous functionals by neural networks with application to dynamic systems. IEEE Transactions on Neural Networks, 4(6):910–918, 1993.
[3] Tianping Chen and Hong Chen. Approximation capability to functions of several variables, nonlinear functionals, and operators by radial basis function neural networks. IEEE Transactions on Neural Networks, 6(4):904–910, 1995.
[4] Tianping Chen and Hong Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911–917, 1995.
[5] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4):303–314, 1989.
[6] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. Adaptive Computation and Machine Learning. The MIT Press, Cambridge, Massachusetts, 2016.
[7] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991.
[8] Wei-Fan Hu, Te-Sheng Lin, and Ming-Chih Lai. A discontinuity capturing shallow neural network for elliptic interface problems. Journal of Computational Physics, 469:111576, 2022.
[9] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to PDEs. Journal of Machine Learning Research, 24(89):1–97, 2023.
[10] Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218–229, March 2021.
[11] H. N. Mhaskar and Nahmwoo Hahm. Neural networks for functional approximation and system identification. Neural Computation, 9(1):143–159, 1997.
[12] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
[13] Fabrice Rossi and Brieuc Conan-Guez. Functional multi-layer perceptron: a non-linear tool for functional data analysis. Neural Networks, 18(1):45–60, 2005.
[14] Franco Scarselli and Ah Chung Tsoi. Universal approximation using feedforward neural networks: A survey of some existing methods, and some new results. Neural Networks, 11(1):15–37, 1998.
[15] Lloyd N. Trefethen. Spectral Methods in MATLAB. Society for Industrial and Applied Mathematics, 2000.
[16] Chenxi Wu, Min Zhu, Qinyang Tan, Yadhu Kartha, and Lu Lu. A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 403:115671, 2023.
