
Graduate student: Chou, Yi-En (周伊恩)
Thesis title: Impact of Loss Weight and Model Complexity on Physics-Informed Neural Networks for Computational Fluid Dynamics
Advisor: Lin, Chao-An (林昭安)
Committee members: Chen, Ching-Yao (陳慶耀); Chern, Ming-Jyh (陳明志)
Degree: Master
Department: College of Engineering - Department of Power Mechanical Engineering
Year of publication: 2023
Graduating academic year: 111 (ROC calendar)
Language: English
Number of pages: 122
Chinese keywords: physics-informed neural networks, computational fluid dynamics, deep learning, loss weight, model complexity
English keywords: PINN, CFD, deep learning, loss weight, model complexity
  • This study proposes a method for determining the loss weights of physics-informed
    neural networks (PINN) by balancing the orders of magnitude of the different loss
    components. Dimensional analysis is performed on the components of the loss
    function, and the loss-weight values are determined from the magnitudes of those
    components. In the past literature, loss weights and model complexity were treated
    as hyper-parameters that required manual tuning and were rarely examined, making
    results difficult to reproduce. However, different loss weights and different model
    complexities significantly affect the computational performance of PINN. This
    thesis proposes two schemes for setting the loss weights. The first scheme
    considers only quantifiable parameters, while the second considers both
    quantifiable and non-quantifiable parameters. With the second weighting scheme,
    PINN stably computes results close to the numerical solutions across different
    physical problems, including problems governed by the heat conduction equation,
    the convection-diffusion equation, and the Navier-Stokes equations. In contrast,
    with the first scheme, or with equal weights assigned to all loss components,
    PINN fails to stably compute results close to the numerical solutions.


    This study proposes a scheme for determining the loss weights of Physics-Informed Neural Networks (PINN) by balancing the magnitudes of the different loss components. The study conducts dimensional analysis on the loss components and determines the values of the loss weights from the magnitudes of these components. In the past literature, loss weights and model complexity were treated as hyper-parameters that required manual tuning and were rarely explored, making it difficult to reproduce results. However, different loss weights and model complexities can have a significant impact on the computational performance of PINN. This thesis presents two schemes for setting the loss weights. The first scheme considers only quantifiable parameters, while the second scheme considers both quantifiable and non-quantifiable parameters. Adopting the second weighting scheme, PINN performs well in various physical problems, including those governed by the heat conduction equation, the convection-diffusion equation, and the Navier-Stokes equations. However, when using the first scheme, or when assigning equal loss weights to all loss components, PINN fails to stably compute results close to the target values.
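    The magnitude-balancing idea in the abstract can be sketched generically: pick each loss weight as the inverse of its component's order of magnitude, so every weighted term contributes at O(1). The function `balance_loss_weights` and the example values below are illustrative assumptions, not the thesis's exact dimensional-analysis scheme.

    ```python
    import math

    def balance_loss_weights(loss_components):
        """Assign each loss component a weight inversely proportional to its
        order of magnitude, so all weighted terms contribute comparably.

        `loss_components` maps component names to representative magnitudes
        (e.g. loss values observed at initialization).
        """
        weights = {}
        for name, value in loss_components.items():
            # Order of magnitude of the raw component, e.g. 250.0 -> 10**2.
            magnitude = 10 ** math.floor(math.log10(abs(value)))
            # The inverse of that magnitude rescales the term to O(1).
            weights[name] = 1.0 / magnitude
        return weights

    # Example (hypothetical values): a PDE residual that dwarfs the boundary loss.
    raw = {"pde": 250.0, "boundary": 0.04}
    w = balance_loss_weights(raw)
    total = sum(w[k] * raw[k] for k in raw)  # each weighted term is now O(1)
    ```

    In a PINN training loop, such weights would multiply the PDE-residual and boundary/initial-condition losses before summation; recomputing them periodically keeps the terms balanced as training progresses.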

    Contents
    Abstract (Chinese) I
    Abstract II
    Contents III
    List of Figures V
    List of Tables XXIII
    List of Algorithms XXV
    1 Introduction 1
      1.1 Deep learning 1
        1.1.1 Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) 1
        1.1.2 Deep neural networks 3
      1.2 Literature survey 7
        1.2.1 Machine Learning for Computational Fluid Dynamics 8
        1.2.2 Physics-Informed Neural Networks (PINN) 10
      1.3 Summary 14
    2 Methodology 16
      2.1 Define loss function L for the PINNs: numerical differentiation (CDS) 17
      2.2 Determine loss weight λ for the PINNs 19
      2.3 Method to increase model complexity for PINNs 22
      2.4 Benchmark 23
      2.5 Framework for deep learning: PyTorch 30
      2.6 Integration of numerical solver and learning-based method 30
    3 Experimental study 32
      3.1 Solution obtained by PINN with varying loss weight at h = 1/10, 1/30 and 1/50 32
        3.1.1 Conduction 33
        3.1.2 Convection-and-diffusion 42
        3.1.3 Lid-driven cavity 68
      3.2 Model complexity and solution obtained by PINN with varying loss weight at h = 1/100 and 1/150 102
        3.2.1 Model complexity 103
        3.2.2 Solution obtained by PINN with varying loss weight at h = 1/100 and 1/150 108
      3.3 Computation efficiency of C++ and Python 112
    4 Conclusion 114
    Bibliography 116

