Graduate Student: 李季恩
Thesis Title: Application of Kolmogorov-Arnold and Deep Neural Networks for Modeling Gibbs Free Energy of Mixing Modified by Global Renormalization Group Theory
Advisors: 汪上曉 (WONG, SHANG-HSIAO); 姚遠 (YAO, YUAN)
Committee Members: 林祥泰 (LIN, SHIANG-TAI); 康嘉麟 (KANG, JIA-LIN)
Degree: Master
Department: College of Engineering, Department of Chemical Engineering
Year of Publication: 2025
Academic Year of Graduation: 113
Language: English
Number of Pages: 64
Keywords: Thermodynamics, Liquid-liquid equilibrium, Deep neural network, Machine learning, Global renormalization group theory, Critical properties, Kolmogorov-Arnold network
Traditional local composition models are commonly used to describe phase-separation behavior in multi-component mixtures. However, these models have limited predictive ability near the critical point. Global Renormalization Group Theory (GRGT) transforms a classical mean-field theory into a form that incorporates long-range fluctuation effects. The method can be applied to various local composition models so that the correlation length follows nonclassical scaling laws as the temperature approaches the critical point described by the mean-field theory; that is, near the critical point the correlation length varies with temperature in a way that deviates from the classical prediction. However, GRGT does not provide a closed-form expression for the Gibbs free energy of mixing as a function of temperature and mole fraction.
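For reference, the nonclassical scaling alluded to here can be sketched as follows; the exponent values are standard literature values for the mean-field and 3D Ising universality classes, not figures taken from this thesis:

```latex
\xi \sim \xi_0 \left| \frac{T - T_c}{T_c} \right|^{-\nu},
\qquad
\nu_{\text{mean-field}} = \tfrac{1}{2},
\qquad
\nu_{\text{3D Ising}} \approx 0.630
```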
Neural networks are known to be universal approximators of any continuous functions. Hence, it is expected that they can be used to develope a surrogate differential model for Gibbs free energy of mixing modified by GRGT.Multi-layer perceptron (MLP) network is perhaps the most common form of neural network used in function approximation. Recently, a special neural network known as the Kolmogorov-Arnold Network (KAN) has also been proposed. In this study, both KAN and MLP were used to approximate the excess Gibbs free energy data obtained after applying the GRGT process. These surrogates were applied to phase equilibrium calculations.
Two examples, a two-suffix Margules model and a non-random-two-liquid (NRTL) model were used to demonstrate the feasibility of this procedure. Although the excess Gibbs free energy can be closely approximated by the surrogate model, this does not inherently ensure that its temperature and compositional derivatives are equally well captured. Since the surrogate excess Gibbs free energy model has to be used in phase equilibrium calculations, it is necessary to guarantee good approximation of their compositional and temperature derivatives.
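For the first test case, the two-suffix Margules model is simple enough to state in a few lines. The sketch below (not the thesis implementation, with a hypothetical dimensionless parameter `A`) gives the standard expressions GE/RT = A·x1·x2, ln γ1 = A·x2², ln γ2 = A·x1², and verifies the identity GE/RT = x1·ln γ1 + x2·ln γ2:

```python
# Two-suffix Margules model for a binary mixture.
A = 2.5  # hypothetical dimensionless parameter (A > 2 gives phase splitting)

def gibbs_excess(x1):
    """Dimensionless excess Gibbs free energy GE/RT = A*x1*x2."""
    return A * x1 * (1.0 - x1)

def ln_gammas(x1):
    """Activity coefficients: ln(gamma1) = A*x2^2, ln(gamma2) = A*x1^2."""
    x2 = 1.0 - x1
    return A * x2 ** 2, A * x1 ** 2

# Check the identity GE/RT = x1*ln(gamma1) + x2*ln(gamma2).
x1 = 0.3
lg1, lg2 = ln_gammas(x1)
print(abs(gibbs_excess(x1) - (x1 * lg1 + (1.0 - x1) * lg2)) < 1e-12)
```

A surrogate trained on such GE data must reproduce not only these values but also their derivatives, since the activity coefficients are themselves composition derivatives of GE.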
Moreover, KAN provides a better approximation of the excess Gibbs free energy when training data are scarce, but its derivatives are inferior to those of the MLP, which also gives better Gibbs-Duhem consistency. Note that piecewise-linear activation functions such as the rectified linear unit (ReLU) cannot be used in this work, since their derivatives are piecewise constant and their second derivatives are identically zero.
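The Gibbs-Duhem consistency mentioned above can be sketched as a residual test: at constant temperature and pressure a thermodynamically consistent binary model satisfies x1·d(ln γ1)/dx1 + x2·d(ln γ2)/dx1 = 0. The snippet below (an illustration, not the thesis code) evaluates this residual for the two-suffix Margules activity coefficients with a hypothetical parameter; for a neural-network surrogate the same residual would be computed from the network's derivatives.

```python
A = 2.5  # hypothetical dimensionless Margules parameter

def ln_g1(x1):
    return A * (1.0 - x1) ** 2

def ln_g2(x1):
    return A * x1 ** 2

def gibbs_duhem_residual(x1, eps=1e-6):
    """x1*d(ln g1)/dx1 + x2*d(ln g2)/dx1 via central finite differences."""
    d1 = (ln_g1(x1 + eps) - ln_g1(x1 - eps)) / (2 * eps)
    d2 = (ln_g2(x1 + eps) - ln_g2(x1 - eps)) / (2 * eps)
    return x1 * d1 + (1.0 - x1) * d2

# For an exactly consistent model the residual vanishes at every composition.
print(all(abs(gibbs_duhem_residual(x)) < 1e-8 for x in (0.1, 0.3, 0.5, 0.7, 0.9)))
```

For a surrogate, the magnitude of this residual over the composition range is one quantitative measure of how well the learned model respects thermodynamic consistency.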