
Author: Ni, Da-Jyun (倪大鈞)
Title: Application of Generative Prediction to Build Digital Twin of Chemical Process Operation (生成式預測在建立化工製程數位孿生之應用)
Advisors: WONG, SHANG-HSIAO (汪上曉); YAO, YUAN (姚遠)
Committee: WANG, SHENG-CHIEH (王聖潔); KANG, JIA-LIN (康嘉麟)
Degree: Master
Department: College of Engineering - Department of Chemical Engineering
Year of publication: 2025
Graduation academic year: 113
Language: English
Pages: 99
Keywords: Digital Twin, Deep Learning, Long-term Prediction, Generative Prediction, Sequence-to-Sequence Model, Transformer Model, Autoregressive Training Loss


    With the rise of digital transformation and smart manufacturing, Digital Twin
    technology has become a crucial tool for optimizing chemical processes and reducing
carbon emissions. While traditional First-Principles Models (FPMs) for digital twins
offer physical consistency and extrapolation capability, their high development
barriers and modeling costs have spurred interest in Data-Driven Models (DDMs)
as an emerging alternative. However, DDMs still face challenges in long-term
prediction and in maintaining physical consistency.
This study uses a Propylene Recovery Unit (PRU) as a case study. We used the
Aspen Dynamics dynamic simulation platform to generate a representative control
dataset, and then developed three deep learning models: a Sequence-to-Sequence
(StS) model, a Sequence-to-Sequence model with a memory layer, and a Transformer
model. To enhance their generative prediction capabilities, we introduced an
Autoregressive Training (AT) loss. Model prediction performance was evaluated
through four tests, using MAE/ȳ as the key performance indicator.
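The abstract names MAE/ȳ without spelling out its formula; a minimal sketch, assuming the standard definition of mean absolute error normalized by the mean of the measured signal ȳ:

```python
import numpy as np

def normalized_mae(y_true, y_pred):
    """MAE/ȳ: mean absolute error divided by the mean of the true
    signal, giving a scale-free, percentage-style error score."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    return mae / np.mean(y_true)

# Hypothetical example: a process trajectory and an imperfect prediction
y = [100.0, 102.0, 98.0, 100.0]
yhat = [101.0, 101.0, 99.0, 99.0]
print(f"MAE/ȳ = {normalized_mae(y, yhat):.2%}")  # → MAE/ȳ = 1.00%
```

Normalizing by ȳ lets errors on variables with very different scales (temperatures, flow rates, compositions) be compared on one axis, which matches the thesis's use of a single threshold (3.2%) across tests.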
Experimental results show that the Sequence-to-Sequence (StS) model performed
the worst on both open-loop data (without feedback control) and closed-loop
data (with feedback control loops): its MAE/ȳ error was consistently above
3.2% across all tests. Adding a memory layer improved its predictive ability,
but the gain was limited, whereas introducing the Autoregressive Training (AT)
loss significantly improved the StS model's long-term prediction capabilities.
Thanks to its powerful self-attention mechanism, the Transformer model already
performed comparably to the StS model with a memory layer, and introducing the
AT loss further boosted its stability and accuracy. Overall, the AT loss
enhanced the models' long-term generative prediction capabilities, demonstrating
its significant potential for building digital twin systems with physical
consistency and generative power.
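The Autoregressive Training loss above is applied to the StS and Transformer models in the thesis; as a minimal illustration of the idea only (not the thesis implementation), the sketch below scores a scalar one-step model on its own multi-step rollout, so the loss penalizes generative, closed-loop behavior rather than one-step teacher-forced error:

```python
import numpy as np

def rollout_loss(w, x0, targets):
    """Autoregressive (generative) training loss for a toy one-step
    model x_{t+1} = w * x_t: roll the model forward by feeding its own
    predictions back in, then average the squared error against the
    ground-truth trajectory."""
    preds, x = [], x0
    for _ in targets:
        x = w * x          # next step consumes the model's own output
        preds.append(x)
    return float(np.mean((np.array(preds) - np.asarray(targets)) ** 2))

# Ground truth generated iteratively by x_{t+1} = 0.9 * x_t from x0 = 1.0
truth, x = [], 1.0
for _ in range(5):
    x *= 0.9
    truth.append(x)

print(rollout_loss(0.9, 1.0, truth))  # exact model → 0.0
print(rollout_loss(0.8, 1.0, truth))  # wrong dynamics → positive loss
```

Because each step consumes the previous prediction, errors compound over the horizon exactly as they do at deployment time, which is why this objective improves long-term prediction where one-step training does not.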
    This research confirms that appropriate training strategies and architectural
    design can effectively improve the practicality and accuracy of data-driven models in
    chemical process digital twin applications, providing a robust technical foundation
    for future smart process control and carbon reduction technologies.
