NTU Theses and Dissertations Repository › College of Engineering › Department of Civil Engineering
Please use this identifier to cite or link to this item: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100986
Title: 基於 MLP 與 KAN 模型架構的物理資訊驅動神經網路於多維問題分析之研究
Physics-Informed Neural Networks With MLP-Based and KAN-Based Models for Analysing Multi-Dimensional Problems
Authors: 林晉宇
Chin-Yu Lin
Advisor: 陳俊杉
Chuin-Shan Chen
Keyword: 物理資訊驅動神經網路,偏微分方程式,Kolmogorov–Arnold 神經網路,基於 Chebyshev 多項式的 Kolmogorov–Arnold 神經網路
Physics-Informed Neural Networks, Partial Differential Equations, Kolmogorov–Arnold Networks, Chebyshev Polynomial-Based Kolmogorov–Arnold Networks
Publication Year: 2025
Degree: Master's (碩士)
Abstract: 物理資訊驅動神經網路(Physics-Informed Neural Networks, PINNs)已成為求解偏微分方程(Partial Differential Equations, PDEs)的有力工具,具有結合物理法則與資料驅動模型的彈性。然而,傳統多層感知器(Multilayer Perceptron, MLP)在神經元上採用固定的激勵函數;相較之下,Kolmogorov–Arnold 網路(KAN)則在邊上引入可學習的激勵函數,以探索不同網路架構的可能性。本論文系統性地比較了三種代表性架構——MLP、KAN 與 Chebyshev 多項式為基礎的 KAN(Chebyshev-KAN),並於二維與三維的 Poisson 方程與 Navier–Cauchy 方程上進行實驗,針對準確度、收斂性、參數效率與計算成本進行分析與比較。

實驗結果顯示,MLP 在低複雜度問題中能提供穩定的準確度與泛化能力,但在高複雜度情境下則需透過增加網路寬度或深度來調整架構。相較之下,KAN 模型透過調整內部網格架構展現了更強的近似能力與收斂性,但同時伴隨較高的訓練時間與 GPU 記憶體需求。Chebyshev-KAN 則展現了兼具實用性與效率的特點,不僅保留了 KAN 的準確度優勢,還大幅減少了參數數量與計算成本,因而在效率與準確度之間達成有效平衡。
Physics-Informed Neural Networks (PINNs) have become a promising framework for solving partial differential equations (PDEs), offering the flexibility to integrate physical laws with data-driven models. Whereas a conventional Multilayer Perceptron (MLP) applies fixed activation functions at its neurons, a Kolmogorov–Arnold Network (KAN) places learnable activation functions on its edges, opening up alternative architectural choices. This thesis systematically investigates three representative architectures, the MLP, the KAN, and the Chebyshev Polynomial-Based KAN (Chebyshev-KAN), on two- and three-dimensional Poisson and Navier–Cauchy equations, comparing their accuracy, convergence, parameter efficiency, and computational cost.
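As a minimal illustration of the PINN idea described above (not the thesis's actual implementation, which trains neural networks with automatic differentiation), the loss for a 1D Poisson problem -u''(x) = f(x) can be sketched in plain Python, with the second derivative approximated by a central finite difference; the test problem and step size are illustrative choices:

```python
import math

def pde_residual(u, f, x, h=1e-4):
    """PINN-style residual of the 1D Poisson equation -u''(x) = f(x),
    with u'' approximated by a central finite difference."""
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h)
    return -u_xx - f(x)

def pinn_loss(u, f, xs, bc):
    """The usual PINN loss: mean squared PDE residual over collocation
    points xs, plus squared boundary-condition mismatch over (x, g) pairs."""
    interior = sum(pde_residual(u, f, x) ** 2 for x in xs) / len(xs)
    boundary = sum((u(x) - g) ** 2 for x, g in bc)
    return interior + boundary

# Example: -u'' = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0,
# whose exact solution u = sin(pi x) should drive the loss to ~0.
u_exact = lambda x: math.sin(math.pi * x)
f = lambda x: math.pi ** 2 * math.sin(math.pi * x)
xs = [i / 20 for i in range(1, 20)]
loss = pinn_loss(u_exact, f, xs, bc=[(0.0, 0.0), (1.0, 0.0)])
```

In an actual PINN, `u` would be a neural network whose parameters are optimized to minimize this loss, and the derivatives would come from automatic differentiation rather than finite differences.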

The experimental results show that MLPs provide stable accuracy and generalization on low-complexity problems, but in high-complexity settings they must be widened or deepened. KAN models, by contrast, achieve stronger approximation capability and improved convergence when their internal grid is refined, yet incur higher training time and GPU memory usage. Chebyshev-KANs emerge as a practical alternative: they preserve the accuracy advantages of KANs while substantially reducing parameter counts and computational cost, striking an effective balance between efficiency and accuracy.
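The Chebyshev-KAN's parameter savings come from replacing each edge's spline grid with a short Chebyshev expansion. A single edge activation can be sketched as below; this is a framework-free illustration under common conventions (tanh squashing into [-1, 1] and the three-term Chebyshev recurrence), not code taken from the thesis:

```python
import math

def chebyshev_edge(x, coeffs):
    """One Chebyshev-KAN edge activation: squash x into [-1, 1] with
    tanh, then evaluate sum_k coeffs[k] * T_k(tanh(x)) using the
    recurrence T_k(t) = 2t*T_{k-1}(t) - T_{k-2}(t).
    coeffs are the edge's learnable weights, one per basis function."""
    t = math.tanh(x)
    t_prev, t_curr = 1.0, t          # T_0(t) = 1, T_1(t) = t
    out = coeffs[0] * t_prev
    if len(coeffs) > 1:
        out += coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * t * t_curr - t_prev
        out += c * t_curr
    return out
```

A layer with `n_in` inputs, `n_out` outputs, and polynomial degree `d` then carries `n_in * n_out * (d + 1)` coefficients, which for small `d` is far fewer parameters than a per-edge spline grid.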
URI: http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100986
DOI: 10.6342/NTU202504556
Fulltext Rights: Authorized (open access worldwide)
Embargo lift date: 2025-11-27
Appears in Collections: Department of Civil Engineering (土木工程學系)

Files in This Item:
File: ntu-114-1.pdf · Size: 8.89 MB · Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
