Please use this Handle URI to cite this item:
http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100986

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | 陳俊杉 | zh_TW |
| dc.contributor.advisor | Chuin-Shan Chen | en |
| dc.contributor.author | 林晉宇 | zh_TW |
| dc.contributor.author | Chin-Yu Lin | en |
| dc.date.accessioned | 2025-11-26T16:21:48Z | - |
| dc.date.available | 2025-11-27 | - |
| dc.date.copyright | 2025-11-26 | - |
| dc.date.issued | 2025 | - |
| dc.date.submitted | 2025-10-09 | - |
| dc.identifier.citation | [1] T. Archibald, C. Fraser, and I. Grattan-Guinness. The history of differential equations, 1670–1950. Oberwolfach Reports, 1(4):2729–2794, 2004.
[2] V. I. Arnold. On functions of three variables. Doklady Akademii Nauk SSSR, 114:679–681, 1957.
[3] J. Braun and M. Griebel. On a constructive proof of Kolmogorov's superposition theorem. Constructive Approximation, 30(3):653–675, 2009.
[4] C. Cesarano and P. E. Ricci. Orthogonality properties of the pseudo-Chebyshev functions (variations on a Chebyshev's theme). Mathematics, 7(2):180, 2019.
[5] P. L. Chebyshev. Théorie des mécanismes connus sous le nom de parallélogrammes. Mémoires des Savants Étrangers Présentés à l'Académie de Saint-Pétersbourg, 7:539–586, 1854. In French.
[6] R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, volume 31, pages 6571–6583, 2018.
[7] S. S. Dragomir. A survey on Cauchy–Bunyakovsky–Schwarz type discrete inequalities. Journal of Inequalities in Pure and Applied Mathematics, 4(3): Paper No. 63, 140 pp., 2003.
[8] S. R. Dubey, S. K. Singh, and B. B. Chaudhuri. Activation functions in deep learning: A comprehensive survey and benchmark. Neurocomputing, 503:92–108, 2022.
[9] J. Fourier. Théorie analytique de la chaleur. Firmin Didot Père et Fils, Paris, 1822. OCLC 2688081.
[10] C. Fraser. Review of The Evolution of Dynamics: Vibration Theory from 1687 to 1742, by John T. Cannon and Sigalia Dostrovsky. Bulletin of the American Mathematical Society, 9(1):107–111, July 1983.
[11] J. Glimm. A Stone–Weierstrass theorem for C*-algebras. Annals of Mathematics, 72(2):216–244, 1960.
[12] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
[13] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
[14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
[15] A. N. Kolmogorov. On the representation of continuous functions of several variables by superposition of continuous functions of one variable and addition. Doklady Akademii Nauk SSSR, 114:953–956, 1957.
[16] I. Lagaris, A. Likas, and D. Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5):987–1000, 1998.
[17] Z. Liu, Y. Wang, S. Vaidya, F. Ruehle, J. Halverson, M. Soljačić, T. Y. Hou, and M. Tegmark. KAN: Kolmogorov–Arnold networks. arXiv preprint, arXiv:2404.19756, 2024. https://ar5iv.org/html/2404.19756v5.
[18] L. Lu, X. Meng, Z. Mao, and G. E. Karniadakis. DeepXDE: A deep learning library for solving differential equations. arXiv preprint, arXiv:1907.04502, 2019.
[19] S. Markidis et al. The old and the new: Can physics-informed deep learning replace traditional linear solvers? Frontiers in Big Data, 4:669097, 2021.
[20] G. Martius and C. H. Lampert. Extrapolation and learning equations. In Proceedings of the 5th International Conference on Learning Representations (Workshop Track), Toulon, France, 2017. Originally arXiv:1610.02995.
[21] A. Noorizadegan, R. Cavoretto, D. L. Young, and C. S. Chen. Stable weight updating: A key to reliable PDE solutions using deep learning, 2024.
[22] A. Noorizadegan, Y. C. Hon, D.-L. Young, and C.-S. Chen. Enhancing supervised surface reconstruction through implicit weight regularization. Engineering Analysis with Boundary Elements, 180:106439, 2025.
[23] A. Noorizadegan, D. Young, Y. Hon, and C. Chen. Power-enhanced residual network for function approximation and physics-informed inverse problems. Applied Mathematics and Computation, 480:128910, 2024.
[24] T. Poggio, A. Banburski, and Q. Liao. Theoretical issues in deep networks. Proceedings of the National Academy of Sciences, 117(48):30039–30045, 2020.
[25] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
[26] T. J. Rivlin. The Chebyshev Polynomials. Pure and Applied Mathematics, Vol. 40. John Wiley & Sons, New York, 1974.
[27] M. Schmidt and H. Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81–85, 2009.
[28] S. S. Sidharth and R. Gokul. Chebyshev polynomial-based Kolmogorov–Arnold networks: An efficient architecture for nonlinear function approximation. arXiv preprint, arXiv:2405.07200, 2024.
[29] S. Udrescu and M. Tegmark. AI Feynman: A physics-inspired method for symbolic regression. Science Advances, 6(16), 2020.
[30] G. F. Wheeler and W. P. Crummett. The vibrating string controversy. American Journal of Physics, 55(1):33–37, 1987. | - |
| dc.identifier.uri | http://tdr.lib.ntu.edu.tw/jspui/handle/123456789/100986 | - |
| dc.description.abstract | 物理資訊驅動神經網路(Physics-Informed Neural Networks, PINNs)已成為求解偏微分方程(Partial Differential Equations, PDEs)的有力工具,具有結合物理法則與資料驅動模型的彈性。然而,傳統多層感知器(Multilayer Perceptron, MLP)在神經元上採用固定的激勵函數;相較之下,Kolmogorov–Arnold 網路(KAN)則在邊上引入可學習的激勵函數,以探索不同網路架構的可能性。本論文系統性地比較了三種代表性架構——MLP、KAN 與 Chebyshev 多項式為基礎的 KAN(Chebyshev-KAN),並於二維與三維的 Poisson 方程與 Navier–Cauchy 方程上進行實驗,針對準確度、收斂性、參數效率與計算成本進行分析與比較。
實驗結果顯示,MLP 在低複雜度問題中能提供穩定的準確度與泛化能力,但在高複雜度情境下則需透過增加網路寬度或深度來調整架構。相較之下,KAN 模型透過調整內部網格架構展現了更強的近似能力與收斂性,但同時伴隨較高的訓練時間與 GPU 記憶體需求。Chebyshev-KAN 則展現了兼具實用性與效率的特點,不僅保留了 KAN 的準確度優勢,還大幅減少了參數數量與計算成本,因而在效率與準確度之間達成有效平衡。 | zh_TW |
| dc.description.abstract | Physics-Informed Neural Networks (PINNs) have become a promising framework for solving partial differential equations (PDEs), offering flexibility in integrating physical laws with data-driven models. However, conventional multilayer perceptrons (MLPs) use fixed activation functions on neurons, whereas Kolmogorov–Arnold Networks (KANs) place learnable activation functions on edges, opening up alternative network architectures. This thesis systematically investigates the performance of three representative architectures—Multilayer Perceptron (MLP), Kolmogorov–Arnold Network (KAN), and Chebyshev Polynomial-Based KAN (Chebyshev-KAN)—across two- and three-dimensional Poisson and Navier–Cauchy equations, comparing accuracy, convergence, parameter efficiency, and computational cost.
The experimental results reveal that MLPs provide stable accuracy and generalization; in high-complexity settings, however, they require wider or deeper architectures. KAN models, by contrast, exhibit strong approximation capability and improved convergence when their internal grid architecture is adjusted, yet incur higher training time and GPU memory usage. Chebyshev-KANs emerge as a practical alternative that preserves the accuracy advantages of KANs while significantly reducing parameter counts and computational costs, thereby achieving an effective balance between efficiency and accuracy. | en |
| dc.description.provenance | Submitted by admin ntu (admin@lib.ntu.edu.tw) on 2025-11-26T16:21:48Z No. of bitstreams: 0 | en |
| dc.description.provenance | Made available in DSpace on 2025-11-26T16:21:48Z (GMT). No. of bitstreams: 0 | en |
| dc.description.tableofcontents | Acknowledgements (誌謝) i
摘要 (Chinese Abstract) iii
Abstract v
Contents vii
List of Figures ix
List of Tables xiii
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation and Objective 5
1.3 Thesis Structure 7
Chapter 2 Physics-Informed Neural Networks: Theory and Foundations 9
2.1 Introduction to Physics-Informed Neural Networks 9
2.2 Physics-Informed Neural Networks in Deep Learning 10
2.3 Summary 13
Chapter 3 Methodologies and Network Architectures 15
3.1 Multilayer Perceptron 15
3.2 Kolmogorov–Arnold Networks 16
3.2.1 KAN Architecture and Mathematical Formulation 18
3.3 Chebyshev Polynomial-Based Kolmogorov–Arnold Networks 21
3.3.1 Chebyshev Polynomials 21
3.3.2 Chebyshev Kolmogorov–Arnold Network 23
3.4 Summary 26
Chapter 4 Numerical Experiments 27
4.1 Case Studies 28
4.1.1 2D Poisson Equation 28
4.1.2 3D Poisson Equation 31
4.1.3 2D Navier–Cauchy Equation 35
4.1.4 3D Navier–Cauchy Equation 40
Chapter 5 Discussion and Conclusions 75
References 79 | - |
| dc.language.iso | en | - |
| dc.subject | 物理資訊驅動神經網路 | - |
| dc.subject | 偏微分方程式 | - |
| dc.subject | Kolmogorov–Arnold 神經網路 | - |
| dc.subject | 基於 Chebyshev 多項式的 Kolmogorov–Arnold 神經網路 | - |
| dc.subject | Physics-Informed Neural Networks | - |
| dc.subject | Partial Differential Equations | - |
| dc.subject | Kolmogorov–Arnold Networks | - |
| dc.subject | Chebyshev Polynomial-Based Kolmogorov–Arnold Networks | - |
| dc.title | 基於 MLP 與 KAN 模型架構的物理資訊驅動神經網路於多維問題分析之研究 | zh_TW |
| dc.title | Physics-Informed Neural Networks With MLP-Based and KAN-Based Models for Analysing Multi-Dimensional Problems | en |
| dc.type | Thesis | - |
| dc.date.schoolyear | 114-1 | - |
| dc.description.degree | 碩士 | - |
| dc.contributor.oralexamcommittee | 張書瑋;黃琮暉 | zh_TW |
| dc.contributor.oralexamcommittee | Shu-Wei Chang;Tsung-Hui Huang | en |
| dc.subject.keyword | 物理資訊驅動神經網路,偏微分方程式,Kolmogorov–Arnold 神經網路,基於 Chebyshev 多項式的 Kolmogorov–Arnold 神經網路 | zh_TW |
| dc.subject.keyword | Physics-Informed Neural Networks,Partial Differential Equations,Kolmogorov–Arnold Networks,Chebyshev Polynomial-Based Kolmogorov–Arnold Networks | en |
| dc.relation.page | 82 | - |
| dc.identifier.doi | 10.6342/NTU202504556 | - |
| dc.rights.note | 同意授權(全球公開) | - |
| dc.date.accepted | 2025-10-09 | - |
| dc.contributor.author-college | 工學院 | - |
| dc.contributor.author-dept | 土木工程學系 | - |
| dc.date.embargo-lift | 2025-11-27 | - |
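
The abstract above contrasts fixed neuron activations in MLPs with learnable edge activations in KANs, and credits Chebyshev-KAN with preserving KAN accuracy at a lower parameter cost. As an illustration only, the following PyTorch sketch shows one way such a layer and a PINN residual for the 2D Poisson equation could look. The names `ChebyshevKANLayer` and `poisson_residual`, the `tanh` input squashing, the polynomial degree, and the manufactured solution u(x, y) = sin(πx)sin(πy) are assumptions for this sketch, not the thesis's implementation.

```python
import torch
import torch.nn as nn


class ChebyshevKANLayer(nn.Module):
    """KAN-style layer sketch: each edge (input i, output j) carries a learnable
    1-D function expanded in Chebyshev polynomials T_0..T_degree (hypothetical)."""

    def __init__(self, in_dim: int, out_dim: int, degree: int = 4):
        super().__init__()
        self.degree = degree
        # One coefficient per (input, output, polynomial order) triple.
        self.coeffs = nn.Parameter(torch.randn(in_dim, out_dim, degree + 1) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squash inputs into [-1, 1], the natural domain of Chebyshev polynomials
        # (an assumed normalization choice for this sketch).
        x = torch.tanh(x)
        # Build T_0(x), T_1(x), ... via the recurrence T_n = 2x T_{n-1} - T_{n-2}.
        T = [torch.ones_like(x), x]
        for _ in range(2, self.degree + 1):
            T.append(2 * x * T[-1] - T[-2])
        basis = torch.stack(T, dim=-1)  # shape: (batch, in_dim, degree + 1)
        # Sum each edge's expansion over inputs and polynomial orders.
        return torch.einsum("bik,iok->bo", basis, self.coeffs)


def poisson_residual(model: nn.Module, xy: torch.Tensor) -> torch.Tensor:
    """PINN residual for -Δu = f on the 2D unit square, with f chosen so that
    u(x, y) = sin(pi x) sin(pi y) is the exact solution (illustrative choice)."""
    xy = xy.requires_grad_(True)
    u = model(xy)
    grads = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    u_xx = torch.autograd.grad(grads[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grads[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    f = 2 * torch.pi**2 * torch.sin(torch.pi * xy[:, 0]) * torch.sin(torch.pi * xy[:, 1])
    return -(u_xx + u_yy) - f


model = nn.Sequential(ChebyshevKANLayer(2, 16), ChebyshevKANLayer(16, 1))
pts = torch.rand(256, 2)                           # interior collocation points
loss = poisson_residual(model, pts).pow(2).mean()  # boundary terms omitted here
```

A layer of this form has in_dim × out_dim × (degree + 1) parameters, so the polynomial degree plays the role that the grid size plays in a spline-based KAN; this is one plausible reading of the parameter-efficiency argument the abstract makes for Chebyshev-KAN.
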
| Appears in Collections: | Department of Civil Engineering |
Files in This Item:
| File | Size | Format |
|---|---|---|
| ntu-114-1.pdf | 8.89 MB | Adobe PDF |
All items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
