Data and Modelling Assumptions in Physics-Informed Operator Learning

Abstract
Operator networks have emerged as promising surrogate models that replace computationally expensive numerical solvers for differential equations. Beyond achieving accuracy competitive with traditional solvers, the practical viability of this approach depends heavily on its training cost, which comprises ground-truth data acquisition and network optimization. Physics-informed machine learning seeks to reduce reliance on labeled data by embedding the governing differential equations into the loss function; however, such models are often very challenging to train using physics constraints alone.
In this paper, we study how varying amounts of labeled data and architectural choices affect convergence and final performance in operator learning. Specifically, we compare an architecture developed for operator learning, the Deep Operator Network (DeepONet), with a simpler MLP baseline across three training regimes: purely data-driven, purely physics-informed with no labeled data, and hybrid approaches that leverage both data and physics information.
Our experiments on the double mass-spring-damper system indicate that the physics-informed Deep Operator Network converges faster to the same performance when small amounts of labeled data are used. For the MLP architecture, which is less well tailored to the underlying dynamics, a purely physics-informed approach fails. In this case, incorporating labeled data mitigates the architectural deficiencies and consistently and substantially improves convergence and performance.
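The hybrid regime described above combines a supervised loss on labeled trajectories with a physics residual on unlabeled collocation points. The following is a minimal sketch of such a loss for a double mass-spring-damper system; the parameter values, the tiny MLP surrogate, the finite-difference residual, and the weighting factor are all illustrative assumptions, not the paper's actual setup.

```python
# Hybrid data + physics loss for a double mass-spring-damper (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

# Assumed physical parameters (masses, springs, dampers); not from the paper.
m1 = m2 = 1.0
k1 = k2 = 4.0
c1 = c2 = 0.1

def mlp(t, W1, b1, W2, b2):
    """Tiny MLP surrogate mapping time t -> displacements [x1(t), x2(t)]."""
    h = np.tanh(np.outer(t, W1) + b1)   # (n, hidden)
    return h @ W2 + b2                  # (n, 2)

# Random (untrained) weights, just to make the two loss terms computable.
W1 = rng.normal(size=8); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 2)); b2 = rng.normal(size=2)

def physics_residual(t, dt=1e-3):
    """ODE residuals of the network prediction via central finite differences."""
    x = mlp(t, W1, b1, W2, b2)
    x_p = mlp(t + dt, W1, b1, W2, b2)
    x_m = mlp(t - dt, W1, b1, W2, b2)
    v = (x_p - x_m) / (2 * dt)          # first derivative
    a = (x_p - 2 * x + x_m) / dt**2     # second derivative
    x1, x2 = x[:, 0], x[:, 1]
    v1, v2 = v[:, 0], v[:, 1]
    # m1 x1'' = -k1 x1 - c1 x1' + k2 (x2 - x1) + c2 (x2' - x1')
    r1 = m1 * a[:, 0] + k1 * x1 + c1 * v1 - k2 * (x2 - x1) - c2 * (v2 - v1)
    # m2 x2'' = -k2 (x2 - x1) - c2 (x2' - x1')
    r2 = m2 * a[:, 1] + k2 * (x2 - x1) + c2 * (v2 - v1)
    return r1, r2

# Supervised MSE on a few labeled points plus the physics residual
# on unlabeled collocation points.
t_data = np.linspace(0.0, 1.0, 16)
x_data = rng.normal(size=(16, 2))          # placeholder "ground truth" labels
t_coll = rng.uniform(0.0, 1.0, size=64)    # collocation points, no labels

loss_data = np.mean((mlp(t_data, W1, b1, W2, b2) - x_data) ** 2)
r1, r2 = physics_residual(t_coll)
loss_phys = np.mean(r1 ** 2 + r2 ** 2)

lam = 1e-2                                  # physics weight (assumed)
loss = loss_data + lam * loss_phys
```

In the purely data-driven regime `lam` is zero, and in the purely physics-informed regime `loss_data` is dropped; the hybrid regimes studied in the paper interpolate between the two.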
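For the architectural comparison, the DeepONet differs from a plain MLP in that it factors the learned operator into a branch network (encoding the input function at fixed sensor points) and a trunk network (encoding the query coordinate), combined by an inner product. A generic sketch of that structure follows; the layer sizes and sensor count are assumptions for illustration, not the paper's configuration.

```python
# Generic DeepONet branch-trunk structure (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

def dense(n_in, n_out):
    """One randomly initialized linear layer (weights, bias)."""
    return rng.normal(size=(n_in, n_out)) / np.sqrt(n_in), np.zeros(n_out)

def forward(x, layers):
    """Apply a stack of linear layers with tanh on all but the last."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)
    return x

p = 16  # latent dimension shared by branch and trunk (assumed)
m = 32  # number of sensor points sampling the input function (assumed)

branch = [dense(m, 64), dense(64, p)]   # encodes the input function u(.)
trunk  = [dense(1, 64), dense(64, p)]   # encodes the query coordinate t

u_sensors = rng.normal(size=(1, m))     # u sampled at m fixed sensor locations
t_query = np.linspace(0.0, 1.0, 10).reshape(-1, 1)

b_out = forward(u_sensors, branch)      # (1, p) branch features
t_out = forward(t_query, trunk)         # (10, p) trunk features

# DeepONet readout: inner product of branch and trunk features per query point,
# approximating the operator output (G u)(t).
G_u_t = t_out @ b_out.T                 # (10, 1)
```

The plain MLP baseline, by contrast, would simply concatenate the sensor values and the query coordinate into one input vector, which builds in no operator structure at all.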
| Original language | English |
|---|---|
| Title of host publication | 1st Workshop on Differentiable Systems and Scientific Machine Learning @ EurIPS 2025 |
| Number of pages | 13 |
| Publication status | Published - 2025 |
| Event | EurIPS 2025 Workshop, DiffSys 2025: Differentiable Systems and Scientific Machine Learning - Copenhagen, Denmark |
| Duration | 5 Dec 2025 → 5 Dec 2025 |
Conference
| Conference | EurIPS 2025 Workshop, DiffSys 2025 |
|---|---|
| Country/Territory | Denmark |
| City | Copenhagen |
| Period | 5/12/25 → 5/12/25 |
Fields of Expertise
- Information, Communication & Computing
Projects
- CD-Laboratory for Dependable Intelligent Systems in Harsh Environments (active)
  Pernkopf, F. (project manager on research unit; consortium manager resp. coordinator with external organisations)
  1/01/23 → 31/12/29
  Project: Research project