Physics-Informed Neural Networks

Embedding physical laws directly into neural network training — solving ODEs from sparse data without sacrificing physical fidelity.

PyTorch · SciPy · ODE · Autograd · Pendulum · Kepler Orbit · Conservation Laws

  • Data + Physics → Better Predictions
  • 2 Problems — Pendulum ODE & Kepler Orbit
  • Autograd — ODE residuals via PyTorch backprop
  • 10–20× — better extrapolation than a standard NN

What is a Physics-Informed Neural Network?


A Physics-Informed Neural Network (PINN) is a neural network that is trained to respect the laws of physics governing a system, even when training data is sparse. The idea was formalised by Raissi, Perdikaris & Karniadakis (2019) and has since been applied to fluid dynamics, heat transfer, structural mechanics, and many other fields.

The core concept is elegantly simple: instead of minimising only the data-fitting loss, the PINN also minimises the ODE/PDE residual — the degree to which the network's prediction violates the governing equations.

Standard Neural Network
  • Minimises data loss only
  • Has no knowledge of physics
  • Fits well in training region
  • Extrapolates poorly — wrong physics
  • Violates conservation laws
Physics-Informed NN (PINN)
  • Minimises data loss and ODE residual
  • Equations of motion baked into training
  • Fits well in training region
  • Extrapolates correctly — obeys physics
  • Approximately conserves energy, momentum
📐 The Universal PINN Loss Function: $$\mathcal{L}_{\text{total}} = \underbrace{\frac{1}{N}\sum_{i=1}^{N}\left(\mathcal{N}(t_i) - u_i\right)^2}_{\text{data loss}} \;+\; \lambda \underbrace{\frac{1}{M}\sum_{j=1}^{M}\mathcal{R}\!\left[\mathcal{N}\right](t_j)^2}_{\text{physics (ODE/PDE residual) loss}}$$ where $\mathcal{R}[\mathcal{N}]$ is the residual of the governing equation evaluated at collocation points $t_j$, computed via automatic differentiation through the network.
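This composite loss can be sketched end-to-end on a toy problem. The following is a minimal, self-contained example for the first-order ODE $\dot{u} = -u$, $u(0) = 1$ (exact solution $e^{-t}$) — not one of this page's two problems, just the smallest PINN that exercises both terms of the loss. All names (`net`, `t_col`, `lam`) are illustrative choices, not from the repositories:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy ODE (illustrative only): du/dt = -u, u(0) = 1, exact solution u(t) = exp(-t)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

t_col = torch.linspace(0, 1, 50).view(-1, 1).requires_grad_(True)  # collocation points t_j
t0 = torch.zeros(1, 1)                                             # initial-condition point
lam = 1.0                                                          # physics weight (lambda)

for step in range(2000):
    opt.zero_grad()
    u = net(t_col)
    # du/dt at the collocation points via automatic differentiation
    du = torch.autograd.grad(u, t_col, torch.ones_like(u), create_graph=True)[0]
    loss_phys = torch.mean((du + u) ** 2)          # residual of du/dt + u = 0
    loss_data = torch.mean((net(t0) - 1.0) ** 2)   # "data" term: the initial condition
    loss = loss_data + lam * loss_phys
    loss.backward()
    opt.step()

t_test = torch.linspace(0, 1, 11).view(-1, 1)
err = (net(t_test) - torch.exp(-t_test)).abs().max().item()
```

The same template scales to the pendulum and Kepler problems below; only the residual expression and the network dimensions change.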

PINN Workflow

1
Problem Formulation
Define ODE/PDE, domain, initial & boundary conditions
2
Data Collection
Gather sparse observations (can be just a few points!)
3
Network Architecture
Fully-connected net with Tanh activation (smooth & differentiable)
4
Collocation Points
Sample physics constraint points across the full domain
5
Composite Loss
$\mathcal{L} = \mathcal{L}_{\text{data}} + \lambda\,\mathcal{L}_{\text{physics}}$
6
Autograd Derivatives
Compute $d^2\mathcal{N}/dt^2$ via PyTorch backprop for ODE residual
7
Train with Adam
Jointly minimise data fit and physics violation
8
Validate & Deploy
Verify conservation laws, compare with reference solution

Applications of PINNs

Fluid Dynamics
Navier-Stokes equations, turbulence prediction, flow field reconstruction from sparse sensor data.
Heat Transfer
Conduction, convection, radiation — temperature distribution without full numerical grid.
Structural Mechanics
Stress, strain, deformation under load using elasticity PDEs with limited measurement points.
Quantum Chemistry
Schrödinger's equation, molecular properties and electronic structure prediction.
Orbital Mechanics
Newton's gravity law — orbit reconstruction from partial observational data (this page!).
Medical Imaging
Image reconstruction and denoising with physics-based priors for MRI & CT scans.

Problem 1 — Simple Pendulum


A pendulum bob of mass $m$ suspended from a pivot by a rigid rod of length $L$ performs periodic motion under gravity. This is a classic nonlinear ODE — the ideal benchmark for PINNs because the exact numerical solution is readily available yet the small-angle analytical approximation fails at large amplitudes.

Simple pendulum animation

Fig. Pendulum oscillation (numerical integration)

Governing ODE

Newton's second law along the arc gives:

$$\boxed{\ddot{\theta} + \frac{g}{L}\sin\theta = 0}$$

This is a nonlinear, second-order ODE. No closed-form solution in elementary functions exists for arbitrary amplitudes (the exact solution involves Jacobi elliptic functions) — in practice it requires numerical integration or a PINN.

| Parameter | Value | Meaning |
|---|---|---|
| $L$ | 0.025 m | Rod length |
| $g$ | 9.81 m/s² | Gravity |
| $\omega$ | ≈ 19.8 rad/s | Natural frequency |
| $\theta_0$ | $\pi/4$ rad | Initial angle (45°) |

Small-Angle Approximation

For $\theta \ll 1\,\text{rad}$, we use $\sin\theta\approx\theta$, giving the linear harmonic oscillator:

$$\ddot{\theta} + \omega^2\theta = 0 \quad\Rightarrow\quad \theta_{\text{approx}}(t) = \theta_0\cos(\omega t)$$
⚠️ Important: At $\theta_0 = 45°$, the small-angle approximation fails noticeably — the exact and approximate solutions diverge within a fraction of a period. This is why both the exact (numerical) and approximate solutions are plotted side-by-side: the PINN should track the exact solution, not the approximation.
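The divergence is easy to verify numerically. A short sketch (parameters taken from the table above; the time window and sample count are illustrative) integrates the exact nonlinear ODE with SciPy and compares it to $\theta_0\cos(\omega t)$:

```python
import numpy as np
from scipy.integrate import odeint

g, L = 9.81, 0.025
omega = np.sqrt(g / L)        # natural frequency, ~19.8 rad/s
theta0 = np.pi / 4            # 45 degrees: well outside the small-angle regime

def pendulum(state, t):
    """Nonlinear pendulum as a first-order system: state = [theta, dtheta/dt]."""
    theta, dtheta = state
    return [dtheta, -(g / L) * np.sin(theta)]

t = np.linspace(0.0, 1.0, 1000)                    # ~3 periods
exact = odeint(pendulum, [theta0, 0.0], t)[:, 0]   # exact (numerical) solution
approx = theta0 * np.cos(omega * t)                # small-angle solution

gap = np.max(np.abs(exact - approx))               # worst-case divergence over the window
```

The exact period is a few percent longer than $2\pi/\omega$ at this amplitude, so the phase error accumulates and `gap` grows to a sizeable fraction of the amplitude within the 1-second window.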

Standard NN — Data Only


A fully-connected network (3 hidden layers × 32 neurons, Tanh activation) is trained for 1000 epochs on 10 sparse data points sampled from the first 40% of the time window. The loss is pure data MSE:

$$\mathcal{L}_{\text{NN}} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{\theta}(t_i) - \theta_{\text{true}}(t_i)\right)^2$$
Standard NN training
❌ Standard NN failure mode:
  • Fits training data (orange dots) well
  • Has no idea what happens after $t \approx 0.4\,\text{s}$
  • Extrapolates as a smooth polynomial — not an oscillation
  • Violates energy conservation
💡 Why? The network has learned a smooth interpolant — it has no "memory" that the system is oscillatory. Without physics guidance, it cannot know that the pendulum continues to swing.
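The data-only baseline described above can be sketched as follows. The training targets here are a small-angle placeholder (the actual experiment uses the exact numerical solution), and the layer sizes follow the architecture table below; everything else is an illustrative stand-in:

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

g, L = 9.81, 0.025
# 10 sparse points from the first 40% of [0, 1] s; placeholder targets for illustration
x_data = torch.linspace(0.0, 0.4, 10).view(-1, 1)
y_data = (math.pi / 4) * torch.cos(math.sqrt(g / L) * x_data)

# 3 hidden layers x 32 neurons, Tanh activation
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

losses = []
for epoch in range(1000):
    optimizer.zero_grad()
    loss = torch.mean((model(x_data) - y_data) ** 2)  # pure data MSE, no physics term
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Nothing in this loop constrains the network outside `x_data`, which is exactly why its extrapolation is arbitrary.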

PINN Architecture


Same network skeleton — what changes is the loss function.

| Layer | Size | Activation |
|---|---|---|
| Input | 1 ($t$) | — |
| Hidden 1 | 32 | Tanh |
| Hidden 2 | 32 | Tanh |
| Hidden 3 | 32 | Tanh |
| Output | 1 ($\hat\theta$) | Linear |
Why Tanh? The ODE residual requires computing $d^2\hat\theta/dt^2$ via autograd. This demands twice-differentiable activations. ReLU has zero second derivative everywhere — it cannot represent curvature. Tanh is smooth, bounded, and infinitely differentiable, so its second derivative is well-defined everywhere (it vanishes only at the origin).
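A quick sanity check of this claim: two chained `torch.autograd.grad` calls recover the second derivative of Tanh, which can be compared against the analytic formula $\frac{d^2}{dx^2}\tanh x = -2\tanh x\,(1-\tanh^2 x)$:

```python
import torch

# Second derivative of tanh via two chained autograd calls
x = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
y = torch.tanh(x)

dy = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
d2y = torch.autograd.grad(dy, x, torch.ones_like(dy))[0]

# Analytic second derivative for comparison
analytic = -2 * torch.tanh(x) * (1 - torch.tanh(x) ** 2)
```

The `create_graph=True` flag on the first call is what keeps the derivative itself differentiable, so the second call can backprop through it — the same mechanism the PINN loss relies on.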

PINN Loss Function — Pendulum


Data Loss
$$\mathcal{L}_{\text{data}} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat\theta(t_i) - \theta_i\right)^2$$ Fits the 10 sparse observations (orange dots)
Physics Loss (ODE Residual)
$$\mathcal{L}_{\text{physics}} = \frac{1}{M}\sum_{j=1}^{M}\!\left(\frac{d^2\hat\theta}{dt^2}\bigg|_{t_j} + \frac{g}{L}\sin\hat\theta(t_j)\right)^{\!2}$$ Enforces the pendulum ODE at 30 collocation points across full domain
Combined: $$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{data}} + \underbrace{10^{-4}}_{\lambda}\,\mathcal{L}_{\text{physics}}$$

The autograd chain: forward pass $\to$ $\hat\theta(t)$ $\to$ `torch.autograd.grad` $\to$ $\dot{\hat\theta}$ $\to$ `torch.autograd.grad` $\to$ $\ddot{\hat\theta}$ $\to$ residual $\to$ loss.

PINN Results — Pendulum


PINN training

Fig. PINN training progress — the prediction converges to the exact nonlinear solution

✅ PINN achieves:
  • Correct oscillation beyond training window
  • Tracks exact numerical ODE (not the approximation)
  • ~10–20× lower MAE in extrapolation region vs standard NN
| Metric | NN | PINN |
|---|---|---|
| Train MAE | ~0.003 | ~0.003 |
| Extrap. MAE | ~0.15 | ~0.005 |

Full Pendulum Deep-Dive →

Key Code — Pendulum PINN Physics Loss


import torch

# model, optimizer, x_data, y_data are assumed defined earlier
# (3 x 32 Tanh net, Adam optimiser, 10 sparse observations)
k = 9.81 / 0.025   # omega^2 = g/L

# Collocation points — physics constraint over full domain
x_physics = torch.linspace(0, 1.0, 30).view(-1, 1).requires_grad_(True)

for i in range(20000):
    optimizer.zero_grad()

    # Data loss
    loss_data = torch.mean((model(x_data) - y_data) ** 2)

    # Physics loss — enforce  d²θ/dt² + (g/L)·sin(θ) = 0
    yhp = model(x_physics)
    dtheta  = torch.autograd.grad(yhp,    x_physics, torch.ones_like(yhp),    create_graph=True)[0]
    d2theta = torch.autograd.grad(dtheta, x_physics, torch.ones_like(dtheta), create_graph=True)[0]
    residual  = d2theta + k * torch.sin(yhp)       # should be ≈ 0
    loss_phys = torch.mean(residual ** 2)

    loss = loss_data + 1e-4 * loss_phys
    loss.backward()
    optimizer.step()

Problem 2 — Kepler Orbit Reconstruction


Can a neural network reconstruct a complete planetary orbit when it has only seen 40% of it? A standard NN cannot — but a PINN guided by Newton's universal law of gravitation can, because it "knows" that trajectories must obey $\ddot{\mathbf{r}} = -GM\mathbf{r}/r^3$.

🪐⭐

Two-body gravitational problem in 2D

Kepler's Three Laws (1609–1619)

| Law | Statement |
|---|---|
| 1st | Planets orbit in ellipses with the Sun at one focus |
| 2nd | Equal areas swept in equal times (angular momentum conservation) |
| 3rd | $T^2 \propto a^3$ — period² proportional to semi-major axis³ |

These laws all follow from Newton's universal law of gravitation:

$$\mathbf{F} = -\frac{GMm}{r^2}\hat{r}$$

Governing Equations — Kepler Orbit


In 2D Cartesian coordinates $(x, y)$, with the central body at the origin:

$$\boxed{\ddot{x} = -\frac{GM\,x}{r^3}, \qquad \ddot{y} = -\frac{GM\,y}{r^3}, \qquad r = \sqrt{x^2+y^2}}$$

This is a 4D first-order coupled ODE system (state = $[x, y, v_x, v_y]$), solved numerically with scipy.integrate.odeint at $10^{-10}$ relative tolerance to provide the reference.

Initial conditions (at periapsis):
$$x_0 = a(1-e),\quad y_0 = 0$$ $$v_{x0} = 0,\quad v_{y0} = \sqrt{\frac{GM(1+e)}{a(1-e)}} \quad\text{(vis-viva)}$$
| Parameter | Value |
|---|---|
| $GM$ | 1.0 (dimensionless) |
| Semi-major axis $a$ | 1.0 |
| Eccentricity $e$ | 0.5 |
| Period $T$ | $2\pi \approx 6.28$ |
| Training data | First 40% of orbit (sparse) |
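The reference trajectory described above can be reproduced in a few lines of SciPy (parameters from the table; the sample count is illustrative). The integration also gives a direct check that the orbit closes after one period and that the energy is constant:

```python
import numpy as np
from scipy.integrate import odeint

GM, a, e = 1.0, 1.0, 0.5
T = 2 * np.pi * np.sqrt(a**3 / GM)   # Kepler's 3rd law: T = 2π for a = GM = 1

def rhs(state, t):
    """Newton's gravity as a 4D first-order system: state = [x, y, vx, vy]."""
    x, y, vx, vy = state
    r3 = (x**2 + y**2) ** 1.5
    return [vx, vy, -GM * x / r3, -GM * y / r3]

# Periapsis initial conditions (vis-viva)
s0 = [a * (1 - e), 0.0, 0.0, np.sqrt(GM * (1 + e) / (a * (1 - e)))]

t = np.linspace(0.0, T, 2001)
sol = odeint(rhs, s0, t, rtol=1e-10, atol=1e-12)

# The orbit should return to periapsis after exactly one period
closure = np.hypot(sol[-1, 0] - s0[0], sol[-1, 1] - s0[1])

# Total energy should be constant along the reference trajectory
r = np.hypot(sol[:, 0], sol[:, 1])
E = 0.5 * (sol[:, 2] ** 2 + sol[:, 3] ** 2) - GM / r
```

At this tolerance the closure error and energy drift are both far below the MAE scales quoted later, which is what makes this trajectory usable as ground truth.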

Conservation Laws


The Kepler problem has two important integrals of motion (constants along any physical trajectory):

Total Energy
$$E = \frac{1}{2}(v_x^2 + v_y^2) - \frac{GM}{r} = \text{const}$$ Kinetic + gravitational potential energy is conserved
Angular Momentum (Kepler 2nd Law)
$$L = x\,v_y - y\,v_x = \text{const}$$ Equal areas in equal times — the planet moves fastest at periapsis, slowest at apoapsis
✨ Emergent conservation in PINNs: We never explicitly enforce $E = \text{const}$ or $L = \text{const}$. Yet because the PINN is trained to satisfy Newton's ODE, these conservation laws emerge automatically as consequences — a beautiful validation of the approach.

Standard NN — Orbital Failure


A standard NN trained on positions from the first 40% of the orbit attempts to continue the trajectory — but has no concept of closed ellipses.

❌ What goes wrong:
  • NN fits the training arc correctly
  • Beyond the training window, the predicted orbit spirals outward or flies off
  • Energy and angular momentum are not conserved
  • The orbit never closes — violates Kepler's 1st law
🛸💨

Standard NN prediction flies off into space — it has never been told that gravity pulls the satellite back.

PINN Architecture — Kepler


| Layer | Size | Notes |
|---|---|---|
| Input | 1 ($t$) | scalar time |
| Hidden 1–3 | 64 | Tanh |
| Output | 2 ($\hat{x}, \hat{y}$) | Linear |

Larger (64 neurons vs 32) because the 2D orbit is more complex than the 1D pendulum angle.

Physics Loss — Newton's Gravity Residuals
$$\mathcal{L}_{\text{physics}} = \frac{1}{M}\sum_{j=1}^{M}\left[\left(\ddot{\hat{x}} + \frac{GM\hat{x}}{r^3}\right)^2 + \left(\ddot{\hat{y}} + \frac{GM\hat{y}}{r^3}\right)^2\right]$$ Both $x$ and $y$ components of Newton's law are enforced simultaneously. Each requires two sequential autograd calls (velocity then acceleration).

PINN Results — Kepler Orbit


✅ PINN reconstructs the full ellipse:
  • Trained on just 40% of the orbit
  • Correctly closes the ellipse for the remaining 60%
  • Periapsis and apoapsis positions match reference
  • Approximately conserves energy and angular momentum
| Metric | NN | PINN |
|---|---|---|
| Train MAE | ~0.005 | ~0.005 |
| Extrap. MAE | ~0.4 | ~0.01 |
| Energy conserved? | No | Approximately |
| Orbit closes? | No | Yes |

Collocation strategy:

50 collocation points are uniformly distributed over the full time domain $[0, 1.5T]$ — including the 60% of the orbit where there is no training data. This is what allows the physics constraint to guide extrapolation.

Key equation enforced:

$\ddot{\mathbf{r}} = -GM\mathbf{r}/r^3$ at every collocation point

Conservation Law Verification


After training, we compute energy $E$ and angular momentum $L$ along the PINN-predicted trajectory (using numerical gradients of the output) and compare to the exact values.

PINN energy & angular momentum stay approximately constant in the extrapolation region — despite never being explicitly constrained. This is a direct consequence of satisfying Newton's ODE.
Standard NN shows large drifts in both $E$ and $L$ beyond the training region, consistent with its incorrect trajectory.
🔬 Physical interpretation: Noether's theorem tells us that conservation laws arise from symmetries of the equations of motion (time-translation symmetry → energy, rotational symmetry → angular momentum). By enforcing the ODE itself, the PINN implicitly respects these symmetries.
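The verification step can be sketched as follows. Since positions are the network's only output, velocities come from numerical gradients of the sampled trajectory (`np.gradient`), exactly as described above. Here the exact integrated orbit stands in for the PINN prediction, so the drift comes only from the finite-difference approximation:

```python
import numpy as np
from scipy.integrate import odeint

GM = 1.0

def rhs(state, t):
    x, y, vx, vy = state
    r3 = (x**2 + y**2) ** 1.5
    return [vx, vy, -GM * x / r3, -GM * y / r3]

# Stand-in trajectory (x(t), y(t)); in the actual check this is the PINN's prediction
t = np.linspace(0.0, 2 * np.pi, 4001)
x, y = odeint(rhs, [0.5, 0.0, 0.0, np.sqrt(3.0)], t, rtol=1e-10)[:, :2].T

# Velocities from numerical gradients of the positional output
vx, vy = np.gradient(x, t), np.gradient(y, t)

r = np.hypot(x, y)
E = 0.5 * (vx**2 + vy**2) - GM / r   # total energy along the trajectory
L = x * vy - y * vx                  # angular momentum along the trajectory

drift_E = np.ptp(E) / abs(E.mean())  # relative drift; small for a physical trajectory
drift_L = np.ptp(L) / abs(L.mean())
```

Running the same computation on the standard NN's trajectory produces drifts orders of magnitude larger in the extrapolation region, which is the quantitative version of the comparison above.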

Key Code — Kepler PINN Physics Loss


import torch

GM = 1.0   # dimensionless
# model, optimizer, t_data, xy_data and T_sim are assumed defined earlier
# (Tanh net with 64-neuron hidden layers, Adam optimiser, observations of the first 40% of the orbit)

# Collocation points over full orbit (including unseen region!)
t_phys = torch.linspace(0, T_sim, 50).view(-1, 1).requires_grad_(True)

for epoch in range(30000):
    optimizer.zero_grad()

    # ── Data loss ───────────────────────────────────────────────
    loss_data = torch.mean((model(t_data) - xy_data) ** 2)

    # ── Physics loss — Newton's gravity ─────────────────────────
    pred  = model(t_phys)            # shape (M, 2): columns = [x̂, ŷ]
    xp, yp = pred[:,0:1], pred[:,1:2]

    # velocities (1st autograd)
    vx = torch.autograd.grad(xp, t_phys, torch.ones_like(xp), create_graph=True)[0]
    vy = torch.autograd.grad(yp, t_phys, torch.ones_like(yp), create_graph=True)[0]

    # accelerations (2nd autograd)
    ax = torch.autograd.grad(vx, t_phys, torch.ones_like(vx), create_graph=True)[0]
    ay = torch.autograd.grad(vy, t_phys, torch.ones_like(vy), create_graph=True)[0]

    # gravitational acceleration from predicted position
    r        = torch.sqrt(xp**2 + yp**2 + 1e-8)   # small epsilon guards against division by zero
    ax_grav  = -GM * xp / r**3
    ay_grav  = -GM * yp / r**3

    # ODE residuals:  ẍ - ax_grav = 0  and  ÿ - ay_grav = 0
    res_x = ax - ax_grav
    res_y = ay - ay_grav
    loss_phys = torch.mean(res_x**2 + res_y**2)

    loss = loss_data + 1e-3 * loss_phys
    loss.backward()
    optimizer.step()

Side-by-Side Comparison


| Feature | Standard NN | Pendulum PINN | Kepler PINN |
|---|---|---|---|
| Physics encoded | None | $\ddot\theta + (g/L)\sin\theta = 0$ | $\ddot{\mathbf{r}} = -GM\mathbf{r}/r^3$ |
| ODE order | — | 2nd order, scalar | 2nd order, 2D vector |
| Network output | $\hat\theta(t)$ | $\hat\theta(t)$ | $(\hat{x}(t),\hat{y}(t))$ |
| Network size | 3 × 32 | 3 × 32, Tanh | 4 × 64, Tanh |
| Training data | 10 pts (40% window) | 10 pts (40% window) | ~27 pts (40% orbit) |
| Collocation pts | None | 30 (full domain) | 50 (full domain) |
| Training epochs | 5,000 | 20,000 | 30,000 |
| Training region MAE | ~0.003 | ~0.003 | ~0.005 |
| Extrapolation MAE | ~0.15 ❌ | ~0.005 ✅ | ~0.01 ✅ |
| Conservation satisfied | No | Approximately | Approximately |

Code & Resources


Pendulum PINN Repository

Full Python script, Jupyter notebook walkthrough, training animations and README.

  • Simple_Pendulum_PINN.py — standalone script
  • Pendulum_PINN_Walkthrough.ipynb — 11-section notebook
  • nn.gif / pinn.gif — training animations
GitHub Repo · Full Writeup
Kepler Orbit PINN Repository

Full Python script, Jupyter notebook with conservation law verification and orbital animations.

  • Kepler_PINN.py — standalone script
  • Kepler_PINN_Walkthrough.ipynb — 14-section notebook
  • nn_kepler.gif / pinn_kepler.gif — animations
GitHub Repo
Quick Start
# Install dependencies
pip install torch numpy scipy matplotlib pillow jupyter

# Pendulum PINN
git clone https://github.com/swarnadeepseth/Physics_Informed_NN-Pendulum
cd Physics_Informed_NN-Pendulum && mkdir plots
python Simple_Pendulum_PINN.py

# Kepler Orbit PINN
git clone https://github.com/swarnadeepseth/Physics_Informed_NN-Kepler
cd Physics_Informed_NN-Kepler && mkdir plots
python Kepler_PINN.py

References


  • Raissi, M., Perdikaris, P., & Karniadakis, G.E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems. Journal of Computational Physics, 378, 686–707. [Link]
  • Greydanus, S., Dzamba, M., & Yosinski, J. (2019). Hamiltonian Neural Networks. NeurIPS 2019. [arXiv]
  • Cranmer, M. et al. (2020). Lagrangian Neural Networks. ICLR 2020 Workshop. [arXiv]
  • Moseley, B. "So, what is a physics-informed neural network?" (blog post).
  • Karniadakis, G.E. et al. (2021). Physics-informed machine learning. Nature Reviews Physics, 3, 422–440. [Link]