Figure 1: A typical neural correction-learning pipeline (Um et al., 2020) that uses a differentiable physics solver $\mathcal{P}_K$ in the loop. Black arrows show the forward pass, grey dashed arrows represent the backward pass, and elements in red represent the bottleneck. As the number of solver iterations $K$ grows, the cost of passes through $\mathcal{P}_K$ becomes severe.
Figure 2: PRDP reduces the training time of neural networks that contain numerical solver components (c). The fidelity of the iterative components is increased only when the network's validation metric plateaus. This saves compute both by using fewer solver iterations early in training (PR savings in (b)) and by finishing at a refinement level well below full fidelity (IC savings in (b)), while the achieved validation error is identical (a).
PRDP's core idea is to balance the compute-accuracy trade-off of iterative physics solvers. The nuance is that we only require accuracy from the neural network being trained, not from the physics solver. Hence, the physics solver is not converged to numerical precision; it runs only for roughly as many iterations as are needed to make meaningful training progress on the neural network.
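To make this concrete, below is a minimal sketch (our own toy example, not code from the paper) of a training loss with a truncated differentiable solver in the loop, in the spirit of Figure 1. The names P_K and network, and the explicit 1D diffusion stepper, are illustrative stand-ins:

import jax
import jax.numpy as jnp

def P_K(u, K, nu=0.01):
    # Toy differentiable solver: K explicit 1D diffusion steps
    # (a stand-in for the iterative solver P_K in Figure 1).
    for _ in range(K):
        u = u + nu * (jnp.roll(u, 1) - 2 * u + jnp.roll(u, -1))
    return u

def network(params, u):
    # Toy linear "correction network"; a real model would be a deep net.
    return u + params * u

def loss(params, u0, target, K):
    # Backprop through this loss traverses all K solver steps,
    # so both forward and backward cost grow with K.
    return jnp.mean((P_K(network(params, u0), K) - target) ** 2)

grad_fn = jax.grad(loss)  # gradient w.r.t. the network parameters

Running P_K with a small K is exactly the cheap, low-fidelity regime that PRDP exploits early in training.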
How much physics refinement is sufficient?
We contribute an algorithm that determines physics refinement adaptively
based on the plateauing of training progress measured on a validation metric.
Figure 3: Left: the typical training progress of a neural network supported by PRDP. Right: a simplified flowchart representation of the PRDP control algorithm.
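In pseudocode, the control logic of Figure 3 could look like the following sketch. This is our own simplified rendering under assumed names (train_epoch, validate, patience, tol); the paper's actual plateau criterion and refinement schedule may differ:

def train_with_prdp(train_epoch, validate, K=1, K_max=100,
                    patience=5, tol=1e-4, max_epochs=1000):
    # Minimal PRDP-style control loop (a sketch, not the paper's exact algorithm).
    # Trains at refinement level K and raises K only when the validation
    # metric plateaus; stops once further refinement no longer helps.
    best, stall, helped = float("inf"), 0, True
    for _ in range(max_epochs):
        train_epoch(K)                    # one epoch with the K-iteration solver
        val = validate()
        if val < best - tol:              # still improving at this fidelity
            best, stall, helped = val, 0, True
        else:
            stall += 1
        if stall >= patience:             # validation has plateaued
            if not helped or K >= K_max:  # refinement no longer pays off
                break
            K += 1                        # progressively refine the physics
            stall, helped = 0, False
    return K

The cheap early epochs at small K correspond to the PR savings in Figure 2, and stopping below K_max corresponds to the IC savings.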
This throttling of the physics solver saves significant compute, especially when the solver accounts for a large share of the training cost. For instance, in our test problem of training a neural emulator on a 3D heat equation solver, PRDP cuts total training time by about 78%.
@inproceedings{bhatia2025prdp,
  title={Progressively Refined Differentiable Physics},
  author={Kanishk Bhatia and Felix Koehler and Nils Thuerey},
  booktitle={The Thirteenth International Conference on Learning Representations (ICLR)},
  year={2025}
}