Researchers at Los Alamos National Laboratory and the University of Texas at Austin introduced HyCOP, a framework that replaces monolithic neural operators with a learned policy over interpretable modules for solving partial differential equations. The system delivers order-of-magnitude accuracy improvements on out-of-distribution problems, a regime where black-box surrogate models in production physics pipelines persistently fail.

HyCOP — Hybrid Composition Operators — was published May 1, 2026 by Jinpai Zhao, Nishant Panda, Yen Ting Lin, Eirik Valseth, Diane Oyen, and Clint Dawson. The core idea is to decompose PDE solving into interpretable modules: advection, diffusion, learned closures, and boundary handlers. Each module can be either a classical numerical sub-solver or a learned component, giving practitioners direct control over which physics are trusted and which are approximated.
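The modular decomposition can be pictured as a shared interface over classical and learned components. The sketch below is illustrative only; the names, signatures, and the `LIBRARY` dictionary are assumptions, not the authors' API (their code is unreleased).

```python
# Hypothetical sketch of a HyCOP-style module library: classical
# sub-solvers and learned closures behind one common interface.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Module:
    """A PDE sub-solver: classical physics or a learned component."""
    name: str
    kind: str                      # "classical" or "learned"
    step: Callable[[np.ndarray, float], np.ndarray]

def advection_step(u: np.ndarray, dt: float, c: float = 1.0) -> np.ndarray:
    # First-order upwind advection on a periodic 1-D grid (dx = 1).
    return u - c * dt * (u - np.roll(u, 1))

def diffusion_step(u: np.ndarray, dt: float, nu: float = 0.1) -> np.ndarray:
    # Explicit central-difference diffusion on the same grid.
    return u + nu * dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

# The module dictionary: practitioners decide which physics are trusted
# (classical) and which are approximated (learned closures would slot in
# here with the same `step` signature).
LIBRARY = {
    "advection": Module("advection", "classical", advection_step),
    "diffusion": Module("diffusion", "classical", diffusion_step),
}
```

Because every entry exposes the same `step` interface, swapping a trusted classical operator for a learned approximation (or back) is a one-line change to the dictionary.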

HyCOP conditions module selection on regime features and state statistics at inference time: the system decides which module to apply and for how long, producing a human-readable program that can be evaluated at arbitrary query times without autoregressive rollout. Avoiding rollout eliminates the compounding error that limits learned surrogates in long-horizon production simulations.
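Regime-conditioned composition can be sketched with a toy rule standing in for the learned policy. Everything here (the feature set, the threshold, the weighting rule) is a made-up illustration of the idea, not the paper's mechanism; the point is that the output is a readable list of (module, weight) pairs rather than a latent vector.

```python
# Illustrative only: a hand-written rule plays the role of HyCOP's
# learned policy, mapping state statistics to a composed program.
import numpy as np

def regime_features(u: np.ndarray) -> dict:
    # Cheap state statistics a policy could condition on.
    return {"grad_max": float(np.max(np.abs(np.diff(u)))),
            "energy": float(np.sum(u ** 2))}

def select_program(features: dict) -> list[tuple[str, float]]:
    # Toy rule: sharp gradients get more diffusive smoothing weight.
    w_diff = 0.8 if features["grad_max"] > 1.0 else 0.2
    return [("advection", 1.0 - w_diff), ("diffusion", w_diff)]

u0 = np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
program = select_program(regime_features(u0))
print(program)   # a human-readable, auditable module composition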

FIG. 02 HyCOP queries regime features and state statistics to dynamically select and compose PDE solver modules at inference time. — Zhao et al., 2026

For enterprise teams in energy, pharmaceuticals, and advanced manufacturing, interpretability is a hard requirement. Regulatory and validation workflows require simulation outputs auditable against domain constraints — a bar monolithic neural operators consistently fail to clear. HyCOP's program-level output lets a materials scientist or process engineer inspect which physical operators were invoked and with what weighting, rather than diagnosing a high-dimensional latent space.

The framework supports modular transfer via dictionary updates: a new boundary condition or residual correction can be injected into the module library and immediately composed into existing programs without full retraining. The authors demonstrate boundary swaps and residual enrichment, reducing the data and compute cost of adapting a surrogate to a new problem configuration — critical when training data from high-fidelity solvers is expensive.
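A dictionary update of this kind is mechanically simple, which is the point: the sketch below shows a hypothetical `register` helper injecting a new boundary handler into an existing library. Names are illustrative, not from the paper's (unreleased) code.

```python
# Hedged sketch of modular transfer via dictionary update: a new
# boundary handler becomes composable without retraining anything.
import numpy as np

# Existing library; "identity_bc" stands in for a prior handler.
LIBRARY = {"identity_bc": lambda u: u}

def register(name, fn):
    # Injecting a module is a plain dictionary write; existing
    # entries and programs are untouched.
    LIBRARY[name] = fn

def dirichlet_bc(u, left=0.0, right=0.0):
    # Newly injected boundary condition: clamp the domain endpoints.
    v = np.asarray(u, dtype=float).copy()
    v[0], v[-1] = left, right
    return v

register("dirichlet_bc", dirichlet_bc)
clamped = LIBRARY["dirichlet_bc"](np.linspace(1.0, 2.0, 5))
print(clamped)   # endpoints clamped, interior untouched
```

The boundary-swap experiments in the paper follow this pattern at a higher level: only the new component needs training data, because the rest of the program is reused as-is.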

HyCOP includes an expressivity characterization and an error decomposition that separates composition error — how well the policy assembles the right program — from module error, the residual introduced by each component. That decomposition doubles as a runtime diagnostic, letting operators identify whether accuracy loss stems from poor module selection or from a module that has drifted out of its valid regime.
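The diagnostic value of the split can be seen in a deliberately synthetic example: by re-running with the correct program (but imperfect modules) and with exact modules (but the policy's program), each error term is isolated. The two ablation runs and the bias magnitudes below are invented for illustration, simplified from the paper's formal decomposition.

```python
# Toy illustration of the composition-vs-module error split;
# the biases are synthetic stand-ins for real solver errors.
import numpy as np

truth = np.array([1.0, 2.0, 3.0])

def run(program_is_right: bool, modules_exact: bool) -> np.ndarray:
    # Stand-in solver: each flaw adds its own bias to the output.
    out = truth.copy()
    if not program_is_right:
        out += 0.5          # wrong composition
    if not modules_exact:
        out += 0.1          # imperfect module
    return out

total = np.abs(run(False, False) - truth).max()
composition = np.abs(run(False, True) - truth).max()
module = np.abs(run(True, False) - truth).max()
print(total, composition, module)   # composition error dominates here
```

In this toy setup the diagnosis is unambiguous: most of the loss comes from selecting the wrong program, so retraining the modules would not help.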

The paper does not publicly release training code or pre-trained module dictionaries, limiting immediate adoption. Performance on fully three-dimensional, turbulent, or multi-physics regimes beyond the reported benchmarks is uncharacterized. The policy learning mechanism introduces training complexity that could disadvantage teams without substantial labeled simulation data.

Written and edited by AI agents · Methodology