Where the multi-scale signature began
In a deuterium-tritium fusion reactor, the blanket surrounding the plasma has two jobs at once: it absorbs fast neutrons to generate heat, and it breeds the tritium fuel the reactor needs to keep running. Both jobs place hard constraints on the ceramic pebble bed packed inside the blanket module. The breeder ceramic must stay hot enough — above roughly 300–400°C — for tritium to release and be swept out by the helium purge gas. But the EUROFER structural steel surrounding it can’t exceed around 550°C without risking creep failure. Threading that needle requires knowing the effective thermal conductivity (ETC) of the bed with real accuracy.
When I arrived at KIT’s pebble-bed thermomechanics group in the summer of 2017 as a DAAD-WISE Fellow, the standard design practice assigned every breeder unit a single uniform ETC — a number pulled from a classical analytical model developed in the 1970s. These Zehner-Bauer-Schlünder-type models rely on empirical unit-cell assumptions. They work reasonably well for idealized, uniform packings. But real beds are anything but uniform: contact forces vary with depth, helium purge pressure drops along the flow path following the Ergun equation, and neutronic heating creates steep temperature gradients inside a unit just 400 mm long. The classical approach had no way to account for any of that.
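For reference, the Ergun relation invoked here is the standard packed-bed pressure-drop correlation; the symbols below are the usual ones, not HCPB-specific values:

$$
\frac{\Delta P}{L_{\text{bed}}} \;=\; 150\,\frac{\mu\,(1-\varepsilon)^2}{\varepsilon^3 d_p^2}\,U \;+\; 1.75\,\frac{\rho\,(1-\varepsilon)}{\varepsilon^3 d_p}\,U^2
$$

with $\varepsilon$ the bed porosity, $d_p$ the pebble diameter, $U$ the superficial helium velocity, and $\mu$, $\rho$ the gas viscosity and density. Anything that changes the porosity or the local gas state along the 400 mm flow path therefore shows up as a pressure gradient, and through the gas-phase conduction, as a conductance gradient.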
The question I came to ask was simple enough to state: does spatially uniform ETC actually hold inside a real breeder unit? My answer, three years and five papers later, was no — and the spatial variation turned out to be large enough to matter for safety margin calculations.
The thermal resistor network model
The first task was to build a computational platform capable of mapping 3D particle packings to their thermal conductance properties, across the full operating envelope of an HCPB (helium-cooled pebble bed) blanket: eight ceramic materials, temperatures from 25 to 800°C, helium pressures from 1 to 4 bar. Nothing general enough existed in the literature.
Working with Prof. Marc Kamlah and Prof. Marigrazia Moscardini, I developed a thermal resistor network model that reads DEM-generated particle configurations and assembles a sparse conductance matrix from first principles. Each particle-particle contact gets assigned to one of three categories — overlap, gap, or touch — and each category has its own conductance formulation based on Batchelor-O’Brien contact mechanics (a classical 1977 result, applied here systematically to full 3D packing topologies for the first time). For the helium filling the inter-particle gaps, I implemented Smoluchowski rarefied-gas conduction with a per-contact Knudsen correction and a viscosity-based mean free path. At the 0.2 MPa operating pressure, the gas is in the transition regime between continuum and free-molecular flow; continuum conductance assumptions overpredict the gas-phase contribution by as much as 30–50%, and that error propagates directly into ETC predictions.
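To make the resistor-network idea concrete, here is a minimal Python sketch (the production code was MATLAB): particles become nodes, contacts become thermal conductances, and the bed ETC follows from the heat flow driven across the packing by a fixed temperature difference. The grid geometry and the conductance rule are placeholders standing in for the DEM packing and the Batchelor-O’Brien / Smoluchowski formulations.

```python
# Minimal resistor-network sketch: nodes = particles, edges = contact conductances.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
m = 8                                            # 8 x 8 x 8 "particles" on a cubic grid
n = m ** 3
idx = lambda i, j, k: (i * m + j) * m + k

# Synthetic contact list: nearest neighbours on the grid (a real run reads DEM output).
contacts = []
for i in range(m):
    for j in range(m):
        for k in range(m):
            if i + 1 < m: contacts.append((idx(i, j, k), idx(i + 1, j, k)))
            if j + 1 < m: contacts.append((idx(i, j, k), idx(i, j + 1, k)))
            if k + 1 < m: contacts.append((idx(i, j, k), idx(i, j, k + 1)))

def contact_conductance(a, b):
    # Placeholder rule. The real model classifies each contact as overlap / gap / touch
    # and evaluates solid (Batchelor-O'Brien) plus rarefied-gas (Smoluchowski,
    # Knudsen-corrected) contributions per contact.
    return rng.uniform(1e-4, 1e-3)               # W/K

# Assemble the sparse conductance (graph-Laplacian) matrix.
rows, cols, vals = [], [], []
for a, b in contacts:
    g = contact_conductance(a, b)
    rows += [a, b, a, b]; cols += [b, a, a, b]; vals += [-g, -g, g, g]
K = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# Fix temperatures on two opposite faces and solve for the interior nodes.
hot = np.array([idx(0, j, k) for j in range(m) for k in range(m)])
cold = np.array([idx(m - 1, j, k) for j in range(m) for k in range(m)])
fixed = np.concatenate([hot, cold])
free = np.setdiff1d(np.arange(n), fixed)
T = np.zeros(n); T[hot] = 1.0                    # unit temperature difference
T[free] = spla.spsolve(K[free][:, free].tocsc(), -(K[free][:, fixed] @ T[fixed]))

# Heat injected at the hot face, then Fourier's law gives an effective conductivity.
Q = (K[hot] @ T).sum()
L_bed, A_bed, dT = 1.0, 1.0, 1.0                 # placeholder bed dimensions
print(f"effective conductivity (arbitrary units): {Q * L_bed / (A_bed * dT):.3e}")
```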
The codebase — internally called RNmodcel — grew to 4,562 lines of MATLAB across 10 specialized variants, with a 520-line batch processor that runs autonomously over 600+ DEM output files and formats structured training datasets for downstream use. I built it to be shared infrastructure: the same platform ran under all five publications that came out of this work over three years.
Validation came next. Against experimental data for eight ceramics — lithium orthosilicate, lithium metatitanate, alumina, magnesia, UO2, beryllium, steel, and lithium zirconate titanate — the model outperformed four existing literature models across the full 25–800°C range. From that data, I derived six closed-form parametric correlations expressing all relevant microstructural quantities (packing fraction, coordination number, contact radius, gap width) as power-law functions of experimentally measurable inputs. Designers could now get accurate ETC estimates in microseconds, without running a simulation.
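Schematically, each correlation has the form

$$
y \;=\; A\,x^{\,n}
$$

where $y$ is one of the microstructural quantities (coordination number, say), $x$ is a measurable input (packing fraction, say), and $A$, $n$ are fitted constants. The actual coefficients for the six correlations are reported in the papers; the symbols here are placeholders only.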
The widest material coverage of any granular-bed ETC model at the time, validated from room-temperature startup through full operational temperature — built because the fusion blanket design problem demanded it, not because the infrastructure was easy.
One finding from the loading/unloading study stood out: ETC saturates after the fourth mechanical conditioning cycle. That’s not just academically interesting — it’s a blanket assembly protocol recommendation. If you want reproducible thermal performance, you condition the bed four times before you seal it.
The DEM–ANN–FEM pipeline
Hours of DEM simulation plus a resistor-network solve for every configuration is not a practical input to blanket-level thermal analysis. You need to evaluate local ETC thousands of times for different combinations of stress, temperature, and gas pressure. That problem pointed directly to a surrogate model.
With a collaborator at IIT Madras — working under Prof. R.K. Annabattula and in collaboration with Prof. Yixiang Gan at the University of Sydney — I co-developed a shallow artificial neural network: 11 inputs, three hidden layers, one output, R²=0.99 on held-out data. The 11 inputs are all experimentally measurable: packing fraction, coordination number, contact radius and gap statistics, gas pressure, temperature, material properties. The network was trained entirely on DEM+resistor-network simulation data, which meant it learned the nonlinear Smoluchowski S-curve across 10³–10⁵ Pa gas pressure without that physics being explicitly coded. Hours of compute per configuration collapsed to seconds.
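As a rough architectural illustration (in Python with scikit-learn, not the original implementation), the surrogate is just a small feed-forward regressor: 11 inputs in, three modest hidden layers, one ETC value out. The layer widths and the synthetic training target below are invented for the sketch.

```python
# Schematic stand-in for the ETC surrogate: 11 measurable inputs -> 3 hidden layers -> 1 output.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 11))                        # stand-in for DEM + resistor-network features
y = 0.9 + 0.3 * X[:, 0] * np.tanh(4.0 * X[:, 4] - 2.0)  # synthetic nonlinear target, W/mK-ish scale

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16, 16),       # "three hidden layers"; widths are invented
                 activation="tanh", max_iter=2000, random_state=0),
)
model.fit(X[:4000], y[:4000])                           # train on simulation-style data
print("held-out R^2:", round(model.score(X[4000:], y[4000:]), 3))
```

The point of the exercise is the same as in the real pipeline: once trained, a single forward pass replaces an hours-long DEM plus resistor-network evaluation.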
The ANN surrogate then needed a home in a full-scale solver. I built a custom MATLAB FEM solver from scratch: 8-noded isoparametric hexahedral elements, 8-point Gauss quadrature, sparse assembly, direct solve. At each Gauss integration point, the ANN evaluates local ETC from local conditions — the stress at that point, the local temperature, the local helium pressure computed from the Ergun equation. The system iterates until RMS temperature error drops below 0.1. Local physics, everywhere, in every element.
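The coupling loop is easiest to see in a deliberately reduced form. The Python sketch below is a 1D toy, not the 3D hexahedral solver: a placeholder surrogate supplies the local conductivity at every Gauss point from the current local temperature, the thermal problem is re-solved, and the loop repeats until the RMS temperature change drops below the same 0.1 threshold. Geometry, heating level, and wall temperatures are placeholder values.

```python
# Reduced 1D sketch of the surrogate-in-the-loop FEM iteration (not the 3D hex solver).
import numpy as np

def surrogate_etc(T_local):
    # Placeholder for the ANN: the real solver feeds it local stress, temperature,
    # and Ergun-derived helium pressure at each Gauss point.
    return 1.0 + 0.2 * np.tanh((T_local - 500.0) / 200.0)   # W/(m K)

n_el, L = 40, 0.015                     # elements, bed thickness (m); placeholder geometry
x = np.linspace(0.0, L, n_el + 1)
source = 2.4e7                          # volumetric heating (W/m^3), taken uniform here
gp, wg = np.array([-1.0, 1.0]) / np.sqrt(3.0), np.array([1.0, 1.0])   # 2-point Gauss rule

T = np.full(n_el + 1, 300.0)            # initial guess (deg C)
for it in range(100):
    K = np.zeros((n_el + 1, n_el + 1)); F = np.zeros(n_el + 1)
    for e in range(n_el):
        h = x[e + 1] - x[e]
        for xi, w in zip(gp, wg):
            N = np.array([(1.0 - xi) / 2.0, (1.0 + xi) / 2.0])  # linear shape functions
            B = np.array([-1.0, 1.0]) / h
            k_loc = surrogate_etc(N @ T[e:e + 2])                # surrogate evaluated at the Gauss point
            K[e:e + 2, e:e + 2] += w * k_loc * np.outer(B, B) * h / 2.0
            F[e:e + 2] += w * source * N * h / 2.0
    for node, Tval in ((0, 300.0), (n_el, 300.0)):               # fixed wall temperatures (placeholders)
        K[node, :] = 0.0; K[node, node] = 1.0; F[node] = Tval
    T_new = np.linalg.solve(K, F)
    rms = np.sqrt(np.mean((T_new - T) ** 2))
    T = T_new
    if rms < 0.1:                       # same convergence criterion as in the text
        break
print(f"converged after {it + 1} passes, peak temperature {T.max():.0f} deg C")
```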
The first application was a complete fusion breeder unit: approximately 12 million particles, 400×50×15 mm geometry, with neutronic heating up to ~24 MW/m³ applied as a spatially varying body heat source. Prior thermal analyses of breeder units had used a single uniform ETC value. The coupled DEM–ANN–FEM simulation revealed a 34% spatial variation in effective thermal conductivity across the unit (0.88 to 1.18 W/mK), driven by the interplay of stress gradients, temperature gradients, and Ergun-dictated pressure drop. Uniform-ETC analyses had missed this variation entirely. In a design environment where the EUROFER structural limit sits at roughly 550°C, a 34% error in local conductivity is not a rounding error — it shifts peak temperature predictions by enough to push designs across safety thresholds.
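(For clarity: the 34% figure is the spread relative to the low end of that range, (1.18 − 0.88)/0.88 ≈ 0.34.)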
The results were published in Computational Particle Mechanics in 2019, with co-first authorship on both the ANN and the FEM-coupled simulation papers.
What this work seeded
The DEM–ANN–FEM pipeline wasn’t designed to be a general template. I was trying to solve a specific problem: ceramic pebble beds for fusion blankets are not tractable with uniform-property assumptions, and the only way to get spatially resolved conductivity at engineering scale was to resolve the contact physics at particle scale, train a surrogate on that data, and embed it into the engineering solver. That sequence of decisions — expensive small-scale physics → ML bridge → fast large-scale method — turned out to describe a pattern I’ve repeated in every research domain since.
At IIT Madras during my PhD, the next iteration ran on photo-responsive liquid-crystal actuators: molecular dynamics at the chain scale, surrogate coupling, FEM at the structural scale. The MATLAB FEM solver built for the breeder simulation became the precursor to a more complex FORTRAN/Abaqus implementation. The code progression was direct: RNmodcel in MATLAB, custom FEM in MATLAB, then FORTRAN subroutines interfaced to Abaqus for the multi-physics problems the PhD demanded.
At LANL, the same architecture appears again — DFT to resolve electrode–electrolyte contact physics, an ML interatomic potential trained on that data, MD for dynamics at scales DFT can’t touch. The domains are entirely different: fusion thermal engineering versus electrochemical catalysis. The bridging logic is the same. Resolve the small-scale physics you can’t avoid. Train a surrogate on it. Embed the surrogate in the method that operates at the scale you care about.
The multi-scale, ML-bridged architecture that now runs through my work on HIPPIE-NN and the ESM-RISM sweep engine traces back to a pebble-bed thermal problem in Karlsruhe — to the realization that uniform-property assumptions fail as soon as you look at what’s actually happening at the particle contacts.
The pattern statement I’ve used since: new domain → deep physics → reusable infrastructure → validated science. The granular thermal work is where that pattern first ran end-to-end.