How It Works
Plant-specific machine learning for heat treatment
HRC Labs embeds machine learning into existing industrial systems to turn production data into predictive process intelligence.
The problem
Why conventional process knowledge is not enough
Heat treatment outcomes emerge from nonlinear interactions among chemistry, geometry, equipment behavior, time, temperature, atmosphere, and quench conditions. In practice, these interactions are difficult to fully capture with fixed equations or isolated experiments alone.
HRC Labs addresses this complexity by applying machine learning directly to plant data. Rather than replacing metallurgical understanding, the platform formalizes and extends it — learning how furnaces, materials, and workflows behave in actual production.
Once trained, these models can:
- Predict outcomes for proposed recipes in milliseconds
- Evaluate hundreds of candidate parameter sets computationally
- Quantify uncertainty alongside predictions
- Improve continuously as more runs are completed
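As an illustration of this pattern, the sketch below uses a bootstrap ensemble of simple regression models on synthetic furnace-run data (all variables, ranges, and coefficients are invented stand-ins, not HRC Labs' actual models) to score a batch of candidate recipes in one vectorized call and report an uncertainty estimate alongside each prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for run history: tempering temperature (deg C) and
# log10 time (h) -> measured hardness (HRC). All values are illustrative.
X = rng.uniform([400.0, -0.5], [650.0, 1.0], size=(200, 2))
y = 60.0 - 0.05 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(0.0, 0.8, 200)

def fit_linear(X, y):
    """Least-squares fit with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Bootstrap ensemble: each member is fit on a resampled run history, so
# the spread of member predictions approximates model uncertainty.
members = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))
    members.append(fit_linear(X[idx], y[idx]))

def predict(candidates):
    """Return (mean prediction, uncertainty) for each candidate recipe."""
    A = np.column_stack([np.ones(len(candidates)), candidates])
    preds = np.stack([A @ coef for coef in members])  # (n_members, n_candidates)
    return preds.mean(axis=0), preds.std(axis=0)

# Score hundreds of candidate parameter sets at once.
candidates = rng.uniform([400.0, -0.5], [650.0, 1.0], size=(300, 2))
mean, std = predict(candidates)
```

Candidates whose predicted hardness meets spec with low uncertainty can then be promoted to trial runs, while high-uncertainty regions flag where more data would help.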
Data integration
Building a foundation from production data
HRC Labs integrates historical and live production data from existing systems — furnace logs and setpoints, atmosphere and quench data, material chemistry, load configuration, and inspection results. This creates the foundation for modeling true plant-specific process behavior rather than relying solely on generalized assumptions.
Model development
Choosing the right model for the problem
Different problems require different model classes. HRC Labs uses multilayer perceptrons for nonlinear prediction, decision-tree frameworks for structured operational decisions, and hybrid physics-ML approaches where established metallurgical models already provide a useful baseline. The objective is not to replace metallurgical reasoning, but to make it more adaptive and empirically precise.
Operational deployment
Models that improve with every run
Once validated, models are deployed directly into operational workflows to support recipe development, process optimization, structured troubleshooting, and drift detection. Each new run improves the predictive layer over time, turning historical data into a compounding operational asset.
Case Study
Extending classical tempering models with machine learning
To demonstrate how machine learning can enhance established metallurgical frameworks, HRC Labs analyzed publicly available tempering data from Hollomon and Jaffe's 1945 study. This work illustrates a broader principle: machine learning is often most powerful not as a substitute for physics, but as a corrective and adaptive layer built on top of it.
The classical Hollomon–Jaffe framework
Hollomon and Jaffe proposed that tempering response could be expressed using a single severity parameter, combining time and temperature into one descriptor. This was an elegant and influential simplification, and it remains a useful physical baseline.
M(T, t) = T [c + log₁₀(t)]

with T the absolute temperature, t the tempering time in hours, log₁₀ the base-10 logarithm, and c an empirical constant, commonly taken as about 20 for steels.
Post-tempering hardness versus tempering severity in Hollomon and Jaffe's original framework.
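The parameter is straightforward to compute. The sketch below assumes the common convention of temperature in kelvin, time in hours, a base-10 logarithm, and c = 20, and shows how two different time–temperature schedules can land at similar severity:

```python
import math

def hollomon_jaffe(temp_c: float, time_h: float, c: float = 20.0) -> float:
    """Hollomon-Jaffe tempering parameter M = T (c + log10 t),
    with T in kelvin and t in hours; c = 20 is a common choice for steels."""
    t_kelvin = temp_c + 273.15
    return t_kelvin * (c + math.log10(time_h))

# Two treatments with similar severity despite different schedules:
m_short = hollomon_jaffe(550.0, 1.0)  # hotter, shorter
m_long = hollomon_jaffe(530.0, 4.0)   # cooler, longer
```

Because the parameter collapses time and temperature into one axis, hardness data from many schedules can be plotted against a single severity value, which is exactly the collapse the original study exploited.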
Where the physical model begins to break down
HRC Labs extended the Hollomon–Jaffe framework by allowing its empirical constant to vary with carbon concentration. While this improved local performance, the formulation still struggled across distinct composition regimes. In the most severe extrapolation case, prediction error exceeded 10 HRC, highlighting the limits of a smooth global parameterization when transformation behavior changes nonlinearly across steels.
Extending the HJ collapse with carbon dependence improves local behavior but still fails across composition regimes.
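A minimal sketch of the carbon-dependent extension, using invented data rather than the 1945 dataset: treat the empirical constant as a linear function of carbon content and recover that function by least squares from per-steel best-fit constants.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented illustration: pretend each steel's best-fit HJ constant drifts
# linearly with carbon content (these numbers are not from the 1945 data).
carbon = rng.uniform(0.2, 0.9, 40)                     # wt% C
c_best_fit = 19.0 + 2.5 * carbon + rng.normal(0.0, 0.1, 40)

# Fit the composition-dependent constant c(carbon) = a + b * carbon.
A = np.column_stack([np.ones_like(carbon), carbon])
(a, b), *_ = np.linalg.lstsq(A, c_best_fit, rcond=None)

def c_of_carbon(wt_pct_c: float) -> float:
    """Composition-dependent HJ constant."""
    return a + b * wt_pct_c

# Composition-aware severity for a 0.4 wt% C steel tempered 2 h at 823 K:
severity = 823.0 * (c_of_carbon(0.4) + np.log10(2.0))
```

A single smooth function like this can only shift the severity axis per composition; it cannot represent regime changes in transformation behavior, which is why the extrapolation error remained large.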
Using machine learning as a corrective layer
To address this limitation, HRC Labs trained boosted decision trees to model the residual error of the physical formulation. Rather than replacing the metallurgical model, the machine-learning layer learned systematic corrections directly from data. This reduced error substantially — to roughly 2.6–2.9 HRC depending on inputs — and SHAP analysis showed that the correction structure was interpretable rather than arbitrary. Severity and minor alloying elements contributed meaningfully to the correction.
SHAP analysis shows that the residual model learned systematic, interpretable corrections rather than acting as a black box.
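The residual-learning pattern can be sketched as follows. This example uses scikit-learn's GradientBoostingRegressor on synthetic data (the toy physical baseline, its coefficients, and the nonlinear composition term are all invented stand-ins, and the SHAP step is omitted); the key point is that the machine-learning layer is trained only on the physical model's error, not on hardness directly:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Synthetic stand-in for tempering data (not the 1945 dataset):
# temperature (K), log10 time (h), carbon content (wt%).
n = 400
X = np.column_stack([
    rng.uniform(700.0, 950.0, n),   # tempering temperature, K
    rng.uniform(-0.5, 1.0, n),      # log10(time in hours)
    rng.uniform(0.2, 0.9, n),       # carbon content, wt%
])

def physical_model(X, c=20.0, h0=95.0, k=4.5e-3):
    """Toy HJ-style baseline: hardness falls linearly with severity."""
    severity = X[:, 0] * (c + X[:, 1])
    return h0 - k * severity

# Ground truth adds a nonlinear, composition-dependent term that a smooth
# global severity parameterization cannot represent.
y = physical_model(X) + 6.0 * np.sin(4.0 * X[:, 2]) + rng.normal(0.0, 0.5, n)

# Residual learning: boosted trees model only the physical model's error.
residual = y - physical_model(X)
corrector = GradientBoostingRegressor(random_state=0).fit(X, residual)

def predict_hybrid(X):
    """Physical baseline plus learned correction."""
    return physical_model(X) + corrector.predict(X)

rmse_physical = np.sqrt(np.mean((y - physical_model(X)) ** 2))
rmse_hybrid = np.sqrt(np.mean((y - predict_hybrid(X)) ** 2))
```

Because the trees see only the residual, the physical model still carries the bulk of the prediction, and the learned correction can be inspected (for example with SHAP) to confirm it reflects systematic structure rather than noise.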
Why this matters in production
This tempering analysis is a small but important proof point for the broader HRC Labs approach. In real production environments, furnace behavior, fixturing, sourcing variation, and batch history introduce plant-specific effects that fixed models alone often cannot capture. Machine learning provides a practical way to learn those effects directly and turn them into faster, more confident engineering decisions.
Turn historical process data into operational intelligence
HRC Labs helps heat treaters deploy plant-specific predictive models that improve recipe design, process efficiency, and day-to-day decision-making.
Contact

