How Sizewell C is using AI to rethink project controls

Inside Sizewell C: how a live nuclear megaproject is using AI to redesign project controls and strengthen decision integrity.

Written by
Colin Myer
nPlan evangelist and content creator. Passionate about major projects and the role they play in driving economic growth and raising standards of living. Ambitious infrastructure projects are awesome!

Very few nuclear megaprojects offer the public an inside view of their decision-making.

However, at nPlan's recent AI Day, we hosted Tommy Clarke (Head of Programme Controls) and Carolyn le Roux (PMO Director) from Sizewell C - one of the most consequential infrastructure projects in Europe - for a candid discussion about risk, credibility and the role of AI in nuclear delivery.

Sizewell C sits at the intersection of energy security, carbon reduction and industrial capability. It carries the explicit ambition of delivering cheaper and faster than Hinkley Point C - under intense scrutiny.

With nPlan CEO Dev Amratia asking the questions, Tommy and Carolyn engaged in a serious discussion about how decision-making must evolve on projects of this scale. We've compiled the must-watch clips from the conversation - along with a few that provide important context - into a series of blogs we believe project controls and delivery pros will find valuable on their own projects. This is Part I - let's dive in...

The scale and the responsibility

Before AI entered the discussion, the context was clear: Sizewell C is not a greenfield experiment. It is a replication nuclear build, backed by government and private capital, expected to outperform its predecessor.

The controls challenge is therefore twofold: convert years of estimating into a live baseline, and scale the organisation - culture included - to deliver against it.

The implications are significant.

A physical twin reduces uncertainty. Mature design reduces ambiguity. Procurement largely off the critical path reduces exposure.

But none of that eliminates delivery risk. If anything, it raises expectations.

Which makes the quality and speed of decision-making even more critical.

From watching the rear-view mirror to looking through the windscreen

The conversation then moved to a more fundamental question: what is the real opportunity for AI on a project of this scale?

Tommy’s description goes well beyond productivity.

He is outlining a shift in how planning and risk professionals operate on a live megaproject.

Instead of spending the majority of their time gathering data, reconciling reports and explaining variance, he envisions SMEs freed to work directly with delivery and construction teams - scenario planning, testing options, and preparing contingency pathways before issues escalate.

That changes the day-to-day role of project controls.

Planning and risk SMEs become forward-looking advisors rather than retrospective reporters. The emphasis moves from documenting what went wrong to stress-testing what could go wrong - and deciding in advance how to respond.

On a programme of this complexity, that shift is material. It positions project controls not as a reporting function, but as an active participant in shaping delivery outcomes.

The schedule as a strategic lever

If AI is about improving forward-looking decision quality, then the logical place to begin is where uncertainty, cost and delivery risk converge most visibly. On a nuclear megaproject, that convergence sits squarely within the schedule.

The programme schedule is not just a planning artefact. It is the mechanism through which cost exposure, productivity assumptions, sequencing logic and commissioning timelines are translated into board-level commitments. It is also the area where optimism bias and hidden assumptions can compound.

That is why Sizewell C chose to begin here.

The emphasis here is revealing.

Adoption was not impulsive. It followed market observation, conversations with other major clients, and what Carolyn described as a lengthy demonstration and testing period. That level of scrutiny reflects the governance standards required on a nuclear programme of this scale.

But the rigour did not stop at selection.

What stands out in Tommy’s explanation is that nPlan is not being used as a forecasting shortcut. It is being deployed as an independent challenge function. By generating a parallel dataset that can be compared against internal methodologies, it introduces structured tension into the controls process - highlighting convergence, surfacing divergence and reducing reliance on any single modelling approach.

Equally significant is the focus on inputs. Stress-testing assumptions before they reach the board transforms AI from a reporting tool into a governance instrument.

That discipline, applied both in adoption and in application, increases confidence in the conversations that follow.

Strengthening decision integrity

What ultimately stands out in this discussion is the seriousness with which both technology and governance are being treated.

Forward-looking controls, structured challenge, and rigorous input validation are practical mechanisms for improving decision quality. On a programme of this scale, even incremental improvements in clarity and confidence can materially influence outcomes.

When leaders of a live nuclear build speak openly about strengthening those mechanisms, it reflects a broader shift in how complex infrastructure projects are approaching AI - as part of a disciplined effort to improve delivery performance.

Part II of this series - with more video highlights from the discussion - is coming to this blog soon; watch this space!