
Introducing nPlan’s Activity Correlation Explorer
Explainability in AI: Our Next Steps
As a tech company that uses Predictive AI to forecast construction project activities and milestones, one of the biggest challenges we face is explaining why our AI has come up with the insights it has. When a technology provides an output that agrees with our view of the world, we are less inclined to need an explanation. But when it tells us something different, we naturally (and rapidly) demand explanations. At this point, AI, and specifically Deep Learning, can start to appear like a black box.
And this is perfectly understandable! In my time as a project team member, I could see and control the inputs into what I was doing, and how they related to the outputs, so I could see why something did or didn’t make sense to me. But with AI, a user often receives an output, decides whether it makes sense or not, and that’s it. That frequently creates the perception that ‘the computer is wrong’, because we’re not able to interrogate further. It can feel like a computer saying ‘just trust me’, and humanity is not ready for that just yet.
So, going back to explainability, it’s really important at nPlan that we help the user understand how our AI reached its conclusions. Welcome, then, to the Activity Correlation Explorer (this is our internal name for the feature; we’re working on something jazzier). This new feature looks at a key activity, tells you what proportion of our dataset of 750,000 past project schedules has informed the forecast for that activity, and shows how those activities have historically performed:

Not only does this help users see that their activities relate to our dataset, it also helps them understand why an activity is important, and to what extent. What I also like about it is the relationship to how people traditionally quantify discrete risk - there is a probability and an assessment of impact.
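To make that probability-and-impact framing concrete, here is a minimal sketch of the classic risk-register arithmetic: an activity's expected delay is its probability of slipping multiplied by the size of the slip. This is not nPlan's model; the class, field names, and numbers are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class ActivityRisk:
    name: str
    delay_probability: float  # chance the activity slips, 0..1
    delay_impact_days: float  # expected size of the slip if it happens

    def expected_delay(self) -> float:
        # Traditional discrete-risk quantification: probability x impact.
        return self.delay_probability * self.delay_impact_days

risks = [
    ActivityRisk("Piling works", 0.40, 20.0),
    ActivityRisk("M&E fit-out", 0.15, 60.0),
]

# Rank activities by expected delay, highest first.
for r in sorted(risks, key=ActivityRisk.expected_delay, reverse=True):
    print(f"{r.name}: {r.expected_delay():.1f} days")
```

Note that a low-probability, high-impact activity can outrank a likelier but smaller one, which is exactly why seeing both numbers separately matters.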
When you combine this feature with other recently released features, the benefits start to multiply. Look, for example, at Risk Matchmaker, where a user can upload their risk register to see how our insights match the risks they’ve been worrying about (or not worrying about):

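As a toy illustration of the kind of matching involved, here is a sketch that pairs each risk-register entry with its closest insight using simple word overlap. This is not Risk Matchmaker's actual algorithm; the similarity measure and example strings are stand-ins.

```python
def word_overlap(a: str, b: str) -> float:
    # Jaccard similarity between the word sets of two short texts, 0..1.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

register = ["Late delivery of steel to site", "Design change to facade"]
insights = ["Steel delivery delayed by supplier", "Groundworks overrun"]

# For each register entry, surface the best-matching insight.
for risk in register:
    best = max(insights, key=lambda ins: word_overlap(risk, ins))
    print(f"{risk!r} -> {best!r} (score {word_overlap(risk, best):.2f})")
```

In practice a matcher would need something far more robust than word overlap (synonyms, phrasing, domain jargon), but the shape of the problem - scoring each register entry against each insight and surfacing the strongest pairs - is the same.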
Or indeed our delay causes suggester, which finds typical causes of delay for a highlighted insight:

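One simple way to picture what a 'typical causes' lookup might involve (purely illustrative, not the suggester's real implementation) is tallying the delay causes recorded against similar activities in historical data:

```python
from collections import Counter

# Hypothetical historical records: (activity keyword, recorded delay cause).
history = [
    ("facade", "design change"),
    ("facade", "material shortage"),
    ("facade", "design change"),
    ("piling", "ground conditions"),
]

def typical_causes(keyword: str, top_n: int = 3) -> list[tuple[str, int]]:
    # Count how often each cause was recorded against matching activities.
    causes = Counter(cause for k, cause in history if k == keyword)
    return causes.most_common(top_n)

print(typical_causes("facade"))
# -> [('design change', 2), ('material shortage', 1)]
```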
In my mind, all of these features start to draw a picture of what risk there might be to a project in a way that is explained, so a user can understand it and explain it to others. And most importantly, manage risks before they become issues.
Explaining the outputs of AI will continue to be a tricky issue. With these features, nPlan is opening the black box of our dataset and ML features and linking them to the world of our users - who are discussing what’s important to make decisions that matter. And by making these outputs more familiar - linked to what our users do in their day-to-day jobs - their decisions will be powered by billions of hours of project experience at the press of a button (or a few presses).
