A Single Question

A friend always tells me that “most managers already know when a project is going bad. The status report just tells you how bad”.  

It’s an observation that most executives have sharp instincts that register problems long before the data confirms them, and a recognition that belief almost always precedes evidence. 

So the challenge in complexity isn’t how to get better data, it’s how to surface what people already sense to be true. 

The Delivery Confidence Score 

The tool I’ve been using is a simple question.  

“How confident are you that this project will deliver what you want?” 

Ask everyone involved to score it out of ten. They can answer it however they like and interpret it any way they want. There is no right or wrong. 

Think of it like the Net Promoter Score (NPS) – the customer experience measure built on a single question: would you recommend this to a friend? NPS works because it bypasses complexity and goes straight to what someone actually believes. The Delivery Confidence Score does the same thing for projects. 

The question doesn’t focus on what you can prove, it asks what you believe. It is useful because intuition picks up on signals that are hard to measure – the tone of meetings, the pace of decision-making, explanations that don’t make sense. 

Looking for Variance 

The magic isn't in the number, but in the gap between scores. On a project in trouble, you’ll often see a split in confidence. For example, it could look like: 

Project/technical team (7–8): they believe things are broadly on track. 

Business users (3–4): the people receiving the result are worried they don’t know enough about what is going on and feel left in the dark. 

Managers and sponsors (5–6): unsure, somewhere in the middle, and hesitant to criticise the project. They’re starting to sense a problem but can’t name it yet. 

These types of mixed results lead the discussion down two paths: 

1. Why is your score where it is? What warning signs does your score point to, and what needs to change as a result? 

2. Why are the scores different? Your team is looking at the same project from fundamentally different angles and reaching very different conclusions. The focus needs to be on reconciling those views, or identifying what more the project needs to do to get everyone aligned. 
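If you collect the scores in a spreadsheet or survey tool, the variance check above is easy to automate. Here is a minimal sketch in Python; the group names and the `confidence_report` helper are illustrative, not part of any standard tool.

```python
from statistics import mean

def confidence_report(scores):
    """Summarise Delivery Confidence Scores (out of ten) collected per group.

    `scores` maps a group name to that group's individual scores.
    Returns the per-group averages and the largest gap between any
    two groups - the gap, not the raw numbers, is what starts the
    conversation.
    """
    averages = {group: mean(values) for group, values in scores.items()}
    gap = max(averages.values()) - min(averages.values())
    return averages, gap

# Example: the split described above, with hypothetical group labels.
averages, gap = confidence_report({
    "technical": [7, 8],
    "business users": [3, 4],
    "sponsors": [5, 6],
})
```

A wide gap (here 4.0 points between the technical team and the business users) is the cue to ask the two follow-up questions above.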

How to Use It 

Confidence doesn’t shift fast, and asking too often turns a meaningful question into noise. Instead, build it into a periodic check-in. The point is to create a moment where people say what they actually think, not what the dashboard shows. 

The score itself is almost secondary. What matters is the conversation it starts. 

Most organisations have more project reporting than they know what to do with. And yet projects still surprise people at the end. How would you know if key stakeholders are not confident the project will deliver what they want? 

P.S. I’m exploring and expanding on this idea at the moment. If you’d like to delve into this concept or just want to chat about how it might apply to a project you're working on, I'd love to hear from you.
