Your ERP Project Is Green.
The dashboard says everything is on track. Scope — controlled. Budget — within tolerance. Timeline — amber at worst, green most weeks. The steering committee gets its report. The sponsor nods. The project moves forward.
What could go wrong?
The Five Numbers Executives Are Taught to Trust
There is a standard model for tracking project health. It has been taught in project management courses, embedded in governance frameworks, and repeated in steering committee templates across every industry for thirty years. It goes like this: if you monitor five dimensions — scope, budget, timeline, quality, and risks and issues — you will know whether your program is in trouble.
This model is not wrong. It is incomplete in a way that looks like being right.
Executives who rely on it are not being careless. They are being rational. They are using the instrument they were given, reading the dials they were told to watch, and making decisions based on what those dials say. The problem is not the executives. The problem is what the instrument cannot see.
What the Reports Don’t Show
Here is what I have watched happen, more than once, in programs where every dimension stayed green:
The project reaches go-live. Months pass. The business benefits that justified the investment never materialise. No one can point to where they went — they were simply never operationalised. The project team, supposedly wound down, is still being pulled in to fix issues that should have been resolved before cutover. Data quality is poor enough that people have stopped trusting reports from the new system — which means the system is being ignored in the decisions it was meant to inform. Integration with other enterprise applications is partial or broken. And quietly, without announcement, end users have rebuilt their spreadsheets. The system is running. The workarounds are running alongside it.
The reports were green. The outcomes were not.
Why the Model Fails at the Finish Line
The five dimensions measure the delivery of a project. They do not measure the realisation of its purpose.
Scope asks: did we build what we said we’d build? It does not ask: was what we built actually adopted, embedded, and used to change how decisions get made?
Budget asks: did we spend what we planned to spend? It does not ask: did that spending produce the return it was supposed to produce?
Timeline asks: did we go live when we said we would? It does not ask: were the people, processes, and integrations actually ready when we did?
The model was designed to protect delivery. Organisations use it as if it protects outcomes. Those are different things, and the gap between them is where value leaks.
There is a structural reason this persists. Project governance is built around the delivery phase. The steering committee meets regularly while the program is in flight. Once go-live happens, the governance dissolves — right at the moment when the real work of embedding begins. The instrument gets switched off just when the most consequential reading needs to be taken.
The Cost of Finding Out Late
The consequences of a green-reported failure are not just financial, though they are always financial. They are institutional.
When the system goes live and the benefits don’t follow, what erodes first is trust — in the system, in the team that delivered it, and in the executive who sponsored it. End users lose confidence and find workarounds. Those workarounds become entrenched. The organisation pays twice: once for the system it bought, and again to maintain the shadow processes that replaced it.
The harder the sponsoring executive pushed for the project, the more reluctant they become to name what went wrong. Protecting the decision becomes more important than correcting it. And so the drift continues — not because no one sees it, but because naming it has a political cost that staying quiet does not.
The Question Worth Asking in the Boardroom
The right question is not: did the project deliver on its dimensions?
The right question is: what did we build this for — and is the organisation now living differently because of it?
That reframe shifts the unit of accountability. It moves from the project team to the executive sponsor. It connects delivery to outcomes rather than treating them as separate events. And it changes what governance needs to watch.
A holistic view of a technology program includes the traditional five dimensions, but it also includes the rationale that justified the investment, the post-go-live ecosystem that determines whether the system gets used, and the human change that has to happen before any of that is possible. These are not soft add-ons. They are the conditions under which the money spent returns as value.
If your governance framework doesn’t cover them, you are not flying blind — you are flying with instruments that only show part of the picture, while assuming they show all of it.
The Conversation to Have Before the Next Steering Committee
If you are currently sponsoring a technology program, or preparing to sign off on one, there are three questions worth sitting with:
What outcome did we commit to — and how will we know if we’ve achieved it? Not delivery. Not go-live. The outcome. If this can’t be answered in one clear sentence, the governance framework has a gap at its foundation.
What happens after go-live? Who is responsible for adoption, for integration, for the human change that has to accompany the technical change? If the answer is “the project team wraps up and the business takes over,” ask what that handover actually looks like in practice.
What am I not being shown? Every steering committee report is curated. Not dishonestly — but optimistically, and by people whose job it is to keep the program moving. An independent set of eyes, with no stake in the delivery outcome, will see things the internal view cannot.
Green is not the same as fine. The sooner that distinction is built into how programs are governed, the less expensive the lesson becomes.