Your ERP Program Has Five Dimensions. Most Steering Committees Watch One.

What you’re monitoring, what you’re missing, and why the gap between them is where programs go wrong.

You get a status report every fortnight. It tells you the program is on track. Milestones are green. Budget is within tolerance. The vendor is confident. Your project team is busy.

And somewhere underneath all of that, something nags.

You can’t name it precisely. You’re not a technical person — that’s not your job. Your job is to make sure the organisation gets what it paid for. But the reports you’re receiving are written to manage your concerns, not to surface them. And you sense that.

That instinct is correct. The problem isn’t that your program is failing. The problem is that the monitoring structure you’ve been given is watching the wrong things — or more accurately, not enough of the right things.

Most steering committees monitor one dimension of an ERP program: delivery. Are we on scope? Are we on budget? Are we on time? These matter. But delivery is just one of five dimensions running simultaneously through every technology program. The other four are moving whether anyone is watching them or not.

This article walks through all five. For each one, I’ll tell you what you can see from the front of the room — and what you should also be able to see, but probably can’t.

1. Delivery: The one dimension everyone watches — and still misreads

What you can see: scope, budget, timeline, quality gates. The classic project triangle. Your status report covers all of this. You have a RAG status. You have milestone dates.

What you should also be asking: are those milestones still measuring what they were designed to measure?

This is the subtlest and most common failure in delivery oversight. Milestones drift in definition without anyone formally changing them. A sign-off that was supposed to represent genuine acceptance of completed work becomes a courtesy gesture — a date being preserved because renegotiating it would require a difficult conversation. A budget that was “within tolerance” in Month 4 is within a tolerance that has been quietly redefined since Month 2.

The scope question is the same. Scope changes are supposed to be formally governed — assessed, approved, logged, and their implications worked through. In practice, they are often absorbed. The program team accommodates a change to keep the vendor relationship smooth, or to avoid slowing things down. Nobody lies. Nobody formally agrees to change the scope. It just narrows.

What you’re left with is a program that is technically on track, delivering something materially different from what was approved.

The honest question here isn’t “are we on scope?” It’s: who has independently verified that what we’re building still matches what was approved — and that acceptance criteria are being met, not just signed?

2. Data and system readiness: The dimension that’s always ‘in progress’

You’ve heard the phrase “data migration is underway” so many times it no longer produces anxiety. It should.

What you can see: a data migration workstream. Status: in progress. Percentage complete: climbing slowly.

What you should also be seeing: a data quality profile that was done at the beginning of the program — showing where the gaps are, what clean looks like, and who owns the remediation. A named person accountable for getting your data into a state that the new system can actually use. A date by which that work must be done, working backwards from go-live with real margin.

Most programs don’t have this. They have a workstream. The workstream has tasks. The tasks are moving. But no one has formally confronted the question: is the data actually going to be ready — and what happens to the program if it isn’t?

The same pattern appears in security and privacy, architecture compliance, and scalability. These aren’t things that get done at the end. They are disciplines that run through the whole program. If they’re not being monitored as independent dimensions — with owners, standards, and governance — they get deferred. And deferred technical work has a way of becoming a post-go-live crisis.

The honest question: not “is data migration in progress?” but what is the current quality of our data against the requirements of the new system, and what is the confirmed plan to close the gap before go-live?

3. Reporting and visibility: The dimension that determines whether you can see the others

This one is different from the rest. It doesn’t describe something happening in the program. It describes your ability to see what’s happening in the program. If this dimension is broken, everything else goes dark.

What you can see: status reports, RAG dashboards, steering committee presentations. They exist. They arrive on schedule. They have structure.

What you should also be asking: are these reports designed to surface risk, or to manage it?

There’s a structural problem in most program reporting. The people preparing the reports are either part of the delivery team, or dependent on it for their information. They are not independent. That doesn’t make them dishonest. It means they are subject to the same pressures the whole team is subject to: don’t slow things down, don’t alarm the executive, don’t escalate unless you have to.

The result is reports that accurately describe motion — tasks completed, meetings held, issues logged — without accurately describing meaning. Whether the decisions being made are the right decisions. Whether the issues being logged are the ones that actually matter. Whether the milestones being hit still represent what they were supposed to represent.

The decision audit trail is part of this. When scope changes, when budget revisions occur, when the timeline shifts — is there a written record of who decided and on what basis? Or do those decisions happen in conversations, and the steering pack just shows the new baseline as though it always existed?

The honest question: if something were seriously wrong in this program right now — would your current reporting structure surface it, or suppress it?

4. People and change: The dimension that’s always a slide in the steering pack

This is where most programs quietly fail. Not at the technology layer. At the human one.

What you can see: a change management workstream. A communications plan. Training scheduled. A go-live date with user preparation activities listed.

What you should also be seeing: evidence that the organisation is actually changing, not just being informed about change. Ongoing signals about how end users are responding — not anecdotally, but systematically tracked. A training program designed around the specific system going live, not a generic curriculum built six months before the configuration was finalised. A plan for what happens when staff turn over after go-live, which they will.

The gap between the project team’s confidence and the end user population’s readiness is one of the most reliable leading indicators of a troubled go-live. The project team has been living in this system for months. They know how it works. They believe in it. They have forgotten how foreign it will feel to someone encountering it for the first time, under operational pressure, with no runway to make mistakes.

The other thing to watch is whether the organisation is genuinely improving its processes through this program — or simply replicating the old ones in new software. ERP implementations have a way of allowing years of accumulated workarounds and broken processes to get quietly baked into the new system. The technology becomes a permanent home for dysfunction that should have been addressed.

The honest question: if you asked twenty end users today what they know about the new system and whether they feel prepared, what would they tell you — and does your steering committee know the honest answer?

5. Strategic value: The dimension that was defined at the beginning and forgotten by Month 3

Your program was approved on the basis of a business case. That case made commitments — financial, operational, strategic. Efficiency gains. Improved reporting. Cost reductions. Better governance. Someone presented those numbers. Someone signed off on them.

What you can see: the program is delivering the agreed scope, within the agreed budget, against the agreed timeline.

What you should also be asking: is what’s being delivered still connected to what was promised?

Business cases get orphaned. Not through any decision to abandon them — just through drift. As scope gets absorbed, adjusted, quietly narrowed, the program shifts away from the original intent. The benefits that were promised start to depend on things that are no longer being built. Nobody notices, because nobody is watching the business case against the program with any regularity.

Benefits realisation is part of this. Most programs have a benefits owner in name. In practice, that person is waiting for post-go-live to start tracking anything — because the benefits were projected to materialise after implementation. But leading indicators exist. And if nobody is tracking them now, the post-go-live discovery that benefits aren’t materialising will be framed as a surprise. It isn’t a surprise. It’s a gap that was visible from the beginning to anyone watching the right things.

The honest question: can you connect what your program is currently building, right now, to the specific commitments made in the approved business case — and do you have a plan to actually measure whether those commitments were delivered?

What this adds up to

Five dimensions. Most steering committees have structured oversight of one. The others are either embedded in delivery reporting (where they can get quietly managed), deferred to post-go-live, or simply not being monitored.

The risk is not that your program is failing. The risk is that you won’t know it’s failing until the cost of correction has compounded.

Each of these dimensions needs a nominated owner, a monitoring standard, and a reporting rhythm independent of the delivery team’s incentives. Together, they answer the one question you should be able to answer with confidence at every steering committee:

What is actually happening in this program — and where is it drifting?

If you can’t answer that question cleanly, the issue isn’t the program. The issue is what you’ve been given to watch it with.

SP Singh is the founder of Bhani Consulting. He provides independent ERP oversight and advisory to WA local government, Aboriginal corporations, and mid-sized organisations. He is a WALGA panel member and has twenty years of experience inside enterprise software programs from both sides of the table.

If you’re mid-delivery and suspect something is drifting, the Executive Visibility Review gives you independent visibility into what’s actually happening — in three to five weeks.