
In recent years, local businesses, including state-owned enterprises, have repeatedly struggled to implement critical IT systems on time and on budget.
From retail giants experiencing warehouse bottlenecks to telecommunications and municipal services plagued by billing and operational outages, the warning signs are clear: systems drift, deadlines slip and costs spiral. The technology itself – ERP software, billing platforms or supply-chain systems – is often proven and widely adopted globally. What fails is programme oversight, planning and execution.
Behind headlines like these lies a pattern that organisations often overlook: large IT programmes rarely fail suddenly. Rather, they drift. Warning signs appear early but they are often ignored or misunderstood. Budgets expand gradually, timelines slip incrementally and by the time leadership recognises the scale of the problem, recovery costs far exceed the original investment.
South Africa has seen this pattern before in large infrastructure programmes. The construction of Eskom’s Medupi and Kusile power stations was marked by massive cost overruns and years of delay, reshaping the country’s energy landscape and negatively impacting the economy. These mega-infrastructure projects exhibit a strikingly similar underlying dynamic: early optimism, underestimated complexity and problems that compound quietly over time.
In IT, the risks can be even more difficult to detect. Across sectors, South African organisations have experienced the consequences of large-scale digital programmes that struggled to deliver as planned. Disruptions to systems such as eNatis, service interruptions affecting platforms administered by social grants agency Sassa and delays at airports due to emigration systems have demonstrated how technological instability can affect millions of citizens. In the private sector, Absa Group wrote off about R2.4-billion in software assets after projects failed to deliver expected value, and a major retail group is facing a lawsuit over IT systems implementation failures.
Optimism bias
These outcomes are not unusual in the global context. Research by Oxford University scholars Bent Flyvbjerg and Alexander Budzier describes what they call the “Iron Law of Megaprojects”: over budget, over time, under benefits — repeatedly.
One of the most powerful drivers of this pattern is optimism bias. Organisations routinely approve IT programmes based on overly confident projections. Budgets are often built around P50 estimates, which implies only a 50% probability of meeting cost and schedule targets. Yet these estimates are frequently presented as firm commitments to executives and boards.
A more realistic planning approach requires P80 confidence levels, which acknowledge uncertainty and provide a buffer against the unknown complexities that inevitably arise in large-scale digital programmes.
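The difference between a P50 and a P80 budget can be sketched with a simple Monte Carlo simulation. The cost distribution and its parameters below are purely illustrative assumptions chosen to show the mechanics, not figures from any real programme:

```python
import random

def simulate_costs(base_cost: float, n: int = 100_000, seed: int = 42) -> list[float]:
    """Simulate total programme cost under skewed (lognormal) uncertainty.

    The sigma of 0.4 is an illustrative assumption: it gives a median
    multiplier of ~1.0 with a long right tail of overruns.
    """
    rng = random.Random(seed)
    return [base_cost * rng.lognormvariate(0.0, 0.4) for _ in range(n)]

def percentile(samples: list[float], p: float) -> float:
    """Return the p-th percentile (0-100) of the samples."""
    ordered = sorted(samples)
    k = int(round((p / 100) * (len(ordered) - 1)))
    return ordered[k]

costs = simulate_costs(100.0)   # hypothetical base estimate of R100-million
p50 = percentile(costs, 50)     # the figure often presented as a "firm commitment"
p80 = percentile(costs, 80)     # a budget that carries a contingency buffer
```

Under this illustrative distribution, the P80 budget sits roughly 40% above the P50 figure: the gap between the two is precisely the buffer that optimistic plans leave out.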
Software projects are particularly vulnerable because their progress is less visible than physical construction. A bridge that is only half built is obvious to everyone. A software platform, by contrast, can appear to be progressing well while serious design flaws or integration failures accumulate beneath the surface. When deadlines approach, validation processes such as testing are often compromised, leading to operational failures.
Compounding the challenge is the extraordinary variation in software productivity. Research shows that some teams deliver more than 10 times the output of others working under similar conditions. Scope is often poorly defined, and defects frequently emerge months after critical design decisions have been made. As a result, projects can appear healthy even as value quietly drains from the programme.
By the time senior leaders recognise the drift, recovery may require major cost escalation or even complete programme redesign.
One practical method for improving planning accuracy is Reference Class Forecasting, developed through the work of Flyvbjerg and Budzier and used extensively by the author. Instead of relying solely on internal projections, this approach compares proposed initiatives with the outcomes of similar projects completed elsewhere. By grounding forecasts in historical evidence, organisations can significantly reduce optimism bias and produce more reliable budgets and schedules.
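The mechanics of this approach can be sketched in a few lines: take the empirical distribution of actual-versus-estimated cost ratios from a reference class of comparable projects, and uplift the internal estimate to a chosen confidence level. The ratios and the R100-million estimate below are hypothetical placeholders, not real project data:

```python
def rcf_uplift(internal_estimate: float,
               historical_ratios: list[float],
               confidence: float = 0.8) -> float:
    """Uplift an internal estimate to the given confidence level using
    the empirical distribution of past overrun ratios (actual/estimated)."""
    ordered = sorted(historical_ratios)
    k = int(round(confidence * (len(ordered) - 1)))
    return internal_estimate * ordered[k]

# Hypothetical reference class: actual cost divided by estimated cost
# for ten comparable ERP implementations (illustrative values only).
ratios = [0.95, 1.00, 1.05, 1.10, 1.20, 1.25, 1.35, 1.50, 1.80, 2.40]

# An internal estimate of R100-million, uplifted to ~P80 confidence.
budget_p80 = rcf_uplift(100.0, ratios, confidence=0.8)
```

The point is not the arithmetic but the discipline: the budget is anchored to what comparable projects actually cost, rather than to what the project team hopes this one will cost.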
Yet better forecasting alone cannot solve the deeper problem. Large IT programmes succeed or fail largely because of the capabilities and governance structures surrounding them. Effective oversight requires leaders who understand technological complexity, recognise systemic interdependencies and are able to detect early signals that a programme is drifting off course. Without these capabilities, even well-designed projects can unravel as unforeseen interactions between systems, teams and vendors accumulate.
Addressing this challenge requires more than better project management tools. Large digital programmes demand leadership capabilities that combine systems thinking, rigorous problem probing, cross-disciplinary learning and strong communication across technical and executive teams. When leaders develop these competencies, they are better able to identify hidden interdependencies, challenge unrealistic assumptions and detect early warning signals before a programme drifts beyond recovery.
Complex ecosystems
This is particularly important as South Africa accelerates its digital transformation. Government service platforms, financial systems, telecoms networks, logistics infrastructure and retail supply chains increasingly depend on complex software ecosystems. When these systems fail, the consequences extend far beyond a single organisation. They affect service delivery, economic productivity and public trust.
For decision-makers, the lesson is straightforward. Before approving the next large IT programme, three questions should be asked:
- What do comparable projects tell us about realistic costs and timelines?
- Is the budget grounded in evidence, or in optimistic assumptions?
- Can those responsible for governance identify problems early?
So often, early warning signs can be found. A whistle-blower flagged the retail failure well before business operations were impacted. Careful observation helps, too: projects that spend very little time on feasibility and design tend to move quickly into build, but often on an incomplete understanding of the problem. At the other extreme, projects that spend excessive time elaborating requirements and design are typically already struggling to reach stable conclusions. In both cases, delivery suffers.
These questions may appear simple, but they address the core reason many digital initiatives struggle.
The reality is that most IT programmes do not collapse overnight. They drift gradually away from their original objectives until recovery becomes prohibitively expensive. Organisations that succeed are those that recognise the drift early, while there is still time to correct course.
For South Africa, where digital capability will increasingly determine economic competitiveness, ignoring these warning signs is not merely costly. It is a strategic risk.
- The author, Bram Meyerson, is an executive member of the convocation at The DaVinci Institute and the CEO of Quantimetrics
