The evidence is clear: software projects around the world continue to fail at disturbing rates, even after trillions of dollars wasted over the last two decades. These persistent failures stem from breakdowns in human creativity and imagination and from unrealistic project ambitions, and the large-scale systems governments now manage only compound the challenge. Numerous high-profile examples illustrate the grave consequences, including Michigan’s MiDAS unemployment system and Australia’s Centrelink “Robodebt” initiative.
In the U.S., Canada, and the U.K., government officials have repeatedly placed their faith in algorithms to make important government decisions without weighing the inherent risks. Beyond the billions wasted on failed IT projects, these failures impose tremendous emotional and social strain on workers and their families.
The Roots of Software Failure
The drivers of software failure are multifaceted. A chief culprit is the absence of clear, achievable project objectives. Every year, new projects launch with genuinely ambitious aspirations, but those ambitions can be poorly specified or grow so intricate that scope spirals out of control. Just as importantly, when teams lack the capacity to manage risk, the consequences can be disastrous.
Sadly, many government projects have already fallen into these traps. Michigan’s MiDAS system automatically flagged tens of thousands of low-income Michiganders for unemployment fraud based on faulty data. In the same vein, Australia’s Centrelink program wrongly accused hundreds of thousands of people of being welfare cheats. These cases reflect a larger trend in which government overreliance on automated decision-making systems produces disastrous consequences.
“Anyone can make a mistake, but only an idiot persists in his error.” – Marcus Tullius Cicero
Unfortunately, for all the technological advances and dollars spent on IT projects, success rates have not measurably improved. Governments have poured in ever more resources while neglecting to learn from yesterday’s failures. A case in point is the much-maligned Phoenix payroll system.
High-Profile Failures and Their Consequences
The Canadian federal government’s Phoenix payroll system went live in April 2016, and disastrous problems surfaced almost immediately. In the lead-up to launch, project managers had waved aside thick documentation detailing the failures of earlier projects. As a result, nearly 70 percent of the 430,000 current and former federal employees eventually experienced paycheck errors. The problems persisted into fiscal year 2023–2024, with one-third of all employees still suffering paycheck errors.
The MiDAS system in Michigan generated a large outpouring of public anger after it accused tens of thousands of claimants of unemployment fraud, all built on faulty data, causing immeasurable emotional and monetary anguish for countless Michiganders and their families. Australia’s Centrelink “Robodebt” program met similarly intense community opposition: it inaccurately and inappropriately targeted people for welfare fraud, enraging the public and destroying what little faith remained in government systems.
These failures have consequences that last for decades. They inflict real financial harm, and they create an atmosphere of fear and uncertainty for workers who depend on trustworthy payroll and welfare infrastructure. This troubling landscape raises the question of whether algorithms should be trusted with high-stakes decisions that affect people’s lives at all.
The Need for Realistic Approaches
As governments contemplate replacing outdated IT systems, a more realistic approach to software project management is urgently needed. Thousands of current Canadian government IT systems are outdated and due for replacement or modernization, and there is a real danger that promising new efforts will collapse just as Phoenix and MiDAS did.
Without accountability for IT failures, a culture takes hold in which the same mistakes are repeated rather than learned from. Government officials need to understand the role of the human factor when adopting technological solutions, or they will continue to replicate the failures of the past.
Not only must officials demand more of technology, they must acknowledge that technology will never replace the judgment of a trained public servant. Trusting algorithms without rigorous, independent critical oversight can, and likely will, lead to disastrous effects on millions of lives.
“Why worry about something that isn’t going to happen?” – KGB Chairman Charkov


