Robert N. Charette, a risk analyst and systems expert, has spent much of his 50-year career examining why software fails. Over the past two decades, he has observed a troubling pattern among IT professionals, government officials, and corporate executives: a tendency toward delusional thinking that clouds judgment and hinders effective problem-solving. His insights have proven invaluable in understanding why software failures, like the Canadian government's Phoenix pay system, persist and how they can be prevented.
Charette made his case in "Why Software Fails," an influential article published in IEEE Spectrum in 2005. He argues convincingly that most software disasters are foreseeable and therefore avoidable. By his estimate, organizations overlook critical risk indicators as often as 66 percent of the time, a blindness rooted in an overinflated sense of their own technical competence. This is no mere academic concern; it has painful real-world consequences, as the Phoenix pay system shows. Launched with high expectations, the system failed to deliver timely paychecks to tens of thousands of Canadian government employees, causing both financial and emotional distress. Even today, nine years after its launch, the system's problems remain unresolved.
The consequences of software failures extend beyond one-off project disasters. Charette has developed these ideas over the past twenty years as a contributing editor at IEEE Spectrum, and his painstaking analysis drives home an overarching point: when organizations fail to acknowledge the inherent complexities and risks of building software, they all but guarantee their own failure. Stephen Cass, Special Projects Editor at IEEE Spectrum, has covered similar ground, writing some of the most thorough accounts of medical device failures.
On average, more than 20 medical devices a month are recalled for software-related failures. These numbers highlight a real need for better risk management practices in healthcare technology. The growing list of recalls is part of a dangerous pattern: software failures are becoming accepted as the new normal, from AWS to the telecom industry to the banking sector. These outages happen all too often, and they are typically written off as a cost of doing business.
Charette's analysis challenges the industry to adopt higher standards for software development and stronger risk assessment procedures. With his persistent focus on prevention, he urges proactive measures that keep individual failures from escalating into widespread crises. By fostering a culture of accountability and transparency, organizations can better equip themselves to handle the complexities of modern software systems.
While Charette's work focuses largely on the government and healthcare sectors, he also examines how software failures affect corporate culture. Many organizations still wrestle with legacy systems and lackluster infrastructure, a tension that frequently leads to expensive missteps that careful planning and foresight would have avoided. His analysis highlights the need to build more robust, holistic risk assessments into the software development life cycle.
The challenges Charette outlines are familiar to almost every government organization that has weathered its own software debacle. His direct approach to confronting delusional thinking is immensely valuable: by facing head-on the risks that accompany every software project, organizations can establish a far more resilient foundation for the work ahead. The necessary shift is to stop viewing these shortcomings as one-off slip-ups and start treating them as signs of a pattern that demands scrutiny and course correction.