Chris Clearfield, a former derivatives trader, now writes on complexity and failure for major media publications. András Tilcsik is an Associate Professor of Strategic Management at the Rotman School of Management at the University of Toronto, where he holds the Canada Research Chair in Strategy, Organizations and Society. He has been named one of the top 40 business professors under 40, and his course on organizational failure was named the best program on disaster risk management at a business school by the UN.
As the authors note, the Middle Ages were ‘the golden age of bacteria’: the Black Death and other diseases could travel largely unimpeded, because new trade routes had opened up the world while hygiene policies and medical knowledge had not yet advanced enough to prevent their spread. Today, the rise of computing power has created systems so technical and so fast that we find it very difficult to see if, and where, they may be flawed. When things go wrong, they can go very wrong, very quickly. It is the ‘golden age of meltdowns’.
The authors’ core observation is built on the research of former Yale and Stanford sociology professor Charles Perrow. Perrow created a matrix based on his analysis of what had gone wrong at the Three Mile Island nuclear accident in 1979. The model plots complexity against what he terms the ‘tight coupling’ of systems. Tightly coupled systems work like toppling dominoes: once one has fallen, there is not enough slack in the system to allow those controlling it to step in and stop things before the impact knocks on, creating further failures. The solution to tight coupling is to add more slack, or ‘loose coupling’, into the system. This reduces short-term efficiency, but can avert disasters that would be considerably more expensive. And the more complex the system, the more places errors can occur and the harder it becomes to identify where something has gone wrong.
In today’s digital world, it is much harder to see where an error, or ‘bug’, is at work than in a mechanical system. This lack of visibility for those controlling the system only adds to the problem.
The authors’ solution is, at its simplest, twofold. First, make systems as simple as possible and design in as much visibility as possible. Second, ensure that the teams who control those systems – whether executive or governing boards, or groups lower down the organizational ladder – are as diverse as possible, with ‘strangers’ involved. Strangers are those who can bring a naive ‘beginner’s mind’ to the conversation, forcing the ‘experts’ to consider and explain the basics that often get overlooked amongst the complexity, and diluting any creeping over-confidence before it takes hold.
A key model suggested for opening up thinking around systems is the use of SPIES (Subjective Probability Interval Estimates), a simple probability-based approach to forecasting eventualities with varying degrees of confidence.
This is a highly readable book, packed with entertaining and illustrative real-life examples and case studies. The authors have clearly mastered their patch and explain the challenge well; whether their solutions are explained clearly enough to encourage wide take-up remains to be seen. Perhaps attending their program at Rotman is a necessary next step.
Title: Meltdown: Why Our Systems Fail and What We Can Do About It
Author/s Name/s: Chris Clearfield and András Tilcsik
Publisher: Atlantic Books, part of Penguin Random House
Publishing Date: April 2018
Number of Pages: 245
Author Knowledge Rating: 1-5 (based on their years of experience, academic expertise in the subject areas, and exposure to cross-functional thinking in the area)
Readability: 1-5 (1=dense and very academic; 5=frantic page-turner)
Appropriate Length: 1-5 (1=could have been written in 25% of the length; 5=could have been longer)
Core Idea Value: 1-5 (1=nonsense (or entirely esoteric); 5=game-changer)