How We Prepare For the Worst Case Scenario (or Fail To)
Juliette Kayyem on Everyday and Extraordinary Disasters
In June 2010, the iPhone 4 was launched with much fanfare. Soon after, customers were going online to publicly denounce its signal strength. Lots of people were experiencing dropped calls and interrupted messaging. Apple responded in the worst way possible: blaming how customers held the phone or, as then-CEO Steve Jobs wrote, describing it as a “nonissue.”
Eventually, Apple admitted the software failures and sought to remedy the errors for a phone that had failed to get even a recommendation from Consumer Reports. That Apple’s initial response—which drew withering condemnation and even a lawsuit—was pathetic and defensive is well known now. What is less commonly known is how unsurprising the complaints were.
For Apple, the iPhone must work. There is no other product line so vital to the company. So what was wrong with the iPhone 4 that had not bedeviled previous versions? Turns out, nothing. Apple’s official response would later explain that “gripping any phone will result in some attenuation of its antenna performance with certain places being worse than others depending on the placement of the antennas.” This particular error existed in previous iPhones. But for some reason, even though the company knew about it, customers never complained; they accepted the flaw. It was social media, an aggressive variable, that doomed the iPhone 4. An annoyance for one customer became a groundswell, and the groundswell soon became a major threat to Apple’s most successful product.
The fact that customers had already experienced the flaw, however, wasn’t a sign that everything was fine; it was only a sign that customers would tolerate a certain amount of annoyance until they couldn’t any longer. Apple had normalized the problem, and therefore the public did as well. The dropped calls were near misses, not outliers. Those near misses tell us something that needs to be heard. They are often hints that the system has a defect, an error, that if not corrected could lead to a great unraveling. For right-of-boom purposes and the goal of consequence minimization, the near misses aren’t necessarily a sign that the system is working. Normalizing them gives institutions and the general public a false sense that the system itself is resilient and the dam can always hold.
Can you hear me now? Things will not hold.
Normalization of deviance is the tendency to ignore near misses rather than acknowledge them as red-siren warnings that the system may be facing a meltdown. The phenomenon is important in the disaster management field. The phrase was coined by the sociologist Diane Vaughan based on her work studying business and institutional failures. Her research focused on the period before disasters, when the company or team should have known that doom was coming. Why did companies and government entities so readily ignore what, in hindsight, were obvious hints that something was disastrously wrong? She viewed the “near misses” less as a sign that the catastrophe was avoided and more as a sign that the catastrophe was waiting to happen.
Like other scholars, Vaughan was drawn to the Space Shuttle Challenger disaster. She, too, focused on the O-ring failure, which we now know triggered the explosion. But she didn’t stop there. The O-ring was one of hundreds of instances in the buildup to the Challenger launch that were hinting at, maybe even screaming about, systemic flaws. Each of them was sidelined by NASA engineers and leaders. Each was normalized.
That normalization occurred because none of them, alone, caused immediate harm. The deviances were superficially benign. Vaughan describes, in her examination of catastrophes, “a long incubation period with early warning signs that were either misinterpreted, ignored or missed completely.” While the O-ring was the catastrophic failure, it was not to blame alone. Indeed, the ability to sideline the O-ring’s limitations was a sign of a bigger problem. It was ignored because the “boundaries defining acceptable behavior incrementally widened, incorporating incident after aberrant incident.” The normalization of deviance is that “gradual process through which unacceptable practice or standards become acceptable. As the deviant behavior is repeated without catastrophic results, it becomes the social norm for the organization.” The obvious warnings are dismissed because they don’t immediately cause harm. This is the near miss fallacy.
Near and recurring disasters actually illuminate opportunities for learning and preparation. They can also provide important feedback as we prepare for the next disaster. This is essentially how Vaughan describes the near miss fallacy. The language should sound familiar, as it aligns with the left- and right-of-boom framework. If an event, a near miss, does not immediately cause a catastrophe, then that near miss begins to be viewed as normal instead. But these near misses are only buying time before the “final disaster,” such as the O-ring disintegrating. As we know now, there will inevitably be a final disaster; the devil will return.
How an institution becomes so arrogant, or careless, or simply succumbs to groupthink varies depending on its culture and history. As a whole, institutions suffer from the assumption that the devil isn’t lurking; they focus on results (which may be good) rather than mistakes in the system that may be alerts to a potential error. We need to listen to the near misses because they are telling us that we are, more often than not, about as close to the right side of the boom as we ever want to be. We shouldn’t feel relief. We should, however, be grateful because the time we now have can help us get ready.
The disruptions described below reframe the conventional wisdom about these crises. In these cases, institutions were open to learning from prior near disasters to avert the most damaging consequences of the next one. They show how institutions learned from disasters that did not happen—in each instance, events could have been so much worse—to avoid catastrophes that seemed likely.
In 2015, sixty cases of E. coli were linked to fast-food chain Chipotle’s lettuce. It is difficult, to say the least, to run restaurants that are poisoning people. Chipotle was a fast-growing chain; by 2015, it had more than 1,900 locations, built on a marketing campaign that spoke of the evils of industrial eating. Fresh. Fresh. Fresh. Soon, its market valuation was nearly $24 billion, based on the pitch that fast food and healthy eating were compatible—or maybe not. E. coli in lettuce is bad. Chipotle had a problem in its supply chain that was a reputation-ruining challenge. By the end of the outbreak, five hundred people had become sick from contaminated food. That number represents only those who went to a doctor and provided a sample. Over the course of the crisis, Chipotle lost about 30 percent of its valuation.
Such a massive E. coli outbreak, stretching from Oregon to New York, meant that the contamination had occurred at one of the chain’s big suppliers. E. coli resides in fresh or undercooked foods; it cannot survive high temperatures. For Chipotle, this scientific fact meant that the very attributes that made it unique—the freshness of its tomatoes, cilantro, and lettuce—were the culprit. Chipotle now had a real problem, but it reacted quickly because the company had taken previous near misses seriously. It had heeded those warnings. Chipotle was well positioned to respond to a massive E. coli outbreak because it had treated each and every customer complaint or sick employee as the sign of a potential catastrophic incident, and it had refined its response protocols accordingly.
I’m not here to defend an E. coli outbreak; clearly it shouldn’t have happened. But the company’s admittedly self-serving sentiment—that the bottom line could not survive questions by its customers about the safety of its product—drove a sophisticated response that mitigated much of the physical and economic harm. It closed more stores than necessary, worked cooperatively with the CDC, acted with an excess of caution, and quickly restructured its food safety procedures. The company certainly tripped, but it did not fall. Chipotle made massive changes to its protocols for vendor and employee safety. It went public with those changes. It fessed up to its past vulnerabilities. It protected its brand. In 2021, the company had a value of $54 billion, making it one of the four hundred most valuable companies in the world.
The shipping company Evergreen’s Ever Given is one of the world’s biggest ships. Much to the glee of comics and GIF creators everywhere, it got wedged in the Suez Canal in March 2021. A wind shift, a bad maneuver, and the ship was stuck, its bow lodged in the sandbank and its stern tilted across the canal. The year 2020 had already exposed the vulnerability of global supply chains as demand for goods bumped up against the pandemic’s impact on manufacturing and distribution capabilities. The closure of the Suez Canal, many feared, was going to have serious consequences: a single ship had cut off the only lane for 12 percent of the world’s trade. The capacity to move energy, automobile goods, household needs, and luxury items was going to take a huge hit. But it didn’t. Later supply-chain disruptions at the end of 2021 and the beginning of 2022 can be blamed on global economics and the pandemic, but not on the Ever Given.
The closure of the Suez was a reminder of why the canal had been dug in the first place, beginning in 1859. Virtually every container ship making the journey from the factories of Asia to the affluent consumer markets of Europe passes through it. So do large tankers laden with oil and natural gas. When the Ever Given got stuck, nearly two hundred ships were waiting to enter the canal on either side; more were approaching.
There weren’t many options. But there was some guidance. For decades, there had been planning for worst-case scenarios because of threats and environmental challenges in the Suez area. Companies had viewed some of these near misses and had drawn up plans for mitigating the harm. They had every rational reason to believe the Suez was vulnerable. The canal closed in 1967 because of the Six-Day War and did not reopen until 1975, after the Yom Kippur War. Israel took control of the eastern bank of the canal; Egypt held the western side. During those years, fourteen ships were trapped in a part of the canal called, ironically, the Great Bitter Lake.
The trapped ships came to be known as the Yellow Fleet. They tied themselves together and set up a micronation of sorts, with designated ships for fancy dinners, music, and even a church. For commerce to continue, ships rerouted around the Cape of Good Hope, at the southern tip of South Africa. The change added mileage, time, and fuel needs, and it increased vulnerability to heavy waves and currents.
In 2021, when the Ever Given blocked the route, it was not clear how long the Suez Canal would be closed. The viral images of a very small bulldozer trying to move sand to dislodge a much bigger boat suggested that the wait could be a while. It ended up being just a week, but the companies had done worst-case scenario planning. Time is of the essence when nearly $10 billion in goods transits the Suez every day, some of it perishable. The Cape of Good Hope route adds weeks to the journey, but companies knew that even after the Suez reopened, there would still be a backlog. The longer journey also meant shippers would spend more on fuel; the added resistance of bigger waves increases fuel use and, unfortunately, carbon emissions. The Southern Ocean is a wild ride, and safety issues would also come into play. That was the case after the 1967 closure, when larger waves resulted in numerous safety incidents. The Cape of Good Hope is called the Graveyard of Ships for a reason.
The near misses in the recent past, and the long closure that began in 1967, motivated companies in the “just in time” supply chain to plan out contingencies should something actually shut the Suez Canal down. They really didn’t have to predict the Ever Given. Who could? But the contingency planning, motivated by the near misses, provided the metrics to determine whether they would take the leap to Good Hope: time, cost, safety. Companies made different calculations, but there was enough variety in their responses that the week-long closure had an almost insignificant global impact. As hundreds of ships sat waiting, major shipping giants like Maersk, Mediterranean Shipping, and Hapag-Lloyd started to reroute around Africa. They had a plan. And we barely felt it downstream in the supply chain.
___________________________________
Excerpted from The Devil Never Sleeps: Learning to Live in an Age of Disasters, by Juliette Kayyem. Copyright © 2022. Available from PublicAffairs, an imprint of Hachette Book Group, Inc.