An apt analog to William Langewiesche’s story of the 2009 crash of Air France Flight 447 is Bob Wachter’s account of the non-fatal overdosing of a pediatric patient at the UCSF Medical Center. Neither story is new, and I make no claims that my feeble insights below are novel.*
In fact, Wachter mentions Flight 447’s fatal crash as he explains how automation can create new vectors for disaster even as it closes off old ones. Both he and Langewiesche provide evidence that automation (auto-pilots in aviation; clinical decision support and order-fulfillment features in electronic medical systems) improves safety on average while courting danger in a subset of cases. None of this is a knock on their work or the compelling anecdotes that drive their narratives; it’s a plea for a bit of perspective as you read either story. I highly recommend both.
At the heart of both is a cascade of errors that begins with a human’s (or humans’) misunderstanding of the mode in which an automated system is operating. Wachter offers a very nice example of such a “mode error,” one I’m certain you can relate to: ACCIDENTALLY TYPING WITH CAPS LOCK ON. The caps lock key toggles the keyboard’s mode so that all (or most) keys behave differently.
When typing, an inadvertent caps lock toggle can cause annoying mode errors, like failing to properly enter a password. When flying an aircraft or ordering medications for a patient, mode errors can be deadly, even if they’re usually annoyances that get remedied before disaster strikes.
The pilots aboard Flight 447 didn’t recognize that their plane had switched modes, relying less on its auto-pilot and ceding more control to them. They misinterpreted this sudden grant of autonomy as a confusing set of malfunctions. Likewise, the physician who initiated the sequence of errors that landed Pablo Garcia in the ICU, and might have killed him, didn’t recognize a mode change: the electronic medication-entry system had switched from interpreting entries in milligrams to milligrams per kilogram of patient weight, thus multiplying a 160 mg dose by the patient’s weight in kilograms, a factor of roughly 39.
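To make the arithmetic of that mode error concrete, here’s a minimal, purely illustrative Python sketch. The function name, the interface, and the 38.6 kg patient weight are my assumptions for illustration, not details of the actual UCSF system; the point is only that the same keystrokes encode two doses that differ by the patient’s weight when the unit mode is invisible.

```python
# Illustrative sketch of a unit "mode error" -- NOT the actual UCSF system.
# Assumed weight of ~38.6 kg yields the roughly 39x overdose described
# in Wachter's account.

PATIENT_WEIGHT_KG = 38.6

def interpret_dose(entry: float, mode: str) -> float:
    """Return the total dose in mg implied by a numeric entry.

    mode == "mg":    the entry is the total dose.
    mode == "mg/kg": the entry is multiplied by the patient's weight.
    """
    if mode == "mg":
        return entry
    if mode == "mg/kg":
        return entry * PATIENT_WEIGHT_KG
    raise ValueError(f"unknown dose mode: {mode}")

# The physician types "160" intending 160 mg total...
intended = interpret_dose(160.0, mode="mg")      # 160 mg
# ...but the system is silently in per-kilogram mode:
delivered = interpret_dose(160.0, mode="mg/kg")  # 6176 mg

print(f"intended: {intended} mg, delivered: {delivered} mg "
      f"({delivered / intended:.0f}x overdose)")
```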
Failure of humans to recognize mode changes, and failure of systems to make those changes more obvious without exacerbating “alarm fatigue,” are among the many ways automation can harm. This failure mode feeds on humans’ often well-earned trust in automation. When we ignore the warnings an automated system gives us, it’s partly because that system has served us very well in the past, sparing us from far more errors than it creates. Despite their intent, the vast majority of alarms (car alarms, fire alarms, the flashing “check engine” light, and the like) are not signals of immediate danger, so our learned response is to treat them as nuisances and ignore them when possible. Occasionally this will be a mistake. It won’t always lead to disaster (because we have other means of obtaining the right information and correcting our first, false assumption), but it can.
Such assumptions are not unique to automated systems. I’m well aware that not every wail from my children signals deadly distress. Their sounds of alarm don’t always mean what they think they mean. Likewise, the political candidate who warns of the end of America if his opponent is elected is no longer alarming.
Our trust in (or conferring of) authority is not unique to the machine-human relationship either. Though I do trust many machines, I trust a great number of humans too. They’ve earned it. And yet they err, and their errors cause me harm, just as mine cause harm to others. Naturally, we should be aware of the harms of machines, of humans, and of the marriage of the two. We should strive to reduce the potential for grave error, provided we can do so in ways that don’t invite greater costs (by which I do not merely mean money).
A careful read of the accounts of Flight 447 and patient Pablo Garcia reveals the overwhelming benefits of automation in aviation and medicine, as well as the dangers that remain. There is much more work to do, as both authors expertly document. Humans are highly imperfect. So are our systems designed to protect us from ourselves.
* Also, let me assure you that I understand the differences between aviation and medicine, as I mentioned previously. All recent posts on automation are so tagged.