The algorithm will see you now

The paper by Janet Currie, W. Bentley MacLeod, and Jessica Van Parys, about which I posted this morning, makes the point that physicians make mistakes. This is not surprising, because they’re human.

More controversial, I suspect, is their finding that “algorithms may be one way to improve care, at least for common situations like emergency treatment of heart attack.” This would seem to threaten physician autonomy. If it’s true that algorithms can improve care (even if only in a few, circumscribed ways), is physician autonomy an important consideration nonetheless? (One concern might be that fewer people will want to be doctors. Another is that an algorithm might be “right” on average yet fail to capture heterogeneity in patients’ physiology or preferences. As Peter Ubel wrote in NEJM today, sometimes values masquerade as facts in guidelines. Are there other concerns?)

As I contemplated the future of algorithmic medicine, I listened to this episode of 99 Percent Invisible (a program to which you listen as well, right? RIGHT!!!???):

“For however much automation has helped the airline passenger by increasing safety, it has had some negative consequences,” says Langewiesche. “In this case it’s quite clear that these pilots had had experience stripped away from them for years.” The captain of the Air France flight had logged 346 hours of flying over the past six months. But within those six months, there were only about four hours in which he was actually in control of an airplane: just the take-offs and landings. The rest of the time, autopilot was flying the plane. Langewiesche believes this lack of experience left the pilots unprepared to do their jobs. […]

However potentially dangerous it may be to rely too heavily on automation, no one is advocating getting rid of it entirely. It’s agreed across the board that automation has made airline travel safer. The accident rate for air travel is very low: about 2.8 accidents for every one million departures.

Medical errors are far more common. As Aaron told us, “the wrong site is operated on in about 1 in 100,000 procedures. Foreign objects are left in the body in about 1 in 10,000 procedures.” That’s 10 and 100 per million procedures, respectively, against aviation’s 2.8 accidents per million departures. These are “never” events. They should not happen. One would think that exceedingly simple algorithms (like checklists) could prevent them. Is there a good reason not to try (or try harder) to implement them?
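To see just how little “algorithm” a checklist requires, here is a minimal sketch in Python. The item names and the cleared_to_proceed function are hypothetical illustrations I made up for this post, not any real clinical system or the WHO surgical checklist; the point is only that “do not proceed until every item is confirmed” is a few lines of logic.

```python
# A checklist as an algorithm: the procedure may not proceed until every
# item is explicitly confirmed. Item names are hypothetical illustrations,
# not a clinical standard.

PRE_INCISION_CHECKLIST = [
    "patient identity confirmed",
    "surgical site marked and verified",
    "procedure confirmed against consent form",
    "instrument and sponge count recorded",
]

def cleared_to_proceed(confirmations: dict) -> bool:
    """Return True only if every checklist item was affirmatively confirmed."""
    missing = [item for item in PRE_INCISION_CHECKLIST
               if not confirmations.get(item, False)]
    for item in missing:
        print(f"HOLD: unconfirmed item: {item}")
    return not missing

# Example: a single unconfirmed item blocks the procedure.
status = {item: True for item in PRE_INCISION_CHECKLIST}
status["instrument and sponge count recorded"] = False
assert cleared_to_proceed(status) is False
```

The hard part, of course, is not the code; it’s getting humans to run the loop every time.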

The comparison of medical errors to aviation errors is not new. The anesthesiology community has adapted a system of error tracking and correction from the aviation industry. To what extent does it include applying algorithms to roles humans used to fill? To what extent is that perceived as going too far, degrading skills that must be called upon when the algorithm fails?

Aviation is not medicine, of course. A key difference may be that planes are far more similar to one another in response to flight commands than are humans in response to medical treatments. The classic response to any algorithmic medicine (or guideline) is that it’s “one size fits all” and that a good physician knows how to treat an idiosyncratic patient. But what if all physicians are not equally good? What if patient idiosyncrasy invites and is used to justify greater variation in practice than is warranted?

Perhaps one day aviation and medicine will be more similar. Today, all planes of a given model are designed to be equivalent; we don’t (yet) know what the different, roughly equivalent models of humans are. Advances in genomics might one day tell us.

Yet, well short of that day, there may be areas in which more algorithmic medical decision support can advance safety and improve outcomes. My guess is that more physicians will be working with algorithms in the future. To the extent that doing so leads to better care, payers and patients might reasonably demand it. It will, however, be a long, long time before robots take docs’ jobs.

@afrakt
