The following originally appeared on The Upshot (copyright 2017, The New York Times Company).
In medicine, the term “evidence-based” causes more arguments than you might expect.
And that’s quite apart from the recent political controversy over why certain words were avoided in Centers for Disease Control and Prevention budget documents.
The arguments don’t divide along predictable partisan lines, either.
The movement toward “evidence-based medicine” is surprisingly recent. Before its arrival, much of medicine was based on clinical experience. Doctors tried to figure out what worked by trial and error, and they passed their knowledge along to those who trained under them.
Like many physicians, I was first introduced to evidence-based medicine through David Sackett’s handbook, first published in 1997. The book taught me how to use test characteristics, like sensitivity and specificity, to interpret medical tests. It taught me how to understand absolute risk versus relative risk. It taught me the proper ways to use statistics in diagnosis and treatment, and in weighing benefits and harms.
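To make those concepts concrete, here is a minimal sketch (mine, not Sackett’s) of how sensitivity, specificity and prevalence combine to tell you what a positive test actually means, and of the gap between relative and absolute risk. All of the numbers are illustrative assumptions, not data from any real test or drug.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability of disease given a positive test, via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Assumed numbers: a fairly accurate test for a rare condition.
ppv = positive_predictive_value(sensitivity=0.90, specificity=0.95, prevalence=0.01)
print(f"P(disease | positive test) = {ppv:.1%}")  # ~15.4%: most positives are false

# Absolute vs. relative risk, with assumed numbers: a drug that halves risk.
control_risk, treated_risk = 0.02, 0.01
relative_risk_reduction = (control_risk - treated_risk) / control_risk  # 50%
absolute_risk_reduction = control_risk - treated_risk  # 1 percentage point
number_needed_to_treat = 1 / absolute_risk_reduction   # treat 100 to help 1
print(f"RRR = {relative_risk_reduction:.0%}, NNT = {number_needed_to_treat:.0f}")
```

The exercise makes the point the book drives home: a “50 percent risk reduction” can mean helping one patient in a hundred, and a positive result from a good test can still be wrong most of the time.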
It also firmly established in my mind the importance of randomized controlled trials, and the great potential for meta-analyses, which pool the results of individual trials for greater statistical power. This influence is apparent in what I write for The Upshot.
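As an illustration of what that pooling means, the sketch below implements a simple fixed-effect, inverse-variance meta-analysis. The three trial results are hypothetical, and a real meta-analysis must also weigh heterogeneity and bias; this shows only the core arithmetic.

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance weighted pooled estimate (fixed-effect model)."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials reporting the same effect measure (e.g., log risk ratios).
effects = [-0.30, -0.10, -0.25]
std_errors = [0.15, 0.20, 0.10]
estimate, se = fixed_effect_meta(effects, std_errors)
print(f"pooled effect = {estimate:.2f} +/- {1.96 * se:.2f} (95% CI)")
```

More precise trials get more weight, and the pooled confidence interval is narrower than any single trial’s, which is exactly the greater statistical power that meta-analysis promises.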
But evidence-based medicine is often described quite differently.
Many of its supporters say that using evidence-based medicine can address the problems of cost, quality and access that bedevil the health care system. If we all agree upon best practices — based on data and research — we can reduce unnecessary care, save money and steer people into care pathways that yield better results.
Critics of evidence-based medicine, many of them from within the practice of medicine, point to weak evidence behind many guidelines. Some believe that medicine is more of an “art” than a “science” and that limiting the practice to a cookbook approach removes focus from the individual patient.
Some of these critics (as well as many readers who comment on my articles) worry that guidelines line the pockets of pharmaceutical companies and radiologists by demanding more drugs and more scans. Others worry that evidence-based medicine makes it harder to get insurance companies to pay for needed care. Insurance companies worry that evidence-based recommendations put them on the hook for treatment with minimal proven value.
Everyone is a bit right here, and everyone is a bit wrong. This battle isn’t new. It’s the old guard versus the new. It’s the patient versus the system. It’s freedom versus rationing. It’s even the individual physician versus the proclamations of a specialized elite.
Because of the tensions in that last conflict, this debate has become somewhat political.
The benefits of evidence-based medicine, when properly applied, are obvious. We can use test characteristics and results to make better diagnoses. We can use evidence from treatments to help people make better choices once diagnoses are made. We can devise research to give us the information we are lacking to improve lives. And, when we have enough studies available, we can look at them together to make widespread recommendations with more confidence than we otherwise could.
When evidence-based medicine is not properly applied, though, it not only undermines its reasons for existence, but it also can lead to harm. Guidelines — and there are many — are often promoted as “evidence-based” even though they rely on “evidence” that is unsuited to the application at hand. Sometimes, these guidelines are used by vested interests to advance an agenda or control providers.
Further, too often we treat all evidence as equivalent. I’ve lost track of the number of times I’ve been told that “research” proves I’m wrong. Not all research is the same. A hierarchy of quality exists (a well-conducted randomized trial, for instance, carries more weight than an observational study, which in turn carries more weight than an anecdote), and we have to be sure not to overreach.
There is a difference between statistical significance and clinical significance. Get a large enough cohort together, and you will achieve the former. That by itself does not ensure that the result achieves clinical significance and should alter clinical practice.
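A back-of-the-envelope sketch, with assumed numbers, shows how this happens: a difference of one-tenth of a percentage point, too small to matter to any patient, becomes “statistically significant” once the cohort is large enough.

```python
import math

def two_proportion_z_test(p1, p2, n1, n2):
    """Two-sample z-test for a difference in proportions (two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Assumed numbers: event rates of 10.1% vs. 10.0%, two million people per arm.
z, p = two_proportion_z_test(p1=0.101, p2=0.100, n1=2_000_000, n2=2_000_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.001, yet clinically meaningless
```

The p-value clears any conventional threshold, but no guideline should change on the strength of a 0.1-percentage-point difference alone.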
Finally, we have to recognize that even when good studies are done, with clinically significant results, we shouldn’t over-extrapolate the findings. Just because something worked in one population doesn’t mean we can apply the same intervention to a different group and claim that we have evidence for it.
Years ago, Trisha Greenhalgh and colleagues wrote an article in the BMJ describing evidence-based medicine as “a movement in crisis.” It argued that the field has shifted its focus too much from treating disease to managing risk. This point, more than any other, highlights the problem evidence-based medicine seems to have in the public sphere.
Too many articles, studies and announcements are quick to point out that something or other has been proved to be dangerous to our health, without a good explanation of the magnitude of that risk, or what we might reasonably do about it.
Big data, gene sequencing, artificial intelligence — all of these may provide us with lots of information on how we might be at risk for various diseases. What we lack is knowledge about what to do with what we might learn.
If evidence-based medicine is to live up to its potential, it seems the focus should be on that side of the equation as well, instead of taking best guesses and calling them evidence-based. This, probably more than anything else, has made the term so widely mistrusted.