The crisis in evidence-based medicine

Evidence-based medicine is an intellectual community committed to making clinical practice more scientific and empirically grounded, and thereby achieving safer, more consistent, and more cost-effective care.

But Trisha Greenhalgh and her colleagues argue that evidence-based medicine is in crisis. They identify five key problems. All five merit discussion, but I will focus on two of them:

  • Inflexible rules and technology-driven prompts may produce care that is management driven rather than patient centred.
  • Evidence-based guidelines often map poorly to complex multimorbidity.

“Management driven” means that the clinician is making decisions according to a rule rather than responding to the needs of the patient. “Evidence-based guidelines mapping poorly” means that algorithms designed to guide treatment for a single disorder fail when you apply them to a patient with several disorders.

These problems might be two sides of the same coin. The authors argue that care should be patient-centred rather than driven by “inflexible rules.” That sounds right, but what, exactly, is wrong with following an algorithm if the evidence indicates that it leads to better decisions?

The problem is that most clinical algorithms are developed by reducing patients to a relatively small number of critical factors, for example, the diagnosis, the patient’s age, and a few facts about the patient’s clinical history. The algorithm may work well if those are the only relevant facts about the patient.
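To make that reduction concrete, here is a minimal, purely hypothetical sketch in Python; the condition, factors, thresholds, and doses are all invented for illustration, not taken from any actual guideline.

```python
# Purely hypothetical illustration of a clinical algorithm that reduces
# a patient to three critical factors. The drug, thresholds, and doses
# are invented; do not use clinically.
def recommended_dose_mg(diagnosis: str, age: int, renal_impairment: bool) -> int:
    """Recommend a dose from just three patient factors."""
    if diagnosis != "hypertension":
        raise ValueError("This algorithm only covers hypertension.")
    dose = 10                   # default starting dose
    if age >= 65:
        dose = 5                # lower starting dose for older patients
    if renal_impairment:
        dose = min(dose, 2)     # further reduction for impaired kidneys
    return dose

print(recommended_dose_mg("hypertension", age=72, renal_impairment=True))  # 2
```

As far as this rule is concerned, the patient just is those three arguments; everything else about the person is invisible to it.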

Unfortunately, many patients are “multimorbid,” meaning they have several serious and complex disorders, each of which brings an entourage of apparently relevant facts. Algorithms designed for one of those disorders may give advice that seems inapplicable or contra-indicated in light of the other problems.

You may think that the problem is that the evidence-based clinicians got off on the wrong foot by oversimplifying the patients when they developed their algorithms. Maybe so, but this is almost unavoidable. The reason is that evidence-based medicine might also be called “medicine guided by statistical learning from data,” and it is hard to learn anything from statistical data unless you can simplify your problem to some degree.

Statisticians call this “the curse of dimensionality.” Suppose I have 100 patients to study and two relevant binary patient factors (“dimensions” in statistician-speak): for example, stage of disease and gender. My goal is to tune my algorithm (e.g., adjust the recommended dosage of a drug) so that it takes account of the unique combination of factors defining each patient subgroup. Two binary factors give four subgroups of patients, so 25 patients per subgroup. Let’s say that is enough cases in each group for me to determine the proper dose for that group. (It really isn’t enough, so don’t try this at home.)

But two patient factors is likely way too simple. Suppose there were seven apparently relevant patient factors. I now have 2⁷ = 128 possible subgroups and, on average, less than one patient per subgroup. I can’t learn anything about what works or doesn’t work for these subgroups. So with only 100 cases, I have a hard choice: either simplify the problem by dropping dimensions, or give up on developing an algorithm.
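The arithmetic is easy to check. Here is a minimal sketch in Python, assuming binary factors and patients spread evenly across subgroups:

```python
# Patients per subgroup shrink exponentially as binary factors are added.
n_patients = 100

for n_factors in range(1, 8):
    n_subgroups = 2 ** n_factors   # each binary factor doubles the subgroup count
    per_group = n_patients / n_subgroups
    print(f"{n_factors} factors -> {n_subgroups:3d} subgroups, "
          f"{per_group:6.2f} patients per subgroup")

# 2 factors ->   4 subgroups,  25.00 patients per subgroup
# ...
# 7 factors -> 128 subgroups,   0.78 patients per subgroup
```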

Greenhalgh and her colleagues do not propose that we abandon an evidence-based approach. What they argue is that we need to study how best to apply algorithms, recognising that they are often too simple:

We need to gain a better understanding (perhaps beginning with a synthesis of the cognitive psychology literature) of how clinicians and patients find, interpret, and evaluate evidence from research studies, and how (and if) these processes feed into clinical communication, exploration of diagnostic options, and shared decision making. Deeper study is also needed into the less algorithmic components of clinical method such as intuition and heuristic reasoning, and how evidence may be incorporated into such reasoning.

I’m not confident that these inquiries would lead to improvements in patient outcomes, but it is essential to pursue them, if only to help clinicians cope with the avalanche of data.

However, there is another strategy for attacking the evidence/complexity problem: get massively more data. That’s the proposal of another movement for the reform of evidence-based medicine, the Institute of Medicine’s Learning Health Care System initiative. A learning health care system is organized to gather data from all routine clinical care and use those data for learning. Having vastly more data will not allow us to escape the curse of dimensionality. There will always be another relevant dimension that defines a frontier where we run out of evidence. But algorithms that take into account n + 1 patient factors will be superior to those that take into account only n factors. So if massively more data lets us add another dimension to how well we understand patients, that’s a win.
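Turning the same back-of-envelope arithmetic around shows what “massively more data” buys, assuming binary factors, an even split, and the (arbitrary) target of 25 patients per subgroup from the example above:

```python
# Sample size needed to keep 25 patients in every subgroup as factors grow.
patients_per_subgroup = 25   # arbitrary illustrative target

for n_factors in range(2, 11):
    required_n = patients_per_subgroup * 2 ** n_factors
    print(f"{n_factors:2d} factors -> need {required_n:,} patients")

# Each added binary factor doubles the required sample size, so more data
# buys extra dimensions only logarithmically. But as the text says, each
# extra dimension is still a win.
```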

We should develop learning health systems to get better algorithms. But we also need Greenhalgh’s reforms to evidence-based medicine to learn how to better put these algorithms into practice and to refocus care on patients. The essay is interesting throughout and highly recommended.

@Bill_Gardner

For a contrary view about the curse of dimensionality, see here. For previous TIE writing on evidence-based medicine, see here, here, and here.
