• Research is usually made up of baby steps

    Yeah, there’s an election going on. I’m still going to bring you research. I saw another great study yesterday that you should know about. “Empirical Evaluation of Very Large Treatment Effects of Medical Interventions”:

    Context  Most medical interventions have modest effects, but occasionally some clinical trials may find very large effects for benefits or harms.

    Objective  To evaluate the frequency and features of very large effects in medicine.

    Data Sources  Cochrane Database of Systematic Reviews (CDSR, 2010, issue 7).

    Study Selection  We separated all binary-outcome CDSR forest plots with comparisons of interventions according to whether the first published trial, a subsequent trial (not the first), or no trial had a nominally statistically significant (P < .05) very large effect (odds ratio [OR], ≥5). We also sampled randomly 250 topics from each group for further in-depth evaluation.

    Data Extraction  We assessed the types of treatments and outcomes in trials with very large effects, examined how often large-effect trials were followed up by other trials on the same topic, and how these effects compared against the effects of the respective meta-analyses.

    I’ve complained many times that research is glacial work. Almost always, you’re trying to take baby steps. It also is expensive. So it can be frustrating, especially when the general public thinks every study should cure cancer. The problem is compounded each time some news story breaks about the “unbelievable leap forward” some lab or group has just made. So why don’t those announcements and press releases seem to bear fruit?

    The study above examined “blockbuster” results in over 3,000 reviews. What they did was look for trials with huge results. They also looked at whether those trials were the first in the area or were repeated studies of prior findings, and at whether those findings held up after further testing.

    Let’s start here: Of the analyses they conducted, 9.7% had a large effect seen in the first published trial, 6.1% had a study with a large effect in a trial that didn’t come first, and 84.2% had no trials with large effects. Right off the bat, I hope you see that the vast majority of studies don’t show large effects.

    First trials with large effects were small, with a median of just 18 events. Subsequent large-effect trials were no bigger; they had a median of 15 events.

    Trials with large effects were significantly less likely to address mortality, and significantly more likely to focus on efficacy in the laboratory.

    Here’s the kicker, though. Almost 90% of first trials that showed large effects saw them fade in later trials. Almost 98% of subsequent trials with large effects saw that happen. Large effects rarely hold. And lest you think that large effect trials were just more likely to be checked up on, trials with large and non-large effects were just as likely to have subsequent published trials.
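That fading is what you would expect from chance alone: a tiny trial can easily produce a wildly inflated odds ratio even when the true effect is modest, and larger follow-up trials pull the estimate back down. Here's a minimal simulation sketching that point. The event rates, arm sizes, and trial counts are my own illustrative choices, not numbers from the study:

```python
import random

random.seed(0)

def frac_huge_or(n_per_arm, p_control, p_treat, n_trials=4000):
    """Fraction of simulated trials whose estimated odds ratio is >= 5."""
    big = 0
    for _ in range(n_trials):
        a = sum(random.random() < p_treat for _ in range(n_per_arm))    # events, treatment arm
        c = sum(random.random() < p_control for _ in range(n_per_arm))  # events, control arm
        b, d = n_per_arm - a, n_per_arm - c
        # 0.5 continuity correction avoids division by zero in tiny trials
        or_est = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))
        if or_est >= 5:
            big += 1
    return big / n_trials

# The true effect here is modest (true OR ~2.1), yet tiny trials
# frequently overshoot it past 5, while big trials almost never do.
print("small trials (20/arm): ", frac_huge_or(20, 0.15, 0.27))
print("large trials (500/arm):", frac_huge_or(500, 0.15, 0.27))
```

With these assumed rates, a meaningful share of the 20-patients-per-arm trials report an odds ratio of 5 or more purely by sampling noise, while the 500-per-arm trials essentially never do, which is exactly the pattern of early large effects shrinking on replication.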

    It’s not all bad news. There were some results that held. Just over 9% of trials with a large effect maintained those results in meta-analysis. None of these, however, were studies that looked at mortality-related outcomes.

    In the entire Cochrane Database of Systematic Reviews (more than 3000 studied), only one intervention had a large effect on mortality that held up under further scrutiny. Extracorporeal membrane oxygenation (ECMO) for newborns reduced mortality in infants with severe respiratory failure. You’re going to have to trust me that this isn’t the kind of thing you’d ever want to see commonly used.

    Although the media like to trumpet big, huge findings with promises of huge changes in real outcomes, those kinds of results just don’t happen often in real research. They almost always fade in subsequent work. When they do hold up, they are usually not in big-impact areas like preventing death or curing disease.

    You should be skeptical of such things, not just because your gut tells you so. You should be skeptical because of findings like these.

    @aaronecarroll

    • As someone whose father died of hospital acquired infections in a hospitalization in which he was never evaluated, let alone diagnosed or treated (but was subjected to unnecessary exploratory surgery), for the issue that brought him to the emergency room, I assure you that there are many places in US medicine where major improvements in mortality could be easily achieved. The worst part of it is that these places (e.g. doctors washing hands before seeing patients, not operating on the wrong leg, making an effort to avoid hospital acquired infections) are all of the “not rocket science” variety.

      (Many are discussed in The Medical Malpractice Myth and The Checklist Manifesto.)

    • Listen to this Radiolab segment from a while back on the declining effect size phenomenon (in various fields):

      http://www.radiolab.org/blogs/radiolab-blog/2011/may/03/cosmic-habituation/

      • So far as I know the “diminishing effect” effect has not been observed for climate change; the estimates of warming temperatures seem to remain stable rather than diminishing. I may be misinformed; does any reader happen to know more about this?

    • Well, in the past we *have* had examples of treatments with very large effects on mortality. Penicillin. Smallpox vaccine. Anti-hypertensives. Antivirals for HIV. So don’t give up hope.