Yeah, there’s an election going on. I’m still going to bring you research. I saw another great study yesterday that you should know about. “Empirical Evaluation of Very Large Treatment Effects of Medical Interventions”:
Context Most medical interventions have modest effects, but occasionally some clinical trials may find very large effects for benefits or harms.
Objective To evaluate the frequency and features of very large effects in medicine.
Data Sources Cochrane Database of Systematic Reviews (CDSR, 2010, issue 7).
Study Selection We separated all binary-outcome CDSR forest plots with comparisons of interventions according to whether the first published trial, a subsequent trial (not the first), or no trial had a nominally statistically significant (P < .05) very large effect (odds ratio [OR], ≥5). We also sampled randomly 250 topics from each group for further in-depth evaluation.
Data Extraction We assessed the types of treatments and outcomes in trials with very large effects, examined how often large-effect trials were followed up by other trials on the same topic, and how these effects compared against the effects of the respective meta-analyses.
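To make the selection rule concrete: a “very large effect” here means an odds ratio of at least 5 that is nominally significant at P < .05. Here’s a minimal sketch of that check for a single trial’s 2×2 table; this is my illustration, not the paper’s code, and the counts are invented.

```python
# A minimal sketch (my illustration, not the paper's code) of the selection
# rule described above: a trial counts as having a "very large effect" if its
# odds ratio is at least 5 and nominally significant at P < .05.
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows are treatment/control, columns are
# event / no event. These counts are invented for illustration.
table = [[14, 36],   # treatment arm: 14 events out of 50
         [ 3, 47]]   # control arm:    3 events out of 50

odds_ratio, p_value = fisher_exact(table)
very_large = odds_ratio >= 5 and p_value < 0.05
print(f"OR = {odds_ratio:.2f}, P = {p_value:.4f}, very large effect? {very_large}")
```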
I’ve complained many times that research is glacial work. Almost always, you’re trying to take baby steps. It’s also expensive. So it can be frustrating, especially when the general public thinks every study should cure cancer. The problem is compounded each time some news story breaks about the “unbelievable leap forward” some lab or group has just made. So why don’t those announcements and press releases seem to bear fruit?
The study above examined “blockbuster” results across more than 3000 systematic reviews. The researchers looked for trials with huge results (odds ratios of 5 or more), checked whether those trials were the first in their area or later studies of prior findings, and asked whether the findings held up after further testing.
Let’s start here: of the comparisons they analyzed, 9.7% had a very large effect in the first published trial, 6.1% had a very large effect appear only in a trial after the first, and 84.2% had no trials with very large effects at all. Right off the bat, I hope you see that the vast majority of studies don’t show large effects.
First trials with large effects were small, with a median of just 18 events. Subsequent large-effect trials were no bigger; they had a median of 15 events.
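Why does size matter so much? With only a couple dozen events, the odds ratio is an extremely noisy estimate. Here’s a toy simulation, my own illustration rather than anything from the paper, of trials with roughly 18 expected events apiece and a true odds ratio of a modest 2.25.

```python
# Toy simulation (my illustration, not the paper's analysis): how unstable is
# the odds ratio in trials the size of those flagged here (~18 total events)?
import random

def observed_or(n_per_arm, p_treat, p_ctrl):
    """Simulate one two-arm trial and return its observed odds ratio."""
    events_t = sum(random.random() < p_treat for _ in range(n_per_arm))
    events_c = sum(random.random() < p_ctrl for _ in range(n_per_arm))
    # Haldane correction: add 0.5 to each cell so the OR is always defined
    return (((events_t + 0.5) * (n_per_arm - events_c + 0.5))
            / ((n_per_arm - events_t + 0.5) * (events_c + 0.5)))

random.seed(0)
# 60 patients per arm with 20% vs 10% event rates: ~18 expected events per
# trial, and a true odds ratio of (0.2/0.8)/(0.1/0.9) = 2.25
ors = sorted(observed_or(60, 0.20, 0.10) for _ in range(10_000))
print(f"middle 95% of observed ORs: {ors[250]:.2f} to {ors[9750]:.2f}")
print(f"share of trials with OR >= 5: {sum(o >= 5 for o in ors) / len(ors):.1%}")
```

A share of these simulated trials spit out an odds ratio of 5 or more purely from sampling noise, and when that happens, later trials drag the estimate back toward the truth. That’s a big part of why huge early effects fade, as we’re about to see.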
Trials with very large effects were significantly less likely to address mortality and significantly more likely to measure laboratory-defined efficacy outcomes.
Here’s the kicker, though. Almost 90% of first trials that showed very large effects saw those effects shrink in later trials. For subsequent trials with very large effects, the figure was almost 98%. Large effects rarely hold. And lest you think that large-effect trials were just more likely to be checked up on, trials with large and non-large effects were just as likely to have subsequent published trials.
It’s not all bad news. Some results held: just over 9% of trials with a very large effect maintained it in the respective meta-analysis. None of these, however, were studies that looked at mortality-related outcomes.
In the entire Cochrane Database of Systematic Reviews (more than 3000 reviews studied), only one intervention had a large effect on mortality that held up under further scrutiny: extracorporeal membrane oxygenation (ECMO) for newborns, which reduced mortality in infants with severe respiratory failure. You’re going to have to trust me that this isn’t the kind of thing you’d ever want to see in common use.
Although the media like to trumpet big, huge findings with promises of huge changes in real outcomes, those kinds of results just don’t happen often in real research. They almost always fade in subsequent work. When they do hold up, it’s usually not in big-impact areas like preventing death or curing disease.
You should be skeptical of such things, not just because your gut tells you so. You should be skeptical because of findings like these.