Publication bias is rampant because we’re lazy and unserious about science

[T]here seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.

— Drummond Rennie (JAMA, 1986)

Catherine DeAngelis, former editor-in-chief of JAMA, agrees with Rennie in a recent editorial in the Milbank Quarterly. Nevertheless, she goes on to defend peer review as an important check on the quality of scholarly work. The volume of health care scholarship is so vast that the study validation task peer review is intended to accomplish is staggering: the Medline database contains 22 million references from 5,600 journals, and in 2014 alone, 750,000 additional citations were added. But if so much terrible work is being published, as Rennie and DeAngelis suggest, peer review clearly isn't a strong enough filter.

In addition to letting poor-quality work through, one might worry that peer review, and the editorial system of which it is a part, is biased against negative findings. DeAngelis doesn't believe that to be the case.

Some individuals argue that their studies, especially those reporting negative results, are less likely to be published by journals. I am not aware of any good studies that have shown this to be true, and I know of at least one that shows it to be untrue. While researchers are more likely to submit papers showing positive results, it is not clear that editors are more likely to publish them.

That last point is crucial. The study to which DeAngelis points does find that, based on JAMA editorial decisions from February 1996 through August 1999, there was no statistically significant difference in publication rates between submitted manuscripts reporting positive findings and those reporting negative ones. But that only speaks to the editorial stage. The relevant comparison is to the mix of findings among all studies conducted, published and unpublished, which we may not know precisely. It's plausible that researchers uncover far more negative findings than positive ones, but that a lower proportion of negative findings are ever submitted for publication.
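To see why that distinction matters, here is a minimal sketch with purely hypothetical numbers (none of these figures come from the JAMA study): even if editors accept positive and negative findings at exactly the same rate, a lower submission rate for negative findings still skews what gets published.

    # Hypothetical illustration (made-up numbers, not from the JAMA study):
    # suppose researchers complete equal numbers of positive and negative studies,
    # but are less likely to submit the negative ones.
    completed = {"positive": 1000, "negative": 1000}
    submission_rate = {"positive": 0.80, "negative": 0.40}  # assumed submission behavior
    editorial_accept_rate = 0.15                            # identical for both groups

    published = {
        result: completed[result] * submission_rate[result] * editorial_accept_rate
        for result in completed
    }
    print(published)  # {'positive': 120.0, 'negative': 60.0}
    # Editors showed no preference, yet positive findings end up published at
    # twice the rate of negative ones: the bias enters at the submission stage.

In other words, an unbiased editorial process is entirely compatible with a badly biased literature; the selection can happen before editors ever see a manuscript.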

Ian Roberts and colleagues pick up the trail in the BMJ. They agree there's a lot of shoddy work in journals. Moreover, half of all trials go unpublished, they write, and among those that are published, often only selected outcomes are reported. Their point: there's a tremendous opportunity for publication bias, and that's a substantial problem for systematic reviews. Garbage in, garbage out, basically.

Pre-study registration—detailing the data collection and analysis to be done—at least offers an opportunity to check whether published studies include all outcomes that were intended to be analyzed. Unfortunately, Roberts et al. write, less than one-third of journals require registration.

There is no pro-science justification for this. We’re just being lazy. By this I don’t just mean we’re being lazy about registering study protocols, requiring pre-registration as a condition of publication, and checking that studies report all registered outcomes. We’re also being lazy about making that process more rational and reasonable for researchers. Perhaps there are circumstances in which study registration or checking a manuscript against its registered protocol isn’t worth the cost (e.g., low risk to patients). But there are no doubt also many circumstances in which it is worth it and we’re just not doing it. On what scientific basis does one condone this?

Roberts et al. conclude,

Thousands of articles have been published about publication bias. However, the challenge is not to describe the flaws in the current system but to create a better one, where decisions about healthcare are informed by valid and reliable evidence.

This is the hard work to which we now must turn.

@afrakt
