Causation is not correlation: Medical journals should revamp language rules for new analyses of old randomized trials

Jamie Daw (@jamie_daw) and Adam Sacarny (@asacarny) are both Assistant Professors in the Department of Health Policy and Management at the Mailman School of Public Health at Columbia University.

Randomized controlled trials (RCTs) are the gold standard for producing causal evidence on the impacts of policies and programs. Yet, RCTs are exceedingly rare in social and health policy. This is largely because the interventions of interest—large-scale changes like providing insurance, changing physician payment mechanisms, or overhauling health care delivery models—are challenging to randomize both practically and ethically.

Given the scarcity of randomized evaluations in health care, there is tremendous value in conducting follow-up studies of them, also called secondary analyses. Often this means linking RCTs (when they do happen) to administrative data to evaluate impacts on outcomes that were not the primary focus of the original experiment. Post hoc linkages can yield important new insights, capturing new outcomes and extending follow-up over longer periods. These linkages are also often inexpensive, making this research of even greater value.

Examples of secondary analyses of RCTs providing new insights span disciplines. The initial evaluation of a trial that provided deworming medication to children in Kenya showed that it raised school attendance. Twelve years later, the same researchers conducted a secondary analysis that showed it raised earnings and employment, too. The research team that conducted the initial evaluation of the Oregon Health Insurance Experiment, which randomized eligibility for Medicaid, went on to write eight more papers looking at a host of important health and non-health outcomes; secondary analyses conclusively showed, for example, that Medicaid raised use of the emergency department and increased voter participation.

However, editorial practices at many medical journals, including JAMA and its affiliated journals, disincentivize secondary studies of this sort by requiring the use of conservative, non-causal language usually reserved for observational studies. Studies of randomized trials end up describing an intervention’s “association” with an outcome, the same term a retrospective cohort study would use.

We saw this practice play out recently when Craig Pollack and collaborators published a pair of secondary analyses of the Moving to Opportunity (MTO) experiment, a 1990s-era trial of housing vouchers that encouraged low-income families to move to higher-income neighborhoods. These researchers drew upon other secondary studies of MTO showing that the vouchers increased the educational attainment of children who were under age 13 when their families entered the trial. They linked the trial to hospitalization data, finding signs that the vouchers also reduced inpatient hospital stays (though perhaps not emergency department visits) for these children as they grew up.

The title of that paper on hospital stays? “Association of Receipt of a Housing Voucher With Subsequent Hospital Utilization and Spending.”

The judicious use of “effect” vs. “association” is of great benefit to readers of medical journals, who should be able to rely on this language to know when a paper is potentially subject to confounding. This is precisely our concern: the use of “association” in secondary analyses of RCTs falsely equates a near-gold standard of evidence with observational studies that require much stronger assumptions (e.g., no unobserved confounding) for causal interpretation.
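To make the distinction concrete, here is a minimal simulation, a stylized sketch of our own with invented numbers, not drawn from any study discussed above. When an unobserved confounder (say, underlying health) drives both treatment uptake and the outcome, the raw observational “association” can be badly biased, here even flipping the sign of the true effect, while random assignment recovers it:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Unobserved confounder affecting both treatment and outcome.
# All coefficients are hypothetical, chosen for illustration only.
u = rng.normal(size=n)
true_effect = 1.0

# Observational regime: sicker people (low u) seek treatment more often,
# so treatment ends up negatively correlated with the confounder.
treat_obs = (rng.normal(size=n) - u > 0).astype(float)
y_obs = true_effect * treat_obs + 2.0 * u + rng.normal(size=n)
assoc = y_obs[treat_obs == 1].mean() - y_obs[treat_obs == 0].mean()

# Randomized regime: a coin flip severs the link between u and treatment.
treat_rct = rng.integers(0, 2, size=n).astype(float)
y_rct = true_effect * treat_rct + 2.0 * u + rng.normal(size=n)
effect = y_rct[treat_rct == 1].mean() - y_rct[treat_rct == 0].mean()

print(f"True effect:                 {true_effect:.2f}")
print(f"Observational 'association': {assoc:.2f}")   # ~ -1.3: sign flips
print(f"Randomized estimate:         {effect:.2f}")  # ~ 1.0
```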

Secondary analyses of RCTs should be held to high standards, just like the primary RCTs on which they are based. In return, journals should allow the use of causal language. These standards should include clear identification of the study as a secondary analysis in the title, requirements to pre-register and file pre-analysis plans before data analysis begins, and adherence to reporting guidelines like CONSORT. Among their limitations, secondary analyses should acknowledge the potential for false positives that comes from additional hypothesis testing. Laying out clear rules would encourage authors to follow them; plenty of past secondary analyses likely did not meet these standards. Clear rules would also ensure consistent application: our review of publications in JAMA Network journals found at least one secondary study that used cause-and-effect language.
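The false-positive concern is easy to quantify with a back-of-the-envelope sketch (again our own illustration; the ten-outcome count and the seed are arbitrary assumptions). Testing ten truly null outcomes at the conventional 0.05 level produces roughly a 40% chance of at least one spurious “significant” finding, which a standard adjustment such as a Bonferroni correction brings back down:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.05          # per-test significance level
n_outcomes = 10       # hypothetical number of secondary outcomes tested
n_sims = 100_000      # simulated trials in which no true effects exist

# Analytically, the chance of at least one false positive across
# independent tests is 1 - (1 - alpha)^n_outcomes.
analytic = 1 - (1 - alpha) ** n_outcomes

# Simulate: under the null, p-values are uniform on [0, 1].
p_values = rng.uniform(size=(n_sims, n_outcomes))
any_false_positive = (p_values < alpha).any(axis=1).mean()

print(f"Analytic family-wise error rate:  {analytic:.3f}")           # ~0.401
print(f"Simulated family-wise error rate: {any_false_positive:.3f}")

# Bonferroni correction: test each outcome at alpha / n_outcomes,
# holding the family-wise error rate near the intended alpha.
bonferroni = (p_values < alpha / n_outcomes).any(axis=1).mean()
print(f"With Bonferroni correction:       {bonferroni:.3f}")          # ~0.05
```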

As scientists, we must emphasize and encourage rigorous evidence whenever possible. Otherwise, we risk implementing ineffective policies and programs—and forgoing effective ones—which comes at a cost to individuals and society, as well as to the legitimacy of the scientific community.
