• NEJM letters on the Oregon Medicaid study

    The following is jointly authored by Austin, Aaron, and Sam Richardson. Our letter to The New England Journal of Medicine (NEJM) was rejected on the grounds that our point of view would be adequately represented among the letters accepted for publication. Those letters are now published.

    The letter that expresses ideas most similar to ours is by Ross Boylan:

    The abstract in the article by Baicker et al. states that “Medicaid coverage generated no significant improvements in measured physical health.” This is a misleading summary of the data reported in their article. The best estimates are that the Medicaid group had better outcomes than the control group according to most measures (see Table 2 of the article). The problem is that these findings are not statistically significant.

    So, the effects might have been zero. That is not the same as saying that they were zero, or even that they were small. Buried toward the end of the article is the statement, “The 95% confidence intervals for many of the estimates of effects . . . include changes that would be considered clinically significant.”

    Nevertheless, almost all of the article, the related editorial, and the related news reports, opinion pieces, and online discussions proceeded as if the effects had been found to be zero.

    If one objects, on the basis of a lack of statistical certainty, to the simple summary that the Medicaid group had better outcomes, then one should describe the substantive meaning of the confidence interval. An honest summary is that it is quite likely there were positive effects, though it is possible that they were zero or negative.
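    The kind of "honest summary" described above can be sketched in a few lines. This is an illustration with round numbers chosen for the example, not output from the study itself: a positive point estimate whose 95% confidence interval crosses zero but also includes clinically meaningful effects.

```python
from statistics import NormalDist

def summarize(point_estimate, se):
    """Return (ci_low, ci_high, p_positive): a 95% confidence interval
    and the approximate chance the true effect is positive, under a
    normal approximation."""
    norm = NormalDist()
    z = norm.inv_cdf(0.975)  # two-sided 95% critical value, ~1.96
    ci_low = point_estimate - z * se
    ci_high = point_estimate + z * se
    p_positive = norm.cdf(point_estimate / se)
    return ci_low, ci_high, p_positive

# Illustrative numbers only: estimate 0.8, standard error 0.97.
lo, hi, p = summarize(0.8, 0.97)
print(f"95% CI ({lo:.1f}, {hi:.1f}); P(effect > 0) = {p:.0%}")
# prints: 95% CI (-1.1, 2.7); P(effect > 0) = 80%
```

    The interval includes zero, so the estimate is "not statistically significant" — yet the same interval says a positive effect is the better bet and leaves room for effects large enough to matter clinically.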

    Still, not one letter dives as deeply into the issues of power as we have. (See also this, that, and this.)

    Katherine Baicker and Amy Finkelstein, two of the original paper’s authors and leads on the wider study, wrote a response to the letters, which you can read in full at NEJM. One excerpt:

    In some cases, we can reject effect sizes seen in previous studies. For example, we can reject decreases in diastolic blood pressure of more than 2.7 mm Hg (or 3.2 mm Hg in patients with a preexisting diagnosis of hypertension) with 95% confidence. Quasi-experimental studies of the 1-year effect of Medicaid showed decreases in diastolic blood pressure of 6 to 9 mm Hg.

    Of course it is true that the study results reject, with 95% confidence, the decreases in diastolic blood pressure mentioned in this quote. However, as Aaron wrote here and here, the prior work cited by the authors, which suggested a 6 to 9 mm Hg drop in diastolic blood pressure, was conducted on a population of patients with hypertension. As he explained, and as I did again here, only a small fraction of the Oregon Health Study sample had high blood pressure:

    A key point is that a blood pressure reduction should only be expected in a population with initially elevated blood pressure, which was the focus of the prior literature referenced above. In contrast, the headline OHIE result is for all study subjects, only a small percentage of whom had elevated blood pressure at baseline. Unfortunately, there is no reported OHIE subanalysis focused exclusively on subjects with hypertension at the time of randomization. Depending on which metrics from the published results you examine, between 3% and 16% of the sample had elevated blood pressure at baseline. Taking the high end, 16% × 5 mm Hg = 0.8 mm Hg is in the ballpark of a reasonable expectation of the reduction in diastolic blood pressure the OHIE could have found, were it adequately powered to do so (0.8 mm Hg was also the study’s point estimate). Was it?

    No, which you can read about in full here. (And, no, power would still not be adequate even at twice this reasonable expectation.)
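    The back-of-the-envelope behind that claim can be sketched as follows. Using only figures quoted in this post — a point estimate of roughly a 0.8 mm Hg reduction, and rejection of reductions beyond 2.7 mm Hg at 95% confidence — one can back out an implied standard error and an approximate power. This is a rough normal-approximation sketch, not the study's own power calculation.

```python
from statistics import NormalDist

norm = NormalDist()
z_crit = norm.inv_cdf(0.975)  # two-sided 95% critical value, ~1.96

# Figures quoted in the post: point estimate of about a 0.8 mm Hg
# reduction; reductions beyond 2.7 mm Hg rejected at 95% confidence,
# so 2.7 is roughly the edge of the confidence interval.
point_estimate = 0.8
ci_edge = 2.7
se = (ci_edge - point_estimate) / z_crit  # implied SE, about 0.97 mm Hg

def approx_power(effect, se):
    """Approximate power of a two-sided z-test to detect `effect`."""
    return 1 - norm.cdf(z_crit - abs(effect) / se)

print(round(approx_power(0.8, se), 2))  # at the 0.8 mm Hg expectation
print(round(approx_power(1.6, se), 2))  # at twice that expectation
```

    Both figures come out far below the conventional 80% benchmark for adequate power, consistent with the conclusion above.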

    We have high regard for the study and its authors. The limitations of power are a function of the sample, which was well beyond their control. Nevertheless, we believe these limitations need to be kept in mind for a complete understanding of the study’s findings.
