My reply to Jim Manzi

As a follow-up to my EconTalk discussion with Russ Roberts about the Oregon Health Insurance Experiment (OHIE), Russ interviewed Jim Manzi about the study this week. Russ invited me to submit a written response to that interview, which you’ll find below; it is also linked from the EconTalk page for the Manzi episode. I have no substantial disagreement with most of the content of the interview. The purpose of this reply is to add more information, not to debate any points.

Like Jim, I did not hold any numerically specific prior views about how much a Medicaid expansion would affect the physical health of non-elderly adults over two years. However, as Jim pointed out, the OHIE investigators suggested comparing their diastolic blood pressure results to findings from specific prior work. Since my conversation with Russ, my colleague, physician Aaron Carroll, has examined that prior work and shared his thoughts in two posts. He concluded that, for a variety of reasons, we should not have expected the OHIE to find a change of the size observed in those earlier studies, the approximately 5 mm Hg reduction in diastolic blood pressure that Jim cited as a rough average. You can read the details at the links for yourself.*

A key point is that a blood pressure reduction should only be expected in a population with elevated blood pressure to begin with, which was the focus of the prior literature referenced above. In contrast, the headline OHIE result is for all study subjects, only a small percentage of whom had elevated blood pressure at baseline. Unfortunately, there is no reported OHIE subanalysis restricted to subjects with hypertension at the time of randomization. Depending on which metrics from the published results you examine, between 3% and 16% of the sample had elevated blood pressure at baseline. Taking the high end, 16% x 5 mm Hg = 0.8 mm Hg is in the ballpark of a reasonable expectation for the reduction in diastolic blood pressure the OHIE could have found (it was also the study’s point estimate), had it been adequately powered to do so. Was it?

I worked with Aaron and fellow health economist Sam Richardson on this question. We found that the study had 80% power (the standard minimum for clinical studies) to detect a change in diastolic blood pressure of 2.82 mm Hg. Put another way, the probability of failing to detect a true change of that size (the false negative rate) is 20%. For the more reasonable expected change of 0.8 mm Hg calculated above, the probability of a false negative is about 86%, which is 14% power. That is underpowered by any reasonable criterion.
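For readers who want to see the mechanics, here is a minimal sketch of the standard normal-approximation power calculation, written in Python. It backs the standard error out of the 2.82 mm Hg minimum detectable effect quoted above, so this back-of-envelope version lands near, but not exactly on, the 14% figure; our own numbers incorporate the study’s published estimates and the adjustments mentioned below. The code is my own illustration, not the investigators’.

```python
from scipy.stats import norm

def power_two_sided(effect, se, alpha=0.05):
    """Power of a two-sided z-test to detect a true effect of size `effect`,
    given the standard error of the estimated treatment effect."""
    z_crit = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z = effect / se
    # Probability the estimate lands beyond the critical value in either tail.
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

# Back out the standard error implied by 80% power to detect 2.82 mm Hg:
# MDE = (z_crit + z_power) * SE, so SE = 2.82 / (1.96 + 0.84), about 1.01 mm Hg.
se_dbp = 2.82 / (norm.ppf(0.975) + norm.ppf(0.80))

print(power_two_sided(2.82, se_dbp))  # ~0.80, by construction
print(power_two_sided(0.80, se_dbp))  # roughly 0.12-0.13: underpowered either way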

But that’s just diastolic blood pressure. What about another measure? In the discussion of their paper, the investigators calculated the reduction in glycated hemoglobin level one might have expected from the clinical literature, 0.05 percentage points. That’s well within the 95% confidence interval of their estimate and corresponds to a false negative rate of 75%, or 25% power. So, the study was underpowered for this measure too.
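The same approach works for this measure once the standard error is recovered from a reported 95% confidence interval. The bounds below are placeholders, not the paper’s numbers; substitute the published interval for the glycated hemoglobin estimate to redo the calculation.

```python
# Continuing the sketch above (same import and power_two_sided function).
# Recover the standard error from a reported 95% confidence interval, then
# compute power for the expected 0.05 percentage-point reduction.
# The bounds below are placeholders; use the paper's published interval.
ci_lower, ci_upper = -0.10, 0.08
se_gh = (ci_upper - ci_lower) / (2 * norm.ppf(0.975))
print(power_two_sided(0.05, se_gh))  # power under these placeholder bounds
```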

Aaron, Sam, and I have calculated power for other physical health measures reported from the OHIE and will share results soon. If you can’t wait, I have posted methods so you can do power calculations yourself. This is possible because power analysis methods are well known and all of the necessary parameters are readily available in the published paper. The only difference between the methods I’ve posted and what Aaron, Sam, and I are doing is that we are incorporating a few higher-order nuances, like adjusting for the effect of the study’s survey weights.
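As one illustration of the kind of nuance I mean, here is a rough sketch of how unequal survey weights can enter a power calculation via Kish’s effective-sample-size approximation. This is a standard textbook shortcut, not necessarily the exact adjustment we are using, and the weights below are simulated rather than the study’s.

```python
import numpy as np

def kish_design_effect(weights):
    """Kish's approximation: unequal weights inflate the variance of an
    estimate by roughly 1 + CV(weights)^2, equivalently shrinking the
    effective sample size by that factor."""
    w = np.asarray(weights, dtype=float)
    return 1.0 + (w.std() / w.mean()) ** 2

# Simulated weights for illustration only. With the study's actual weights,
# the unweighted standard error would be inflated by sqrt(deff) before
# rerunning the power calculation sketched earlier.
weights = np.random.default_rng(0).lognormal(sigma=0.5, size=1000)
deff = kish_design_effect(weights)
print(deff)  # > 1 whenever the weights are unequal
```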

Now for the lightning round. Here are a few quick responses to other aspects of the interview:

  • Jim noted that 40% of lottery winners didn’t apply for Medicaid coverage. As discussed, this might be due to an expectation of low value from Medicaid or to a lack of follow-through skills (jointly, low prudence). However, half of the 60% who did apply were deemed ineligible. The investigators report that this was largely due to income above the 100% federal poverty level (FPL) threshold, but other potential reasons include moving out of state, securing other coverage within a six-month look-back period, or aging out of eligibility. One might reasonably presume a similar proportion (half) of those who did not apply would also have been ineligible. Perhaps some knew that to be the case and spared themselves the fruitless exercise of completing the forms. It seems reasonable to me that people capable of weighing the value of Medicaid would also know whether their incomes were too high, whether they had moved out of state, secured other health insurance coverage, or grown too old to qualify. Therefore, it is likely that substantially fewer than 40% of the non-applicants suffered a lack of prudence. Judging from the proportion of applicants deemed ineligible, perhaps the share of imprudent non-applicants is closer to 20%. This is speculation, but no less plausible than what Jim or Russ offered.
  • The RAND Health Insurance Experiment was not a study of health insurance coverage, since it did not include any uninsured subjects. It was a study of cost sharing, with out-of-pocket spending capped at $1,000 (in mid-1970s dollars) for all participants.
  • The OHIE depression reduction result was not observed largely or entirely in the first month after enrollment. The investigators did not include a depression screen in the one-month survey, but they did in later surveys, as Adrianna McIntyre explains. However, self-reported health did improve substantially in the first month.
  • The extent to which the findings are informative about Obamacare’s Medicaid expansion would be an excellent topic of discussion. Neither Jim nor Russ nor I got into this question very deeply. It is properly a question of external validity, not of bias, which is a separate issue.

In conclusion, I applaud Russ for devoting two episodes to the OHIE. It is an important study, both for its subject and for its methods, and it deserves at least that much attention. I also thank Jim for adding substantial value to the conversation. I hope I have helped clarify a few points.

* Aaron’s posts largely focus on systolic blood pressure, though diastolic is mentioned and is also included in the cited studies. Suffice it to say, the same issues of expected effect size and insufficient power arise for systolic blood pressure as they do for the diastolic measure I discuss above. I focused on diastolic because it is what Jim mentioned and what the lead investigator emailed me about.

@afrakt
