Insurance and mortality for HIV patients (Medicaid IV)

An individual’s health status affects Medicaid enrollment (the ill are more likely to enroll). Medicaid enrollment, in turn, affects an individual’s health status (one can argue about the direction, for better or worse). The two are simultaneous, which makes inferring the causal effect of Medicaid on health outcomes difficult.

A few weeks ago I described the right way to tease out the causal effect of Medicaid enrollment on health outcomes:

There are undoubtedly studies that consider Medicaid vs. uninsured outcomes using the random variations provided by the natural experiment that is Medicaid. Characteristics of the program vary by state and year, making it a perfect set-up for an analysis of this issue. At the moment I can’t point to such a study, but I know where to look.

I’ve started to look and will describe the relevant literature as I read the papers. I’m not going to filter or cherry-pick papers based on their findings. All that matters to me is the quality of the methods applied. Feel free to send me links to papers you think qualify (look for peer-reviewed studies based on natural or randomized experiments and/or instrumental variables approaches; the run-of-the-mill observational study that controls only for observable individual characteristics won’t do). There may be many posts in this series of paper reviews. They’ll all be filed under the “Medicaid-IV” tag. When I think I’ve summarized them all, I’ll post a conclusion that reports on the full body of evidence.

Below I’ll discuss a 2001 paper in the Journal of the American Statistical Association by Dana Goldman et al., “Effect of Insurance on Mortality in an HIV-Positive Population in Care.” Before I get to the paper, just in case it isn’t clear, by exploiting the variations in state-year Medicaid eligibility I mean instrumental variables (IV) analysis, about which I’ve written considerably.* The sense in which those variations are random is that an individual’s characteristics cannot affect them. As far as an individual is concerned, the Medicaid policy in effect in his state at a particular time is random. But Medicaid policy does affect Medicaid enrollment (it affects private enrollment too), so it can be exploited to infer the causal effect of Medicaid (or insurance in general) on health outcomes free of the confounding effects of health on Medicaid.
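To make that concrete, here is a minimal sketch, in Python, of the kind of IV setup I have in mind. It uses a simple linear two-stage least squares (2SLS) estimator rather than the parametric two-equation model Goldman et al. actually fit, and every name in it (the cohort file, insured, died_6mo, the state policy instruments, the controls) is a hypothetical placeholder, not the paper’s data.

```python
# Hypothetical sketch of a 2SLS setup, not the authors' actual model.
# Instruments: state-year Medicaid/ADAP policy measures that affect
# insurance coverage but (arguably) affect mortality only through coverage.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("hiv_cohort.csv")  # hypothetical person-level data set

# died_6mo: 1 if the person died within 6 months (outcome)
# insured:  1 if covered by any insurance (endogenous regressor)
# medicaid_income_limit, adap_generosity: state-year policy instruments
# age, female, cd4_baseline: exogenous controls
model = IV2SLS.from_formula(
    "died_6mo ~ 1 + age + female + cd4_baseline"
    " + [insured ~ medicaid_income_limit + adap_generosity]",
    data=df,
)
results = model.fit(cov_type="clustered", clusters=df["state"])
print(results.summary)
```

Under the usual IV assumptions (the instruments move coverage and affect mortality only through coverage), the coefficient on the instrumented insurance indicator is the causal effect of interest, free of the reverse-causation problem described above.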

Goldman and colleagues do just that, using a nationally representative cohort of HIV-infected persons and sound IV methods. The abstract summarizes the highlights. It’s a bit of dense reading, so if you wish to skip it, just trust me that it communicates that the authors follow standard techniques for causal inference:

A naïve single-equation model confirms the perverse result found by others in the literature—that insurance increases the probability of death for HIV+ patients. We attribute this finding to a correlation between unobserved health status and insurance status in the mortality equation for two reasons. First, the eligibility rules for Medicaid and Medicare require HIV+ patients to demonstrate a disability, almost always defined as advanced disease, to qualify. Second, if unobserved health status is the cause of the positive correlation, then including measures of HIV+ disease as controls should mitigate the effect. Including measures of immune function (CD4 lymphocyte counts) reduces the effect size by approximately 50%, although it does not change sign. To deal with this correlation, we develop a two-equation parametric model of both insurance and mortality. The effect of insurance on mortality is identified through the judicious use of state policy variables as instruments (variables related to insurance status but not mortality, except through insurance). The results from this model indicate that insurance does have a beneficial effect on outcomes, lowering the probability of 6-month mortality by 71% at baseline and 85% at follow-up. The larger effect at followup can be attributed to the recent introduction of effective therapies for HIV infection, which have magnified the returns to insurance for HIV+ patients (as measured by mortality rates). (Bold mine.)

The reason to read the paper, or the first few pages of it anyway, is to get a sense of how to do Medicaid-health outcome studies properly. Importantly, the authors used arguably exogenous instruments–features of state Medicaid and AIDS drug assistance programs–and subjected them to power and falsification tests, which they passed. One can still argue that the instruments are not valid, but it would require an argument so contorted I cannot fathom what it could be.
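To illustrate what such checks look like in practice (continuing the hypothetical setup above, not reproducing the authors’ actual tests): instrument power is commonly assessed with the joint first-stage F statistic on the instruments, and a simple falsification exercise asks whether the instruments predict the outcome in a group whose coverage they cannot plausibly affect.

```python
import statsmodels.formula.api as smf

# Power check: first-stage regression of insurance on the instruments.
# A small joint F statistic on the instruments signals weak identification.
first_stage = smf.ols(
    "insured ~ medicaid_income_limit + adap_generosity"
    " + age + female + cd4_baseline",
    data=df,
).fit()
print(first_stage.f_test("medicaid_income_limit = 0, adap_generosity = 0"))

# Falsification check (illustrative): among people whose coverage the state
# policy variables should not influence (here, a hypothetical flag for the
# continuously privately insured), the instruments should not predict mortality.
subset = df[df["always_private"] == 1]
falsification = smf.ols(
    "died_6mo ~ medicaid_income_limit + adap_generosity"
    " + age + female + cd4_baseline",
    data=subset,
).fit()
print(falsification.summary())
```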

The reason to take the study with a couple of big grains of salt is that there are a few potential and actual problems, not least of which is that the results I made bold above are not statistically significant. In that sense, the findings are inconclusive about whether insurance reduced mortality for HIV patients.

A second limitation is that it is not specifically a study of Medicaid; it’s a study of insurance of any type. The authors lump patients with different types of insurance (public, private) together. That’s a big problem because characteristics of state Medicaid programs affect Medicaid enrollment and private coverage rates, but in opposite directions. It is also possible that Medicaid coverage and private insurance have opposite effects on outcomes. Ultimately, it is hard to draw policy conclusions from a study that mixes the two insurance types. If mortality improves, is it due to public or private coverage? It’s impossible to tell. The authors acknowledge this limitation and correctly describe a more complex model that would separately identify the effects of public and private insurance on mortality. They wrote that such a model was a computational challenge; today it would not be.
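For what it’s worth, a linear analogue of that more complex model is easy to estimate today: treat Medicaid and private coverage as two separate endogenous indicators, jointly instrumented, with at least as many instruments as endogenous regressors. The sketch below continues the hypothetical setup above (premium_subsidy is an invented third instrument) and is a linear stand-in for, not a reproduction of, the parametric model the authors describe.

```python
# Hypothetical sketch: two endogenous coverage indicators (medicaid, private),
# jointly instrumented by state-year policy variables. Identification requires
# at least as many relevant instruments as endogenous regressors.
model2 = IV2SLS.from_formula(
    "died_6mo ~ 1 + age + female + cd4_baseline"
    " + [medicaid + private ~ medicaid_income_limit + adap_generosity"
    " + premium_subsidy]",
    data=df,
)
print(model2.fit(cov_type="clustered", clusters=df["state"]).summary)
```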

A final critique is that the preferred model specifications include a measure of disease burden: the lowest-ever CD4 count as of the baseline year. To the extent that Medicaid causes poor outcomes (due to, say, the poor-quality care it could plausibly promote), it is possible that the lowest-ever CD4 count is itself an outcome of insurance coverage. It’s a big no-no to include an outcome as a control variable. So the authors need to make an argument that including lowest-ever CD4 count is OK. They didn’t, and I don’t know enough about AIDS to make the argument for them.

* If you’re already puzzled, stop right here and go read some of my posts on IV and/or Steve Pizer’s tutorial paper. I am not exaggerating when I suggest that anyone who wants to understand social science research, and particularly anyone who is going to interpret that research for a wider audience, really ought to take the time to understand the issues pertaining to IV: why it is used, and why many (though not all) observational studies that do not consider and deal with those issues are potentially flawed.
