• Insurance and mortality for HIV patients (Medicaid IV)

An individual’s health status affects Medicaid enrollment (the ill are more likely to enroll). Medicaid enrollment, in turn, affects an individual’s health status (one can argue which way, for better or worse). The two are simultaneous, which makes inferring the causal effect of Medicaid on health outcomes difficult.

    A few weeks ago I described the right way to tease out the causal effect of Medicaid enrollment on health outcomes:

There are undoubtedly studies that consider Medicaid vs. uninsured outcomes using the random variation provided by the natural experiment that is Medicaid. Characteristics of the program vary by state and year, making it a perfect set-up for an analysis of this issue. At the moment I can’t point to a specific study, but I know where to look.

I’ve started to look and will begin to describe the relevant literature as I read the papers. I’m not going to filter or cherry-pick papers based on their findings. All that matters to me is the quality of the methods applied. Feel free to send me links to papers you think qualify (look for peer-reviewed, natural or randomized experiments and/or instrumental variables approaches; the run-of-the-mill observational study that controls for observable individual characteristics won’t do). There may be many posts in this series of paper reviews. They’ll all be under the “Medicaid-IV” tag. When I think I’ve summarized them all, I’ll post a conclusion that reports on the full body of evidence.

    Below I’ll discuss a 2001 paper in the Journal of the American Statistical Association by Dana Goldman et al., Effect of Insurance on Mortality in an HIV-Positive Population in Care. Before I get to the paper, just in case it isn’t clear, by exploiting the variations in state-year Medicaid eligibility I’m talking about instrumental variables (IV) analysis, about which I’ve written considerably.* The sense in which those variations are random is that an individual’s characteristics cannot affect them. As far as an individual is concerned, the Medicaid policy in effect in his state and at a particular time is random. But Medicaid policy does affect Medicaid enrollment (it affects private enrollment too), so it can be exploited to infer the causal effect of Medicaid (or insurance in general) on health outcomes free of the confounding effects of health on Medicaid.
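The two-stage logic behind IV can be sketched in a few lines. The simulation below is a toy illustration, not the authors’ model: the “health,” “policy,” enrollment, and mortality variables, their coefficients, and the sample size are all invented. It shows how an unobserved confounder produces a perverse naive estimate, and how a hand-rolled two-stage least squares (2SLS) estimator using a policy instrument recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy simulation (all variables invented for illustration):
# unobserved health drives both enrollment and mortality,
# while the state policy instrument affects mortality only through enrollment.
health = rng.normal(size=n)                          # unobserved; high = healthy
policy = rng.binomial(1, 0.5, size=n).astype(float)  # generous state-year eligibility

# The ill (low health) are more likely to enroll; a generous policy also helps.
enrolled = (0.8 * policy - health + rng.normal(size=n) > 0).astype(float)

# True causal effect of enrollment on the mortality index is -0.5 (protective).
mortality = -0.5 * enrolled - health + rng.normal(size=n)

# Naive OLS of mortality on enrollment is biased upward (the "perverse" result),
# because poor health raises both enrollment and mortality.
X = np.column_stack([np.ones(n), enrolled])
beta_ols, *_ = np.linalg.lstsq(X, mortality, rcond=None)

# 2SLS: stage 1 predicts enrollment from the instrument alone;
# stage 2 regresses mortality on the predicted (policy-driven) enrollment.
Z = np.column_stack([np.ones(n), policy])
gamma, *_ = np.linalg.lstsq(Z, enrolled, rcond=None)
X_hat = np.column_stack([np.ones(n), Z @ gamma])
beta_iv, *_ = np.linalg.lstsq(X_hat, mortality, rcond=None)

print(f"true effect -0.50 | OLS {beta_ols[1]:+.2f} | IV {beta_iv[1]:+.2f}")
```

On this synthetic data the OLS coefficient comes out positive, reproducing the “insurance increases mortality” artifact, while the IV estimate recovers roughly the true −0.5. (In real work one would use an IV routine that also corrects the second-stage standard errors, which this manual version does not.)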

Goldman and colleagues do just that, using a nationally representative cohort of HIV-infected persons and sound IV methods. The abstract summarizes the highlights. It’s a bit of a dense read, so if you wish to skip it, just trust me that it communicates that the authors follow standard techniques for causal inference:

    A naïve single-equation model confirms the perverse result found by others in the literature—that insurance increases the probability of death for HIV+ patients. We attribute this finding to a correlation between unobserved health status and insurance status in the mortality equation for two reasons. First, the eligibility rules for Medicaid and Medicare require HIV+ patients to demonstrate a disability, almost always defined as advanced disease, to qualify. Second, if unobserved health status is the cause of the positive correlation, then including measures of HIV+ disease as controls should mitigate the effect. Including measures of immune function (CD4 lymphocyte counts) reduces the effect size by approximately 50%, although it does not change sign. To deal with this correlation, we develop a two-equation parametric model of both insurance and mortality. The effect of insurance on mortality is identified through the judicious use of state policy variables as instruments (variables related to insurance status but not mortality, except through insurance). The results from this model indicate that insurance does have a beneficial effect on outcomes, lowering the probability of 6-month mortality by 71% at baseline and 85% at follow-up. The larger effect at followup can be attributed to the recent introduction of effective therapies for HIV infection, which have magnified the returns to insurance for HIV+ patients (as measured by mortality rates). (Bold mine.)

The reason to read the paper, or at least its first few pages, is to get a sense of how to do Medicaid-health outcome studies properly. Importantly, the authors used arguably exogenous instruments (features of state Medicaid and AIDS drug assistance programs) and subjected them to power and falsification tests, which they passed. One can still argue that the instruments are not valid, but it would require an argument so contorted I cannot fathom what it could be.
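A “power” test here is essentially a check that the instruments strongly predict insurance status in the first-stage regression; a falsification test, conversely, checks that the instruments do not predict things they shouldn’t. A minimal version of the power check, with invented data rather than the paper’s variables, is the first-stage F-statistic:

```python
import numpy as np

def first_stage_F(z, d):
    """First-stage F-statistic for one instrument z and one treatment d.

    Equivalent to the squared t-statistic on z in the regression d ~ 1 + z.
    A common rule of thumb treats F > 10 as evidence the instrument has
    enough power to identify the treatment effect.
    """
    n = len(z)
    Z = np.column_stack([np.ones(n), z])
    beta, *_ = np.linalg.lstsq(Z, d, rcond=None)
    resid = d - Z @ beta
    sigma2 = resid @ resid / (n - 2)       # residual variance
    var_beta = sigma2 * np.linalg.inv(Z.T @ Z)[1, 1]
    return beta[1] ** 2 / var_beta

# Synthetic check: a strong instrument vs. an unrelated placebo.
rng = np.random.default_rng(1)
n = 5_000
d = rng.normal(size=n)                     # treatment (e.g., insurance status)
strong = d + rng.normal(scale=2, size=n)   # correlated with treatment
weak = rng.normal(size=n)                  # pure noise, no first-stage power

print(first_stage_F(strong, d))  # large
print(first_stage_F(weak, d))    # near zero
```

An instrument that fails this check (a small F) yields IV estimates that are noisy and biased toward OLS, which is why studies in this series should be expected to report such diagnostics.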

The reason to take the study with a couple of big grains of salt is that there are a few potential and actual problems, not least of which is that the results I made bold above are not statistically significant. In that sense, the findings are inconclusive about whether insurance reduced mortality for HIV patients.

A second limitation is that it is not specifically a study of Medicaid; it’s a study of insurance of any type. The authors lump patients with different types of insurance (public, private) together. That’s a big problem because characteristics of state Medicaid programs affect Medicaid enrollment and private coverage rates, but in opposite directions. It is also possible that Medicaid coverage and private insurance have opposite effects on outcomes. Ultimately, it is hard to draw policy conclusions from a study that mixes the two insurance types. If mortality improves, is it due to public or private coverage? It’s impossible to tell. The authors acknowledge this limitation and correctly describe a more complex model that would separately identify the effects of public and private insurance on mortality. They wrote that such a model was a computational challenge; today it would not be.

A final critique is that the preferred model specifications include a measure of disease burden: the lowest-ever CD4 count as of the baseline year. To the extent that Medicaid causes poor outcomes (due to, say, the poor-quality care it could plausibly promote), it is possible that the lowest-ever CD4 count is itself an outcome of insurance coverage. It’s a big no-no to include an outcome as a control variable. So the authors need to make an argument that including lowest-ever CD4 count is OK. They didn’t, and I don’t know enough about AIDS to make the argument for them.

* If you’re already puzzled, stop right here and go read some of my posts on IV and/or Steve Pizer’s tutorial paper. I am not exaggerating when I suggest that anyone who wants to understand research in social science, and particularly anyone who is going to interpret that research for a wider audience, really ought to take the time to understand the issues pertaining to IV, why it is used, and why many (though not all) observational studies that do not consider and deal with those issues are potentially flawed.

    • “The reason to read the paper, or the first few pages of it anyway, is to get a sense of how to do Medicaid-health outcome studies properly.”

I don’t know, Austin. The paper makes no mention of the largest safety-net provider for HIV patients, The Ryan White Care Act, and I would argue that, in the case of HIV patients, the safety-net reimbursement program (when coupled with ADAP) might produce better outcomes than insurance does. Ryan White is not technically insurance, though it does prescribe a certain care delivery model with a quality element, so in the case of HIV it would be sort of sloppy to lump all the uninsured in one group. Some of those uninsured are going to Ryan White clinics and seeing specialists who see vastly more HIV/AIDS patients than their insured counterparts.

Somewhat presciently, Ryan White providers are set up as medical homes by funding design. Someone better at study design than I am should address whether this study is actually evidence that the Ryan White care model is superior to traditional insurance or Medicaid.

      • @ThomasEN – I know nothing about The Ryan White Care Act and, from the paper, perhaps the authors are equally unaware. But that’s absolutely no reason to discount the paper’s methods. Doing so is like suggesting someone who correctly adds 2+2 has done so improperly because the individual doesn’t know how to spell “two.” One can appreciate the methods and still quibble with other details that have nothing to do with the methods, but that, well, has nothing to do with the methods.

Thanks for the response, Austin.

Yeah, I guess my issue is more with the premise than the methods. To get the data they want, I don’t think the question should be, in the HIV world, insured vs. uninsured, but rather access to care vs. no access to care. The Ryan White Care Act somewhat aggressively tries to improve outcomes, which certainly muddies the water of their conclusions.

Or take pre-reform Massachusetts. The Uncompensated Care Pool was not technically insurance, but its patients had access to community health centers and (limited) primary care. Presumably some of those CHCs also had PCPs with a strong focus on improving outcomes. Yet, as participants in a safety-net program rather than an insurance plan, those patients would presumably have been lumped in with people who had no access to care at all.

I worry that studies like this could be spun as a tacit defense of spending cuts, when really it may not be that insurance (public or otherwise) is bad so much as that the safety-net systems do a better job.

@ThomasEN – To the extent that uninsured HIV patients have improved access to health services via The Ryan White Care Act or otherwise, that would tend to attenuate the measured effect of insurance on outcomes. Thus, it is one possible explanation for the statistical insignificance of the paper’s results.