• The other Medicaid expansion natural experiment

    We’ve all heard about Oregon’s Medicaid experiment, but there was another one recently that deserves a bit of attention too: Wisconsin’s.

    Unlike Oregon, Wisconsin did not randomize people into eligibility for Medicaid. But it did do something that makes it a good target for study. In January 2009, in Milwaukee County, the state automatically enrolled 12,941 childless adults with incomes below 200% of the federal poverty level (FPL) in the BadgerCare Plus Core Plan, a less generous version of the state’s standard Medicaid program, as described by Thomas DeLeire and colleagues in a recent Health Affairs paper.

    The Core Plan’s benefit is similar to, but less generous than, Wisconsin’s existing Medicaid/Children’s Health Insurance Program (called BadgerCare Plus). Unlike BadgerCare Plus, the Core Plan has a restrictive drug formulary covering mostly generic drugs; includes dental coverage but only for emergency services; and has no coverage for hearing, vision, home health care, nursing home, or hospice services. Reimbursement for physical, occupational, speech, and cardiac rehabilitation is provided for a limited number of visits. The Core Plan does not cover inpatient mental health and substance abuse services, and it covers outpatient mental health services only when provided by a psychiatrist.

    The research candy here is that enrollment was automatic. This potentially goes a long way toward addressing confounding due to self-selection. Still, it is not as good as a randomized design: there is no randomized control group to serve as a comparison, so we don’t know for sure what the counterfactual is. In cases like this, there are two choices: compare pre-enrollment with post-enrollment outcomes for the same group of people, or cook up a comparison group (e.g., a matched set of similar individuals not subject to the policy). Both are fine approaches, though each has limitations. In social science, you usually can’t have it all!
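    To make the two choices concrete, here is a minimal sketch (all numbers are hypothetical, not from the paper) contrasting a bare pre-post estimate with a difference-in-differences estimate, which uses a comparison group to net out a shared secular trend:

```python
import random
import statistics

random.seed(3)

# Hypothetical outcome (e.g., annual outpatient visits) for an enrolled
# group and a matched comparison group, before and after the policy.
# All means below are invented for illustration, not from the paper.
def draw(mean, n=2000):
    return [random.gauss(mean, 1.0) for _ in range(n)]

treat_pre, treat_post = draw(2.0), draw(2.8)  # policy period change of 0.8
comp_pre, comp_post = draw(2.0), draw(2.3)    # shared secular trend of 0.3

# Pre-post alone attributes the whole 0.8 change to the policy.
pre_post = statistics.mean(treat_post) - statistics.mean(treat_pre)

# Difference-in-differences subtracts the comparison group's trend,
# leaving roughly the 0.5 that is actually due to the policy.
did = pre_post - (statistics.mean(comp_post) - statistics.mean(comp_pre))

print(f"pre-post estimate: {pre_post:.2f}")
print(f"difference-in-differences estimate: {did:.2f}")
```

Neither estimate is guaranteed to be unbiased: pre-post absorbs any secular trend, and difference-in-differences is only as good as the comparison group.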

    That’s the set-up for the DeLeire et al. study, which is a pre-post design. Of the 12,941 automatic Core Plan enrollees, they were able to match pre- and post-enrollment encounter and claims data for 9,619 of them. Then they assessed how coverage was associated with changes in utilization of various types of care. Though they provide more detail in their paper, my chart below from a subset of their reported results provides an overview.

    [Chart: changes in utilization among automatically enrolled Core Plan members, from DeLeire et al.]

    As you can see, outpatient visits, preventive care visits, and ED visits went up. This suggests better access to care. Other results point to potential limitations of the existing delivery system and the nature of the Core Plan benefit. There was an almost 40% increase in ED visits for conditions that could have been addressed in less intensive settings (not shown in the chart). Also not shown in the chart was a 344% increase in ED visits for mental health or substance use issues. Clearly some of these conditions could be better and more efficiently addressed in other settings.

    The results also show big reductions in hospitalizations and, importantly, hospitalizations that are preventable with outpatient care. This suggests the possibility of cost offsets, though the authors did not assess costs in the study. Also unstudied were changes in health conditions. Core Plan enrollees are using a different mix of care. But are they in better health?

    Other key points:

    • The “vast majority” of Core Plan enrollees were transitioned into an HMO 2-5 months after enrollment. 
    • At the time of the study Milwaukee County had a (since discontinued) General Assistance Medical Program that reimbursed hospitals and other providers for care for the indigent at Medicaid rates. Presumably many of those automatically enrolled in the Core Plan could or did receive care financed by this program when they were uninsured or for care that Core Plan didn’t cover.
    • It wasn’t clear to me from the paper (or I missed it) how the state identified the automatic enrollees. Was it upon encounter with the health system? Without knowing more about the selection process, it is hard to know for sure how well the study side-stepped confounding. Enrollment upon encounter or encounter of a certain type (e.g., ED visits) would bias the findings in a way that other types of enrollment methods wouldn’t.

    Notwithstanding its limitations, this is an interesting paper on an important policy shift. We don’t see too many natural experiments like this, so we should exploit them when they occur and take note of studies that do so.


    Comments closed
    • It’s too bad the paper didn’t study costs.
      I would think that the large reductions in hospitalizations (which are very expensive) would more than compensate for the cost of the increase in outpatient visits.
      Also, as you note, it would have been good to study health outcomes, but I think we can infer better health from the drop in hospitalizations.

    • Do you know of a study that did a regression discontinuity design for this? If enrollment is automatic for incomes below 200% and prohibited for incomes above, this should make a nice discontinuity for causal inference, right?

    • This study is the right intersection of hard science and social science. It may not receive as much attention for the very reason it’s so good: it doesn’t try to make a causal inference for health outcomes.

    • The design of the Oregon study, which actually studied health benefits, seems so much better than this study’s nonrandomized design. Of course this study didn’t examine health benefits, but it did examine hospital admissions.

      I have a lot of questions regarding data collection. As an example, one question would be whether they have a second category of hospital admissions. Medicare did this with its 23-hour admit. That would increase ED visits and reduce hospital admissions. The costs might be nearly the same even if one admission was twice as long as the other, since the real costs are incurred on day one, the day of any surgery or procedures, and the final day. I bring this up because the point of reducing hospital admissions is to reduce costs. I also note that the populations were not at all similar.

      But, as you say one has to take the studies as they come.

      Austin, what did your power analysis show about each of the reported variables?

      • Results were statistically significant at the p < 0.001 level. Hence, power is fine for the effect sizes reported. Sorry I didn’t mention this. But you should assume that I only comment on statistically significant results unless I specify otherwise. No harm in asking though.

        • I asked so that, from study to study, I would have a better idea of this method of validation. If the study had compared based on gender, age group, or disease, would the calculation have indicated validity, given that the numbers would have fallen? A rough guess on your part, sparing yourself the time. My assumption is yes for gender, no for disease, and maybe, but likely yes, for age group. Just curious, so that if I face a power calculation in the future I have a better idea.
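        For intuition on why subgroup analyses lose power as counts fall, here is a rough normal-approximation power calculation with made-up rates (the 10% and 7% hospitalization rates are invented, not the study’s numbers):

```python
from statistics import NormalDist

def power_two_prop(p1, p2, n, alpha=0.001):
    """Approximate power of a two-sided z-test for a difference in
    proportions, with n subjects per group (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    pbar = (p1 + p2) / 2
    se0 = (2 * pbar * (1 - pbar) / n) ** 0.5                # SE under the null
    se1 = (p1 * (1 - p1) / n + p2 * (1 - p2) / n) ** 0.5    # SE under the alternative
    return nd.cdf((abs(p1 - p2) - z_alpha * se0) / se1)

# Hypothetical effect: hospitalization rate falls from 10% to 7%.
# Power is excellent at the full sample size but collapses in small subgroups.
for n in (9619, 2000, 500):
    print(n, round(power_two_prop(0.10, 0.07, n), 3))
```

So, roughly: splitting by gender (halving n) may be fine, while splitting by individual disease (cutting n to a few hundred) likely is not, which matches the commenter’s guess.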

    • Just a note that we are about to experience “automatic enrollment” on a grand scale as a result of provisions in the Patient Protection and Affordable Care Act. Maybe there will be a control group for comparison.

      Originally to be effective January 1, 2014, but delayed until regulations are proposed, adjusted, and finalized, Section 1511 of the Patient Protection and Affordable Care Act of 2010 requires “large” employers of 200 or more full-time employees to automatically enroll workers in health coverage (assuming the employer offers a plan). Almost all plans will have emerged from grandfathered status by the time the regulations are finalized, so they will have preventive services whose costs are 100% borne by the plan, and all employers who offer coverage are likely to offer at least one plan that meets the “minimum value” requirements.

      Not sure if there would be a natural control group – employees of smaller employers, employees who had already enrolled for health coverage, or employees who opted out of coverage.

      Unfortunately, by the time the regulations kick in, the individual mandate penalty tax will likely be significant enough to have prompted enrollment by many of today’s working uninsured. So one option, similar to the action the agencies took to encourage expanding the offer of coverage to young adults prior to the effective date of the age-26 mandate, would be to have the agencies encourage employers to early-adopt automatic features. Their automatic enrollment experiences might actually benefit the agencies in their rulemaking, as well.

      A whole new facet of behavioral economics is about to commence.

    • The reduction in hospitalizations could be completely related to study design, if enrollees were identified based on recent hospitalizations (which seems likely based on their health status and their double “pre” hospitalization rate compared to the “comparison” group).

      The idea that a patient can be his/her own control pre/post is what leads to dangerous overestimation of intervention effects. The authors justify this by saying that their comparison group doesn’t show changes in use patterns. That group is also six times as large with substantially lower hospital use and chronic illness burden. Plus, their selection criteria may be entirely different. That matters. The researchers could’ve used hospitalization timing matching between individuals in the study group and individuals in the comparison group to establish some baseline understanding of regression to the mean effects for hospitalizations.

      Interpret with extreme caution.
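      The regression-to-the-mean worry can be illustrated with a small simulation (all parameters invented): if people enter the study group partly because of a recent hospitalization, a large pre-post “reduction” appears even when there is no treatment effect at all.

```python
import random
import statistics

random.seed(2)

# Each person has a stable latent yearly hospitalization probability.
# No treatment effect of any kind is simulated.
probs = [random.betavariate(1, 9) for _ in range(20000)]  # mean ~0.1

pre = [1 if random.random() < p else 0 for p in probs]   # pre-period year
post = [1 if random.random() < p else 0 for p in probs]  # post-period year

# "Enroll" exactly those who were hospitalized in the pre period.
enrolled = [i for i in range(len(probs)) if pre[i] == 1]

pre_rate = statistics.mean(pre[i] for i in enrolled)   # 1.0 by construction
post_rate = statistics.mean(post[i] for i in enrolled)

# A dramatic apparent drop in hospitalizations, purely from selection.
print(f"pre: {pre_rate:.2f}, post: {post_rate:.2f}")
```

Selection on the pre-period outcome guarantees the pre rate is inflated, so the post rate falls back toward each person’s underlying risk with no intervention at all, which is exactly the mechanism the commenter flags.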