Additional thoughts on the new Oregon Medicaid results

I started to answer individual comments, but they deserve their own post. So start by reading our piece from yesterday on the new NEJM study. On to your questions/comments/shrieks:

1) This was a HUGE study. How can you say otherwise?

According to the paper, 12,229 people responded to the surveys and were analyzed. So, yes, for outcomes that affected everyone (think financial hardship), it’s likely the study was super-powered. But for many of the murkier outcomes, that’s not the case. Take A1C, for instance. Only 5.1% of the control group had an A1C of 6.5% or higher. Let’s assume that the starting prevalence was the same in the intervention group. That means that only about 624 people (roughly 312 in each group) actually had a high A1C in the study. That’s not anywhere near as big, especially when you’re talking about an indirect intervention like insurance as opposed to actual health care.
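To make that concrete, here’s a minimal back-of-the-envelope sketch in Python. The 12,229 and 5.1% figures come from the paper; splitting the elevated-A1C group evenly across the two arms is my simplifying assumption, not something the study reports.

```python
# Rough arithmetic on the effective sample size for the A1C outcome.
total_respondents = 12_229    # survey respondents analyzed, per the paper
prevalence_high_a1c = 0.051   # share of controls with A1C >= 6.5%

# Assume the same starting prevalence in both arms (a simplification).
high_a1c_total = total_respondents * prevalence_high_a1c
per_arm = high_a1c_total / 2

print(f"People with elevated A1C: ~{high_a1c_total:.0f} total, ~{per_arm:.0f} per arm")
# -> roughly 624 total, about 312 per arm
```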

2) You can’t do a power calculation after the fact!!!

I’m not asking for a post hoc power calculation. I want the a priori one. You see, with only 600 or so participants with an A1C in the high range, I want to know what they were thinking ahead of time.

When I’m designing a study, I first decide what counts as a clinically meaningful result. I then figure out how much variability I can expect in the individual measurements. From those two pieces, I calculate how many subjects I need so that, if the clinically meaningful effect is really there, my analysis will be able to detect it. That’s the power calculation. If my sample is too small, then even a clinically meaningful result might not come out statistically significant.
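Here’s a minimal sketch of what such an a priori calculation looks like, using statsmodels. The 5.1% baseline is the control-group figure from the paper; treating a one-percentage-point reduction as “clinically meaningful” and using 80% power at a two-sided alpha of 0.05 are purely my illustrative assumptions, not anything from the study’s actual design.

```python
# A priori sample-size sketch for a difference in proportions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.051  # share of controls with an elevated A1C, from the paper
target = 0.041    # hypothetical "clinically meaningful" reduction (my assumption)

effect_size = proportion_effectsize(baseline, target)  # Cohen's h
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0
)
print(f"Roughly {n_per_arm:,.0f} subjects per arm needed")
# Under these illustrative inputs, this comes out to roughly 6,900 per arm.
```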

3) You don’t understand statistical significance!!!

I assure you I do. When your point estimate is clinically meaningful but your results are not statistically significant, it usually means one of three things: the variability was larger than expected, there really was no effect, or you were underpowered to detect the difference. See (2). I can’t tell which of these is true, because I don’t know whether the study was powered to detect the point-estimate differences they found.

(I should add here that some are upset that our post said p=0.07 is close to significant. I (Aaron) am more of a purist when I’m using frequentist statistics, so I agree with the criticism and wouldn’t say that myself. Austin is more of a Bayesian and doesn’t think it’s quite as blasphemous. But I recognize this is a shibboleth for people who think they truly understand statistics, so I’m acknowledging it.)

4) Obamacare promised us it would save tens of thousands of lives a year!!! He lied.

Stop. This was Medicaid for something like 10,000 people in Oregon. The ACA was supposed to be a Medicaid expansion for 16,000,000 people across the country. If the coverage saved even 8 lives among the people in this study, that rate, scaled up to the full expansion, means the total statistic holds. No one measured that. This is silly.
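For a rough sense of the arithmetic: the 8 is the hypothetical from the paragraph above, and both coverage figures are the approximate ones already cited, so none of this is a measured result.

```python
# Back-of-the-envelope scaling of the hypothetical above.
lives_saved_in_study = 8      # hypothetical, not a measured result
covered_in_study = 10_000     # approximate Oregon figure cited above
aca_expansion = 16_000_000    # approximate projected national expansion

implied = lives_saved_in_study / covered_in_study * aca_expansion
print(f"~{implied:,.0f} lives per year at national scale")  # roughly 12,800
```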

5) You’re using financial hardship and other stuff as a smokescreen.

No, I’m not. The reason I have insurance, and likely you do as well, is to protect me and my family from financial ruin. When I get sick, I don’t sit at home and let the insurance take care of me. I get off my butt and use the health insurance as the means by which to get health care. Medicaid is about access. It’s just the first step in the chain of events that leads to quality.

That said, I still maintain that we have never subjected Medicare or private insurance to this standard. Just Medicaid?

6) The results were bad anyway. Blood pressure moved a point down? That’s nothing!

Average blood pressure was a bizarre thing to measure. You have to remember that most people who get health insurance are healthy. They’re not going to get “healthier”. The average blood pressure in the control group was 119/76. That’s normal! You would only expect it to improve in those with high blood pressure. So I might have looked for an effect in the patients with hypertension. 16.3% of people in the control group had a systolic over 140 or a diastolic over 90. In the Medicaid group, that dropped to 15%. If they wanted to look at average pressures, why didn’t they single out the hypertensive people? I don’t know.
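To put a number on how hard that 16.3% to 15% difference is to detect, here’s the same kind of statsmodels sketch as above; again, the 80% power and 0.05 alpha are just my illustrative choices, not the study’s design parameters.

```python
# Sample size needed to detect a 16.3% -> 15% drop in elevated blood pressure.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect_size = proportion_effectsize(0.163, 0.150)  # Cohen's h for the two shares
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0
)
print(f"Roughly {n_per_arm:,.0f} people per arm needed")
# On the order of 12,000 per arm, about as many people as the whole survey sample.
```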

I’ll update this as more things occur to me.

@aaronecarroll
