• The individual mandate may work, if we let it

    My highly refined radar for policy-relevant facts is pulling in a new signal. OK, I admit, the radar has another name: Sarah Kliff. She reports that a new market survey from consulting firm Oliver Wyman* found that,

    [w]hen asked to choose between paying a penalty and purchasing coverage, 76 percent of the uninsured said they’d rather purchase coverage. That would reduce the number of people without insurance to 5 percent of the population and have 25 million Americans purchasing through the exchanges, just slightly higher than the 24 million that the CBO projected.
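
    A rough back-of-the-envelope check (my own, not from the survey or the post): assuming roughly 50 million uninsured out of a US population of about 310 million at the time, a 76 percent take-up rate would leave on the order of 12 million people, or about 4 percent of the population, uninsured, which is in the same ballpark as the 5 percent figure quoted above. A minimal sketch of that arithmetic, with both counts treated as assumed round numbers:

```python
# Back-of-the-envelope check of the survey's headline numbers.
# The population and uninsured counts are rough circa-2011 assumptions,
# not figures reported by Oliver Wyman.
us_population = 310e6   # approximate US population
uninsured = 50e6        # approximate number of uninsured
take_up_rate = 0.76     # share of uninsured saying they'd buy coverage

newly_covered = take_up_rate * uninsured
remaining_uninsured = uninsured - newly_covered

print(f"Newly covered: {newly_covered / 1e6:.0f} million")
print(f"Still uninsured: {remaining_uninsured / 1e6:.0f} million "
      f"({remaining_uninsured / us_population:.1%} of the population)")
# -> about 38 million newly covered and roughly 12 million (about 4%)
#    still uninsured, in the same ballpark as the quoted summary.
```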

    Kliff finds this simultaneously surprising and not. I agree with her. My gut says, “Wow!” and my brain says “Duh!” Maybe the “wow” part comes from months (and months, and months) of gloom and doom about how health reform will play out. Maybe I was starting to believe it just couldn’t work. To be fair, this survey does not prove it will. It only illustrates that at least the uninsured are receptive to it.

    But, here’s the “duh” part. As Kliff notes, Massachusetts, under a state law similar to the ACA, has a high take-up rate. More than that, there is evidence that the mandate had a lot to do with the law’s success. As the following figure from an early 2011 paper by Chandra, Gruber, and McKnight shows, even though premium subsidies were available to low-income individuals in the Massachusetts exchange in early 2007, it’s the mandate that fully kicked in by the end of 2007 that caused a big spike in enrollment. So, the mandate matters.

    Subsidies matter too, and the ACA will provide them to a broader range of income groups (up to 400% of the federal poverty level) than does the Massachusetts law (up to 300% of the FPL). Finally, the penalty for not complying with the mandate will be higher under the ACA than in Massachusetts. So, all signs point to substantial participation by the uninsured.

    The last “duh” is that this is, in fact, the principal aim of the law. A lot of work went into crafting something that would appeal to and assist the uninsured. In that light, it really isn’t all that surprising that it appears poised to do just that.

    * Anybody know if the firm has released methods for their survey?

    UPDATE: Asked about survey methods.

    • The immediate reaction to this survey compared to the immediate reaction to the McKinsey survey has been very interesting to watch.

      • Yes. And I meant to ask what we know about the methods on this one. Post updated.

        You know why the reactions differ though, right? One was an outlier. The other is new information, though consistent with other things we know, as the post described. Having said that, if the recent one proves to be faulty, I will discount it accordingly.

    • As was discussed back at the link you provide from January, this is about a very small number of people in absolute terms and a small subsegment of the Massachusetts population in percentage terms. And it is not even the subsegment you would want to survey to find out whether mandates work to adequately fund the pool: the young and the middle class.

      Looking at that January link reminded me that this is also the post where you cited the state report that led to open enrollment periods on the Massachusetts exchange. You interpreted the report to say adverse selection was low (or good; I’m not sure what adjective you used). The legislature interpreted it to say it was high and enacted open enrollment, which began, effectively, last month. It leads me to wonder how many of the people cited each month on the above chart are repeats (on and off the system in order to go see a doctor), and also what the net gains were for the exchange in each of these months (these new enrollees minus people who dropped their insurance). Just this year, during the first effective open enrollment, the Exchange was urging people to drop their coverage during open enrollment even if they had just started it in the spring. Perhaps the same thing was going on in December 2007.

    • I said in my earlier comment that the exchange urged customers to “drop their coverage” during the recent open enrollment period. I meant “change their coverage.”

    • I wouldn’t necessarily call the McKinsey survey an outlier. It was dramatically different from what the other models predicted, but those were models, not empirical evidence. I think the McKinsey study may have overstated the likelihood of employers dropping coverage too, but we don’t really have any foolproof methods for predicting what will happen. Much of this is just intuition and speculation. In many cases, if employers acted rationally they would drop coverage, but they don’t always act rationally, and competitive pressure and status quo bias play big roles as well.

      Personally, I discount the applicability of the Massachusetts experience to what we’ll see nationwide with PPACA. I think the MA market is too different from the rest of the country to generalize from. I could be wrong, of course, but like I said about employer dumping, we just don’t have a lot of experience we can use as a guide.

      I think that is what’s missing: not everyone is willing to acknowledge the uncertainty and the extent to which we’re entering uncharted territory. We don’t have great empirical evidence for what will happen to the employer market, so when people saw the McKinsey study they immediately attacked its methodology, and even McKinsey’s integrity, because the results did not agree with their prior convictions. This one confirms their previously held beliefs, so they don’t even mention that it, too, is just a survey without published methodology. It’s not as if we have a strong body of evidence and research about what happens when you mandate health insurance coverage. Even the MA experience may not be a good guide, given the aforementioned differences in markets and the lack of any real enforcement of the PPACA mandate.

      • Yet we do have the MA experience, which does line up with what the models predict. While you may discount it, it still exists. If the methods here pan out, we have the McKinsey outlier, and everything else.

        Steve

      • Just read through that whole thing; the majority of it is discussing models, like I mentioned. It’s not that I don’t believe or trust the models, but they have their limitations. And like I said, given the lack of much empirical evidence we can use, it is difficult to know how much weight to give the models. I use the Lewin model, which similarly finds very little employer dumping, nearly every day. But I recognize that a model is only as good as the data going in and the methodology and assumptions behind it, and we can’t say definitively how employers will react.

        The one survey they link to is from Mercer, and it has some high-level, summarized results, but again no methodology or examples of the questions they asked. I don’t say that to discredit the survey. I just find the reactions to differing results very interesting. McKinsey was attacked, accused of having an agenda, and accused of asking biased questions to get a certain response, simply because people didn’t like the result. Mercer comes out with a result that they like and it’s held up as evidence that McKinsey is wrong. Why is Mercer’s survey credible, when we know less about its methodology than we do about McKinsey’s? Because it agrees with the models? It’s somewhat circular.

        Again, I don’t claim to know the right answer here; I just find the reactions to this very telling.

        • No, McKinsey was treated as it was here because they stated their results are not meant to be predictions. That says it all. And those are their own words.

          • Yes, their results are not predictions, and neither would the results of any similar survey be. That just brings us back to employer surveys and speculation about employer behavior versus microsimulation models, which are trying to predict the outcomes of an entirely new regulatory and competitive marketplace for insurance. It’s anyone’s guess which will be the better predictor. I have opinions that push in both directions, but we’re all just guessing. I just found the reactions to this survey today interesting in contrast to what happened when McKinsey’s was released. I do not think one can make a strong case that the difference in those reactions was justified.

            McKinsey was not given the benefit of the doubt, and all kinds of accusations of flawed and leading questions were hurled at them (all of which turned out to be baseless once they did release their questions), yet with a similar lack of detail on methodology and question wording we’re expected to trust this OW survey. I know and respect a bunch of people at Oliver Wyman, and I’ve no reason to believe their methods were not sound. I just wonder whether they would have received the McKinsey treatment if the results had been unfavorable to PPACA proponents.

        • I am actually too lazy to go back and confirm this, but as I recall, McKinsey refused to release their methodology at first. They released a report that was an outlier compared with prior estimates. Then, they refused to release their methods. How could that not provoke a response? If Austin published a study “proving” that high deductibles increased the costs of care, but refused to release the methods, what would your response be?

          Steve

          • You do remember correctly: they refused to release the methods at first, and once they did, all fears were put to rest. I think the refusal was calculated: they knew the methods were sound, and they wanted to let people look silly by continuing to make all kinds of bogus accusations. If people had said things like “these results are surprising and don’t match other estimates; we’ll withhold judgment until we see their methods,” it would have been one thing. But instead a bunch of people went on the attack and accused McKinsey of all sorts of tomfoolery.

            I’ll turn the question around: if you did a study with controversial results but felt very confident you had used sound methodology, and people immediately accused you of trickery and questioned your integrity, wouldn’t you be tempted to let it play out for a while, knowing that ultimately you’d be vindicated and their attacks would look foolish? I know I would. They were basically being accused of outright lying. If that were me, I’d let the mudslingers keep slinging for a while.

            But again, I do not intend this to be a defense of the McKinsey report or an endorsement of its results. I too am skeptical of it and unsure how this will play out. But we have about as much information on this Oliver Wyman report as we did the day the McKinsey report came out, and the reactions are interesting to say the least.

            • We have covered the McKinsey study and this post was not about it. I’ve entertained a further discussion of it here and said exactly what I care to say about it. Moreover, you’ve agreed with what I’ve said. I think we’re done.

          • They did release their methods, eventually, and Aaron blogged about them here. Along with that, they stated that they did not intend their survey to be predictive.

    • And I’m not at all surprised that this was oversold a bit by Think Progress. Quite disappointingly, they have become very unreliable when it comes to presenting things fairly and accurately. I try to get to the primary sources whenever I see something reported by TP these days.

      • I’m very weary of these kinds of statements. Your swipe is quite broad and all too easy to make. One could say something similar about just about anybody, including me or you. It just isn’t that helpful.

        Is there something about the numbers they present that you think is wrong? I referred to Volsky’s post because it presented the estimates in an easy-to-digest form: good for a quick summary. My point was that McKinsey was an outlier, even higher than Holtz-Eakin, which I’ve discussed elsewhere on the blog. Do you disagree with that point?

        • It’s broad because in a number of different areas I’ve seen TP get things very, very wrong, including some cases where the error is so egregious that it’s very difficult not to assume it is intentional, an accusation I am not usually inclined to make. I disagree that you could say that about just anybody, including me or you. It’s not a “swipe”; it’s a simple factual statement that I’ve seen them either get facts wrong or misrepresent facts enough times that I always try to verify whatever I read there.

          The problem with the post is that it is primarily comparing the results of microsimulation models with a qualitative survey (no, I do not disagree that it is an outlier compared to these models). These models definitely have value, but also some severe limitations (as do qualitative surveys). The post also calls the Avalere report “a comprehensive review of employer surveys,” but the only source provided is a link to a Mercer survey with no information on methodology or questions. I don’t think anyone disputes that the McKinsey survey was dramatically different from the models, which mostly get the same results. It’s of course quite possible (likely even) that there are well-defined surveys which also dramatically differ from McKinsey, but one would not be able to tell that from the evidence given. And in the context of “McKinsey didn’t give us their methods so we don’t believe them,” it’s rather ironic that we don’t see the methods for these other surveys either, yet are expected to accept their results.

          Really, in a roundabout way it gets right back to my main point: this was not about defending McKinsey or their survey, just pointing out how people react differently based on how the results compare to their preconceived ideas.