I am adding a note to this post for two reasons. The first is that people I respect have pointed out to me that it is unlikely that Megan wrote the title for her piece, as editors usually choose the titles. That’s happened to me many times. The title, coupled with the sentence saying “we don’t have the uninsured problem we thought we had,” made me, and many others, read this in a way it’s possible McArdle did not intend. Had I emailed her to ask (following my own advice at the end), I might have learned this before I posted.
I owe her an apology, both privately and publicly, for making those mistakes. I have edited this piece to reflect that I’m responding to an argument about interpreting the number of uninsured, not what she may have intended, and to make it less snarky, which was wrong to begin with.
I’ve sometimes been hard on Megan McArdle in the past. I think she has, at times, ignored obvious answers because they were inconvenient to her argument. Other times, she has cherry-picked evidence in order to make counter-intuitive claims, like that insurance doesn’t matter. Of course, if that were the case, then the rich (who should know how to make decent investments) would forgo it. Does that happen?
Yesterday, she wrote a piece at Bloomberg that could be interpreted as asking if estimates for the uninsured are overblown. She uses data from the Medical Expenditure Panel Survey:
A third possibility is that we don’t have the uninsured problem we thought we had. Most of the estimates we have for the uninsured population are really pretty crude. For one thing, we tend to treat the U.S.’s roughly 48 million uninsured as if they were part of a discrete group, like Mormons or people who know how to play the tuba. But in fact, people change insurance status all the time. If you look at data from the Medical Expenditure Panel Survey, you’ll see that a lot of people are uninsured for at least a month, but if you look at who is uninsured for as long as two years, that number falls by two-thirds. If you extend the reference period out to four years, just 7.6 percent of the population counts as “uninsured.” That is not a negligible number, but it is less than half of the 48 million we think of as uninsured. And it’s heavily skewed toward immigrants and the young:
MEPS says the number of people who have been uninsured for four years or more is 7.6% of the population. That’s about 20.4 million people. But why did she pick four years?
The census (which is often quoted when talking about the numbers of uninsured) doesn’t ask if you’ve been uninsured for four contiguous years. It asks if you’ve been uninsured for the past year. It’s obvious that fewer people will have been uninsured for four contiguous years than for one year. If we were going to compare apples to apples, we’d want to look at the census one-year estimate and a MEPS one-year estimate. Two documents down from the one she picked is this one. It reports that, most recently, 38.7 million people were uninsured for an entire year. In 2009, that number was higher, more than 41 million. The census that year had it at 46 million.
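A back-of-the-envelope comparison makes the point concrete. This sketch uses the rounded 2009 figures cited above (41 million from MEPS full-year, 46 million from the census); the exact values would shift the percentage slightly:

```python
# Rough comparison of the 2009 full-year uninsured estimates cited in
# the post (rounded figures; MEPS full-year vs. the census estimate).
meps_full_year_2009 = 41_000_000  # MEPS: uninsured the entire year
census_2009 = 46_000_000          # Census estimate for the same year

gap = census_2009 - meps_full_year_2009
relative_gap = gap / census_2009

print(f"Absolute gap: {gap:,}")            # 5,000,000
print(f"Relative gap: {relative_gap:.1%}") # 10.9%
```

An 11% difference between two surveys with different methodologies is a real gap, but nothing like the two-thirds drop you get by switching the reference period from one year to four.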
Are there differences between those numbers? Sure. Are they monstrous? No.
But which should policy makers use? If only someone had an analysis explaining the differences between the various ways to measure uninsurance, including CPS, MEPS, NHIS, and SIPP. Oh, wait, someone does. I highlighted it here. From that report (emphasis mine):
It is clear that the estimates of the uninsured may vary depending upon the data source and data adjustments. The decision of which survey to use may depend on the purpose of the analysis. For credible state-level estimates, the CPS is the only source for all 50 states. Larger sample sizes enable CPS and NHIS to produce more reliable estimates for subgroups of the population (i.e. children, low-income workers, etc.). MEPS and SIPP are the best sources for examining changes in individuals’ insurance status over time and NHIS, MEPS, and SIPP can provide point-in-time estimates of the uninsured.
By the way, that document was written in 2005, when George W. Bush was President.
MEPS provides insurance status by month, for every month of the year. A researcher can use that to examine both short (one month) and long (one year) spells of uninsurance. By combining data across multiple years, he or she can examine longer spells. Therefore, you’re going to get different estimates in the literature because researchers are asking different questions and using the data differently.
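To see why the reference period matters so much, here is a minimal sketch of how monthly insurance indicators (the kind of person-month data MEPS provides) can be turned into uninsurance spell lengths. The data below are made up for illustration, not drawn from MEPS:

```python
# Hypothetical 24 months of insurance status for one person:
# True = insured that month, False = uninsured.
from itertools import groupby

months = [True] * 3 + [False] * 5 + [True] * 10 + [False] * 6

# Lengths of consecutive uninsured runs ("spells").
spells = [len(list(run)) for insured, run in groupby(months) if not insured]

print(spells)                         # [5, 6]
print(max(spells))                    # 6 -- longest spell, in months
print(any(s >= 12 for s in spells))   # False -- never a full contiguous year
```

This person counts as “uninsured at some point” under a one-month definition, but not as uninsured for a full contiguous year, let alone four. Different reference periods are answering different questions, which is exactly why the literature reports different numbers.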
McArdle continues with this:
Our projections are based on what we thought we knew about the uninsured. But what we thought we knew has turned out to be wrong before. In the three years between Obamacare’s passage and the time it went into full effect, the law created temporary high-risk pools for those with pre-existing conditions. The Congressional Budget Office projected they would cover 400,000 people, but they ultimately attracted only a quarter of that, even though they relaxed the criteria and undertook aggressive outreach programs. Where were the other 300,000 people who were willing and able to buy insurance but couldn’t get a company to sell it to them? It’s possible that they simply never existed.
There are tons of reasons those high-risk pools fell short of expectations. Some of us even predicted the results. I entitled one post “Told you so”.
Plus, let’s take a breather and actually look at CBO documents. Did they think that 50 million would take advantage of new insurance in the exchanges and Medicaid expansion? No. Even projected out into the next decade, it looks like they believe that about 25 million will gain coverage. Does that sound reasonable based on all the data above, including McArdle’s?