The following originally appeared on The Upshot (copyright 2015, The New York Times Company). A version of this article appeared in print on June 23, 2015, on page A3 of the New York edition.
People who have health insurance have less health-related financial stress. That’s a not-so-surprising finding from a recent survey by the Centers for Disease Control and Prevention.
There’s good reason to expect the Affordable Care Act to reduce financial strain. Exposure to health care costs fell for those who gained coverage, as it did for those whose coverage became more generous.
But even those families whose health insurance coverage didn’t change may have benefited. In 2013, 32.2 percent of uninsured families had problems paying medical bills, but that dropped to 31.2 percent in 2014. There may have been less need for people to pitch in if their formerly uninsured family members obtained coverage.
Another possibility is that those who obtained coverage may have been in a better position to financially assist family members who still lacked it. This could partly explain why the financial condition of even the uninsured improved after the Affordable Care Act’s coverage expansion. Other factors, like an improving economy, could also help explain the changes.
The C.D.C. looked at data from more than 370,000 people collected through the National Health Interview Survey. It found that in the six months after the introduction of the Affordable Care Act in January 2014, the percentage of people under age 65 who were in families having problems paying medical bills was lower than it had been before — 17.8 percent vs. 19.4 percent in 2013. Smaller reductions in financial strain from medical bills had occurred in prior years, perhaps because of slow improvements in the economy after the end of the Great Recession.
The C.D.C.’s findings are consistent with another recent survey by the Commonwealth Fund, as reported by my colleague Margot Sanger-Katz. It found that the percentage of adults experiencing trouble with a medical bill or medical debt declined to 35 percent in 2014 from 41 percent in 2012.
Coverage expansions that predate the Affordable Care Act were also associated with reductions in health-related financial difficulty. After Oregon expanded its Medicaid program by lottery in 2008, out-of-pocket medical expenses exceeding 30 percent of income fell more than 80 percent, according to an analysis published in The New England Journal of Medicine.
Massachusetts’ 2006 coverage expansion law, which resembles the Affordable Care Act in many respects, was also associated with better financial conditions for families, a Federal Reserve Bank of Chicago study found. For instance, when coverage expanded, the two-year bankruptcy rate fell by 20 percent, the amount of credit balances past due fell by 22 percent, the fraction of debt past due fell by 10 percent, and credit scores improved by 0.4 percent.
The evidence is clear: health insurance expansion has decreased financial strain, even if it doesn’t offer complete financial security to everyone. Financial security, after all, is the point of insurance. We might hope for more from health insurance expansion (like improvements in health as well), but at a minimum it reduces financial strain, even if incompletely.
There’s been a lot of talk about “narrow” networks in ACA plans, which trade off limited provider coverage for lower premiums. Using a new integrated dataset of physician networks in plans on the federal and state marketplaces, our latest LDI/RWJF Data Brief describes the breadth of physician networks across all silver plans sold in 2014. Using consumer-friendly “t-shirt” sizing, we find that more than 40% of networks can be considered small or x-small, including 55% of networks in HMOs and 25% of PPO networks.
That’s the first paragraph of Janet Weiner’s post, which describes the findings in greater detail, including some nice charts. I don’t want to steal her thunder, or that of her co-author (Dan Polsky) and sponsors. So, click through and read.
Let me just go a bit meta: If you’re not a researcher, you may be among those who think work like this is fairly straightforward. “What? You just grab the data, do some manipulations in Excel, make some charts. Boom! A data brief.”
Well, no. The problem is, data like these don’t exist in research-ready datasets. The investigators had to painstakingly scrape them from hundreds of web pages and documents, all in different formats, requiring different approaches. I know from talking to Dan that it was an enormous undertaking.
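To give a flavor of why this is hard: each insurer publishes its provider directory in a different format, so every source needs its own parser. Here’s a minimal, hypothetical sketch, with field names, HTML structure, and functions entirely of my own invention (this is not the investigators’ actual code):

```python
from html.parser import HTMLParser

# Hypothetical: one insurer lists physicians in <li class="provider"> tags.
class ProviderListParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.providers = []
        self._in_provider = False

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "provider") in attrs:
            self._in_provider = True

    def handle_endtag(self, tag):
        if tag == "li":
            self._in_provider = False

    def handle_data(self, data):
        if self._in_provider and data.strip():
            self.providers.append(data.strip())

# Another insurer might publish pipe-delimited text instead; each new
# format needs its own handler, which is what makes the scraping so
# laborious across hundreds of pages and documents.
def parse_pipe_delimited(text):
    return [line.split("|")[0].strip() for line in text.splitlines() if "|" in line]

html_page = '<ul><li class="provider">Dr. Smith</li><li class="provider">Dr. Jones</li></ul>'
parser = ProviderListParser()
parser.feed(html_page)
print(parser.providers)                               # ['Dr. Smith', 'Dr. Jones']
print(parse_pipe_delimited("Dr. Lee | Cardiology"))   # ['Dr. Lee']
```

Multiply that by every insurer, plan, and document format on the marketplaces, and the scale of the undertaking becomes apparent.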
And now you should be annoyed. Here’s what you should be thinking: “Why on earth is it so hard for researchers, let alone consumers, to obtain network information? Shouldn’t this be readily available and easily comparable across plans? How are people to meaningfully shop on exchanges without it?” You should be enraged about this!
Ah, now you’re with me, and us. This work was not only hard, but incredibly important. It’s a bit weedy to those of us who are NOT shopping for exchange coverage and, perhaps, NOT worked up about policy nuances pertaining to doing so. But if you can put yourself in the mind of someone who is shopping for coverage, or who does care about those nuances, you can see what a big deal this is.
A medical service will not get reimbursed by a public program or private insurer without a proper billing code. Where do these billing codes come from?
In a prior post, I wrote about Current Procedural Terminology (CPT) billing codes—which correspond to physician services—and the American Medical Association (AMA) committee that governs them. In it, I mentioned that Medicare’s Healthcare Common Procedure Coding System (HCPCS) is based in part on CPT codes. In particular, level I HCPCS codes are CPT codes.
But there are level II and level III HCPCS codes too. What are they and where do they come from? Let’s dispense with the easy part first. Level III codes were discontinued at the end of 2003 and had corresponded to “specific programs and jurisdictions” of Medicaid programs, Medicare contractors, and private insurers. Since they don’t exist anymore, let’s move on.
Level II HCPCS codes are for non-physician services, including drugs, devices, medical supplies, ambulance services, and the like. Since these are not CPT codes, and the AMA governs CPT codes, who is in charge of level II HCPCS codes?
The answer is the HCPCS Workgroup, administered by the Centers for Medicare and Medicaid Services (CMS). The workgroup adds permanent level II codes under the following criteria: If the product is a drug, it must have FDA approval. If the product is not a drug, it must have been on the market for three months. In either case, the product must account for three percent or more of the “outpatient use for that type of product in the national market.” These criteria explain the need for temporary, miscellaneous codes: they are how products are billed and reimbursed before they are ready for permanent codes, during which time they can establish three months of market use and build up toward three percent of market share in their classes.
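The criteria amount to a simple decision rule. Here’s my own rough encoding of them as a sketch (the function and parameter names are hypothetical, not anything official):

```python
def qualifies_for_permanent_code(is_drug, fda_approved, months_on_market, market_share):
    """Rough paraphrase of the HCPCS Workgroup's stated criteria for a
    permanent level II code (my own encoding, not official logic)."""
    if is_drug:
        # Drugs must have FDA approval.
        if not fda_approved:
            return False
    elif months_on_market < 3:
        # Non-drug products must have been on the market for three months.
        return False
    # Either way, the product needs >= 3% of national outpatient use
    # for its product type.
    return market_share >= 0.03

# A new non-drug device, one month on the market: it must bill under a
# temporary/miscellaneous code for now.
print(qualifies_for_permanent_code(False, False, 1, 0.05))  # False
# The same device after three months with 3% market share: eligible.
print(qualifies_for_permanent_code(False, False, 3, 0.03))  # True
```

The temporary codes fill exactly the gap this rule creates: a product can be billed while it accumulates the required months of market use and market share.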
HCPCS Workgroup meetings are open to the public. It and the AMA’s CPT Editorial Panel govern nearly all medical procedure and product billing codes (excluding dental and drugs). In principle, just because something has a code doesn’t mean it’s covered by a public program (like Medicare) or a private insurer. I am unaware of an analysis that measures what proportion of permanent codes are reimbursable by, say, Medicare, but I’d love to know the answer. My guess: >95%.
You’ve just invented a new medical procedure. It’s going nowhere unless doctors can get paid for it. Why should insurers and public programs do that? Who decides? And how much should they pay? The American Medical Association (AMA) is on it.
Like me, you’re probably aware that there’s an AMA committee that effectively decides how much Medicare pays for each billable service physicians deliver to its beneficiaries. (If not, read this and/or this.) Like me, you may have been unaware that there’s another AMA committee that decides whether a physician service becomes billable in the first place. What it needs is a Current Procedural Terminology (CPT) code.
The AMA’s CPT Editorial Panel meets three times a year to assign CPT codes to new and emerging technologies. It has 17 members, 11 of whom are specialty society-nominated physicians. The Blue Cross and Blue Shield Association, America’s Health Insurance Plans, the American Hospital Association, and the Centers for Medicare and Medicaid Services (CMS) also get seats at the table. The last two seats are filled by members of the CPT Health Care Professionals Advisory Committee, which is itself primarily comprised of physicians nominated by the national medical specialty societies.
CPT codes comprise part of Medicare’s Healthcare Common Procedure Coding System (HCPCS, the rest of which I’ll discuss in a subsequent post). Though I can’t point to any document that backs me up, I’d be surprised if any medical system or insurer in the US did not track care and reimburse based on CPT/HCPCS. Medicaid is required by law to do so. The VA uses them too, which I know from personal experience.
So, the AMA’s CPT Editorial Panel wields considerable power. Without a CPT code, use of your new medical technology is not going to get reimbursed by an insurer or public program. Without reimbursement, it’s highly unlikely many physicians will use it. No use, no sales.
To obtain a category III CPT code, which covers emerging technologies, at least one of the following must hold: there is a) at least one Institutional Review Board-approved protocol of a study of the procedure or service being performed, b) a description of a current and ongoing United States trial outlining the efficacy of the procedure or service, or c) other evidence of evolving clinical utilization.
Category I codes require more evidence of efficacy, and category III codes can convert to category I when that evidence is available. You can read more about the establishment of CPT codes at the links above or here and here. The CPT Editorial Panel next meets in October. It’s open to the public, but a confidentiality statement must be signed to attend.
With few exceptions, we learned that Medicare Advantage plans pay provider rates at or close to fee-for-service Medicare rates. Similarly, Medicaid managed care plan payments are close to the relatively low Medicaid fee-for-service rates.
MA plan hospital prices are not tied to prices in the non-Medicare market, which is consistent with what we have heard from plans and other market participants. Non-Medicare physician payment rates also appear to have at most a modest relationship to MA bids, suggesting that physician payment rates may be partly anchored to FFS prices.
I never know when I’m going to need to document that claims of hospital cost shifting are still pervasive. So that my future self can easily find some, here are a few quotes from a report by HCTrends, which I’ve posted here.
“In southeastern Wisconsin, cost shifting is responsible for 35 percent of the overall commercial rates paid.”
“Cost shifting is a hidden tax on employers that affects their ability to compete economically.”
“A 2014 Milliman analysis conducted for the Greater Milwaukee Business Group found that cost shifting accounted for 35 percent of the commercial rate paid for hospital services in 2012. Milliman estimated that Medicare and Medicaid underfunding accounted for almost two-thirds of the cost shift, adding about $782 million to commercial rates in 2012. Bad debt and charity care accounted for the remaining third.”
“Medicare, however, will pay less than half that amount due to specific budget cuts mandated by the Affordable Care Act and the sequester, and an assumed productivity adjustment implemented as part of the ACA (see Chart 2). Since its inception in FY2012, the productivity adjustment has reduced the market basket update by between 0.5 and 1.0 percentage points each year.”
“Revenue reductions or payment rates that fail to keep pace with inflation force health care providers to find more efficient ways to deliver care while simultaneously improving the quality of care delivered. If those initiatives do not completely offset their government revenue shortfall, providers make up the difference by increasing the rates charged by the business community – a process known as ‘cost shifting.’ The degree to which a hospital can leverage the business community to subsidize government health programs depends on the market dynamics between health care providers and insurers.”
“Cost-shifting is real and represents a hidden tax on employers that can threaten their competitiveness.
“Cost-shifting is not a 1:1 proposition: Every $1 in government funding is not offset by a $1 increase in private payer funding. Some of it is absorbed by providers through cost-savings and other efficiency initiatives. But after years of flat or declining government revenues, hospitals have little choice but to offset these revenue losses by increasing commercial rates.”
Registering a study means specifying, in a public database like ClinicalTrials.gov, what the study’s sample criteria are, what outcomes will be examined, and what analyses will be done. It’s a way to guard against data mining and bias, though an imperfect one. A boost for trial registry was provided in 2004 by the International Committee of Medical Journal Editors (ICMJE) when it mandated registration for clinical trials published in member journals, listed here.
Publishing a study means having it appear in a peer-reviewed journal. Few people will ever look at a trial registry. Many more, including journalists, will read or hear about published studies. So, what gets published is important.
Not everything gets published. Many studies have examined trial registration incompleteness and selective publishing of registered data. [Links galore: 1, 2, 3, 4, 5, 6, 7, 8]. Perhaps as many as half of trials for FDA-approved drugs are unpublished even after five years post-approval. This is concerning, but what does it really mean? Does it imply bias? If so, is that bias different by funding source (e.g., industry vs non-industry)?
Trial registry data can be changed. That weakens the de-biasing, pre-commitment role registration is supposed to play. But sometimes changes are reasonable. After all, if you haven’t done any analysis yet and you think of a better way to do it, it’d be dumb to blindly press on with your registered approach. You should do it the right way, and you should update your registration accordingly. However, changing registry data after the study is done, e.g., to match what you did, is a lot sketchier (or could be). All changes in ClinicalTrials.gov are stored, so one can try to infer whether it’s being gamed.
A study examined changes in ClinicalTrials.gov registered data for 152 RCTs published in ICMJE journals between 13 September 2005 and 24 April 2008. It doesn’t make the registry look very good. The vast majority (123) of examined trials had changes in their registries.* The most commonly changed fields were primary outcome, secondary outcome, and sample size. The final registration entry for 40% and 34% of RCTs had missing secondary and primary outcomes fields, respectively, though more than half of the missing data could be found in other fields. Already that’s a concern, because it makes the registry hard to use if data are missing or in the wrong place. (I want to emphasize that I’m not necessarily blaming investigators for this. Maybe they deserve some blame. But maybe the registry is also hard to use. I’ve never used it, so I cannot say.)
The study found that registry and published data differed for most RCTs, including on key secondary outcomes (64% of RCTs), target sample size (78%), interventions (74%), exclusion criteria (51%), and primary outcome (39%). Eight RCTs had primary or secondary outcome registry changes after publication, six of which were industry sponsored. That’s concerning. But six or eight is a small number relative to all trials examined, so let’s not freak out.
Another study looking at all ~90,000 ClinicalTrials.gov-registered interventional trials as of 25 October 2012 assessed the extent to which registry entries had primary outcome changes and when changes were made, stratified by study sponsor.* It found that almost one-third of registered trials had primary outcome changes, changes were more likely for industry-sponsored studies, and industry sponsorship was associated with changes made after study completion date. I think we should be at least a bit concerned about that. (Again, maybe there are perfectly reasonable explanations, but it warrants some concern.)
What gets registered? When we’re talking about trials aimed at FDA approval, there are different types.* There are pre-clinical trials in which drugs are tested, but not in humans. Then there are several phases (I, II, III) of clinical trials that ramp up in terms of the number of humans in which the drug is used and shift their relative emphasis from safety to efficacy. (As you might imagine, safety is emphasized first.) Post-market trials (phase IV) look at longer-term effects from real-world use. Because trials cost money, it’s likely that drugs that make it to later trials tend to be more promising (i.e., are more likely to show positive effects).
From a set of registered trials, only a subset of which are published in the literature, how does one assess publication bias? The easy way is to look at the subset of matched published and registered trials to see what registered findings reach the journals. Do they skew positive? The hard way seems impossible: What about studies that are registered but never published? Do those harbor disproportionately negative findings? We can’t really know, but there’s a clever way to infer an answer.
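The “easy way” amounts to tabulating, within the set of registered trials, whether positive findings are overrepresented among the published ones. A toy illustration with entirely made-up data:

```python
# Each hypothetical registered trial: (outcome_positive, was_published).
# These numbers are invented for illustration, not from any real registry.
trials = [
    (True, True), (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False), (False, False),
]

def publication_rate(trials, positive):
    """Share of trials with the given outcome sign that reached publication."""
    subset = [t for t in trials if t[0] == positive]
    return sum(1 for _, published in subset if published) / len(subset)

# If positive results are published far more often than negative ones,
# the published literature skews positive relative to the registry.
print(publication_rate(trials, True))   # 0.75
print(publication_rate(trials, False))  # 0.25
```

The catch, as noted above, is that for never-published trials one usually can’t observe the outcome at all, which is why the full-registry comparison seems impossible without something like the FDA’s complete view.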
If I’m not mistaken, pre-clinical trials are also called the NDA phase, for new drug application, which examines new molecular entities (NMEs). In the NDA phase, drug manufacturers are required to submit all studies to the FDA. I infer, from what I read, that this is not true of other phases. Therefore, the NDA (or pre-clinical?) phase offers a nice test: Which subset of results sent to the FDA gets published? We might infer that the estimate applies to other trial phases, those for which we can’t see a full set of results.
A study of all efficacy trials (N=164) for approved NDAs (N=33) for new molecular entities from 2001 to 2002 found that 78% were published. Those with outcomes favoring the tested drug were more likely to be published. Forty-seven percent of outcomes in the NDAs that did not favor the drug were not included in publications. Nine percent of conclusions changed (all in a direction more favorable to the drug) from the FDA review of the NDA to the paper. Score this as publication bias. And don’t blame journal editors or reviewers: the authors wrote that investigators told them studies weren’t published because they weren’t submitted to journals.
But is this an industry-driven bias? A Cochrane Collaboration review examined and meta-analyzed 48 published studies from 1948 (!!!) through August 2011 on the subject of whether industry-sponsored drug and device studies have more favorable outcomes, relative to non-industry ones. Industry-sponsored studies were more likely to report favorable results and fewer harms.
This sounds like industry-sponsorship might produce a bias, but it could be that industry just tends to look at more favorable drugs, and does more late-phase trials.
Another study looked at this. It examined 546 ClinicalTrials.gov-registered trials of anticholesteremics, antidepressants, antipsychotics, proton-pump inhibitors, and vasodilators conducted between 2000 and 2006 to assess the association of funding source with favorability of published outcomes. Industry-funded trials were less likely to be published (32% for industry vs. 56% for non-industry). Among the 362 (66%) published trials, industry-sponsored ones were more likely to report positive outcomes (85% for industry-, 50% for government-, and 72% for nonprofit/non-federally-funded trials). Industry-funded trials were more likely to be phase 3 or 4, so maybe that explains the higher favorability of findings.
Nope. Industry-funded outcomes for phase 1 and 2 trials were more favorable as well (see chart below).
Another study, however, found no association of funding source with positive outcomes. It looked at 103 published RCTs on rheumatoid arthritis drugs from 2002-2003 and 2006-2007.
A study looked at the extent to which ClinicalTrials.gov-registered studies were published.* Its sample of 677 studies drew from trials registered as of 2000 and completed by 2006. Just over half the trials were industry sponsored, with 18% government- and 29% nongovernment/nonindustry-sponsored. Industry-sponsored trials were less likely to be published than nonindustry/nongovernment ones (40% vs. 56%), but there was no statistically significant difference compared with government-sponsored trials. In fact, NIH-sponsored trials were published only 42% of the time.
I think that’s worth emphasizing: We should be suspicious of all publication oddities and omissions, not just those associated with industry. A lot of NIH-sponsored findings, most of them in fact, never see publication either.*
* As Aaron has reminded me, not all ClinicalTrials.gov-registered studies are for drugs or devices. Many non-industry studies, for example, concern aspects of health care delivery that pose far less risk to patients. It may not make sense to analyze these alongside those for drugs and devices, which place patients at higher risk. It also may matter less if such studies change their registries, publish all their findings, or are even registered at all. In other words, given constraints on investigator resources, we might reasonably hold drug and device trials to higher standards than others.