[C]onsider a healthy consumer facing the risk of developing Parkinson’s disease in the years before the discovery of treatments that reduced the disease’s impacts on quality of life. Suppose we measure the quality of one year of life as some percentage of a year spent in perfect health. In the absence of a treatment, contracting Parkinson’s might reduce quality of life from, say, 80% of perfect health to 40%. Consider the introduction of a new medical treatment that costs roughly $5,000 per year and increases quality of life for Parkinson’s patients from 40% to 70%. If the value of perfect health for one year is $50,000, this increase in quality of life is worth $15,000 annually but costs only $5,000 annually. The traditional approach in health economics compares these two numbers to arrive at the net value of the treatment, which in this case would be $10,000 annually.
First of all, I want to flag the use of the term “value” or “net value” here. It’s consistent with what Uwe Reinhardt endorses: the difference—not ratio—of benefit and price. (Click through to the full text of his remarks.) Cost-benefit ratios (or their reciprocals), for instance, are also called “value” by some, but, as Reinhardt noted, that’s weird and inconsistent with the notion of “value” generally used in economics.
Notice that this calculation neglects the way the medical treatment’s introduction also compresses the variance in the quality of life between the Parkinson’s and non‐Parkinson’s states. Prior to the availability of treatment, Parkinson’s was a gamble that lowered quality of life by 40% of a perfectly healthy year, or a loss of approximately $20,000 per year; the treatment transforms the disease into a new gamble that lowers quality of life by just 10% of a perfectly healthy year, or a loss of just $5,000 per year. This compression in quality of life outcomes generates value for consumers who dislike risk.
It is true that the reduction in the variance of health outcomes is mitigated by an increase in the variance of healthcare spending. Before the availability of treatment, the individual may have faced no financial risk from falling ill with Parkinson’s; after its introduction, she faces the risk of a $5,000 per year expenditure. However, if the treatment is priced to generate consumer surplus, the ex post improvement in health outcomes will outweigh its financial cost. Thus, it should come as no surprise that this medical treatment lowers total risk in our example. Prior to the development of treatment, Parkinson’s imposes a risk of losing $20,000 in reduced health. After development, the risk of disease is transformed into a $5,000 financial risk plus a $5,000 health risk. In sum, this medical treatment cut the total risk of Parkinson’s in half. Furthermore, the nascent financial risk associated with purchasing treatment can be mitigated or even eliminated by health insurance.
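The arithmetic in this example is easy to verify directly. Here is a minimal sketch; all figures are the illustrative ones from the passage, not empirical estimates:

```python
# Illustrative figures from the Parkinson's example (not empirical estimates).
VALUE_OF_HEALTHY_YEAR = 50_000  # value of one year in perfect health, $

q_healthy = 80    # quality of life without Parkinson's, % of perfect health
q_untreated = 40  # quality of life with Parkinson's, untreated
q_treated = 70    # quality of life with Parkinson's, treated
treatment_cost = 5_000  # annual price of the treatment, $

# Conventional net value: benefit minus price (Reinhardt's sense of "value").
benefit = (q_treated - q_untreated) * VALUE_OF_HEALTHY_YEAR // 100  # $15,000
net_value = benefit - treatment_cost                                # $10,000

# Risk imposed by the disease, before and after the treatment exists.
risk_before = (q_healthy - q_untreated) * VALUE_OF_HEALTHY_YEAR // 100      # $20,000
health_risk_after = (q_healthy - q_treated) * VALUE_OF_HEALTHY_YEAR // 100  # $5,000
total_risk_after = health_risk_after + treatment_cost  # $5,000 + $5,000

print(net_value, risk_before, total_risk_after)  # 10000 20000 10000
```

The treatment’s conventional net value is $10,000, and it cuts the total risk of the disease from $20,000 to $10,000: in half, as the text says.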
I’ve put in bold a key assumption (or focus) that the authors apply in their analysis. They are considering only treatments that are priced such that consumer surplus is positive, in the absence of insurance. Given the widespread take-up of insurance, how many treatments are really priced this way? I guess it depends on whose consumer surplus one examines. For many treatments and many patients (but not for all), prices for the uninsured are above those that would generate positive consumer surplus for the uninsured. That fact is the source of moral hazard. Put it this way: at current prices, what does the market for Sovaldi or proton beam treatments look like without insurance? I think they’re priced precisely to account for insurance. Indeed, I think these products wouldn’t exist without insurance, which is tantamount to saying that there’d be no technology-sustaining, consumer-surplus-positive price.
Am I raising a limitation of the work here? I don’t know. (I will admit to not tracing this through all the math. Think of this as a point I’d raise in a seminar; then I’d like to hear others who know the work better tell me whether what I’ve raised is important. UPDATE: Lead author Darius Lakdawalla responded to this. You’ll find that response below. It’s a good one.)
Even if a consumer has no health insurance, technology can reduce the physical risk she faces. In the Parkinson’s example, she faced a health risk of $20,000 prior to the technology but just a $10,000 risk after it, even if no health insurance is available. Adding health insurance to the analysis would cause the risk to fall even lower, to just $5,000. [… P]roviding consumers with access to better medical technology by encouraging medical innovation may reduce risk more efficiently than providing them with health insurance.
Their conclusion (after analysis),
New medical technologies provide substantial insurance value above and beyond standard consumer surplus. Under plausible assumptions, the insurance value is roughly equal to the conventional value. Accounting for risk thus doubles the value of medical technology over and above conventional calculations.
The ability of medical innovation to function as an insurance device influences not just the level of value, but also the relative value of alternative medical technologies. The conventional framework understates the value of technologies that treat the most severe illnesses, compared to technologies that treat mild ailments. This helps explain why health technology access decisions driven by cost‐effectiveness considerations alone often seem at odds with public opinion. For example, survey evidence suggests that representative respondents evaluating equally “cost‐effective” technologies strictly prefer paying for the one that treats the most severe illness.
I really like this because it aligns how humans tend to feel about the value of medical technologies with economic analysis, explaining why standard cost-effectiveness approaches seem wrong to us. This observation is what gives rise to the rule of rescue.
UPDATE: Here’s Lakdawalla’s response:
In fact, this is not a strong assumption, even for a high-cost drug like your Sovaldi example. To take one example, even the UK’s notoriously stingy health technology assessment agency thinks Sovaldi meets that bar quite easily.
To understand why, it helps to be a bit more literal about the issue. Drugs that generate surplus in the sick state generate a health benefit whose value exceeds the full price of the drug. That is, the gain in quality-adjusted life-years (QALYs) multiplied by the value of a QALY exceeds the full price of the drug. This is the same as saying that the drug’s cost per QALY is below the value of a QALY. In the case of Sovaldi, the UK concluded that its cost per QALY comes in under $50K. Since a QALY is almost surely worth more than that, it follows that Sovaldi generates surplus in the sick state, even when its full price is considered, and even according to the UK.
One caveat is that drugs are priced to hit cost-effectiveness thresholds in markets that perform this analysis — like the UK — but not necessarily in the US. However, most of the time, this ends up being largely a wash. Let’s stick with the Sovaldi example to illustrate. Sovaldi costs $58K in the UK. Large private insurers in the US are probably paying 10-30% more than this, depending on their size and bargaining leverage. This is a pretty typical price differential between UK and US payers. However, the UK’s threshold of $50K/QALY is almost surely far more than 30% below the revealed-preference willingness to pay for a QALY in the US. (For example, the labor literature says the value of a statistical life-year is about $200-300K. We have some work showing that metastatic cancer patients are willing to pay about $300K per life year. Etc.) Thus, on balance, Sovaldi is generating surplus in the sick state even at US prices.
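Lakdawalla’s back-of-the-envelope claim can be traced numerically. The sketch below uses only the figures quoted in his response; the 30% markup and the $250K US value per QALY are assumptions taken from the quoted ranges, not precise data:

```python
# Figures quoted in the response above; the markup and US value per QALY
# are assumptions drawn from the quoted ranges, not precise data.
uk_price = 58_000            # Sovaldi's UK price, $
us_price = uk_price * 1.30   # upper end of the quoted 10-30% US differential

uk_threshold = 50_000        # UK willingness to pay per QALY, $
us_value_per_qaly = 250_000  # midpoint of the quoted $200-300K US range, $

# If Sovaldi just clears the UK threshold, it delivers at least
# uk_price / uk_threshold QALYs. Value those QALYs at US willingness to pay.
qalys_gained = uk_price / uk_threshold         # >= 1.16 QALYs
us_benefit = qalys_gained * us_value_per_qaly  # roughly $290,000
surplus_in_sick_state = us_benefit - us_price  # positive, by a wide margin

print(round(us_price), round(us_benefit), round(surplus_in_sick_state))
```

Even at a 30% markup over the UK price, the value of the health gain in the sick state dwarfs the US price — the sense in which the surplus assumption is not a strong one.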
Of course, the spirit of your point is still correct, because there are non-trivial numbers of drugs that fail to meet this bar. In addition, if sick people were better insured against the financial risk of illness, more drugs would generate surplus in the sick state, because the willingness to pay for health would go up among the sick. This is the sense in which financial insurance and medical technology are complements.
Antibiotic resistance is a complex social problem, with alarming global implications. That’s why it is exceedingly good news that the White House released the full National Action Plan today. We’ve known that antibiotic resistance is a problem for more than 70 years, but today is the first time any Administration has taken the threat this seriously. It’s the boldest move by any President on this issue. Ever.
Do I wish it had gone further? Sure, but I’m an academic researcher. I always have new questions to explore. The report needs more heft on what happens to reimbursement after FDA approval, for instance.
But look at the solid targets across many areas, the goals set in agriculture, the emphasis on global and regional coordination, and the significant attention to diagnostics. Not just a good first step, but a dozen good steps.
Congress should fund this as an insurance policy against a post-antibiotic era.
Almost two years ago, John Green did a Vlogbrothers video on why health care costs so much in the United States. It relied heavily on TIE’s series on the same topic, and if you haven’t seen it, then you’re odd, because it’s had almost 6.5 million views:
Needless to say, this video – more than any other thing – led to Healthcare Triage being possible. Anyway, this week, John did a video on “whether Obamacare is working” 5 years later, and it also relies on some Upshot and other TIE-related material. It’s worth a watch as well:
I’ve been on record, both here and on Twitter, as skeptical that a doc fix would ever pass. I’ve also been skeptical, both at the Upshot and in talks, that this Congress could pass a CHIP extension. Evidently, the House is doing everything in its power to prove me wrong:
The House overwhelmingly approved sweeping changes to the Medicare system on Thursday, in the most significant bipartisan policy legislation to pass through that chamber since the Republicans regained a majority in 2011.
The measure, which would establish a new formula for paying doctors and end a problem that has bedeviled the nation’s health care system for more than a decade, has already been blessed by President Obama, and awaits a vote in the Senate. The bill would also increase premiums for some higher income beneficiaries and extend a popular health insurance program for children.
The legislation, which passed on a 392-to-37 vote, embodies a rare and significant agreement negotiated by Speaker John A. Boehner and the House Democratic leader, Representative Nancy Pelosi of California, two leaders who are so often at odds with each other.
It’s been so long since I’ve seen a bipartisan effort to pass anything substantial that I really don’t know how to process it. I’m literally stunned.
1. “Your piece is bullshit.”—Though compelling, this is not evidence based, whereas my piece was.
2. “Cost shifting is a thing because hospital margins vary by payer.”—For instance, see this (PDF). That margins vary by payer is fully acknowledged by everyone writing about cost shifting, including me in my piece. Payer-specific margins are evidence of price discrimination or cross-subsidization. They’re not, by themselves, evidence of cost shifting.
That hospitals charge different payers (health plans and government programs) different amounts for the same service even at the same time is a phenomenon well known to economists as price discrimination (Reinhardt 2006). That hospitals charge one payer more because it received less (relative to costs or trend) from another also is widely believed. This is a dynamic, causal process that I call cost shifting, following Morrisey (1993, 1994, 1996) and Ginsburg (2003), among others. Price discrimination and cost shifting are related but different notions. The first depends on differences in market power, the ability to profitably charge one payer more than another but with no causal connection between the two prices charged. The second has a direct connection between prices charged. In cost shifting, if one payer (Medicare, say) pays less relative to costs, another (a private insurer, say) will necessarily pay more. Whereas cost shifting implies price discrimination, price discrimination does not imply that cost shifting has occurred or, if it has, at what rate (i.e., how much one payer’s price changed relative to that of another).
Price discrimination is rampant: airline seats, hotel rooms, theater tickets and many other goods and services sell for different prices to different purchasers. Are they all cost shifting? Is the price I pay higher because you got a better deal in all these circumstances? That’s implausible, but if you want to believe it, you need to demonstrate that causal connection with more than pointing to payer-varying margins. Doing otherwise is confusing correlation with causation.
3. “Cost shifting has happened before. It’s not impossible.”—This is acknowledged in my piece. The most recent work doesn’t support cost shifting, however. Things were different at other times and can always change in the future. Also, an average effect may mask market-specific variations.
4. “You ignored cost shifting from the uninsured.”—My piece did not address this, that is true. It’s a different topic. The problem is, the literature on cost shifting from the uninsured is much thinner and of lower methodological quality than that for Medicare/Medicaid cost shifting. I summarized it in my Milbank Quarterly paper. See also this post.
As such, I don’t think there’s enough strong, empirical work on this to make confident, evidence-based statements. However, one could very reasonably argue that cost shifting from the uninsured is subject to the same economics as is cost shifting from Medicare/Medicaid. Hence, on that basis, I would conclude it’s likely very small to nonexistent.
5. “Hospitals could just refuse to accept Medicare and Medicaid patients. I heard a rumor that could happen.”—I’m skeptical hospitals could or would do this, but I could be wrong. Maybe a small number will try it. Nevertheless, this doesn’t really affect my cost shifting argument. Plus, I could hardly be held accountable to rumors. What I think is more likely is the following point, the best among those I received.
6. “It could be that lower public rates, combined with rigid social or legal norms about adequate care (limiting cost cutting), will drive hospitals out of business. This could lead to greater consolidation in the industry, increasing hospital market power. In turn, that could lead to higher private prices.”—This is a great theory, and one I’ve pondered. (You can see elements of it written into my Milbank Quarterly and HSR cost shifting papers.)
By the way, I think that most hospitals trying to deny care to Medicare and Medicaid patients (per idea #5, above) would go out of business, which turns idea #5 into idea #6.
In any case, this theory of increasing consolidation, in part driven by lower public rates, has some weak support. As I wrote in my HSR paper on cost shifting, some have estimated that 15% of hospitals could lose profitability due to planned reductions in Medicare payment rates. Some would undoubtedly close for this reason, which would increase consolidation in the industry. And that would likely push private prices upward. This is all projected and speculative.
One thing I like about this theory, though, is that it highlights the important, mediating factor: market power. Anyone wishing to understand cost shifting needs to pay close attention to this. Too much discussion of “cost shifting” ignores it, inviting magical thinking—as if hospital costs are fixed and simply must be shifted, without regard to the market power necessary to do so. Market power is the more useful concept. (See the work of Stensland et al. and Michael Morrisey.)
Anyway, there is zero direct, hard evidence for the causal chain expressed in this response. The individual who offered it to me admitted as much. On the other hand, there is evidence consistent with the theory that when public rates go down, so do private ones, as my piece described.
Even if the causal chain offered is possible, there are reasons to think it’s not the likely (or only) set of dominoes to fall. Perhaps our social or legal norms about adequate care are not yet binding because there’s so much waste in the system (there’s evidence of that!). It’s likely that many hospitals can become more efficient when prices are cut before they go bust. (See, again, Stensland et al. and also Romley et al.)
Ultimately, I have to go with the evidence here. Cost shifting seems not to be happening, according to the most recent, high quality work. Prior to that, yes, it did occur, but at a relatively low rate. Once upon a time, 30 years ago, cost shifting was huge. That’s never happened since, and it’s high time we stopped thinking that massive cost shifting is inevitable. Responses to my piece illustrate that the cost shifting idea is strangely hard to shake, despite the evidence.
The following originally appeared on The Upshot (copyright 2015, The New York Times Company).
To hear some hospital executives tell it, they have to make up payment shortfalls from Medicaid and Medicare by charging higher prices to privately insured patients. How else could a hospital stay afloat if it didn’t?
This would be impossible if hospitals were compensating for lower Medicare revenue by charging private insurers more. (Under different market conditions in prior eras, but not today, a few studies found some evidence that hospitals made up shortfalls from one payer with higher prices charged to another. Some are reviewed in a paper by me in Milbank Quarterly, and older ones are summarized in work by Michael Morrisey.)
The theory that hospitals charge private insurers more because public programs pay less is known as cost shifting. What underlies this theory is that a hospital’s costs — those for staff, equipment, supplies, space and the like — are fixed. A procedure or visit simply takes a certain amount of time and requires a specific set of resources. Therefore, if Medicare, say, does not pay its full share of those costs, a hospital is forced to offset the loss with higher prices demanded of private insurers.
The cost shifting theory goes back decades. But economists have long been skeptical of it, pointing to two key weaknesses. One is that it assumes hospital costs are immutable. We should be just as suspicious of such claims in health care as we would be for any other industry.
Jeffrey Stensland, Zachary Gaumer and Mark Miller — who serve on the commission that advises Congress on Medicare payment policy — offered a different view in a 2010 article in Health Affairs. Hospital costs, they said, can change and do so in response to market forces. They found that hospitals that face little competition are less efficient and have higher costs. With few competing hospitals to turn to, private insurers have little choice but to cover those high costs. But Medicare’s prices are fixed and are therefore low relative to the high costs of these inefficient hospitals.
Conversely, hospitals in more competitive regions are more efficient and can earn a profit on Medicare prices. But, because of competition, they must charge lower prices to private insurers. Put it together and it is hospitals’ underlying costs, driven by competition — not cost shifting — that lead to differences in prices charged to insurers and Medicare shortfalls or profits. This theory was conveyed in a report to Congress in 2011.
Another weakness of the cost shifting theory is that it runs counter to basic economics. Hospitals that maximize profits, or even maximize revenue to fund charity care, would not raise private prices in response to lower public ones. In fact, such a hospital would already be charging the highest possible prices to all payers. And, instead of raising prices to one insurer if another paid less, it would do exactly the opposite. Prices charged to two types of customers would move together, not in opposition, for the same reason they do in other industries.
If a theater finds that bulk ticket purchasers are unwilling to pay as high a price as expected — perhaps because demand by tourist groups and corporations is down — it wouldn’t raise ticket prices for individual purchasers. Because it had filled fewer seats than anticipated from bulk sales, it would reduce prices to others in order to increase sales volume. With seats to fill, when bulk purchasers pay less, so do individual ones. Likewise, retailers charge lower prices to clear inventory, not higher ones to make up for less revenue from early purchasers. Economists have shown that the same logic applies to hospitals: They shift volume from Medicare and Medicaid to privately insured patients by lowering private prices in response to lower public ones — a spillover effect.
Though hospitals don’t seem to cost shift, it remains true that they do cross subsidize. That is, more profitable customers and services enable the provision of less profitable ones. That’s often confused with cost shifting, but there’s a key difference. Cross subsidization isn’t a dynamic process. If one customer becomes less profitable, that doesn’t automatically cause the hospital to charge another more, as the cost shifting theory demands.
The evidence is clear: Today, hospital cost shifting is dead, and the spillover effect reigns. A consequence is that public policy that holds or pushes down Medicare and Medicaid prices (or their growth) could put downward pressure on the prices hospitals can charge all their customers and, in turn, on the premiums we pay to insurers.
It’s natural, then, that hospital executives continue to promote the idea of cost shifting. The widespread belief they encourage — that lower public payments lead to higher private premiums — could foster support for larger public payments. It may be a politically useful argument, but it is an economically flawed one.
Importance Little is known about the deadoption of ineffective or harmful clinical practices. A large clinical trial (the Normoglycemia in Intensive Care Evaluation and Survival Using Glucose Algorithm Regulation [NICE-SUGAR] trial) demonstrated that strict blood glucose control (tight glycemic control) in patients admitted to adult intensive care units (ICUs) should be deadopted; however, it is unknown whether deadoption occurred and how it compared with the initial adoption.
Objective To evaluate glycemic control in critically ill patients before and after the publication of clinical trials that initially suggested that tight glycemic control reduced mortality (Leuven I) but subsequently demonstrated that it increased mortality (NICE-SUGAR).
Design, Setting, and Participants Interrupted time-series analysis of 353 464 patients admitted to 113 adult ICUs from January 1, 2001, through December 31, 2012, in the United States using data from the Acute Physiology and Chronic Health Evaluation database.
Main Outcomes and Measures The physiologically most extreme blood glucose level on day 1 of ICU admission defined glycemic control as tight control (glucose level, 80-110 mg/dL; to convert to millimoles per liter, multiply by 0.0555), hypoglycemia (glucose level, <70 mg/dL), and hyperglycemia (glucose level, ≥180 mg/dL). Temporal changes in each marker were examined using mixed-effects segmented linear regression.
So here’s the deal. Some years ago, a study came out which said that there was a benefit to keeping people in the intensive care unit under tight glycemic control. This meant that we had to monitor people’s glucose levels closely and keep them between 80 and 110 mg/dL. The rationale for this was based on lab data and observational studies showing that tight control was associated with less hyperglycemia, fewer infections, and a greater chance of survival.
When a large randomized controlled trial was finally done, it showed that providing tight glycemic control to mostly surgical patients in ICUs led to 1 life saved for every 29 patients treated. That’s pretty awesome. So this became recommended practice.
Of course, this being medicine, soon we were providing tight glycemic control not only to critically ill surgical patients, but also non-surgical patients. Cause that’s what we do.
Later, another study was done, called the Normoglycemia in Intensive Care Evaluation and Survival Using Glucose Algorithm Regulation (NICE-SUGAR) study. This was the biggest multinational RCT to examine tight glycemic control in a varied cohort of medical and surgical ICU patients. It showed that tight glycemic control increased (not decreased) the risk of severe hypoglycemia and increased 90-day mortality.
Needless to say, these new data made everyone pause. They also led to some big changes in international guidelines modifying their recommendations for the management of blood glucose in critically ill patients. The original study showing a benefit was published in 2001. The bigger study showing harm was published in 2009.
This research looked at how practice changed before and after the publication of the first study and the second study from January 1, 2001 through December 31, 2012.
So before the publication of the first trial, about 17% of admissions to the ICU had tight glycemic control, 3% had hypoglycemia, and 40% had hyperglycemia. After publication of the first trial, the share of admissions with tight glycemic control rose by 1.7% per quarter, the share with hypoglycemia rose by 2.5% per quarter, and the share with hyperglycemia fell by 0.6% per quarter.
This is consistent with what we’d expect a slow, but steady adoption of tight glycemic control to do.
However, after the publication of the second trial, there was no change in the percent of patients with tight glycemic control or hyperglycemia. Here’s how tight glycemic control changed over time:
There are a few things worth noting here. The first is that it is hard to change physician behavior. Even with the first trial, it took years for people to adopt the use of tight glycemic control. But what’s even more important to see is that as hard as it is to get them to do something, it may be even harder to get them to stop doing something.
Tight glycemic control is more involved; it requires more activity, more intervention. It feels like you’re caring for patients. Regular monitoring, especially after years of tight glycemic control, feels like ignoring patients and leaving them in danger. It’s harder to do.
Unfortunately, doing more often does harm. It also usually costs more money. There’s very little incentive for industry to encourage this type of research. It’s a public good, and we’ve got to invest public money in it.
Due to the Affordable Care Act and other recent laws and regulations, funding for substance use disorder (SUD) treatment is on the rise. In the 2000s, the Veterans Health Administration (VA) implemented several initiatives that increased funding for SUD treatment during a period of growth in demand for it. A key question is whether access to and intensity of treatment kept pace or declined. Using VA SUD treatment funding data and patient level records to construct performance measures, we studied the relationship between funding and access during the VA expansion. Overall, we observed an increase in access to and intensity of VA SUD care associated with increased funding. The VA was able to increase funding for and expand the population to which it offered SUD treatment without diminishing internal access and intensity.
Last week I talked to you about dietary cholesterol, and how the existing randomized controlled trials warned us that the recommendations wouldn’t hold up. Now, it appears those guidelines might be changed, decades later. Cholesterol isn’t the only recommendation that is controversial. So are the ones on fat. Prepare to get annoyed. This is Healthcare Triage.