• Healthcare Triage: Guns, Suicide, and Legislating the Doctor-Patient Relationship

    Guns are one of those topics that really divide Americans. It’s hard to have a calm, evidence-based discussion. But one area where we really need to be able to do that is in the pediatrician’s office. Why? That’s the topic of this week’s Healthcare Triage.

    For those of you who want to read more:

    Much of this was adapted from a piece I wrote last year at The Upshot at the NYT.

    @aaronecarroll

  • ACO payment issues and alternatives

    Two papers raise a few problems with how ACOs are paid and offer some remedies: (1) A paper by Rudy Douven, Thomas McGuire, and J. Michael McWilliams published in Health Affairs earlier this year and (2) a white paper by Michael Chernew, Thomas McGuire, and J. Michael McWilliams. Below are notes for each in turn, a mix of quotes and my own paraphrases and comments.

    (1) The Douven paper

    • “As of early 2014 over 360 provider organizations had contracted with Medicare as ACOs in the Pioneer program or the Shared Savings Program.” This number is over 400 now.
    • “[W]e estimate that for every dollar increase in spending in the last year before an ACO starts a new three-year contract, the ACO will get back between $1.48 and $1.90 during the contract period.” This stems from how the ACO benchmarks (to which actual spending is compared for the purposes of calculating bonuses and penalties) are established. Spending in the three years before the contract sets the benchmark. But the final year before the three-year contract is weighted heavily (60%) in that calculation, incentivizing ACOs to overspend in that year, which can lead to bonuses collected in the subsequent three years. (A small numerical sketch after this list illustrates the mechanics.)
    • The benchmark is adjusted year-to-year: “The benchmark is [] adjusted annually by a national inflation factor to establish the spending target in each contract year. It is also adjusted for year-to-year changes in the case-mix of patients served by the ACO.”
    • The benchmark is rebased with each three-year contract. This means an ACO that saved a lot is penalized in the next contract with a lower benchmark. An ACO that didn’t save gets more breathing room. See any problems with that?
    • Suggested improvements include: equally weighting three years of spending to establish the benchmark; using more years in the calculation; not rebasing benchmarks with each successive three-year contract renewal but at some later, unspecified time; establishing benchmarks based on other, perhaps similarly efficient providers rather than on the organization’s own historical spending; and blending the current benchmark (or some variant) with spending by other ACOs in the same market or markets. These ideas are not all mutually exclusive, and the benchmark calculation approach could vary by performance (i.e., whether an organization is gaining or losing efficiency).
    • It probably goes without saying that no approach is perfect: each has strengths and limitations, which you can read about in the paper. But these ideas are likely to be improvements over the existing benchmark calculation. (I wonder how the existing calculation came to be so obviously flawed. Or was it not obvious when it was set? How did it get established as it did? Were these scholars consulted at the time? I do not know the history.)
    • “Basing payments on cost performances on peer groups has worked well in Medicaid payments to psychiatric hospitals and psychiatric units in New Hampshire, accommodating systematic differences in casemix while maintaining incentives for cost-effective care.”
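
    To make the final-year weighting incentive concrete, here’s a minimal numerical sketch (in Python). The 60% weight on the final base year is from the paper; the 10%/30%/60% split across the three base years and the dollar figures are my assumptions for illustration, and the real benchmark also applies inflation updates and risk adjustment that I ignore here.

    ```python
    # Sketch: how heavy weighting of the final base year rewards overspending
    # in that year. Weights and spending figures are illustrative assumptions.

    weights = [0.1, 0.3, 0.6]              # base years 1, 2, 3 (year 3 = last pre-contract year)
    baseline = [10_000, 10_000, 10_000]    # per-beneficiary spending in the base years ($)

    def benchmark(spending, weights):
        """Weighted average of base-year spending (ignoring the inflation
        updates and risk adjustment the real benchmark also applies)."""
        return sum(w * s for w, s in zip(weights, spending))

    b0 = benchmark(baseline, weights)

    # Overspend by $1 per beneficiary in the final base year:
    inflated = [10_000, 10_000, 10_001]
    b1 = benchmark(inflated, weights)

    extra_per_year = b1 - b0               # benchmark rises ~$0.60 in each contract year
    extra_over_contract = 3 * extra_per_year

    print(f"Benchmark rises by ${extra_per_year:.2f} per contract year")
    print(f"Over a 3-year contract: ${extra_over_contract:.2f} back per $1 of extra final-year spending")
    # ~ $1.80, in the ballpark of the paper's $1.48-$1.90 estimate (their range
    # reflects program details, like discounting, not modeled here).
    ```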

    (2) The Chernew white paper

    • “As is true of virtually every Medicare payment area, the regulatory framework needs to evolve as experience accumulates.”
    • “[T]he aim of the ACO programs is to create incentives which are strong enough to encourage providers to change behavior, but not so stringent that providers will not participate.”
    • “Despite general evidence of success, organizations have been leaving the Pioneer program. In July 2013, nine Pioneers left this ACO model after the preliminary results for the first performance year were released. In August 2014, another Pioneer dropped out, followed by three more in September shortly after the second year performance results were announced, leaving 19 remaining Pioneer ACOs. The apparent paradox of generally positive results but declining participation in the downside risk model may signal shortcomings in the program structure.”
    • “[A] large organization may have the option of becoming an ACO or developing an MA plan. Such an organization, whose spending exceeds the local MA benchmark based on local FFS spending, would have an incentive to become an ACO. The more efficient organizations would have an incentive to create MA plans.” I had not considered the ACO vs MA tradeoff before. This is particularly interesting, and complicated.
    • “[I]f an ACO reduces utilization (say avoids an MRI) such that Medicare spending drops by 1000 dollars, the revenue drops only by $1000*(1- the shared saving percent) and costs drop by the variable cost of the MRI (assume $400). Thus the profit of such a program is the avoided variable cost ($400) – (1-shared saving percent)*$1000. If the shared saving percent is 50%, the net program actually loses $100. That is because the MRI had contributed $600 to the bottom line ($1000 revenue less $400 variable cost). When the MRI is not done, the provider loses that $600 but only gets back $500.” I had not factored into my thinking that only some of the cost of care is variable. In particular, start-up costs to establish an ACO and redesign practices are fixed and potentially large. They need to be recouped. This is important. (The sketch after this list works through the arithmetic.)
    • “Specifically, empirical estimates suggest variable costs could be as low as 16% of total costs.” Yow! All of this supports the idea that the proportion of savings shared with organizations may be too low.
    • “Profitability is greater in organizations with more patients in the ACO because spillover losses are less and savings are generated on more patients.” Let’s unpack that: Consider the likelihood that an ACO can only practice in one way. It doesn’t treat a non-Medicare patient any differently than a Medicare one. That means if it reorganizes to reduce revenue from Medicare (some of which could be made up for with a bonus from Medicare), it loses revenue on non-Medicare patients too, a spillover effect. But it receives no bonus on the non-Medicare patients, so this non-Medicare revenue loss is a pure loss. On the one hand, we want spillover effects, to the extent they reflect more efficient care. On the other hand, there’s no incentive from Medicare for them. (Private insurers should appreciate them, but the vast majority aren’t paying in an ACO-like manner, though some are.)
    • The paper includes suggestions for reform. The preferred approach articulated is to set benchmark updates after an initial period based on some preset growth rate, modified by initial efficiency. That is, updates should grow more slowly for less efficient organizations and faster for more efficient ones. This approach severs the link between benchmark updates and prior savings. Updates could be set within this framework to balance the rate of convergence toward common, risk-adjusted benchmarks within a market against encouraging participation, even by less efficient organizations (which is where most of the savings will come from in the long term). In the future, a payment-neutral system spanning ACOs and Medicare Advantage could be considered.
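
    Here’s a minimal sketch of the white paper’s avoided-MRI arithmetic; the dollar amounts and shared-savings rate are the ones quoted above, and the little function is just illustrative bookkeeping:

    ```python
    # Shared-savings arithmetic for an avoided service, per the example quoted above.

    def provider_profit_change(medicare_savings, variable_cost_avoided, shared_savings_pct):
        """Change in provider profit when a service is avoided under shared savings.

        Revenue falls by the full Medicare savings but is partly offset by the
        shared-savings bonus; costs fall only by the service's variable cost.
        """
        revenue_change = -medicare_savings + shared_savings_pct * medicare_savings
        cost_change = -variable_cost_avoided
        return revenue_change - cost_change

    # The avoided $1,000 MRI with $400 of variable cost and a 50% shared-savings rate:
    print(provider_profit_change(1000, 400, 0.50))   # -100: the provider comes out $100 behind

    # If variable costs are only 16% of total costs, the loss is bigger still:
    print(provider_profit_change(1000, 160, 0.50))   # -340
    ```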

    @afrakt

  • Medical alphabet

    Via Healthcare IT News:

    [Image: medical alphabet]

    @afrakt

  • Healthcare Triage News: Medicare and the Doc Fix

    We’ve got a permanent doc fix. It’s all about the sustainable growth rate. Confused? We’ll help. This is Healthcare Triage News.

    @aaronecarroll

  • Interpreting the latest ACO study

    Below I list some findings, and what I think they mean, from the recent study on Medicare Pioneer ACOs by Michael McWilliams and colleagues. My thoughts are informed by emails exchanged with Michael.

    After one year (i.e., in 2012), Pioneer ACOs saved money (1.2%) and maintained or improved in various measures of quality of care. As much as you can take any findings about Pioneer ACOs to the bank, you can take these. No, they’re not based on a randomized design or natural experiment. No, there’s no highly plausible instrumental variable design, nor one I could imagine. So of course there are plenty of threats to a causal interpretation. But, given the constraints, the authors used about the strongest possible approach (difference-in-differences with controls) and did a large number of very strong sensitivity analyses and falsification tests. The working assumption that the results are causal is plausible and interesting, so let’s go with that.

    Even ACOs that had dropped out of the Pioneer program had achieved savings. The good news is they didn’t drop out because they were failing to save money. The bad news is that they dropped out even while saving money, not exactly a good policy outcome. If the model is going to sustain itself on self-selection, this is a serious concern. One problem is that as benchmarks fall over the years, ACOs are expected to save more and more. Perhaps that’s not realistic, or the pace of change is too fast. Other approaches that don’t attempt to save so much so soon may attract and maintain more participants, which could end up saving Medicare more overall. (I may post about such approaches at another time. I have a bit of reading to do first.)

    ACOs with higher initial spending achieved greater savings than initially lower-spending ACOs. It is, perhaps, not the wisest thing to do to penalize already relatively efficient ACOs. At the same time, it is, perhaps, not the wisest thing to do to expect relatively inefficient ACOs to become too efficient too quickly. Of course we’d like optimal efficiency tomorrow. But, again, in a voluntary program, too much pressure just forces organizations out. (One could argue whether the program should be voluntary. Perhaps someday it won’t be. We’re not there yet.) To put it another way, participation by inefficient organizations is especially valuable. They have the headroom to achieve gains more rapidly than more efficient organizations. But press too hard and they will leave. It’s a delicate balance.

    There was no difference in savings achieved by ACOs that are financially integrated groups of physicians and hospitals versus those that are independent physician groups. Contrary to many claims, consolidation between physicians and hospitals is not necessary to reduce costs and maintain or improve quality. However, such consolidation increases market power with respect to private insurers, raising prices. These findings suggest that consolidation serves no useful purpose except to the consolidating organization itself. We should remain very wary of any claims that it does.

    Meta. This is an important study. CMS is contemplating how to tweak the program, so it’s particularly well timed, as is consideration of other ACO payment approaches. Note, too, that this study covers only one year. How Pioneer ACOs, and others, perform long term is much more important than short-term results. Good research takes time. So, we will have to wait.

    @afrakt

  • Are there people speaking out against faulty science? Yes.

    Yes. There are. It’s just that we don’t promote them the same way.

    Julie Belluz has a great piece in Vox today about Dr. Oz. It’s long, it’s detailed, and it’s worth your time. It’s her last paragraph that caught my eye, though:

    There are not enough people speaking out against faulty science in health — what Caulfield calls the “slow drift toward a faith-based approach.” As a gifted researcher and doctor, and a charismatic communicator, Oz had the potential to be a voice of reason in this moment of confusion. Instead, he’s leading America adrift.

    I disagree. There are lots of people. The media just doesn’t highlight or promote them the same way.

    The Food Babe gets a NYT profile. So does Dr. Oz. I bet if I searched, I could find one for Jenny McCarthy. There are many, many huge, long-form pieces at other sites on these people, too. There are very few such profiles of those who try to defend science.

    Why? There’s no money in it. Nor fame. Following science means no get-rich-quick schemes. It means no false promises. It means telling people things they often don’t want to hear.

    Still, lots of people choose that route. Lots of people write accurately about science and how we need to stick close to findings. They’re never considered visionaries. They don’t make the lists of most important people (even for health). They don’t get celebrated in the same way.

    I’m not writing this because I’m bitter, or because I want to be famous. I’m absolutely thrilled with what we’ve got here at TIE, and I couldn’t be more grateful for the platforms I have from which to speak and write. I just wish that more in the media would recognize that there are amazing science and health experts out there, and that they would spend as much time writing about and promoting them as they do the people they know are doing it poorly.

    @aaronecarroll

  • Treating hepatitis C: literature update

    The following is a guest post by Allan Joseph, a medical student at the Warren Alpert Medical School of Brown University and TIE research assistant. You can follow Allan on Twitter: @allanmjoseph. Links to Allan’s previous posts on hepatitis C can be found here, a post that also contains a glossary of terms.

    In the months since my summer 2014 series on hepatitis C (HCV) and the drug sofosbuvir (better known by its trade name, Sovaldi), researchers have continued to pump out studies examining HCV treatment. Since much of that research has implications for policy and public health, I thought I’d do a roundup of some of it, organized by the themes that have come out. Let’s jump right in:

    Patients co-infected with HIV and HCV

    HIV and HCV co-infection is a really big problem. Five million people worldwide have both infections, while 1 in every 3 HIV-positive patients in America has a chronic HCV infection. Moreover, co-infected patients are much more likely to progress to liver failure than patients monoinfected with HCV — in part because their treatment regimens are more complicated than those of monoinfected patients. All that is to say that sofosbuvir and its companions in the new wave of HCV drugs could make a big difference in the lives of co-infected patients — but until recently, we haven’t had any data on whether these drugs would work as well in co-infected patients as in the monoinfected.

    The Lancet published a medium-sized trial of sofosbuvir and ribavirin in about 275 co-infected patients across Europe and Australia, reporting SVR rates of about 85-90% depending on the genotype. That’s just about as good as the trials of sofosbuvir in monoinfected patients that led to sofosbuvir’s approval. Around the same time, JAMA published two smaller trials (50 and 63 patients) of other treatment regimens in the coinfected — a ledipasvir-sofosbuvir combination marketed under the brand name Harvoni, and a multi-drug regimen consisting of ribavirin and the ombitasvir/paritaprevir/ritonavir/dasabuvir combination marketed under the brand name Viekira Pak. Both JAMA studies reported very high SVR rates — well over 90%, though both were small and did not contain a control group. (As a refresher, “SVR” stands for Sustained Virological Response, which is the absence of detectable HCV RNA in a patient’s blood 24 weeks after stopping treatment. It’s the proxy measurement for “cure,” but they’re not quite the same thing.)

    The real importance here is that none of these regimens use peginterferon, the drug that made previous treatment so difficult to adhere to. Peginterferon was previously a mainstay of treatment, but it was less efficacious in the coinfected. Taken together, these studies suggest that the gap in efficacy is now a thing of the past. In fact, the accompanying editorial in JAMA points out that the most recent guidelines for treating HCV state explicitly that HIV-positive patients should be treated the same as monoinfected patients. That’s a big deal from a public-health perspective.

    Costs

    Costs have always been the most controversial part of the new treatments for HCV, as I discussed at length in my summer 2014 series. Recently, Charles Ornstein shed light on how these costs have started to take shape — in the first year these new drugs have been on the market (not even a full year!), they accounted for $4.5 billion in Medicare Part D spending, most of which is from the federal government and, in turn, taxpayers.

    On the academic side, the Annals of Internal Medicine published three new papers that attempt to estimate the cost-effectiveness of these new treatments. These papers rely on various simulation models that I’m far from an expert in, so I’ll let someone more qualified dissect their methods and assumptions. Taken together, the three papers suggest that these new drugs will increase healthcare spending by a significant amount, but that in most cases, the drugs are also so much better that they’re worth the extra spending. To put it in terms often used on TIE, the studies argue that the drugs are generally cost-effective, but not cost-saving. That’s essentially the conclusion I had arrived at last summer, with some strong caveats.
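
    To make the cost-effective-versus-cost-saving distinction concrete, here’s a minimal sketch with invented numbers; the regimen costs, QALY gains, and the $100,000-per-QALY threshold are illustrative assumptions, not figures from the Annals papers:

    ```python
    # Cost-effective vs. cost-saving, with hypothetical numbers.

    old_cost, old_qalys = 50_000, 10.0     # lifetime cost ($) and QALYs under the old regimen
    new_cost, new_qalys = 120_000, 12.0    # under the new drug

    incremental_cost = new_cost - old_cost        # +$70,000: spending goes up, so NOT cost-saving
    incremental_qalys = new_qalys - old_qalys     # +2.0 QALYs
    icer = incremental_cost / incremental_qalys   # $35,000 per QALY gained

    threshold = 100_000  # a commonly used willingness-to-pay benchmark ($/QALY)
    print(f"ICER = ${icer:,.0f}/QALY -> cost-effective: {icer < threshold}")  # True
    print(f"Cost-saving: {incremental_cost < 0}")                             # False
    ```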

    Interestingly, one of the Annals studies estimated that sofosbuvir might in fact be cost-saving from a societal perspective if it cost about 25% less than it currently does. There are, in fact, two forces that could make that happen. First, and better known, is the fact that various pharmacy-benefit managers are cutting deals with Gilead (maker of Harvoni and Sovaldi) and AbbVie (maker of Viekira Pak), exchanging first-line status for what are likely some significant discounts. Second, and perhaps lesser known, is the idea that the drugs might be so effective that doctors could prescribe far shorter courses of treatment, cutting total costs. The Lancet recently published a small proof-of-concept study suggesting that a three-drug regimen could allow 6-week courses of treatment rather than 12 (though the addition of a third drug would reduce the cost savings), and we already know that the ledipasvir-sofosbuvir combination can be used for 8 weeks instead of 12. These aren’t perfect substitutes (they likely have higher relapse rates), but if further research confirms these findings, they might reduce costs nonetheless.

    New drugs

    I could have put the last two studies I wanted to highlight under either of the categories above, so instead I decided to give them their own. The Lancet published the results of two Phase 2 clinical trials of a combination of two new drugs named grazoprevir and elbasvir, found here and here. If the studies’ results hold up in phase 3 trials (far from a given) that are scheduled to end in about 18 months, there will be another, about-equally-effective treatment option on the market within a few years. That has two implications — first, these drugs appear to also be effective in the co-infected, which has public-health implications as I outlined above. And second, these drugs are made by Merck, which might mean a third company will join Gilead and AbbVie in competing for share of the HCV market. Though some time away, that too will have implications for costs in the long run.

    Hepatitis C is a special case when it comes to pharmaceutical competition — and that’s a post to be written another time — but in this case, it appears that the competition won’t be slowing down anytime in the near future.

    @allanmjoseph

  • In NEJM: Protection or Harm? Suppressing Substance-Use Data

    This post is jointly authored by Nicholas Bagley and Austin Frakt.

    Yesterday evening, the New England Journal of Medicine released a Perspective piece that we co-authored on the recent suppression of Medicare and Medicaid data to researchers. (For our earlier coverage, see the posts collected here.) As we explain, the data suppression is both unnecessary and harmful:

    What if it were impossible to closely study a disease affecting 1 in 11 Americans over 11 years of age—a disease that’s associated with more than 60,000 deaths in the United States each year, that tears families apart, and that costs society hundreds of billions of dollars? What if the affected population included vulnerable and underserved patients and those more likely than most Americans to have costly and deadly communicable diseases, including HIV–AIDS? What if we could not thoroughly evaluate policies designed to reduce costs or improve care for such patients?

    These questions are not rhetorical. In an unannounced break with long-standing practice, the Centers for Medicare and Medicaid Services (CMS) began in late 2013 to withhold from research data sets any Medicare or Medicaid claim with a substance-use–disorder diagnosis or related procedure code. This move—the result of privacy-protection concerns—affects about 4.5% of inpatient Medicare claims and about 8% of inpatient Medicaid claims from key research files (see table), impeding a wide range of research evaluating policies and practices intended to improve care for patients with substance-use disorders.

    The timing could not be worse. Just as states and federal agencies are implementing policies to address epidemic opioid abuse and coincident with the arrival of new and costly drugs for hepatitis C—a disease that disproportionately affects drug users—we are flying blind.

    While NEJM was preparing the piece for publication, ResDAC released new Medicare data indicating that the suppression is even more extensive than we wrote. For 2013, 6.43% of all Medicare inpatient claims were suppressed; for 2014, that figure rose to 6.8%. (The figures for Medicaid in our piece remain the same.)

    Eric Goplerud, speaking to Alcohol and Drug Addiction Weekly in January, suggested that SAMHSA plans to propose a rule change this year that would allow CMS to restore access to the affected data. We hope so. The issue is much too urgent to ignore.

    @afrakt & @nicholas_bagley

  • No more doc fixes.

    The Senate approved the permanent doc fix 92-8. 92-8!!!

    I’m a big boy. I can admit when I’m wrong:

    There’s no way President Obama doesn’t sign this. The SGR will be repealed. I won’t have to write about the doc fix anymore. Docs everywhere rejoice. And, once again, respect the pretty impressive lobbying power of the AMA. Like them or hate them, they know how to get the job done.

    @aaronecarroll

  • Why Survival Rate Is Not the Best Way to Judge Cancer Spending

    The following originally appeared on The Upshot (copyright 2015, The New York Times Company).

    In 2012, a study published in Health Affairs argued that the big money we spend on health care in the United States is worth it, at least when it comes to cancer. The researchers found that the survival gains seen in the United States equated to more than $550 billion in additional value, more than the difference in spending.

    This research depended on survival rates. A new study was recently published in the same journal, this time using mortality rates. That study found that cancer care in the United States might provide significantly less value than that in Western Europe.

    Which should you believe? It’s worth exploring these two studies, and their metrics of choice, to get a better understanding of whether what we are spending in the United States really is worth it.

    Mortality rates are determined by taking the number of people who die of a certain cause in a year and dividing it by the total number of people in a population. For instance, the mortality rate for men with lung cancer in the United States, according to the SEER database, is 61.6 per 100,000 people.

    Survival rates describe the number of people who live a certain length of time after a diagnosis. The five-year survival rate for people found to have lung cancer is 16.8 percent.

    These numbers describe very different concepts. But almost all of the research you might find in this area uses survival rates as the metric. One reason is that they’re much easier to measure. You enroll people upon diagnosis, follow them for a set number of years, and measure how many survive. Mortality rates are more of a population metric. They describe the population as a whole, and they’re much harder to measure accurately.

    Moreover, the survival rate is the information patients want. When patients learn they have cancer, they want to know the likelihood that they will live a certain amount of time. That’s what a survival rate will tell them. Mortality rates won’t mean anything to them at all.

    But there are two problems with survival rates. The first is what’s known as lead-time bias. In reality, you can decrease the mortality rate only by preventing people with the disease from dying, or preventing them from getting it in the first place.

    You can improve the survival rate, however, by preventing death, preventing people from getting sick, or making the diagnosis earlier. That last factor can make all the difference.

    Here’s the example I always use to explain this concept: Let’s consider a hypothetical illness, thumb cancer. We have no method to detect the disease other than feeling a lump. From that moment, everyone lives about four years with our best therapy. Therefore, the five-year survival rate for thumb cancer is effectively zero, because within five years of detection, everyone dies.

    Now, let’s assume that we develop a new scanner that can detect thumb cancer five years earlier. We prevent no more deaths, mind you, because our therapy hasn’t improved. Everyone now dies nine years after detection instead of four. The five-year survival rate is now 100 percent.

    But the mortality rate remains unchanged, because the same relative number of people are dying every year. We’ve just moved up the time of diagnosis and potentially subjected people to five more years of therapy, increased health care spending and caused more side effects. No real improvements were made.
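
    A toy calculation of the thumb-cancer scenario, just to make the arithmetic explicit (all numbers are invented to match the hypothetical):

    ```python
    # Lead-time bias sketch: earlier detection shifts the moment of diagnosis,
    # not the moment of death. Hypothetical cohort.

    population = 100_000
    cases_per_year = 100            # diagnosed each year; all eventually die of the disease

    # Old world: detected at the lump, death 4 years after detection.
    years_detection_to_death_old = 4
    five_year_survival_old = 1.0 if years_detection_to_death_old >= 5 else 0.0   # 0%

    # New world: a scanner finds it 5 years earlier; death happens at the same
    # biological moment, now 9 years after detection.
    years_detection_to_death_new = 9
    five_year_survival_new = 1.0 if years_detection_to_death_new >= 5 else 0.0   # 100%

    # Mortality rate: the same people die each year either way.
    mortality_per_100k = cases_per_year / population * 100_000    # 100 per 100,000 in both worlds

    print(five_year_survival_old, five_year_survival_new, mortality_per_100k)
    # 0.0 1.0 100.0 -> survival "improves" from 0% to 100%; mortality is unchanged.
    ```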

    But if we just looked at survival rates, we would think we made a difference. Unfortunately, that happens far too often in international comparisons, as the United States often does much more screening than other countries and then justifies it through improved survival rates.

    The second problem with using survival rates is overdiagnosis bias. Let’s say that a certain number of cases of thumb cancer that are detectable by scan never progress to a lump. That means some subclinical cases that would never lead to death are now being counted as diagnoses.

    Since they were never dangerous, and we’re now picking them up by scans, they’re improving our survival rates. But they do nothing for mortality rates because no fewer people are dying.
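
    And the same kind of toy calculation for overdiagnosis (again, invented numbers):

    ```python
    # Overdiagnosis sketch: adding harmless, never-lethal cases inflates survival
    # but leaves mortality untouched. Hypothetical numbers.

    population = 100_000
    lethal_cases = 100        # progress to a lump and to death within 5 years
    indolent_cases = 100      # scan-detected, would never have caused symptoms or death

    # Five-year survival among diagnosed patients:
    survival_lump_era = 0 / lethal_cases                                   # 0%
    survival_scan_era = indolent_cases / (lethal_cases + indolent_cases)   # 50%

    # Mortality rate is identical in both eras: only the lethal cases die.
    mortality_per_100k = lethal_cases / population * 100_000               # 100 per 100,000

    print(survival_lump_era, survival_scan_era, mortality_per_100k)
    # 0.0 0.5 100.0 -> survival jumps from 0% to 50%; no one's death was prevented.
    ```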

    These two factors are important to consider when you compare ways of caring for cancer, especially when there are differences in the ways diagnosis and screening occur. For many cancers, we’ve been diagnosing significantly more cases, but making little headway in mortality rates.

    The first Health Affairs study I mentioned used survival rates of 13 cancers in 10 countries in Europe. The researchers took the amount of money spent on cancer care and determined how much was spent to achieve the better survival rates seen in the United States. They concluded that the increased spending on care was less than the value achieved.

    But the increased value was achieved by looking at survival. Moreover, almost all the gains were because of findings in two cancers: breast cancer and prostate cancer. These are the two most hotly debated in terms of whether we are screening too aggressively and diagnosing too much in the United States. Both of these factors would greatly affect lead-time bias and make the use of survival rates unappealing.

    The more recent Health Affairs study went back to the drawing board and started over with mortality rates. It was also a wider study. The researchers included 20 countries in Western Europe. They also added lung cancer, which was left out of the 2012 study, but which is the largest cancer killer in the developed world.

    The differences in mortality rates between the United States and Western Europe are nowhere near as large as the differences in survival rates. Even so, the United States often outperforms Europe. From 1982 to 2010, it’s estimated that we averted almost 67,000 deaths from breast cancer compared with Western Europe. We averted almost 60,000 deaths from prostate cancer and almost 265,000 deaths from colorectal cancer.

    But at what cost? The researchers found that the incremental cost of each year of quality adjusted life, or QALY, gained for colorectal cancer was $110,000. For breast cancer, we spent more than $400,000 per QALY gained. For prostate cancer, we spent almost $2 million per QALY gained.

    We often focus on breast, colorectal and prostate cancer because we do better with those diseases. But we don’t with all cancers. Over the same period, the United States had more than 1.1 million more deaths from lung cancer than Western Europe. Because we still spent more on care for this disease, we had a negative cost of about $19,000 per QALY gained. We also had negative costs per QALY gained for other cancers, including melanoma (about $137,000) and cervical cancer (about $855,000).
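
    Here’s a minimal sketch of how a cost-per-QALY figure, including a negative one, is put together. The spending and QALY inputs below are invented for illustration; they are not the study’s numbers:

    ```python
    # Cost per QALY gained = extra spending / extra QALYs, comparing the US with
    # Western Europe. Invented inputs for illustration only.

    def cost_per_qaly(extra_spending, extra_qalys):
        return extra_spending / extra_qalys

    # A cancer where the US averts deaths but spends much more:
    print(cost_per_qaly(extra_spending=4_000_000_000, extra_qalys=10_000))
    # 400000.0 -> $400,000 per QALY gained: more spending, some extra benefit

    # A lung-cancer-style case: the US spends more AND loses QALYs relative to
    # Europe, so the ratio is negative -- paying extra for worse outcomes.
    print(cost_per_qaly(extra_spending=1_900_000_000, extra_qalys=-100_000))
    # -19000.0 -> the "negative cost per QALY gained" described above
    ```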

    As I’ve written before, discussions of cost effectiveness are difficult to have in the United States. I am sure there are many people who believe that $400,000 isn’t too much money to give a woman with breast cancer an additional year of quality adjusted life. But this is money we can’t then spend on other treatments or other therapies that might do more good for more people.

    We should have these conversations, and we should have them with the right data. When it comes to preventing death, we need to consider mortality rates, not survival rates, or we may be getting far less for our money than we think.

    @aaronecarroll
