• On what efficiency enhancements would most health care experts agree?

    I posed a version of this question on Twitter. By “efficiency enhancement” I mean something that increases health at a cost we think is worth it or decreases costs without sacrificing more health than we think is worth it. (The ideal is greater health for lower cost, of course.) Answers, both mine and some from others, are below, in no particular order.

    Before you read the answers, it’s important to appreciate the form of the question. It does not ask what efficiency enhancements I or you like. It asks for those that we think most health care experts would agree on. We might think they’re bogus, but we think most such experts don’t. It’s also vague. What’s “most” and who’s an “expert”? Still, I think it’s interesting to try to contemplate where we might find common ground. See also my list on what most health economists might agree with.

    On what efficiency enhancements would most health care experts agree?

    • Focus effort on tertiary prevention
    • Focus effort on transitions of care
    • Payment bundling
    • More competition (Single payer advocates might disagree. But they should pay attention to competition in the NHS.)
    • The right kind of management (Look here.)
    • Checklists
    • Hand washing
    • Greater health literacy
    • Clearer/more complete practitioner-patient communication
    • Clinicians working at the top of their license/training (Meaning, don’t use more expensive practitioners where cheaper ones would do just as well, if not better.)
    • Streamlining payment administration
    • More comparative effectiveness information and use thereof
    • Less poverty
    • Price transparency
    • More public health investment
    • Pay more attention to social and environmental determinants of health

    If you think about some of these too much, they become tautological with “efficiency enhancement.” For instance, by “focus on transitions” we kinda mean “act so as to produce fewer errors and problems in transitions.” But that’s kind of like saying, “make it better” or “make it more efficient.” So, I think the right interpretation of some of these is as places to focus investment. If we invest more in care transitions (or tertiary prevention or health literacy or public health, etc.), we think better efficiency will follow. That is, they have positive ROI, we think, or we think most experts think.

    We should definitely debate these. We should do that on Twitter, of course.

    @afrakt

    Share
    Comments closed
     
  • Balancing privacy, research, and care delivery

    The following originally appeared on The Upshot (copyright 2015, The New York Times Company).

    Researchers who want to study Medicare or Medicaid patients with substance-use disorders — and illnesses disproportionately affecting them like H.I.V. and hepatitis C — are, at best, working with biased data. At worst, they’re flying blind.

    That’s because agencies within the Department of Health and Human Services, without public notice and because of patient privacy concerns, decided in 2013 to remove researchers’ access to certain types of Medicare and Medicaid data. Without these data — all relating, even tangentially, to patients with substance-use disorders — health researchers fear they will be hampered in their quest to improve care.

    Consider patients seen by the Veterans Health Administration. Because patients use health care from other providers, too, a researcher might need to combine V.H.A. data with that from Medicare and Medicaid to assess and improve outcomes from treatment. Suppose treatment for alcohol addiction reduced the likelihood of an accident that would land a patient in the emergency room. Because many emergency room visits are outside the V.H.A. system, Medicare and Medicaid data are essential to measuring this effect of treatment. If researchers can’t see it, they can’t improve it. (Full disclosure: I am employed as a health economist with the V.H.A. The views expressed are my own and do not necessarily reflect the positions of the Department of Veterans Affairs.)

    When health care researchers received Medicare and Medicaid data last year, some of them noticed — to their shock — that much of it was missing. (Because they weren’t told of this, many didn’t even notice until the news broke last December.)

    All told, the suppression affects about 7 to 8 percent of inpatient hospital medical records or about 1 million Medicare or Medicaid beneficiaries. A smaller proportion of records pertaining to outpatient care and nursing home care is also withheld. The systematic removal of this much data can lay waste to a significant segment of research.

    Pointing to decades-old regulations designed to protect patients’ privacy, Aaron Albright, Media Relations Group director for the agency overseeing Medicare and Medicaid, said that release of data pertaining to patients with substance-use disorders was not permitted “without patient consent.”

    The caution is understandable. Substance-use disorders carry stigma. Some patient advocates have expressed concerns that medical data could be used by law enforcement to incarcerate patients or to separate children from their parents. Perhaps the information could be used to deny employment. Without robust privacy protections, these concerns could deter some patients from seeking treatment.

    But as the University of Michigan law professor Nicholas Bagley and I described in the New England Journal of Medicine, the problem for research is substantial. Because detailed, individual-level health care data from private insurers are often costly or difficult to obtain, for decades Medicare and Medicaid data have offered the best available window into health care use, outcomes and costs. And for all those years, those data were made available in full.

    Regulations that justify the withholding of these research data also stymie the delivery of health care. Doctors’ offices and hospitals are not allowed to share patient data pertaining to substance-use disorders or treatment without each patient’s consent to each transmission of information.

    This may have been tenable when records were entirely on paper and were rarely shared across providers. But now that many have gone digital, with data sharing more commonplace and encouraged, it’s far more onerous. Many organizations, therefore, exclude such information from their electronic data systems, which can put patients at risk.

    Imagine that a doctor sees a patient who does not disclose a history of opioid addiction, which was diagnosed at another doctor’s office. If that physician prescribes a narcotic painkiller, he could fuel the patient’s addiction, with potentially lethal results. But today, the prescribing physician couldn’t be told about the condition absent patient consent — even if the two physicians are working together to coordinate that very patient’s care.

    Can we exchange and analyze patient data to improve care while minimizing the risk of and harm from breaches of privacy? It’s already standard for researchers to work in secure data environments. They’re already subject to criminal penalties if they don’t adhere to strict data protection protocols. You cannot obtain access to Medicare and Medicaid research data without submitting to those conditions.

    Yes, as tight as the data protections already are, they could always be strengthened, though at some cost. (For example, access to certain types of Census Bureau data requires a background check, fingerprinting and an in-person interview.)

    The current privacy protection pulls at some of the ambitions of health care reform.

    The Affordable Care Act and other recent legislation promote the exchange of electronic medical data and analysis of such data to enhance the quality and efficiency of health care. In the midst of an opioid epidemic — as we are — and as we struggle to finance new and costly drugs for hepatitis C — which disproportionately affects drug users — the timing of the Medicaid and Medicare data suppression could hardly be worse.

    The administration announced this month that it would be revisiting the relevant regulations. This gives the government an opportunity to balance privacy concerns with data access for research and coordination across providers.

    @afrakt

    Share
    Comments closed
     
  • AcademyHealth: A Medicare Advantage literature update

    Two recent studies of Medicare Advantage (MA) assess its cost and value. I summarize findings in my latest AcademyHealth post.

    @afrakt

    Share
    Comments closed
     
  • These lines are all perfectly level

    Via Kyle Hill:

    [Image: level]

    @afrakt

    Share
    Comments closed
     
  • ACO payment issues and alternatives

    Two papers raise a few problems with how ACOs are paid and offer some remedies: (1) A paper by Rudy Douven, Thomas McGuire, and J. Michael McWilliams published in Health Affairs earlier this year and (2) a white paper by Michael Chernew, Thomas McGuire, and J. Michael McWilliams. Below are notes for each in turn, a mix of quotes and my own paraphrases and comments.

    (1) The Douven paper

    • “As of early 2014 over 360 provider organizations had contracted with Medicare as ACOs in the Pioneer program or the Shared Savings Program.” This number is over 400 now.
    • “[W]e estimate that for every dollar increase in spending in the last year before an ACO starts a new three-year contract, the ACO will get back between $1.48 and $1.90 during the contract period.” This stems from how the ACO benchmarks (to which actual spending is compared for the purposes of calculating bonuses and penalties) are established. Spending in the three years before the contract sets the benchmark. But the final year before the three-year contract is weighted heavily (60%) in that calculation, incentivizing ACOs to overspend in that year: an inflated benchmark is easier to beat, which can lead to bonuses collected in the subsequent three years. (A stylized version of this calculation follows this list.)
    • The benchmark is adjusted year-to-year: “The benchmark is [] adjusted annually by a national inflation factor to establish the spending target in each contract year. It is also adjusted for year-to-year changes in the case-mix of patients served by the ACO.”
    • The benchmark is rebased with each three-year contract. This means an ACO that saved a lot is penalized in the next contract with a lower benchmark. An ACO that didn’t save gets more breathing room. See any problems with that?
    • Suggested improvements include equally weighting the three years of spending used to establish the benchmark; using more years in the calculation; not rebasing benchmarks with each successive three-year contract renewal but at some later, unspecified time; establishing benchmarks based on other, perhaps similarly efficient providers rather than on the organization’s own historical spending; and blending the current benchmark (or some variant) with spending by other ACOs in the same market or markets. These ideas are not all mutually exclusive, and the benchmark calculation approach could vary by performance (i.e., whether an organization is gaining or losing efficiency).
    • It probably goes without saying that no approach is perfect: each has strengths and limitations, which you can read about in the paper. But these ideas are likely to be improvements over the existing benchmark calculation. (I wonder why the existing calculation is so obviously flawed. Or was it not obvious when it was set? How did it get established as it did? Were these scholars consulted at the time? I do not know the history.)
    • “Basing payments on cost performances on peer groups has worked well in Medicaid payments to psychiatric hospitals and psychiatric units in New Hampshire, accommodating systematic differences in casemix while maintaining incentives for cost-effective care.”
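
    A stylized version of the $1.48 to $1.90 calculation referenced above may help. This is a minimal sketch in Python, my own illustration and not the paper’s model: it uses only the 60% final-year weight described above, assumes the benchmark applies in each of the three contract years, and ignores the actual formula’s inflation, case-mix, and shared-savings adjustments.

    ```python
    # Stylized sketch of the benchmark incentive described above (not the
    # Douven et al. calculation). Assumptions: the last pre-contract year
    # carries a 60% weight in the benchmark, and the benchmark applies in
    # each of the three contract years; all other adjustments are ignored.

    FINAL_YEAR_WEIGHT = 0.60
    CONTRACT_YEARS = 3

    def extra_spending_target(extra_dollars_last_year: float) -> float:
        """Total increase in the three-year spending target caused by
        extra spending in the final pre-contract year."""
        per_year_increase = FINAL_YEAR_WEIGHT * extra_dollars_last_year
        return per_year_increase * CONTRACT_YEARS

    # One extra dollar of spending just before the contract raises the
    # spending target by about $1.80 over the contract period, in the same
    # ballpark as the paper's $1.48-$1.90 estimate of what the ACO gets back.
    print(extra_spending_target(1.00))  # ~1.8
    ```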

    (2) The Chernew white paper

    • “As is true of virtually every Medicare payment area, the regulatory framework needs to evolve as experience accumulates.”
    • “[T]he aim of the ACO programs is to create incentives which are strong enough to encourage providers to change behavior, but not so stringent that providers will not participate.”
    • “Despite general evidence of success, organizations have been leaving the Pioneer program. In July 2013, nine Pioneers left this ACO model after the preliminary results for the first performance year were released. In August 2014, another Pioneer dropped out, followed by three more in September shortly after the second year performance results were announced, leaving 19 remaining Pioneer ACOs. The apparent paradox of generally positive results but declining participation in the downside risk model may signal shortcomings in the program structure.”
    • “[A] large organization may have the option of becoming an ACO or developing an MA plan. Such an organization, whose spending exceeds the local MA benchmark based on local FFS spending, would have an incentive to become an ACO. The more efficient organizations would have an incentive to create MA plans.” I had not considered the ACO vs MA tradeoff before. This is particularly interesting, and complicated.
    • “[I]f an ACO reduces utilization (say avoids an MRI) such that Medicare spending drops by 1000 dollars, the revenue drops only by $1000*(1- the shared saving percent) and costs drop by the variable cost of the MRI (assume $400). Thus the profit of such a program is the avoided variable cost ($400) – (1-shared saving percent)*$1000. If the shared saving percent is 50%, the net program actually loses $100. That is because the MRI had contributed $600 to the bottom line ($1000 revenue less $400 variable cost). When the MRI is not done, the provider loses that $600 but only gets back $500.” I had not factored into my thinking that only some of the cost of care is variable. In particular, start-up costs to establish an ACO and redesign practices are fixed and potentially large. They need to be recouped. This is important. (A small worked version of this calculation follows this list.)
    • “Specifically, empirical estimates suggest variable costs could be as low as 16% of total costs.” Yow! All of this supports the idea that the proportion of savings shared with organizations may be too low.
    • “Profitability is greater in organizations with more patients in the ACO because spillover losses are less and savings are generated on more patients.” Let’s unpack that: An ACO likely practices in only one way. It doesn’t treat a non-Medicare patient any differently than a Medicare one. That means if it reorganizes to reduce revenue from Medicare (some of which could be made up for with a bonus from Medicare), it loses revenue on non-Medicare patients too, a spillover effect. But it receives no bonus on the non-Medicare patients, so this non-Medicare revenue loss is a pure loss. On the one hand, we want spillover effects, to the extent they reflect more efficient care. On the other hand, there’s no incentive from Medicare for them. (Private insurers should appreciate them, but the vast majority aren’t paying in an ACO-like manner, though some are.)
    • The paper includes suggestions for reform. The preferred approach articulated is to set benchmark updates after an initial period based on some preset growth rate, modified by initial efficiency. That is, updates should grow more slowly for less efficient organizations and faster for more efficient ones. This approach severs the link between benchmark updates and prior savings. Updates could be set within this framework to balance rate of convergence toward common risk adjusted benchmarks within a market and encouraging participation, even by less efficient organizations (which is where most of the savings will come from long term). In the future, a payment neutral system between ACOs and Medicare Advantage could be considered.
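
    The avoided-MRI arithmetic referenced a few bullets up translates directly into a small calculation. Here is a minimal sketch using the white paper’s illustrative numbers; the function name is mine.

    ```python
    # Change in provider profit from forgoing a service under shared savings,
    # using the illustrative numbers quoted from the Chernew white paper.

    def profit_from_forgoing(spending_reduction: float,
                             variable_cost: float,
                             shared_savings_pct: float) -> float:
        """Revenue falls by the spending reduction net of the shared-savings
        payment; costs fall only by the service's variable cost."""
        revenue_lost = (1 - shared_savings_pct) * spending_reduction
        return variable_cost - revenue_lost

    # The quoted MRI example: $1,000 less Medicare spending, $400 variable
    # cost, 50% shared savings -> the provider comes out $100 behind.
    print(profit_from_forgoing(1000, 400, 0.50))   # -100.0

    # If variable costs are only 16% of the total, the loss is larger,
    # consistent with the worry that the shared-savings percentage is too low.
    print(profit_from_forgoing(1000, 160, 0.50))   # -340.0
    ```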

    @afrakt

    Share
    Comments closed
     
  • Medical alphabet

    Via Healthcare IT News:

    [Image: med alphabet]

    @afrakt

    Share
    Comments closed
     
  • Interpreting the latest ACO study

    Below I list some findings, and what I think they mean, from the recent study on Medicare Pioneer ACOs by Michael McWilliams and colleagues. My thoughts are informed by emails exchanged with Michael.

    After one year (i.e., in 2012), Pioneer ACOs saved money (1.2%) and maintained or improved in various measures of quality of care. As much as you can take any findings about Pioneer ACOs to the bank, you can take these. No, they’re not based on a randomized design or natural experiment. No, there’s no highly plausible instrumental variable design, nor one I could imagine. So of course there are plenty of threats to a causal interpretation. But, given the constraints, the authors used about the strongest possible approach (difference-in-differences with controls) and did a large number of very strong sensitivity analyses and falsification tests. The working assumption that the results are causal is plausible and interesting, so let’s go with that.
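
    For readers who want to see the shape of the estimator, here is a minimal difference-in-differences sketch on synthetic data. It is my own illustration, not the authors’ specification; the variable names, controls, and effect size are made up.

    ```python
    # Minimal difference-in-differences sketch (synthetic data; not the
    # McWilliams et al. specification). The DiD estimate is the coefficient
    # on the interaction of ACO attribution and the post period.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 4000
    df = pd.DataFrame({
        "aco": rng.integers(0, 2, n),           # 1 if attributed to an ACO
        "post": rng.integers(0, 2, n),          # 1 for the performance year
        "risk_score": rng.normal(1.0, 0.2, n),  # stand-in case-mix control
    })
    # Hypothetical data-generating process with a true ACO effect of -120.
    df["spending"] = (
        9000
        + 500 * df["risk_score"]
        + 300 * df["post"]
        + 200 * df["aco"]
        - 120 * df["aco"] * df["post"]
        + rng.normal(0, 800, n)
    )

    fit = smf.ols("spending ~ aco * post + risk_score", data=df).fit()
    # Differential pre-post change in spending for ACO-attributed patients:
    print(fit.params["aco:post"])  # close to -120 by construction
    ```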

    Even ACOs that had dropped out of the Pioneer program had achieved savings. The good news is they didn’t drop out because they were failing to save money. The bad news is that they dropped out even while saving money, not exactly a good policy outcome. If the model is going to sustain itself on self-selection, this is a serious concern. One problem is that as benchmarks fall over the years, ACOs are expected to save more and more. Perhaps that’s not realistic, or the pace of change is too fast. Other approaches that don’t attempt to save so much so soon may attract and maintain more participants, which could end up saving Medicare more overall. (I may post about such approaches at another time. I have a bit of reading to do first.)

    ACOs with higher initial spending achieved greater savings than initially lower-spending ACOs. It is, perhaps, not the wisest thing to do to penalize already relatively efficient ACOs. At the same time, it is, perhaps, not the wisest thing to do to expect relatively inefficient ACOs to become too efficient too quickly. Of course we’d like optimal efficiency tomorrow. But, again, in a voluntary program, too much pressure just forces organizations out. (One could argue whether the program should be voluntary. Perhaps someday it won’t be. We’re not there yet.) To put it another way, participation by inefficient organizations is especially valuable. They have the headroom to achieve gains more rapidly than more efficient organizations. But press too hard and they will leave. It’s a delicate balance.

    There was no difference in savings between ACOs that are financially integrated groups of physicians and hospitals and those that are independent physician groups. Contrary to many claims, consolidation between physicians and hospitals is not necessary to reduce costs and maintain or improve quality. However, such consolidation increases market power with respect to private insurers, raising prices. These findings suggest that consolidation serves no useful purpose except to the consolidating organization itself. We should remain very wary of claims that it does.

    Meta. This is an important study. CMS is contemplating how to tweak the program, so it’s particularly well timed, as is consideration of other ACO payment approaches. Note, too, that this study covers only one year. How Pioneer ACOs, and others, perform in the long term is much more important than short-term results. Good research takes time. So, we will have to wait.

    @afrakt

    Share
    Comments closed
     
  • Cost effectiveness depends on coverage

    Mark Pauly made a very good point in his recent paper in Health Economics. It’s a fairly simple chain of logic, but not one to which I had given much thought before:

    1. Cost sharing matters. It affects the number and types of patients that receive treatment. As cost sharing goes down (i.e., coverage goes up) not only do more people obtain treatment, but different types of people do so.
    2. In particular, “different types of people” means effectiveness of treatment is heterogeneous across the population that receives it. Not everyone benefits to the same extent. As cost sharing goes down, marginal (and average) effectiveness tends to fall as well, under the assumption that people or their doctors can assess the likelihood and extent of benefit, at least somewhat. (This is clearly not always true, but it does hold some of the time and for some treatments, at least. We do know something about who is more and less likely to benefit from a coronary stent or a mammogram, say.)
    3. Though it need not be the case, let’s assume treatment costs are constant. (Positive returns to scale at sufficient rate could change the argument. I didn’t notice Pauly considering this, but it’s probably not likely to hold in general anyway.)
    4. Consequently, cost effectiveness varies with coverage. If you evaluate cost effectiveness in a setting in which patients are fully covered, you’ll get a different result than if you do so in one in which patients are liable for some costs. Something that’s not cost effective with zero cost sharing might be cost effective for some positive level of cost sharing, because cost sharing changes who receives care, from a population for which treatment is, on average, less effective to one for which it is more effective. (A stylized numerical sketch follows this list.)
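
    To make point 4 concrete, here is a stylized numerical sketch. All of the numbers (benefits in QALYs, treatment cost, willingness to pay per QALY) are hypothetical, chosen only to show the mechanism.

    ```python
    # Stylized illustration of Pauly's point: cost effectiveness depends on
    # who takes up treatment, and cost sharing changes who takes it up.
    # All numbers are hypothetical.

    TREATMENT_COST = 10_000   # cost per treated patient (assumed constant)
    WTP_PER_QALY = 100_000    # patients' assumed valuation of a QALY

    # Heterogeneous expected benefit (QALYs gained if treated), which patients
    # or their doctors are assumed to perceive at least roughly.
    benefits = [0.01, 0.03, 0.10, 0.30, 0.60]

    def cost_per_qaly(cost_sharing: float) -> float:
        """Average cost per QALY among patients who choose treatment, assuming
        take-up when the valued benefit exceeds the out-of-pocket cost."""
        out_of_pocket = cost_sharing * TREATMENT_COST
        treated = [b for b in benefits if b * WTP_PER_QALY >= out_of_pocket]
        return TREATMENT_COST * len(treated) / sum(treated)

    print(cost_per_qaly(0.0))  # full coverage: all treated, ~$48,100 per QALY
    print(cost_per_qaly(0.5))  # 50% cost sharing: only higher-benefit patients
                               # are treated, $30,000 per QALY

    # At a (hypothetical) $40,000-per-QALY threshold, the same treatment looks
    # cost ineffective under full coverage but cost effective with cost sharing.
    ```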

    This puts some meat on the bones of statements about generalizability. A cost effectiveness study’s results might not generalize to other populations because they have different levels of coverage. Relatedly, they may have different levels of income or other costs of living (so that cost sharing affects them differently) or receive different levels of benefit from treatment.

    Cost sharing is one of the principal levers of plan design. If one is interested in designing a plan that covers cost effective treatment (in some sense), then one had better pay close attention to interactions with cost sharing as one considers what is and is not cost effective. I doubt the body of evidence on cost effectiveness is rich enough to take this very far at this point. It’s just one more complexity that cost effectiveness researchers should pay attention to, but I think often do not.

    @afrakt

    Share
    Comments closed
     
  • Well informed or sane, but not both

    Via Justin Wolfers:

    [Image: informed insane]

    @afrakt

    Share
    Comments closed
     
  • JAMA Forum: When do externalities matter in health care?

    Most people feel that the negative externality that anti-vaxxers impose on society—endangering those who cannot be vaccinated and threatening loss of herd immunity—warrants some government coercion to vaccinate. Yet, in many other instances in which externalities arise, including those stemming from failure to purchase health insurance, government coercion is not as widely accepted. Why the difference? I discuss over on the JAMA Forum.

    @afrakt

    Share
    Comments closed