• Distributive injustice in US health care

    From “Distributive Injustice(s) in American Health Care,” by Clark Havighurst and Barak Richman:

    [I]t should certainly be a cause for concern if consumption patterns vary greatly and positively with income—rather than with health needs alone—in situations where everyone pays the same premium for the same health coverage. This appears[*] to be the case in many U.S. health plans, since higher-income employees seem to make greater use of their coverage, demanding and receiving more and costlier services at plan expense than their lower-income coworkers. […] [T]he tax subsidy is the ultimate source of the problem. By causing health coverage to be purchased in heterogeneous employment groups (including individuals with disparate, income-correlated preferences and consumption patterns), it creates conditions in which lower-income premium payers may be paying—unwittingly—costs incurred by their more demanding, affluent, and influential coworkers. […]

    Our concern, however, is not that health care is rationed or distributed unequally but the likelihood that conditioning eligibility for insurer payments on patients’ willingness to make certain out-of-pocket payments causes lower income participants in employee health plans to get disproportionately fewer benefits than their more affluent coworkers receive in return for equivalent premiums. […] Likewise, as employers pursue the increasingly popular strategy of funding health savings accounts and enrolling their workers in high-deductible health plans, it is possible that greater emphasis on cost sharing to contain moral hazard will cause insurers’ premium pools to be allocated even more disproportionately to the care of the affluent. […]

    [A]n economist might suggest that employers unconsciously adjust the amount of wages they are willing to pay to different classes of worker to reflect the class’s propensity to utilize employer-financed health benefits—in which case it might be incorrect to hypothesize that lower-income workers actually bear costs incurred by higher-income, higher-utilizing participants in the same plan. […] [But, t]he notion that there is no regressivity depends on heroic assumptions about employee and employer perceptions, rationality, and the smoothness of the market’s operation. Thus, workers’ decisions about which jobs to take turn on many factors besides the implicit value of particular health coverage. Furthermore, employers probably think only rarely in terms of total compensation packages, perhaps even administering employee benefits and cash compensation in separate cost centers.

    * The authors acknowledge that they are unaware of much direct evidence that this actually happens.

    h/t Nicholas Bagley
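
    To see the mechanics of the cross-subsidy the authors describe, here is a minimal sketch in Python (all figures hypothetical, not from the article): with a community-rated premium and income-correlated utilization, lower-income enrollees pay in more than they draw out.

        # Hypothetical illustration of the cross-subsidy Havighurst and
        # Richman describe: everyone pays the same premium, but
        # higher-income enrollees draw more care from the common pool.

        # (group, number of enrollees, average plan-paid claims per enrollee)
        groups = [
            ("lower income", 60, 3_000),   # hypothetical figures
            ("higher income", 40, 6_000),
        ]

        total_claims = sum(n * claims for _, n, claims in groups)
        total_enrollees = sum(n for _, n, _ in groups)
        premium = total_claims / total_enrollees  # community-rated premium

        for name, n, claims in groups:
            net = claims - premium  # benefits received minus premium paid
            print(f"{name}: premium ${premium:,.0f}, claims ${claims:,.0f}, "
                  f"net ${net:+,.0f} per enrollee")
        # -> lower income: net $-1,200 each; higher income: net $+1,800 each

    The transfers balance exactly: the 60 lower-income enrollees’ combined shortfall ($72,000) funds the 40 higher-income enrollees’ combined surplus.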

    @afrakt

  • Free will

    Via several on Twitter:

    [Image: free will]

    @afrakt

  • AcademyHealth: High health spending is more persistent than you might think

    How persistent is high health spending? This question is more challenging to answer than you might think, especially for the working-age population. In my AcademyHealth post, I discuss some new evidence that might surprise you. (It’s kind of a big deal.)
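
    If you want a concrete handle on “persistence” before clicking through, here is one standard way to measure it (a sketch of a common approach, not necessarily the method used in the evidence I discuss): the share of people in the top spending decile in one year who are still there the next.

        import numpy as np

        def top_decile_persistence(spend_y1, spend_y2):
            """Share of year-1 top-decile spenders still in the top decile
            in year 2 (one common persistence measure; not necessarily the
            one used in the evidence discussed in the post)."""
            spend_y1 = np.asarray(spend_y1)
            spend_y2 = np.asarray(spend_y2)
            top1 = spend_y1 >= np.percentile(spend_y1, 90)
            top2 = spend_y2 >= np.percentile(spend_y2, 90)
            return (top1 & top2).sum() / top1.sum()

        # Hypothetical usage with simulated, positively correlated spending:
        rng = np.random.default_rng(0)
        year1 = rng.lognormal(mean=8, sigma=1.5, size=10_000)
        year2 = year1 * rng.lognormal(mean=0, sigma=0.75, size=10_000)
        print(f"top-decile persistence: {top_decile_persistence(year1, year2):.2f}")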

    @afrakt

     

  • JAMA Forum: What we can learn from hospital competition in the UK

    The UK’s National Health Service (NHS) is a nationalized health care system, meaning that hospitals are publicly owned and most doctors are employed by the government. From that familiar fact, many conclude that the US—with its competitive, private, market-based delivery system—has nothing to learn from it. That’s wrong. Go read my JAMA Forum post to find out why.

    @afrakt

     

  • The American Hospital Association’s president on cost shifting

    According to the Medicare Payment Advisory Commission, the Medicare margin was negative 5.4 percent in 2013. Hospitals must make up for shortfalls through a combination of approaches, and cost shifting to the private sector is among them.

    —Rich Umbdenstock, President and Chief Executive, American Hospital Association, in a letter printed in The New York Times on June 24, 2015

    All TIE’s cost shifting posts are here. I wrote about it in The New York Times here.
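
    For context, MedPAC computes the Medicare margin as (Medicare payments − costs of treating Medicare patients) / Medicare payments, so a negative 5.4 percent margin means costs exceed payments by 5.4 percent of payments. A toy calculation, with hypothetical hospital figures chosen only to match the cited margin:

        # MedPAC-style margin: (payments - costs) / payments.
        # Figures are hypothetical, scaled to match the -5.4% cited above.
        medicare_payments = 100.0  # say, $100M in Medicare revenue
        medicare_costs = 105.4     # $105.4M spent treating Medicare patients

        margin = (medicare_payments - medicare_costs) / medicare_payments
        print(f"Medicare margin: {margin:.1%}")  # -5.4%

        # The shortfall the AHA says hospitals make up elsewhere,
        # including (it claims) via higher private prices:
        shortfall = medicare_costs - medicare_payments
        print(f"shortfall: ${shortfall:.1f}M")   # $5.4M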

    @afrakt

  • Very long waiting times for Dr. Robot

    Using methods you can read about for yourself, in 2013 University of Oxford scholars predicted that health care is among the sectors least likely to be automated “by means of computer-controlled equipment.” Here’s their chart, with “Healthcare Practitioners and Technical” in dark green:

    [Chart: probability of computerization by occupation category]

    Do not interpret this to mean that no functions of medicine will be automated (some already are). What it means is that health care sector jobs are not at high risk of elimination by computerization. The nature of those jobs may change due to automation, but their numbers are very unlikely to.

    @afrakt

  • Where the health care money is, in charts

    Here are two fascinating charts from the recent NBER working paper by Mariacristina De Nardi and colleagues. Both show average, per-person total health care spending by type of service, the first by age and the second by month prior to death. The second is estimated from a model, which is why it’s so smooth.

    [Chart: spending by age, by type of service]

    [Chart: spending by month prior to death, by type of service]

    Three points:

    1. Nursing home care is driving the entirety of growth in health spending after age 85 or so. Even before age 85, it’s the main driver. (Yes, the article says “nursing home” specifically, even when referencing the figure that just says “nursing”.)
    2. Hospital care is the biggest source of cost in the last year of life, followed by nursing home care.
    3. Per person spending is not informative about total spending. One would have to multiply by the number of people (and by age, for age-specific averages, per the first chart above); a sketch follows below. I point this out just so those on Twitter who think I’m not aware of this fact are assured that I am.
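
    Here is the sketch for point 3 (all figures hypothetical, not from the paper): going from age-specific per-person averages to totals just means weighting by population.

        # Point 3 in practice: total spending = sum over ages of
        # (per-person average at that age) x (people at that age).
        # All figures are hypothetical, not from De Nardi et al.

        per_person = {70: 12_000, 80: 18_000, 90: 30_000}         # $/person/year
        population = {70: 3_000_000, 80: 1_500_000, 90: 300_000}  # people per age

        total = sum(per_person[age] * population[age] for age in per_person)
        print(f"total: ${total / 1e9:.1f}B")  # $72.0B

        # Note: age 90's high per-person average ($30,000) contributes less
        # to the total ($9B) than age 70's lower average does ($36B),
        # because far fewer people reach age 90.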

    @afrakt

  • Confirmation bias

    Via Justin Dimick and Mamta Swa:

    [Image: confirmation bias]

    @afrakt

  • Our safer, yet imperfect, automated systems

    An apt analog to William Langewiesche’s story of the 2009 crash of Air France Flight 447 is Bob Wachter’s account of the non-fatal overdose of a pediatric patient at the UCSF Medical Center. Neither story is new, and I make no claims that my feeble insights below are novel.*

    In fact, Wachter mentions Flight 447’s fatal crash as he explains how automation can create new vectors for disaster, even as it closes off old ones. Both he and Langewiesche provide evidence that automation—autopilots in aviation, and clinical decision support and order-fulfillment features in electronic medical systems—improves safety on average while courting danger in a subset of cases. None of this is a knock on their work or the compelling anecdotes they use to drive their narratives; it’s a plea for a bit of perspective as you read either story, and I highly recommend both.

    At the heart of both is a cascade of errors that begins with a human (or humans’) misunderstanding of the mode in which an automated system is operating. Wachter offers a very nice example of such a “mode error,” which I’m certain you can relate to: ACCIDENTALLY TYPING WITH CAPS LOCK ON. The caps lock key toggles the keyboard mode such that all (or most) keys behave differently.

    When typing, an inadvertent caps lock toggle can cause annoying mode errors, like failing to properly enter a password. When flying an aircraft or ordering medications for a patient, mode errors can be deadly, even if they’re usually annoyances that get remedied before disaster strikes.

    The pilots aboard Flight 447 didn’t recognize that their plane had switched modes, scaling back its autopilot functions and ceding more control to them. They misinterpreted this sudden grant of autonomy as a confusing set of malfunctions. Likewise, the physician who initiated the sequence of errors that landed Pablo Garcia in the ICU, and might have killed him, didn’t recognize a mode change: the electronic medication entry system had switched from interpreting entries as milligrams to interpreting them as milligrams per kilogram of patient weight, thus multiplying a 160 mg dose by a factor of about 39.
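
    To make the mode error concrete, here is a minimal sketch (hypothetical code, not the actual UCSF order-entry system) of how a units-mode flag can silently turn a fixed dose into a weight-based one:

        # Hypothetical sketch of the mg vs. mg/kg mode error Wachter
        # describes -- not the actual UCSF system. The same keystrokes
        # mean very different doses depending on the active mode.

        def dispensed_dose_mg(entered_value, patient_weight_kg, mode):
            """Return the dose actually dispensed, in milligrams."""
            if mode == "mg":
                return entered_value
            if mode == "mg_per_kg":
                # Weight-based mode: the entry is multiplied by weight.
                return entered_value * patient_weight_kg
            raise ValueError(f"unknown mode: {mode}")

        weight_kg = 38.6  # hypothetical weight, consistent with the ~39x factor
        print(dispensed_dose_mg(160, weight_kg, "mg"))         # 160, as intended
        print(dispensed_dose_mg(160, weight_kg, "mg_per_kg"))  # 6176.0, an overdose

    The point isn’t this particular function; it’s that nothing at the moment of entry forces the user to notice which mode is active.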

    Failure of humans to recognize mode changes, and failure of systems to make those changes obvious without exacerbating “alarm fatigue,” are among the many ways automation can harm. The harm feeds on humans’ often well-earned trust in automation. When we ignore the warnings an automated system gives us, it’s in part because that system has served us very well in the past, sparing us far more errors than it creates. Despite their intent, the vast majority of alarms (car alarms, fire alarms, the flashing “check engine” light, and the like) are not signals of immediate danger, so our learned response is to treat them accordingly and to ignore them when possible. Infrequently, that will be a mistake. It won’t always lead to disaster (because we have other means of obtaining the right information and correcting our first, false assumption), but it could.

    Such assumptions are not unique to automated systems. I’m well aware that every wail from my children is not a signal of deadly distress. Their sounds of alarm don’t always mean what they think they mean. Likewise, the political candidate who warns of the end of America if his opponent is elected is no longer alarming.

    Our trust in (or conferring of) authority is not unique to the machine-human relationship either. Though I do trust many machines, I trust a great number of humans too. They’ve earned it. And yet they err, and their errors cause me harm, just as mine cause harm to others. Naturally, we should be aware of the harms of machines, of humans, and of the marriage of the two. We should strive to reduce the potential for grave error, provided we can do so in ways that don’t invite greater costs (by which I do not merely mean money).

    A careful read of the accounts of Flight 447 and patient Pablo Garcia reveals the overwhelming benefits of automation in aviation and medicine, as well as the dangers that still remain. There is much more work to do, as both authors expertly document. Humans are highly imperfect. So are our systems designed to protect us from ourselves.

    * Also, let me assure you that I understand the differences between aviation and medicine, as I mentioned previously. All recent posts on automation are so tagged.

    @afrakt

  • Increasing scientific credibility

    Some ideas for reducing publication bias and increasing the credibility of published scientific findings, from Brendan Nyhan:

    • Pre-accepted articles, based on a pre-registered protocol and accepted before findings are known. Brendan reports that this is already happening at AIMS Neuroscience, Cortex, Perspectives on Psychological Science, Social Psychology, and for a planned special issue of Comparative Political Studies.
    • Results-blind peer review. A similar idea to pre-accepted articles, this would evaluate submissions on all aspects of a paper (data, methods, import) apart from the actual findings. Brendan notes that this has been attempted at Archives of Internal Medicine.
    • Verifying replication data and code, basically providing everything necessary to replicate the study. Already standard at the American Journal of Political Science and American Economic Review.
    • Reward higher-quality and faster reviews with credits, redeemable for faster review of one’s own manuscripts. I’m not aware of any journal that has attempted a program aimed at reducing review times, let alone succeeded in doing so.
    • Forward reviews of promising manuscripts to section journals. That is, if the flagship journal can’t accept, but recommends publication in an affiliated journal, streamline the process by treating the flagship’s review as the first round. Something like this already happens with JAMA journals and the American Economic Review and its affiliated journals.
    • Triple-blind reviewing would blind the editor to the authors’ identities, not just the authors and reviewers to one another’s. Already standard at Mind, Ethics, and the American Business Law Journal.

    As Brendan writes, all of these have limitations and none can remove all potential bias or gaming. Yet it’s hard to argue they’re not worth considering.

    @afrakt
