• Healthcare Triage News: Sugar Ban, Driving Regulations Work, and the Apple Watch

    Hospitals banning sugar-sweetened beverages, provinces cracking down on dangerous drivers. And get me an Apple Watch!

    We’re closing in on 100,000 subscribers. Please spread the word!


    Comments closed
  • Some good writing on health and health care by Lisa Rosenbaum

    Do you want chemo and three months of life, or six weeks of life without the nausea and vomiting that the chemo causes? Do you want high-risk open-heart surgery, with a fifteen-per-cent risk of dying during the operation, or would you rather continue as you are, with a fifty-per-cent chance you will be dead in two years? Do you want a prostatectomy, which has a five-per-cent chance of impotence and incontinence, or radiation, with a three-per-cent chance of leaving a hole in your rectum, or would you rather “watch and wait,” with the chance that your cancer will never grow at all?

    That’s from Lisa Rosenbaum’s July 2013 piece in The New Yorker on shared decision making. Her most recent piece, which I also enjoyed, is this one on the relationship between extreme exercise and heart damage. It hits close to home because my wife will run her second 50k next month. Training alone includes several marathons over a few-week span. This, to me, is unfathomable.

    Here’s another terrific piece by Lisa that taught me a great deal about stenting and helpful vs. unnecessary care. (This is saying a lot since I know quite a bit about this stuff already.)

    It was in these gaps between data and life where I lost Sun Kim. There is no guideline that says, “This is how you manage an elderly man who asks nothing of anyone, who may or may not be taking his medications, and who has difficulty coming to see you because he vomits every time he gets on the bus.” In a world with infinite resources, we could conduct clinical trials to address every permutation of coronary disease and every circumstance. But that’s not the world we live in. And in our world, I reached a point where I could not keep Sun Kim out of the hospital.

    The rest of Lisa’s pieces are here. I was not aware of her and her work until relatively recently or I’d probably have referenced it many times by now.


  • The Obama plan for combatting antibiotic resistance is out

    After several months of intense study, President Obama released a package of actions today designed to combat antibiotic resistance.

    The most surprising action item is the creation of a one-time $20 million prize for a new point-of-care diagnostic for highly resistant infections. That is a big deal, on top of the £10 million UK Longitude Prize on the same topic. Hopefully, HHS (NIH & BARDA) will coordinate with the UK on this prize. This is very encouraging news. In the 2014 ERG Report, we found a MRSA rapid point-of-care diagnostic to have a value to society exceeding $22 billion. These prizes are bargains – if they work, we get an exceedingly valuable diagnostic; if they don’t, no federal money is spent.

    President Obama issued an Executive Order directing federal agencies to implement the President’s Council of Advisors on Science and Technology (PCAST) Report. We will also have a National Strategy with Cabinet-level leadership, led by HHS with Defense and Agriculture.

    Additional limits are proposed on antibiotic use in agriculture, above and beyond the recent FDA actions, especially for classes useful for humans. This is a “One Health” strategy, using WHO language: a combination of human and animal health, including food safety and the environment. For antibiotics, we are just now understanding the spread of antibiotic resistance genes in the environment, and the interaction between animal use and human health is a serious concern. 80% of US antibiotics by weight are used in agriculture.

    I was also encouraged by the emphasis on international coordination.

    Actual texts will be released in an hour. I’ll update with links.

    UPDATE:  Executive Order here. The PCAST Report is here. The National Strategy is here.

    Key proposals from PCAST today, my comments in bold italics:

    • Double federal spending on antibiotic resistance research, surveillance and prevention, an additional $450 million per year. This is a huge increase, exactly what is needed. Will need Congress to appropriate the funds.
      • including $90 million in additional CDC grants to strengthen state and local public health surveillance and response to bacterial resistance
      • National surveillance based on genomic sequencing ($190 million per year)  A good time to be a post-doc in whole genomic sequencing of bacteria
      • $150 million over 7 years to basic research to support non-traditional approaches to overcoming antibiotic resistance
      • $25 million per year to develop alternatives to antibiotics in agriculture.  Give the farmers options – another good idea.
      • $25 million to create a national clinical trials infrastructure for antibiotics.  Will reduce costs for everyone.
    • Replenish BARDA funding for public-private partnerships in antibiotic R&D, with approximately $800 million per year, roughly equal to one new antibiotic per year. This is huge – a stunning announcement and precisely what many have been privately calling for.  BARDA has supported many key antibiotics in the pipeline. This announcement is a prominent vote of confidence in BARDA’s model.
    • Make antibiotic stewardship a condition of participation in Medicare by 2017 and a condition for receiving federal grants.  Hospitals were expecting this.
    •  $25 million prizes for “rapid, inexpensive, and clinically relevant diagnostics that can substantially improve therapy in important clinical settings.”  Joins the UK Longitude Prize and promises to work with prizes from other nations and private foundations. This is a larger prize than reported separately by the White House and contemplates multiple prizes, not just one.
    • “PCAST strongly supports FDA’s new Guidances 209 and 213, designed to promote the judicious use of antibiotics in agriculture.” No solid action beyond existing FDA Guidance.
    • “Vigorously support” the WHO Global Action Plan Good news, as the WHO Plan will need resources to be effective globally.


  • Hearings on new business models for antibiotics

    I testify on Friday, Sept 19, at the House Energy & Commerce Committee hearing, part of the ongoing 21st Century Cures series. I call for dramatic changes in how we create, use, and pay for antibiotics.

    As we’ve seen before here, the antibiotic business model is broken. In a recent study undertaken by the Eastern Research Group for HHS/FDA, none of the six antibacterial targets yielded an expected net present value even close to the $100 million benchmark (previous TIE coverage here, with charts). In all six, the 90% confidence interval included negative NPVs. Few businesses will commit millions to a long-term R&D program with so little upside potential. This stands in stark contrast to the remarkable social value of antibiotics, even when you limit that calculation to quite direct effects (you don’t die). More expansive definitions would include the things that antibiotics make possible, like surgery and chemotherapy (Ramanan Laxminarayan is working on those numbers).
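    The arithmetic behind a negative expected NPV can be made concrete with a toy calculation: development costs land early and are certain, while revenues land late and are discounted and probability-weighted. Every number below is invented for illustration; none of these are the ERG study's inputs.

```python
# Toy expected-net-present-value (NPV) calculation for an antibiotic R&D
# program. All inputs (costs, revenues, success probability, discount rate)
# are invented for illustration; they are NOT the ERG study's figures.

def npv(cash_flows, rate):
    """Discount a list of annual cash flows (year 0 first) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

p_success = 0.10                      # assumed probability of approval
rd_costs = [-50.0] * 10               # years 0-9: development outlays ($M)
revenues = [p_success * 120.0] * 10   # years 10-19: expected sales ($M)

expected_npv = npv(rd_costs + revenues, rate=0.11)
print(round(expected_npv, 1))  # well below zero: costs are near-term, revenues distant
```

    Even with these generous made-up sales figures, discounting plus a low success probability leaves the expected value deep in the red, which is the qualitative story the ERG analysis tells.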

    If the business model is broken, how do we fix it? Download testimony here; download ppt here.


  • Higher quality antibiotics

    Sometimes a single chart can jumpstart a movement.  This chart certainly qualifies:


    Looking at this, you might conclude that the 1980s and early 1990s were the “glory years” for new antibiotic introductions.

    But that would only be partially correct. Twenty of the new antibiotics on this chart were not commercially or clinically successful and were ultimately withdrawn or discontinued from the market. An additional six antibiotic drugs were formally withdrawn for safety-related reasons, while for others, safety questions played a role in limiting clinical and commercial success.  Since 1980, antibiotics have suffered market withdrawals at triple the rate of all other FDA-approved drugs.

    High-quality antibiotics

    Approval of these drugs didn’t help patients much, nor were the companies rewarded because sales were low. In short, we should not celebrate antibiotic introductions from the 1980s and early 1990s in the way the chart above implies. When discontinued and withdrawn drugs are backed out, the chart looks quite different:


    Antibiotics look pretty steady by decade. In other data (not shown), antimicrobial innovation shifted massively toward anti-retroviral drugs to treat HIV and, to a lesser extent, toward antifungals.

    Governments and think tanks are mooting many proposals to boost antibiotic innovation. We must focus on the quality of the new drug, not just the sheer quantity.

    h/t to the good folks at CDDEP for help with the charts and for cross-posting.


  • Shrooms to help you quit smoking?

    From the Journal of Psychopharmacology, “Pilot study of the 5-HT2AR agonist psilocybin in the treatment of tobacco addiction“:

    Despite suggestive early findings on the therapeutic use of hallucinogens in the treatment of substance use disorders, rigorous follow-up has not been conducted. To determine the safety and feasibility of psilocybin as an adjunct to tobacco smoking cessation treatment we conducted an open-label pilot study administering moderate (20 mg/70 kg) and high (30 mg/70 kg) doses of psilocybin within a structured 15-week smoking cessation treatment protocol. Participants were 15 psychiatrically healthy nicotine-dependent smokers (10 males; mean age of 51 years), with a mean of six previous lifetime quit attempts, and smoking a mean of 19 cigarettes per day for a mean of 31 years at intake.

    The gist of this study was that they gathered 15 smokers, otherwise healthy (mentally as well as physically), who had all tried and failed to quit smoking in the past. They were all given a moderate dose of psilocybin on their intended quit date. Later, they were given higher doses of psilocybin. Here:

    After informing subjects about what their experience with the drug might be like, the first dose of psilocybin was administered by pill the day each participant planned to quit smoking. Two subsequent sessions, with higher doses of the mind-altering drug, were held two weeks and eight weeks later.

    During each psilocybin session, which lasted six to seven hours, participants were closely monitored by two members of the research team in a comfortable, homelike setting. Most of the time, participants wore eyeshades and earphones that played music, and they were encouraged to relax and focus on their inner experiences.

    I was as skeptical as many of you likely are right about now. I mean, how was this even legal? I couldn’t help but snicker when one of the authors said, “When administered after careful preparation and in a therapeutic context, psilocybin can lead to deep reflection about one’s life and spark motivation to change.” But the results were somewhat amazing. Twelve of the fifteen participants, or 80%, reported abstinence at 6 months. That’s an insanely large quit rate.

    It’s a small study. It was open label, and it had no controls. It involves using an abused drug to treat dependence on another. But it’s really hard to quit smoking. 80% at 6 months? Someone better do some follow-up work.
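    To see why follow-up work matters, it helps to put a confidence interval around 12 of 15. This is a standard Wilson-score sketch, not an analysis from the paper:

```python
import math

# Wilson 95% confidence interval for the 12-of-15 quit rate -- a standard
# back-of-envelope check (not from the paper) showing how uncertain an 80%
# point estimate from only 15 participants really is.
def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(12, 15)
print(f"{lo:.2f} to {hi:.2f}")  # roughly 0.55 to 0.93
```

    Even the bottom of that interval beats typical cessation-aid quit rates, but its width is exactly why a larger, controlled trial is the right next step.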


  • Methods: Good points from Cook, Shadish, and Wong

    The paper by Thomas Cook, William Shadish, and Vivian Wong, “Three Conditions under Which Experiments and Observational Studies Produce Comparable Causal Estimates: New Findings from Within-Study Comparisons,” makes some good points. Below I quote from their paper, referencing some of my prior posts that express similar sentiments.

    At least in some disciplines, randomized designs have a “privileged role,” supported by education and the research establishment.

    The randomized experiment reigns supreme, institutionally supported through its privileged role in graduate training, research funding, and academic publishing. However, the debate is not closed in all areas of economics, sociology, and political science or in interdisciplinary fields that look to them for methodological advice, such as public policy. [...] Alternatives to the experiment will always be needed, and a key issue is to identify which kinds of observational studies are most likely to generate unbiased results. We use the within-study comparison literature for that purpose.

    We should not expect results from observational studies with strong designs for causal inference to match those from experimental approaches in all cases.

    But the procedure used in these early studies contrasts the causal estimate from a locally conducted experiment with the causal estimate from an observational study whose comparison data come from national datasets. Thus, the two counterfactual groups differ in more than whether they were formed at random or not; they also differ in where respondents lived, when and how they were tested, and even in the actual outcome measures. [...] The aspiration is to create an experiment and an observational study that are identical in everything except for how the control and comparison groups were formed. [...] We should not confound how comparison groups are formed with differences in estimators.

    We can learn something useful from good observational designs.

    Past within-study comparisons from job training have been widely interpreted as indicating that observational studies fail to reproduce the results of experiments. Of the 12 recent within-study comparisons reviewed here from 10 different research projects, only two dealt with job training. Yet eight of the comparisons produced observational study results that are reasonably close to those of their yoked experiment, and two obtained a close correspondence in some analyses but not others. Only two studies claimed different findings in the experiment and observational study, each involving a particularly weak observational study. Taken as a whole, then, the strong but still imperfect correspondence in causal findings reported here contradicts the monolithic pessimism emerging from past reviews of the within-study comparison literature.

    RCTs are simple to explain, but that’s just one criterion and not the most important one.

    [Observational methods] do not undermine the superiority of random assignment studies where they are feasible. Th[ose] are better than any alternative considered here if the only criterion for judging studies is the clarity of causal inference. But if other criteria are invoked, the situation becomes murkier. The current paper reduces the extent to which random assignment experiments are superior to certain classes of quasi-experiments, though not necessarily to all types of quasi-experiments or nonexperiments. Thus, if a feasible quasi-experiment were superior in, say, the persons, settings, or times targeted, then this might argue for conducting a quasi-experiment over an experiment, deliberately trading off a small degree of freedom from bias against some estimated improvement in generalization.

    But we should be concerned about accepting bad designs because they either (1) are simple or (2) have shown themselves to match RCTs in a different setting. We need to evaluate each design in the context of the particular questions being asked on each study.

    For policymakers in research-sponsoring institutions that currently prefer random assignment, this is a concession that might open up the floodgates to low-quality causal research if the carefully circumscribed types of quasi-experiments investigated here were overgeneralized to include all quasi-experiments or nonexperiments. Researchers might then believe that “quasi-experiments are as good as experiments” and propose causal studies that are unnecessarily weak. But that is not what the current paper has demonstrated. Such a consequence is neither theoretically nor empirically true but could be a consequence of overgeneralizing this paper.

    Even those of us who argue these points probably agree on this:

    We suspect that few methodologically sophisticated scholars will quibble with the claim that [...] the notion that understanding, validating, and measuring the selection process will substantially reduce the bias associated with populations that are demonstrably nonequivalent at pretest.

    Clearly I have not told you much about their study or findings. You’ll have to read the paper for that.


  • JAMA Forum: We don’t save every baby we could. Maybe that’s OK.

    Not all premature infants are saved, and not all of those who are treated in [neonatal intensive care units] and survive receive the subsequent care we might wish for them. This reality of the implicit trade-offs is hard to confront, but that doesn’t make it any less real.

    Read the rest at the JAMA Forum.


  • Why are there increasing numbers of disabled children?

    Many writers have worried about the increasing number of disabled American adults. Some argue that the Social Security disability payment system encourages able-bodied adults to drop out of the work force. But the proportion of American children who are disabled is also increasing.


    Data from surveys of approximately 200,000 families. The error bars represent two standard errors around the estimated disability rates. All graphs are mine, plotted from data in Houtrow et al.’s tables.

    These data are from Amy J. Houtrow, Kandyce Larson, Lynn M. Olson, Paul W. Newacheck, and Neal Halfon, based on surveys of parents of children aged 0 to 17 years. The increase in disability appears to accelerate after 2008, when the recession hit. ‘Disabled’ means that the child had a chronic condition that limited an activity such as bathing or walking, or that the child needed special education or early intervention services. Using a somewhat different definition, the CDC also finds increasing rates of childhood disability.

    What kinds of disorders are causing these disabilities? The researchers classified disabilities as either physical or neurological/mental health. Neurological/mental health disabilities are much more common. Moreover, it’s the neurological/mental health disabilities that are growing, whereas physical disabilities have actually declined.


    Disabilities are also more common among children in poor families. In the next graph, the red line represents families at or below the federal poverty line, while the blue line represents families making four or more times the poverty level. Houtrow and her co-authors note that the rate of disability has increased more quickly among more affluent families than among the poor, but the wide error bars around the estimates for poor families make me skeptical about this claim.


    “FPL” means “Federal Poverty Line”, that is, the level of income that determines whether the Federal government counts a family as poor.
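    The wide error bars for poor families have a simple statistical basis: the standard error of an estimated rate shrinks with sample size, so a subgroup estimate is noisier than the full-sample estimate. A minimal sketch, with hypothetical rates and a hypothetical subgroup size:

```python
import math

# Illustration of why subgroup error bars are wider: the standard error of an
# estimated rate is sqrt(p * (1 - p) / n), which grows as the sample shrinks.
# The rates and the subgroup size below are hypothetical, not Houtrow et al.'s.

def se_proportion(p, n):
    return math.sqrt(p * (1 - p) / n)

full_sample = se_proportion(0.08, 200_000)   # whole ~200,000-family survey
poor_subgroup = se_proportion(0.10, 30_000)  # hypothetical poor-family subsample

print(full_sample < poor_subgroup)  # True: smaller n means wider error bars
```

    With error bars drawn at two standard errors, the smaller subgroup's bars can easily be wide enough to make a claimed difference in trends hard to pin down.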

    So why is childhood disability increasing? We can’t tell from these data, so what follows are just my thoughts.

    Children aren’t faking disability to avoid joining the workforce. I don’t think parents gain anything by exaggerating the limitations of their children. There aren’t, to my knowledge, groups of lawyers making their livings filing disability suits on behalf of children.

    Disability is one of those concepts, like “competence” or “sanity,” that has clear paradigm cases but fuzzy boundaries. So perhaps the concept of disability is expanding over time. That is, maybe the definition is changing, not the children. I think this is likely and even sensible. The standard for what counts as “able” may be rising because the social expectations that define minimum cognitive or behavioral functioning for children are rising. Physical labor is disappearing and employment increasingly requires sophisticated skills deployed in office settings. A neurological or mental health impairment makes you less able to function in an office and also in the schools designed to prepare you for one.

    But it’s also possible that children are changing. Things have gotten harder for most families and children: median family income has fallen since 2000. Hard times have stressed families and we may be seeing the costs of that stress in the mental health of our children.


  • Limiting choice to control health spending: A caution

    The following originally appeared on The Upshot (copyright 2014, The New York Times Company).

    To what extent will the recent moderation in the growth of health care prices and spending continue? This is a big question, and the answer relies on many factors. But for plans offered in the new health insurance exchanges as well as a substantial minority of employer-sponsored plans, it may depend, in part, on how long consumers are willing to trade lower premiums for less choice. History offers a cautionary tale.

    Insurers selling plans in the exchanges are offering fewer choices of doctors and hospitals. According to a 2013 survey by Mercer of employers who sponsor work-based health plans, over one-quarter of employers with more than 20,000 employees and 15 percent of those with over 500 employees offer plans with limited networks of providers selected for quality, as well as cost, considerations.

    Narrow networks, as they are known, save plans and employers money because they tend to exclude doctors and hospitals that demand higher prices. Some of the savings is passed on to consumers through lower premiums.

    A recent study by McKinsey & Company found that plans that covered care at more than 70 percent of hospitals in their area charged 13 to 17 percent higher premiums than plans with narrower networks. It’s a trade-off: lower premiums for less choice. However, the restrictions in choice may not be detrimental to patients, as suggested by a recent study of narrow network plans in Massachusetts, which found that such plans were associated with a 36 percent reduction in health care spending for the consumers who joined them and for their employers.

    We’ve seen this before. Seeking to end the rapid rise in health care costs, in the 1990s employers embraced managed care plans — plans, like health maintenance organizations, that restricted consumers’ choices with narrow networks, as well as requirements for preapproval for some forms of treatment. Though such plans were promoted nationally by the Health Maintenance Organization Act, signed by President Nixon in 1973, they did not achieve prominence until the 1990s. By 1993, 51 percent of private plan enrollees were covered by managed care; a mere two years later, that figure rose to 70 percent.

    About this rush toward managed care, Robert Winters, head of the Business Roundtable’s Health Care Task Force from 1988 to 1994, explained: “What happened in the late 1980s and in the early 1990s was that health care costs became such a significant part of corporate budgets that they attracted the very significant scrutiny of C.E.O.’s,” and more and more C.E.O.’s were “saying, ‘Goddammit, this has to stop!’”

    What stopped it, at least temporarily, was greater restrictions on choice of doctors, hospitals and treatments and a greater willingness of employers and consumers to accept them. Health care spending growth moderated. After many years of rapid growth, premiums held steady in the mid-1990s. The success didn’t last.

    To keep the lid on premium growth, and in an attempt to maintain profitability, over the years plans further tightened networks, imposed more frequent and stringent preapproval rules, and offered less coverage for more cost sharing.

    These cost-saving measures became increasingly unpopular. The backlash was swift and severe. Consumers filed class-action lawsuits against insurers, alleging that H.M.O.’s misrepresented the level of coverage and service they delivered. Stories of patients denied coverage for specific treatments circulated, whether factual — a denial of a wheelchair to a paraplegic patient — or fictional — Helen Hunt’s famous dissatisfaction with her H.M.O. in the 1997 movie “As Good As It Gets.”

    Physicians bristled at plans’ attempts to circumscribe doctors’ autonomy in medical decision making, contributing to the negative reputation of H.M.O.’s. States enacted consumer protection laws, and Congress passed patients’ bill of rights legislation.

    In one sense, the backlash worked. Plans backed away from the practices most distasteful to consumers. America entered a new age of health care plans, with less restrictive networks and less onerous preapproval rules.

    In another sense, the backlash is a story of failure. The cost control that managed care brought was reversed. By the turn of the millennium, health care spending and premium growth had returned to their historical highs. Americans had rejected the trade of lower premium growth for less patient and doctor autonomy and choice.

    In an insightful analysis of the rise and fall of managed care, David Mechanic of Rutgers University wrote that the episode reflected fundamental American values: “Basic to the backlash against managed care is the underlying American cultural preference for independence, autonomy, choice, and activism, and the view shared by many Americans that there should be no barriers to their access and choices in seeking and receiving medical care.”

    Today’s new narrow network plans also restrict choices, so will they suffer the same fate as 1990s managed care?

    Already there are signs of disgruntlement and increased scrutiny of narrow networks. Experts have questioned the ability of consumers to understand the extent of plans’ networks at the time of enrollment, and consumer advocates have called for greater transparency.

    Consumers complained when the high-priced Cedars-Sinai Medical Center in Los Angeles was excluded from the networks of all but one exchange plan. Narrow networks were an issue in a campaign for a vacant House seat in Florida. Regulators in some states are restricting insurers’ ability to exclude some hospitals from their networks or considering banning narrow networks altogether. A new regulation in Washington State requires that plans cover enough doctors so that any enrollee can find a primary care appointment within 10 days and 30 miles. A national organization that rates the quality of health plans is considering adding a measure of network adequacy. Medical associations and consumers have filed lawsuits against insurers, claiming harm from narrow networks. The Obama administration has issued regulations to increase the choices of providers plans must offer, including more that serve low-income patients, as policy experts have called for minimum standards and consumer safeguards.

    Despite these early warning signs, it’s too soon to tell if narrow networks are doomed, along with the cost control they offer. There are some reasons consumers may be more tolerant of them than 1990s managed care plans. Today’s narrow network plans are less restrictive in some ways; for example, they don’t require preapprovals as often. At least in the exchanges, consumers have a choice of network size; in the 1990s many were forced into H.M.O.’s by their employers.

    Also, today’s plans and health care organizations may be more focused on quality than their predecessors. Limits on choice don’t force patients to go to poorer-quality doctors and hospitals, nor do they restrict access to the types of doctors consumers need most. Plans might be designed to provide adequate access to primary care doctors, for instance, as suggested by a recent study of narrow network plans in Massachusetts.

    Nevertheless, the story of 1990s managed care is a cautionary tale: Cost control by limiting choice can seemingly be achieved, only to slip away if consumers and providers reject the limitations it imposes. Only with great hubris can one say that low health care price and spending growth will be sustained long term and that narrow networks will play a role.

