• A study of health care technology in Ontario, Canada

    The study is by Mehrdad Roham and colleagues:

    We find that both the overall volume of services provided per capita and the average cost of these services decreased over our data period, once account is taken of changes in the age distribution of the population (the calculations relate to an age-standardized population) and in prices (all fees are expressed in constant dollar terms, using the consumer price index). However, these decreases are concentrated in services that have low HTI [Health Technology Intensity] and, to a lesser extent, medium HTI; over the same period, the average (age-standardized) number of services for high HTI increased by 55 percent and their share by 7.4 percentage points. We find also that whereas the decreases in the volume and cost of low and medium HTI services took place fairly uniformly across all age groups, the increases in high HTI were concentrated in the middle age groups and, more especially, in the old age groups.

    The results suggest two main policy implications. First, technological change and its diffusion within the population are too important to ignore: decision makers (and the policy discussion) should focus on how the delivery of care is changing while, at the same time, accounting for the effects of external changes (such as population aging). Second, health technology assessment should be based on real-life ex-post studies of how health technologies are used by doctors and patients rather than on ex-ante studies of how they should be used. That would help health policy analysts and researchers to gain a better understanding of the relationships between aging populations and the relative distribution of spending on health care for different levels of health technological intensity. Taking into account the observed changes in the use of technology in relation to patient age will also help to produce better predictions of future health care expenditures. However, the important questions of whether the observed changes are warranted, in the sense of leading to better patient outcomes and being cost effective, are ones that we are not able to address. It would be of great analytical and policy interest to have records that include information about patient outcomes following procedures, and not just the procedures themselves.

    The bit in bold (added) is a key point that many overlook. Many look to new technologies to cut costs and improve outcomes. That’s how they’re marketed. And, they very well may do so if their use is restricted to the subset of the population for which they’re ideally suited and designed. But what is typical is that technology diffuses more broadly than efficient use would warrant, in part because it’s good business. That ends up turning valuable technology into waste (or, more accurately, valuable for some, wasteful for others). And this is why I’m deeply skeptical of claims that any technology will actually cut costs and improve outcomes, on average, even if it does so for some.

    @afrakt

  • If at first you don’t succeed …

    Via tumblr.tastefullyoffensive.com:

    [Image: fail stat sig]

    @afrakt

  • Some good writing on health and health care by Lisa Rosenbaum

    Do you want chemo and three months of life, or six weeks of life without the nausea and vomiting that the chemo causes? Do you want high-risk open-heart surgery, with a fifteen-per-cent risk of dying during the operation, or would you rather continue as you are, with a fifty-per-cent chance you will be dead in two years? Do you want a prostatectomy, which has a five-per-cent chance of impotence and incontinence, or radiation, with a three-per-cent chance of leaving a hole in your rectum, or would you rather “watch and wait,” with the chance that your cancer will never grow at all?

    That’s from Lisa Rosenbaum’s July 2013 piece in The New Yorker on shared decision making. Her most recent piece, which I also enjoyed, is this one on the relationship between extreme exercise and heart damage. It hits close to home because my wife will run her second 50k next month. Training alone includes several marathons over a few-week span. This, to me, is unfathomable.

    Here’s another terrific piece by Lisa that taught me a great deal about stenting and helpful vs. unnecessary care. (This is saying a lot since I know quite a bit about this stuff already.)

    It was in these gaps between data and life where I lost Sun Kim. There is no guideline that says, “This is how you manage an elderly man who asks nothing of anyone, who may or may not be taking his medications, and who has difficulty coming to see you because he vomits every time he gets on the bus.” In a world with infinite resources, we could conduct clinical trials to address every permutation of coronary disease and every circumstance. But that’s not the world we live in. And in our world, I reached a point where I could not keep Sun Kim out of the hospital.

    The rest of Lisa’s pieces are here. I was not aware of her work until relatively recently, or I’d probably have referenced it many times by now.

    @afrakt

  • Methods: Good points from Cook, Shadish, and Wong

    The paper by Thomas Cook, William Shadish, and Vivian Wong, “Three Conditions under Which Experiments and Observational Studies Produce Comparable Causal Estimates: New Findings from Within-Study Comparisons,” makes some good points. Below I quote from their paper, referencing some of my prior posts that express similar sentiments.

    At least in some disciplines, randomized designs have a “privileged role,” supported by education and the research establishment.

    The randomized experiment reigns supreme, institutionally supported through its privileged role in graduate training, research funding, and academic publishing. However, the debate is not closed in all areas of economics, sociology, and political science or in interdisciplinary fields that look to them for methodological advice, such as public policy. [...] Alternatives to the experiment will always be needed, and a key issue is to identify which kinds of observational studies are most likely to generate unbiased results. We use the within-study comparison literature for that purpose.

    We should not expect results from observational studies with strong designs for causal inference to match those from experimental approaches in all cases.

    But the procedure used in these early studies contrasts the causal estimate from a locally conducted experiment with the causal estimate from an observational study whose comparison data come from national datasets. Thus, the two counterfactual groups differ in more than whether they were formed at random or not; they also differ in where respondents lived, when and how they were tested, and even in the actual outcome measures. [...] The aspiration is to create an experiment and an observational study that are identical in everything except for how the control and comparison groups were formed. [...] We should not confound how comparison groups are formed with differences in estimators.

    We can learn something useful from good observational designs.

    Past within-study comparisons from job training have been widely interpreted as indicating that observational studies fail to reproduce the results of experiments. Of the 12 recent within-study comparisons reviewed here from 10 different research projects, only two dealt with job training. Yet eight of the comparisons produced observational study results that are reasonably close to those of their yoked experiment, and two obtained a close correspondence in some analyses but not others. Only two studies claimed different findings in the experiment and observational study, each involving a particularly weak observational study. Taken as a whole, then, the strong but still imperfect correspondence in causal findings reported here contradicts the monolithic pessimism emerging from past reviews of the within-study comparison literature.

    RCTs are simple to explain, but that’s just one criterion and not the most important one.

    [Observational methods] do not undermine the superiority of random assignment studies where they are feasible. Th[ose] are better than any alternative considered here if the only criterion for judging studies is the clarity of causal inference. But if other criteria are invoked, the situation becomes murkier. The current paper reduces the extent to which random assignment experiments are superior to certain classes of quasi-experiments, though not necessarily to all types of quasi-experiments or nonexperiments. Thus, if a feasible quasi-experiment were superior in, say, the persons, settings, or times targeted, then this might argue for conducting a quasi-experiment over an experiment, deliberately trading off a small degree of freedom from bias against some estimated improvement in generalization.

    But we should be concerned about accepting bad designs because they either (1) are simple or (2) have shown themselves to match RCTs in a different setting. We need to evaluate each design in the context of the particular questions being asked in each study.

    For policymakers in research-sponsoring institutions that currently prefer random assignment, this is a concession that might open up the floodgates to low-quality causal research if the carefully circumscribed types of quasi-experiments investigated here were overgeneralized to include all quasi-experiments or nonexperiments. Researchers might then believe that “quasi-experiments are as good as experiments” and propose causal studies that are unnecessarily weak. But that is not what the current paper has demonstrated. Such a consequence is neither theoretically nor empirically true but could be a consequence of overgeneralizing this paper.

    Even those of us who argue these points probably agree on this:

    We suspect that few methodologically sophisticated scholars will quibble with the claim that [...] the notion that understanding, validating, and measuring the selection process will substantially reduce the bias associated with populations that are demonstrably nonequivalent at pretest.
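
    To make that last idea concrete, here is a toy simulation of my own in Python. It is not from the paper, and every name and number in it (the "motivation" covariate, the effect size, the sample size) is purely illustrative. It mimics a within-study comparison: one population and one outcome, with groups formed two ways, by coin flip and by self-selection on an observed covariate. The naive observational contrast is badly biased, but conditioning on the variable that drives selection recovers an estimate close to the experimental benchmark, which is the sense in which measuring the selection process reduces bias.

        # Toy within-study comparison; illustrative only, not from Cook, Shadish, and Wong.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 20_000
        true_effect = 2.0

        # One covariate drives both selection into treatment and the outcome.
        motivation = rng.normal(size=n)

        # "Experiment": treatment assigned by coin flip.
        t_rct = rng.integers(0, 2, size=n)
        y_rct = true_effect * t_rct + 3.0 * motivation + rng.normal(size=n)
        est_rct = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

        # "Observational study": more motivated people opt into treatment.
        p_treat = 1.0 / (1.0 + np.exp(-2.0 * motivation))
        t_obs = rng.binomial(1, p_treat)
        y_obs = true_effect * t_obs + 3.0 * motivation + rng.normal(size=n)

        # Naive contrast ignores selection; the adjusted estimate conditions on it (OLS).
        est_naive = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()
        X = np.column_stack([np.ones(n), t_obs, motivation])
        est_adj = np.linalg.lstsq(X, y_obs, rcond=None)[0][1]

        print(f"true effect:           {true_effect:.2f}")
        print(f"experimental estimate: {est_rct:.2f}")
        print(f"naive observational:   {est_naive:.2f}")  # inflated by selection
        print(f"selection-adjusted:    {est_adj:.2f}")    # close to the RCT benchmark

    The sketch stacks the deck: selection runs through a single, perfectly measured covariate, so adjustment works almost exactly. The paper asks, in effect, how close real observational designs come when that assumption holds only approximately.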

    Clearly I have not told you much about their study or findings. You’ll have to read the paper for that.

    @afrakt

  • JAMA Forum: We don’t save every baby we could. Maybe that’s OK.

    Not all premature infants are saved and not all of those treated in [neonatal intensive care units] who survive receive the subsequent care we might wish for them. This reality of the implicit trade-offs is hard to confront, but that doesn’t make it any less real.

    Read the rest at the JAMA Forum.

    @afrakt

  • Limiting choice to control health spending: A caution

    The following originally appeared on The Upshot (copyright 2014, The New York Times Company).

    To what extent will the recent moderation in the growth of health care prices and spending continue? This is a big question, and the answer relies on many factors. But for plans offered in the new health insurance exchanges as well as a substantial minority of employer-sponsored plans, it may depend, in part, on how long consumers are willing to trade lower premiums for less choice. History offers a cautionary tale.

    Insurers selling plans in the exchanges are offering fewer choices of doctors and hospitals. According to a 2013 survey by Mercer of employers who sponsor work-based health plans, over one-quarter of employers with more than 20,000 employees and 15 percent of those with over 500 employees offer plans with limited networks of providers selected for quality, as well as cost, considerations.

    Narrow networks, as they are known, save plans and employers money because they tend to exclude doctors and hospitals that demand higher prices. Some of the savings is passed on to consumers through lower premiums.

    A recent study by McKinsey & Company found that plans that covered care at more than 70 percent of hospitals in their area charged 13 to 17 percent higher premiums than plans with narrower networks. It’s a trade-off: lower premiums for less choice. However, the restrictions in choice may not be detrimental to patients, as suggested by a recent study of narrow network plans in Massachusetts, which found that such plans were associated with a 36 percent reduction in health care spending for consumers who joined them and for their employers.

    We’ve seen this before. Seeking to end the rapid rise in health care costs, in the 1990s employers embraced managed care plans — plans, like health maintenance organizations, that restricted consumers’ choices with narrow networks, as well as requirements for preapproval for some forms of treatment. Though such plans were promoted nationally by the Health Maintenance Organization Act, signed by President Nixon in 1973, they did not achieve prominence until the 1990s. By 1993, 51 percent of private plan enrollees were covered by managed care; a mere two years later, that figure rose to 70 percent.

    About this rush toward managed care, Robert Winters, head of the Business Roundtable’s Health Care Task Force from 1988 to 1994, explained: “What happened in the late 1980s and in the early 1990s was that health care costs became such a significant part of corporate budgets that they attracted the very significant scrutiny of C.E.O.’s,” and more and more C.E.O.’s were “saying, ‘Goddammit, this has to stop!’”

    What stopped it, at least temporarily, was greater restrictions on choice of doctors, hospitals and treatments and a greater willingness of employers and consumers to accept them. Health care spending growth moderated. After many years of rapid growth, premiums held steady in the mid-1990s. The success didn’t last.

    To keep the lid on premium growth, and in an attempt to maintain profitability, over the years plans further tightened networks, imposed more frequent and stringent preapproval rules, and offered less coverage for more cost sharing.

    These cost-saving measures became increasingly unpopular. The backlash was swift and severe. Consumers filed class-action lawsuits against insurers, alleging that H.M.O.’s misrepresented the level of coverage and service they delivered. Stories of patients denied coverage for specific treatments circulated, whether factual — a denial of a wheelchair to a paraplegic patient — or fictional — Helen Hunt’s famous dissatisfaction with her H.M.O. in the 1997 movie “As Good As It Gets.”

    Physicians bristled at plans’ attempts to circumscribe doctors’ autonomy in medical decision making, contributing to the negative reputation of H.M.O.’s. States enacted consumer protection laws, and Congress passed patients’ bill of rights legislation.

    In one sense, the backlash worked. Plans backed away from the practices most distasteful to consumers. America entered a new age of health care plans, with less restrictive networks and less onerous preapproval rules.

    In another sense, the backlash is a story of failure. The cost control that managed care brought was reversed. By the turn of the millennium, health care spending and premium growth had returned to their historical highs. Americans had rejected the trade of lower premium growth for less patient and doctor autonomy and choice.

    In an insightful analysis of the rise and fall of managed care, David Mechanic of Rutgers University wrote that the episode reflected fundamental American values: “Basic to the backlash against managed care is the underlying American cultural preference for independence, autonomy, choice, and activism, and the view shared by many Americans that there should be no barriers to their access and choices in seeking and receiving medical care.”

    Today’s new narrow network plans also restrict choices, so will they suffer the same fate as 1990s managed care?

    Already there are signs of disgruntlement and increased scrutiny of narrow networks. Experts have questioned the ability of consumers to understand the extent of plans’ networks at time of enrollment, and consumer advocates have called for greater transparency.

    Consumers complained when the high-priced Cedars-Sinai Medical Center in Los Angeles was excluded from the networks of all but one exchange plan. Narrow networks were an issue in a campaign for a vacant House seat in Florida. Regulators in some states are restricting insurers’ ability to exclude some hospitals from their networks or considering banning narrow networks altogether. A new regulation in Washington State requires that plans cover enough doctors so that any enrollee can find a primary care appointment within 10 days and 30 miles. A national organization that rates the quality of health plans is considering adding a measure of network adequacy. Medical associations and consumers have filed lawsuits against insurers, claiming harm from narrow networks. The Obama administration has issued regulations to increase the choices of providers plans must offer, including more that serve low-income patients, as policy experts have called for minimum standards and consumer safeguards.

    Despite these early warning signs, it’s too soon to tell if narrow networks are doomed, along with the cost control they offer. There are some reasons consumers may be more tolerant of them than 1990s managed care plans. Today’s narrow network plans are less restrictive in some ways; for example, they don’t require preapprovals as often. At least in the exchanges, consumers have a choice of network size; in the 1990s many were forced into H.M.O.’s by their employers.

    Also, today’s plans and health care organizations may be more focused on quality than their predecessors. Limits on choice don’t force patients to go to poorer-quality doctors and hospitals, nor do they restrict access to the types of doctors consumers need most. Plans might be designed to provide adequate access to primary care doctors, for instance, as suggested by a recent study of narrow network plans in Massachusetts.

    Nevertheless, the story of 1990s managed care is a cautionary tale: Cost control by limiting choice can seemingly be achieved, only to slip away if consumers and providers reject the limitations it imposes. Only with great hubris can one say that low health care price and spending growth will be sustained long term and that narrow networks will play a role.

    @afrakt

  • The Health Policy Salon: Every third Friday of the month

    The Health Policy Salon is starting. If you’re in Boston, you might want to attend. Details follow.

    Who? Any health policy wonk, but the incomparable Emma Sandoe and I will attend for sure. (This is our idea.) We’ve heard from several others that they’ll be there, but I’m not putting them on the spot by naming names.

    What? A gathering to share ideas, coffee, breakfast (as desired)

    Where? A coffee shop in downtown Boston. Email for exact location. It may not always be in the same place.

    When? Every third Friday of the month, typically. The first one is this Friday, September 19. Emma and I will attend from 7:15AM until at least 8:15AM, though anyone can come and go as they please. Future gatherings will be announced on TIE and Twitter with location details by email. So get on the email list.

    Why? To stimulate our thinking about issues pertaining to health policy and related research. Also to have fun.

    How? We will communicate with each other using our mouths, hands, facial expressions, and body language, and any other device, as needed, as one does in real life.

    Srsly? Yes.

    @afrakt

  • *Five Days at Memorial*

    Hurricane Katrina hit New Orleans. Floodwaters rose in the Uptown streets surrounding Memorial Medical Center, where hundreds of people slowly realized that they were stranded. The power grid failed, toilets overflowed, stench-filled corridors went dark. Diesel generators gave partial electricity. Hospital staff members smashed windows to circulate air. Gunshots could be heard, echoing in the city. Two stabbing victims turned up at this hospital, which was on life support itself, and were treated.

    By Day 4 of the hurricane, the generators had conked out. Fifty-two patients in an intensive care wing lay in sweltering darkness; only a few were able to walk. The doctors and nurses, beyond exhaustion, wondered how many could survive.

    When evacuations were done, 45 patients had not made it out alive. The State of Louisiana began an investigation; forensic consultants determined that 23 corpses had elevated levels of morphine and other drugs, and decided that 20 were victims of homicide.

    That’s from Jason Berry’s review of *Five Days at Memorial*, by Sheri Fink. It sounds riveting, from the review. And it has its moments, to be sure. But, to me, the book is too long and confusing as, no doubt, were the events themselves.

    Later in the review Berry explains that the book is an extension of Fink’s Pulitzer Prize-winning investigation. He called this a “literary gamble.” It’s great material for a gripping tale of ethically questionable decisions under challenging circumstances few could imagine in advance. It’s worth knowing and contemplating. But the gamble on this style, as a book, didn’t pay off. Some skimming and skipping may be warranted. Your mileage may vary.

    UPDATE: Bill Gardner’s take on the book is here.

    @afrakt

  • The flow of pi

    Nice work by Cristian Ilies Vasile. Details of what this is at the link.

    [Image: flow-of-pi-cristian]

    @afrakt

  • AcademyHealth: Provider factors and regional variation

    Yesterday, Louise Sheiner presented a paper at Brookings that challenges some of the interpretations of Dartmouth research on geographic variation in health care. Her work suggests that patient, not provider, factors explain most of the geographic (in her case, state) variation in spending. Coincidentally, I had already prepared a post reviewing work that comes to the opposite conclusion. It is not intended as a rebuttal to Sheiner’s work. You’ll find the post on the AcademyHealth blog.

    @afrakt
