The following is co-authored by Austin Frakt and Garret Johnson. Garret is a research assistant for Dr. Ashish Jha at the Harvard T.H. Chan School of Public Health, where he also works with Austin on Medicare Advantage studies.
The recent paper by Kate Baicker and Jacob Robbins estimates a spillover effect from Medicare Advantage (MA) to traditional Medicare (TM). The idea is that MA, through care management, influences patterns of care in such a way as to reduce costs, which then affects the quality and costs of care for TM beneficiaries as well. (Prior posts on this kind of spillover here and here.)
We wanted to convey the estimates in the paper in a different way, one we find more helpful. Here, using their results, is the answer to the question, for the marginal dollar spent on MA, how much is saved in TM?
1. They find that $1200 in additional payment per MA enrollee per year yields 2.2 percentage points higher MA market penetration.

2. They also calculate $252 per TM beneficiary per year in spillover “savings” for every 10 percentage points of higher MA market penetration. (The savings come through more efficient use of hospital services, in particular shorter lengths of stay, mediated by a concurrent increase in less costly outpatient service use. Therefore, it’s not really immediate savings to Medicare, since diagnosis-based Medicare hospital payments aren’t sensitive to shortening lengths of stay. But, longer term, perhaps these savings could be captured through changes in rate increases.)

3. Therefore, combining (1) and (2), for each $1200 per MA enrollee per year in additional payment there’s $252 × (2.2/10) = $55.44 of spillover savings per TM beneficiary per year. Dividing, that’s $0.0462 of spillover per TM beneficiary for every $1 of payment for an MA enrollee.

4. There are a lot more TM beneficiaries than MA enrollees, so we need to scale these figures. At the midpoint of the authors’ study window (2005), MA had a market penetration of 13%, leaving 87% in TM. Multiplying these by the figures from (3), we get $0.040 of spillover per $0.13 in MA payment. That’s $0.31 of savings per $1 of payment.

5. The result in (4) holds only at the particular margin examined. Not every $1 of MA payment is associated with 31 cents of savings. Extrapolating out of sample, today, when the MA/TM enrollment split is 31%/69%, the savings is closer to 10 cents on the dollar. (This is just repeating the calculation in step (4), but with 31%/69% instead of 13%/87%.)
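The arithmetic above is easy to check mechanically. Here is a minimal sketch in Python (the language and variable names are our choices for illustration; all inputs are the figures quoted above):

```python
# Back-of-the-envelope replication of the spillover arithmetic above.
# Inputs are the figures quoted from Baicker and Robbins.
extra_payment_per_ma = 1200.0  # $ of additional payment per MA enrollee per year
penetration_gain_pp = 2.2      # resulting gain in MA penetration, percentage points
savings_per_10pp = 252.0       # $ of TM spillover savings per beneficiary per 10 pp

# Spillover savings per TM beneficiary per year, then per $1 of MA payment.
savings_per_tm = savings_per_10pp * (penetration_gain_pp / 10)  # $55.44
spillover_per_dollar = savings_per_tm / extra_payment_per_ma    # ~$0.0462

def savings_per_ma_dollar(ma_share):
    """Scale by enrollment shares; ma_share is MA's share of the market."""
    tm_share = 1.0 - ma_share
    return spillover_per_dollar * tm_share / ma_share

print(savings_per_ma_dollar(0.13))  # 2005 split (13% MA): ~$0.31 per $1
print(savings_per_ma_dollar(0.31))  # today's split (31% MA): ~$0.10 per $1
```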
There are lots of caveats:
We mentioned it above, but it should be emphasized that these figures pertain only to the time period (1999-2011), range of MA payment (from about 5% to 15% above TM costs), and market penetration (9%-11.9%) examined. In particular, our out-of-sample extrapolation is not likely to be correct. We did it only to illustrate how much the results change as market penetration changes.
Relatedly, the results above are for the marginal dollar spent on MA. What about all the other dollars? No doubt, some of them contribute toward TM savings, but some do not. In the paper, the authors estimate TM utilization as a quadratic function of MA penetration. For some lower range of penetration, MA is associated with higher per person TM costs (perhaps due to favorable selection into MA). Spillover effects probably taper off at a higher range of penetration, too.
What the total cost or savings of MA to TM is today, we cannot tell. We don’t even know if it’s net positive or negative. That would require more estimation work and a bunch of assumptions.
This is, on the one hand, interesting. Some MA spending is associated with a high degree of TM savings (31 cents on the dollar circa 1999-2011 is HUGE). To the extent that MA growth makes TM “cheaper” for taxpayers, then some amount of payment above TM rates to MA plans may be efficient. On the other hand, this is unsatisfying. We don’t really want to know if the marginal dollar spent on MA in, say, 2005 saved money. We want to know if, in total, MA is saving or costing money today. We just don’t know.
This post is co-authored by Bill Gardner and Austin Frakt.
The recent controversy about disclosure of conflicts of interest (see Bill here and here; Austin here and here) has called renewed attention to pervasive quality control problems in the scientific literature. We agree with Ian Roberts that
the challenge is not to describe the flaws in the current system but to create a better one, where decisions about healthcare are informed by valid and reliable evidence.
Open science describes the practice of carrying out scientific research in a completely transparent manner, and making the results of that research available to everyone. Isn’t that just ‘science’?
How can we get serious about creating an open, valid, and reliable scientific literature?
We recommend starting by acknowledging our moral response to the problem, and then putting it aside. It’s impeding our thinking. We’re struck by how often we hear that the problem in bias is the “corruption” of some researchers or the “perversion” of the research process. There are many contexts in which it’s important to view science in moral terms. But we doubt that focusing on the virtues or vices of researchers will get us much closer to a solution. Instead, we should think about what institutions and policies will advance scientific learning.
In an ideal world, peer review of science should concern the evidence—data and methods—and the interpretation of findings in the light of existing knowledge. Facts about the authors ought to be extraneous. Aaron Kesselheim found that reviewers downgraded their ratings of the methodological rigor of clinical trials when they believed that the trials were funded by industry. That seems wrong: Consider how you would react if a study showed that reviewers downgraded their ratings of articles written by women, for example.
But this is the real world, and you can also make a case that the reviewers in Kesselheim’s study were behaving rationally. We may want reviewers to evaluate a research report based on the data and methods, but authors can only document so much in a paper. Given the limits on what authors can document, there’s reviewer uncertainty about the quality of the evidence. Bayesian inference suggests that the more uncertain you are about the evidence, the more weight you should give to your prior probabilities concerning the credibility of a report’s authors. Therefore, the evidence that studies funded by pharmaceutical companies are biased toward the companies’ products would seem to justify placing some weight on a prior that distrusts their research.
In practice, though, humans may not compute like perfect Bayesians. We may use real or perceived COIs to over- or under-correct. So the better response, in the long term, is to reduce our uncertainty about the data and methods. With less uncertainty about the evidence, priors about the authors would matter less and be applied (or misapplied) less.
There are several strategies for reducing this uncertainty that the scientific community has applied (though not uniformly) or could apply going forward (perhaps with some infrastructure development). These strategies include:
Registration of trials and reporting of all registered analyses (or clear metrics of the extent to which they are not reported);
Archiving of trials’ analytical data files (see BMJ’s Open Data campaign and GlaxoSmithKline’s commitment to provide access to anonymized patient-level data);
Expert evaluation of study methods by an individual or individuals without conflicts of interest;
A possible future extension of these strategies: archiving of the transformations required to generate the analytic data files.
Pursuing these strategies would likely increase the transparency and reproducibility of research and the quality of scientific practice, and reduce uncertainty about its credibility and validity. To our knowledge, there are no scientific reasons not to pursue these strategies.
But there are economic, psychological, and ethical reasons. For example, we can’t make data sets public unless we can make sure that research participants can’t be identified from them. We should also consider the costs in researchers’ time, attention, and resources in complying with more rigorous standards of documentation, with parallel costs to society in possible delay of projects. It is true that science requires meticulous attention to detail. Nevertheless, humans have finite attention and limited capacities for decision making. More time and attention spent on documentation might mean less time spent thinking and reading.
We should not take the existence of these potential costs as an excuse to do nothing about improving science and its credibility. We should do something while, reasonably, taking them into consideration.
There are reasons to believe that improvements in technology and the self-regulation* of research will facilitate our ability to do better science without unduly burdening researchers or endangering research participants. We are more likely to develop those technologies and self-regulations if we frame our considerations more in terms of questions about how to improve the validity, reliability, and transparency of science, as well as the rate of scientific progress, rather than questions about the moral virtues of researchers.
Disclosure of financial conflicts of interest should be retained as a necessary, though insufficient, tool of scientific integrity. But we must get beyond disclosure, and beyond our outrage over what we think it signals, to tighten up the process of science directly. In a world of competing interests, humans, unfortunately, do not always do good science by accident or because it’s the “right thing to do.” Science is important. We need to treat it as such, and tighten up our regulation of it.
* “Self-regulation” means regulation by scientists, not the government. The scholarly community must find ways to adequately regulate itself, e.g., through a consensus about the requirements of publication in top (or all) medical journals. Having said this, we acknowledge that NIH requirements on grantees—which we support—are an interesting and important case in which a governmental body can advance open science.
This post is jointly authored by Nicholas Bagley and Austin Frakt.
Yesterday evening, the New England Journal of Medicine released a Perspective piece that we co-authored on the recent suppression of Medicare and Medicaid data to researchers. (For our earlier coverage, see the posts collected here.) As we explain, the data suppression is both unnecessary and harmful:
What if it were impossible to closely study a disease affecting 1 in 11 Americans over 11 years of age—a disease that’s associated with more than 60,000 deaths in the United States each year, that tears families apart, and that costs society hundreds of billions of dollars? What if the affected population included vulnerable and underserved patients and those more likely than most Americans to have costly and deadly communicable diseases, including HIV–AIDS? What if we could not thoroughly evaluate policies designed to reduce costs or improve care for such patients?
These questions are not rhetorical. In an unannounced break with long-standing practice, the Centers for Medicare and Medicaid Services (CMS) began in late 2013 to withhold from research data sets any Medicare or Medicaid claim with a substance-use–disorder diagnosis or related procedure code. This move—the result of privacy-protection concerns—affects about 4.5% of inpatient Medicare claims and about 8% of inpatient Medicaid claims from key research files (see table), impeding a wide range of research evaluating policies and practices intended to improve care for patients with substance-use disorders.
The timing could not be worse. Just as states and federal agencies are implementing policies to address epidemic opioid abuse and coincident with the arrival of new and costly drugs for hepatitis C—a disease that disproportionately affects drug users—we are flying blind.
While NEJM was preparing the piece for publication, ResDAC released new Medicare data indicating that the suppression is even more extensive than we wrote. For 2013, Medicare suppressed 6.43% of all Medicare inpatient claims; for 2014, that figure rose to 6.8%. (The figures for Medicaid in our piece remain the same.)
Eric Goplerud, speaking to Alcohol and Drug Addiction Weekly in January, suggested that SAMHSA is planning on proposing a rule change this year that would allow CMS to restore access to the affected data. We hope so. The issue is much too urgent to ignore.
This post is jointly authored by Bill Gardner and Austin Frakt.
Why do we write for the public about science and research? It’s a lot of work and our day jobs pay better. Last week, Austin and Bill conversed about this on Twitter. We had help from Kristen Rosengren (@RosenKris) and others, including Janet Weiner (@weinerja). We’ve edited the conversation for clarity and expanded some of our answers.
Austin: I typically do not write more than once about the same topic. A mistake?
Kristen: Depends on your primary goal. To express yourself, once is enough; to convince or change minds, repetition is helpful.
Austin: I don’t write to convince, actually. I think that’s a dangerous objective.
Bill: “I don’t write to convince.” I can see that it might be dangerous to need to change others’ views. But if you don’t want to change others’ views, why offer an argument? Moreover, doesn’t TIE have a goal of science translation?
Kristen: To educate & inform so others can make decisions? Perhaps biased by our mission, I think that has real value.
Austin: I write first to convince myself I have a reasonable understanding of the world. And writing for the public changes the quality of my thinking. I find that publishing motivates deeper engagement and care than I would otherwise apply.
Bill: Excellent motives. One of the things you learn in good science lab meetings or good philosophy seminars is how much deeper you can get when you are pushed by the best critics.
Nevertheless I feel an obligation to persuade. I had the privilege of a great education leading to a PhD. That incurs a debt because not everyone had those chances. I see some ways in which we could act together to make the world better. This obligates me to take part in public debates about health policy. If my writing gets at the truth, and I write well enough that others can see that truth, maybe we’ll make better choices.
Austin: I’m delighted to serve a translation role, and even change minds. It’s not my primary motivation, though. Were it so, I worry I’d not be as faithful to the evidence, wherever it may lead. An “obligation to persuade” could (though need not) become a conflict of interest.
Bill: True. There is a tension between writing to persuade and writing to discover the truth. The danger I fear more is partisanship: that is, writing that serves the needs of your political identity rather than your commitment to the truth. This leads to a risk of motivated reasoning, as Dan Kahan describes so well. One of the great things about social media is that it is easy to find smart, well-informed people who disagree with you.
Austin: I worry about more than partisanship. I’m not sure why that’s the only bias of relevance. Fealty to any set of values would shape how one sees and conveys facts and ideas. Though I think it’s possible for someone to be faithful to evidence and still be principally motivated by an ambition to persuade, I think many minds can’t handle that. It’s very easy to become invested in a position or to take the view that if you’re seen as having been fallible, that weakens your strength of persuasion.
Whenever I hear or perceive that someone is in it to persuade me I lose a bit of trust. I am much more comfortable if I feel they are just in it to convey truth, as best they can and whatever that means. The way in which one acknowledges limitations and counterpoints offers a clue. It can be done dismissively, or it can be done in a way that shows the writer is really wringing his hands over it. I like to see sweat on the brow. Synthesizing evidence for a firm conclusion is hard and fraught, or should be. That challenge should come through. If it doesn’t, I feel I’m being spun. I try to avoid doing that as a writer.
Bill: Yes. Cognition is social: we depend on others for access to information. I only know anything about recent physics because I trust Brian Greene, Sean Carroll, and other great science writers. Yet we know this social dependence makes us vulnerable to manipulators. So we are wired to worry about the motivations of our sources. We look for evidence about whether they care about the truth. Of course, caring about the truth can be faked. But I’m not that subtle. As I learned the hard way playing poker, I’m a terrible liar. The only way I can communicate that I care for the truth is to actually care for the truth. So my best shot at persuasion is to be as faithful as I can to the science.
This is, by the way, the reason why I think TIE has gained a loyal and discerning readership. We are clear about our values but I think we are all first committed to the norms of our disciplines. My sense is that lots of people who disagree with us about both policy and the facts nevertheless trust us to give our best effort at the truth.
Austin: Trust is a huge topic. I had a high school social studies teacher who convinced me that it’s all we have. He’s right, but to go into that would take another post. (Well, I see I wrote that other post on trust in 2010.)
Anyway, I’m delighted if we are perceived as trustworthy. I wonder if we’re clear about our values, though. There are degrees of clarity. There are absolutely some elements of my life and upbringing, even professional circumstances, that I have not and likely will not disclose. That’s true of everyone, perhaps to different degrees. Can we ever know why a given person is communicating in a given topic in a given way? What is speaking, the evidence or some tribal value? Or, to what extent does the latter shape presentation of the former? Or, what about subjects that are never raised?
Bill: Great point and I spoke too quickly in claiming that we are clear about our values. Actually, I find that most of us don’t even fully know our own values. This is part of the value of moral philosophy: it’s a practice of confronting your theories about what is right or good with your judgements about actual cases, with the goal of making them cohere. You get clearer about what you really believe and perhaps you can revise your views for the better. And when you work on the edge between research and policy, getting clear about your values is essential, because policy choices are based on both scientific evidence and goals informed by values.
To get back to where we started: this is another reason to write for the public. The net is full of smart people with diverse values. They can, perhaps, see something that you are blind to. That valuable exchange can happen whether you convince one another of something or not.
The following is co-authored by Austin Frakt and Aaron Carroll. It originally appeared on The Upshot (copyright 2015, The New York Times Company). Click over to that version of the post to see the accompanying chart.
If we knew more, would we opt for different kinds and amounts of health care? Despite the existence of metrics to help patients appreciate benefits and harms, a new systematic review suggests that our expectations are not consistent with the facts. Most patients overestimate the benefits of medical treatments, and underestimate the harms; because of that, they use more care.
The study, published in JAMA Internal Medicine and written by Tammy Hoffmann and Chris Del Mar, is the first to systematically review the literature on the accuracy of patients’ expectations of benefits and harms of treatment. They examined over 30 studies that assessed whether patients understood the upsides or downsides of certain treatments. To a great extent, patients didn’t.
In the 34 studies that assessed understanding of benefits, patients overestimated their potential gain in 22 of them, or 65 percent. For instance, a 2002 study published in the Journal of the National Cancer Institute asked women who had undergone prophylactic bilateral (double) mastectomy to estimate how much the procedure reduced their risk of breast cancer. On average, the women thought they had reduced that risk from 76 percent to 11 percent, an absolute risk reduction of 65 percentage points.
For the more than 80 percent of the women in the study who did not have a BRCA genetic mutation — which drastically increases the risk of breast cancer — the real risk before surgery of developing breast cancer was 17 percent, meaning they greatly overestimated their risk reduction. Even the women with a BRCA mutation overestimated their risk reduction, but to a lesser extent.
Another 2012 study published in the Annals of Family Medicine asked patients to estimate the benefits of screening for bowel and breast cancer, and the use of medications to prevent hip fracture and cardiovascular disease. More than two-thirds of patients overestimated the benefits of medications to prevent cardiovascular disease, and more than 80 percent overestimated the benefits of medications to prevent hip fractures.
Further, 90 percent of patients overestimated the benefits of breast cancer screening, and 94 percent overestimated the benefits of bowel cancer screening. The researchers also asked the patients to estimate the minimum reduction in bad outcomes (like fractures or deaths) they would need to achieve to find the treatment worthwhile. For three of the four studied interventions, the minimum benefit patients would accept was higher than the actual benefit.
In the 15 studies examined in the systematic review for which harms were a focus, patients underestimated potential downsides in 10 of them (67 percent). For example, a study published in 2012 in the Journal of Medical Imaging and Radiation Oncology asked patients to estimate the risks associated with a CT scan. A single CT scan exposes a patient to the same amount of radiation as 300 chest X-rays, and carries with it a 1-in-2,000 chance of inducing a fatal cancer. More than 40 percent of patients underestimated a CT’s radiation dose, and more than 60 percent of patients underestimated the risk of cancer from a CT scan.
Why do patients err in assessments of risks and benefits? One reason could be that what they know is driven by the messages they hear. Doctors, direct-to-consumer ads and the media can skew our perceptions. They tend to focus on the benefits, but rarely quantify them. Health care centers, screening advocacy programs and pharmaceutical ads all push us to talk to our doctors about getting treatment without talking about actual gains.
Doctors also aren’t always good at communicating risks. A 2013 study published in JAMA Internal Medicine found that fewer than 10 percent of patients were told about overdiagnosis and overtreatment associated with cancer screening, even though 80 percent of patients wanted to know about harms.
This study, and others, indicate that patients would opt for less care if they had more information about what they may gain or risk with treatment. Shared decision-making in which there is an open patient-physician dialogue about benefits and harms, often augmented with use of treatment decision aids, like videos, would help patients get that information. However, a majority of patients still report that they prefer to leave medical decision-making to their doctors.
It might also be the case that some patients would use more of certain types of care if they had more information. Many chronic conditions remain undermanaged and undertreated in the United States. It’s possible that people with these conditions who had more information would use more care, which could raise spending for these patients but make them better off.
There’s also an argument to be made that people who overestimate the benefits of medicine to treat some conditions are more likely to take it regularly, which might lead to better outcomes, in some cases, than would occur if these patients were better informed.
Regardless, even though some patients may benefit somewhat from being ill informed, it seems wrong to argue that we should keep them in the dark. Many of the studies in the systematic review show that people report that they would opt for less care if they better understood benefits and harms. Improved communication could better serve patients and might improve the efficiency of our health system if patients focus on getting the types of care for which the benefit outweighs risk of harm.
It’s also possible that unrealistic expectations of care help patients cope with disease or provide them with some sense of control. Feeling hopeful about one’s future is not to be dismissed. But those unrealistic expectations don’t come cheap. We should at least consider the price that we pay for being uninformed.
The following is co-authored by Aaron Carroll and Austin Frakt. It originally appeared on The Upshot (copyright 2015, The New York Times Company). Click over to that version of the post to see the accompanying charts.
As we wrote last week, many fewer people benefit from medical therapies than we tend to think. This fact is quantified in a therapy’s Number Needed to Treat, or N.N.T., which tells you the number of people who would need to receive a medical therapy in order for one person to benefit. N.N.T.s well above 10 or even 100 are common. But knowing the potential for benefit is not enough. We must also consider potential harms.
Not every person who takes a medication will suffer a side effect, just as not every person will see a benefit. This fact can be expressed by Number Needed to Harm (N.N.H.), which is the flip side of N.N.T.
For instance, the N.N.T. for aspirin to prevent one additional heart attack over two years is 2,000. Even though this means that you have less than a 0.1 percent chance of seeing a benefit, you might think it’s worth it. After all, it’s just an aspirin. What harm could it do?
But aspirin can cause a number of problems, including increasing the chance of bleeding in the head or gastrointestinal tract. Not everyone who takes aspirin will bleed. Moreover, some people will bleed whether or not they take aspirin.
Aspirin’s N.N.H. for such major bleeding events is 3,333. For every 3,333 people, just over two on average will have a major bleeding event, whether they take aspirin or not. About 3,330 will have no bleed regardless of what they do. But for every 3,333 people who take aspirin for two years, one additional person will have a major bleeding event. That’s an expression of the risk of aspirin, complementing the fact that one out of 2,000 will avoid a heart attack.
Granted, one out of 3,333 is a pretty tiny risk. But remember that the chance of benefit is pretty small, too.
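To make the benefit and the harm directly comparable, it can help to convert both into per-person probabilities. A minimal sketch in Python (our illustration, using only the NNT and NNH quoted above):

```python
# Converting the aspirin NNT and NNH above into per-person probabilities.
nnt = 2000  # treat 2,000 people for two years to prevent one heart attack
nnh = 3333  # treat 3,333 people for two years to cause one major bleed

p_benefit = 1 / nnt  # chance aspirin prevents *your* heart attack: 0.05%
p_harm = 1 / nnh     # chance aspirin causes *your* major bleed: ~0.03%

# Expected extra events per 10,000 people taking aspirin for two years:
print(10_000 * p_benefit)  # 5.0 heart attacks prevented
print(10_000 * p_harm)     # ~3.0 major bleeds caused
```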
If you look at the data for all randomized controlled trials of breast cancer screening, the N.N.T. for recommending screening to prevent one death from breast cancer after 13 years of follow-up is 1,477. But further analyses show that the one woman would have probably died of other causes anyway. There may be no benefit at all with respect to preventing death from all causes.
Screening with mammograms can cause harm, though. They lead to overdiagnosis, encouraging the provision of therapies that provide no benefits — but do carry risks, and therefore are considered harms.
If we look at those same studies, for every 333 women who are assigned to have a screening mammogram, one extra will undergo a lumpectomy or mastectomy as a result. One in every 390 women assigned to have a screening mammogram will undergo an extra course of radiation therapy as a result. (In these randomized controlled trials, patients are either assigned to get screening mammograms or they are not. The study then usually looks at the outcome for all who were assigned to get the mammogram, whether they actually did or not.)
In other words, for about every 1,500 women assigned to get screening for 10 years, one might be spared a death from breast cancer (though she’d most likely die of some other cause). But about five more women would undergo surgery and about four more would undergo radiation, both of which can have dangerous, even life-threatening, side effects.
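The per-1,500-women tallies in that paragraph follow directly from the NNT and NNH figures; a quick check in Python (our illustration, with the figures quoted above):

```python
# Checking the per-1,500-women tallies against the screening NNT/NNHs above.
cohort = 1500
nnt_death = 1477     # one breast cancer death averted per 1,477 screened
nnh_surgery = 333    # one extra lumpectomy/mastectomy per 333 screened
nnh_radiation = 390  # one extra course of radiation per 390 screened

print(cohort / nnt_death)      # ~1.0 death averted
print(cohort / nnh_surgery)    # ~4.5 extra surgeries ("about five")
print(cohort / nnh_radiation)  # ~3.8 extra radiation courses ("about four")
```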
Thus, N.N.H., paired with N.N.T., can be very useful in discussing the relative potential benefits and harms of treatments. As another example, let’s consider antibiotics for ear infections in children. There are many reasons that parents and pediatricians might consider treatment. One commonly cited reason is that we want to prevent serious complications from untreated infections. Unfortunately, antibiotics don’t do that, and the N.N.T. is effectively infinite. Antibiotics also won’t reduce pain within 24 hours. Antibiotics have, however, been shown to reduce pain within two to seven days. Not all children will see that benefit, though. The N.N.T. is about 20 for that outcome.
This means that when a child is prescribed antibiotics for an ear infection, it’s more likely that he will develop vomiting, diarrhea or a rash than get a benefit. When patients are presented with treatment options in this manner, they are sometimes more likely to agree to watchful waiting to see if the ear infection resolves on its own. For most children with ear infections, observation with close follow-up is recommended by the American Academy of Pediatrics.
A wealth of N.N.T. and N.N.H. data based on clinical trials is available on a website developed by David Newman, a director of clinical research at Icahn School of Medicine at Mount Sinai hospital, and Graham Walker, an assistant clinical professor at the University of California, San Francisco. But it’s important to understand that results from clinical trials do not always reflect what happens in the real world. As criteria for treatment become more permissive beyond those applied in trials, the N.N.T.s can go up. But importantly, N.N.H.s often do not. Healthier people are less likely to see a benefit from antibiotics or an aspirin. But they are not less likely to have a side effect or complication.
This is because the harms associated with treatment usually have nothing to do with the underlying illness. They are caused by the therapy, regardless of the reason for use. Children will develop diarrhea, vomiting or rashes from antibiotics in the same relative amounts no matter why we are using them. Put another way, clinical trials are designed to target the class of patients that most likely benefits from treatment, but they are not targeted to those more or less likely to experience harm. When treatments are applied in real-world clinical settings, we generally don’t see changes in the proportion of patients harmed by them relative to trials.
When we stray from recommendations for therapies, and broaden the population given studied treatments, the N.N.T.s often go up, but the N.N.H.s stay the same. Things are often even worse than the data in studies make them look. Fewer people benefit, but just as many are harmed.
We hope that every therapy has a benefit. The N.N.T. shows us that benefits are often much less likely than many might think. The N.N.H. can show us how likely we are to have a harm compared with a benefit. Considering both, especially in light of how practice often differs from studies, can help us make better decisions about how to care for ourselves and those we love.
The following is co-authored by Austin Frakt and Aaron Carroll. It originally appeared on The Upshot (copyright 2015, The New York Times Company). Click over to that version of the post to see the accompanying charts.
In his State of the Union address last week, President Obama encouraged the development of “precision medicine,” which would tailor treatments based on individuals’ genetics or physiology. This is an effort to improve medical care’s effectiveness, which might cause some to wonder: Don’t we already have effective drugs and treatments? In truth, medical care is often far less effective than most believe. Just because you took some medicine for an illness and became well again, it doesn’t necessarily mean that the treatment provided the cure.
This fundamental lesson is conveyed by a metric known as the number needed to treat, or N.N.T. Developed in the 1980s, the N.N.T. tells us how many people must be treated for one person to derive benefit. An N.N.T. of one would mean every person treated improves and every person not treated fails to, which is how we tend to think most therapies work.
What may surprise you is that N.N.T.s are often much higher than one. Double- and even triple-digit N.N.T.s are common.
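In formula terms, the N.N.T. is the reciprocal of the absolute risk reduction: one divided by the difference between the event rate without treatment and the event rate with it. Here is a minimal sketch in Python; the rates are illustrative numbers of our own, not drawn from any particular trial.

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (control_event_rate - treated_event_rate)

# Illustrative: if 4% of untreated and 2% of treated patients have the bad
# outcome, about 50 people must be treated for one extra person to benefit.
print(round(nnt(0.04, 0.02)))  # 50
```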
Consider aspirin for heart attack prevention. Based upon both modifiable risk factors like cholesterol level and smoking, and factors that are beyond one’s control, like family history and age, it is possible to calculate the chance that a person will have a first heart attack in the next 10 years. The American Heart Association recommends that people who have more than a 10 percent chance take a daily aspirin to avoid that heart attack.
How effective is aspirin for that aim? According to clinical trials, if about 2,000 people follow these guidelines over a two-year period, one additional first heart attack will be prevented.
That doesn’t mean the 1,999 other people have heart attacks. The fact is, on average about 3.6 of them would have a first heart attack regardless of whether they took the aspirin. Even more important, 1,995.4 people would never have a heart attack whether or not they took aspirin. Only one person is actually affected by aspirin. If he takes it, the number of people who remain heart attack-free rises to 1,996.4. If he doesn’t, the number remains 1,995.4. But for 1,999 of the 2,000 people, aspirin doesn’t make any difference at all.
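A quick sketch of that arithmetic, in Python (the event counts are taken from the passage above, not computed directly from the underlying trials):

```python
n = 2000
events_without_aspirin = 4.6  # 3.6 who have a heart attack anyway, plus the 1 preventable
events_with_aspirin = 3.6     # the heart attacks aspirin does not prevent

arr = (events_without_aspirin - events_with_aspirin) / n  # absolute risk reduction
nnt_value = 1 / arr
unaffected = n - events_without_aspirin  # never have a heart attack either way

print(round(nnt_value))  # 2000
print(unaffected)        # 1995.4
```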
Of course, nobody knows if they’re the lucky one for whom aspirin is helpful. So, if aspirin is cheap and doesn’t cause much harm, it might be worth taking, even if the chances of benefit are small. But this already reflects a trade-off we rarely consider rationally. (And many treatments do cause harm. There is a complementary metric known as the number needed to harm, or N.N.H., which says that if that number of people are treated, one additional person will have a specific negative outcome. For some treatments, N.N.T. can be higher than the number needed to harm, indicating more people are harmed than successfully treated.)
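The N.N.H. works the same way, but on the absolute risk increase of a specific harm. A hedged sketch in Python, with made-up rates purely for illustration:

```python
def nnh(treated_harm_rate, control_harm_rate):
    """Number needed to harm = 1 / absolute risk increase."""
    return 1 / (treated_harm_rate - control_harm_rate)

# Hypothetical treatment: a side effect in 3% of treated vs. 1% of untreated
# patients gives an N.N.H. of about 50. If the same treatment's N.N.T. were
# 200, four people would be harmed for every one helped.
nnh_value = nnh(0.03, 0.01)
print(round(nnh_value))        # 50
print(round(nnh_value) < 200)  # True: N.N.H. below N.N.T. means more harmed than helped
```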
Not all N.N.T.s are as high as aspirin’s for heart attacks, but many are higher than you might think. A website developed by David Newman, a director of clinical research at Icahn School of Medicine at Mount Sinai hospital, and Dr. Graham Walker, an assistant clinical professor at the University of California, San Francisco, has become a clearinghouse of N.N.T. data, amassed from clinical trials. Among them, for example, are those for the effects of the Mediterranean diet.
The Mediterranean diet, which is heavy in vegetables, fruits, nuts and olive oil; moderate in fish and poultry; and light in dairy, meat and sweets, has long been advocated as a means to avoid heart disease. In people who have never had a heart attack, but who are at risk, the N.N.T. is 61 to avoid a heart attack, stroke or death. And that is for people who adhere to the diet for about five years. For those at higher risk, who have already had a heart attack, to avoid one additional death, the N.N.T. is about 30. That’s the number of people who would have to adhere to the diet for four years so that one extra person survived. About 1.4 people out of 30 such people will die no matter what they eat; 27.6 will not die no matter what they eat. Only one will benefit from sticking to the diet.
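The same breakdown, checked in Python (the numbers come from the passage, not from the trial data itself):

```python
n = 30                     # N.N.T. for the diet in people who have had a heart attack
die_regardless = 1.4       # die no matter what they eat
survive_regardless = 27.6  # survive no matter what they eat
helped = n - die_regardless - survive_regardless

print(round(helped, 6))  # 1.0: only one of the 30 benefits from sticking to the diet
```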
But it’s not easy for everyone to stick to such a diet for that many years. Some — for example, those who enjoy steak and ice cream — will feel that it diminishes their quality of life. When you hear that the diet prevents heart attacks, then it might sound worth it. But does it still sound worth it when you consider that 29 out of 30 people who stick to the diet for several years see no benefit at all? Will you stick to it for years and be the lucky one for whom that matters?
As treatments go, an N.N.T. of 30 is pretty good. Very few are as low as 10, though some are. For instance, the use of steroids in people having asthma attacks to prevent admission to the hospital has an N.N.T. of eight. This is so obvious, and so powerful a treatment, that there are no commercials and no op-eds preaching steroid use for asthma. (Maybe there should be. It’s likely that this therapy is being underutilized, perhaps because cost-sharing discourages some people with asthma from seeking care when they might need it.) Steroids work very well for asthma attacks — better than many treatments for other conditions. But still, seven of eight people suffering an asthma attack see no benefit at all from steroids with respect to preventing hospitalization.
Even more concerning, N.N.T.s as calculated from clinical trial data are probably higher than those based on real-world medical care. In clinical trials, treatments are applied to a select population for whom they’re intended. In medical practice, it’s very common for treatments to be applied to a much broader population, including many people for whom they’re less likely to be effective, which increases the N.N.T. This is, perhaps, because doctors would rather offer an explicit treatment — perhaps to harness a placebo effect — even when it’s not likely to be of additional benefit.
In fact, as recently reported in The Times, a new study showed that many people who are prescribed aspirin for the primary prevention of cardiovascular disease don’t meet the criteria described above for its use. Because of this use in a population beyond that targeted in clinical trials, the N.N.T. in practice is most likely higher than the 2,000 suggested by those trials. (It’s worth noting that our best estimates of N.N.T.s can rise or fall as more data are collected and as treatments or how or to whom they’re delivered change.)
Antibiotics are a classic example of overuse. For instance, the N.N.T. for antibiotics to treat radiologically diagnosed acute sinusitis is 15, meaning that 14 out of 15 who take them derive no benefit. But physicians often write prescriptions for antibiotics in situations when the diagnosis of sinusitis is far less assured. This leads to antibiotics being overprescribed and overused, raising their N.N.T. in practice.
The use of stents to open up clogged arteries in patients who are not actively suffering a heart attack is another treatment that is employed too often. (Stents are considered appropriate in patients who are having a heart attack.) Many more patients believe they extend life than their N.N.T. suggests. The N.N.T. is effectively infinite, relative to treatment with medications, for people not suffering a heart attack.
Until health care technology improves, there’s not a lot we can do about N.N.T.s that are larger than we might hope. It’s just a fact of current medical technology that not everyone benefits from treatment, even when well targeted. President Obama’s push for “precision medicine” is an attempt to change this, by using genomics to focus treatments on people who would most benefit from them. That will take time.
In the meantime, we would all be better served by a more informed understanding of exactly how much, or how little, benefit is reasonably to be expected by taking a drug, changing our lifestyle or undergoing a procedure. Especially since the chance of benefit, as expressed by N.N.T., might not be worth the risk of harm, as expressed by N.N.H. We’ll discuss that more next week.
There are many social and cultural factors that contribute to Mississippi’s catastrophic public health. One factor is a pervasive lack of health insurance.
Small businesses dominate [Mississippi’s] economy. [They] typically don’t offer health insurance, and Mississippi’s public health program for the poor is one of the most restrictive in the nation. Able-bodied adults without dependent children can’t sign up for Medicaid in Mississippi, no matter how little they earn, and only parents who earn less than 23 percent of the federal poverty level—some $384 a month for a family of three—can enroll. As a result, one in four adult Mississippians goes without health coverage. For African-Americans, the numbers are even worse: One in three adults is uninsured.
As passed, the ACA required states to extend Medicaid eligibility to 138% of the federal poverty level and provided substantial federal funds in support. But the Supreme Court made that extension optional, and Mississippi opted out.
Refusing Medicaid had serious effects on access to health care in rural Mississippi. It wasn’t just that poor residents couldn’t get care because they were uninsured. They also had fewer places to go when local clinics had to close.
The Medicaid gap hit hospitals hard, too. Without the cash infusion that a Medicaid expansion would have brought, Mississippi hospitals are being strained to a near breaking point, with a number of them shuttering entire departments and laying off staff. Poor people often flocked to the emergency room at Montfort Jones Memorial Hospital in Kosciusko, for instance… Earlier this year, the hospital shut down its intensive care unit and laid off 38 employees. Next, the psychiatric unit for seniors closed. One in five people who come to the hospital can’t pay their medical bills, and Montfort Jones had relied on supplemental Medicaid payments to defray the costs. But under the health law, federal aid for uncompensated care trails off. Without those payments, and with no softening in the demand for uncompensated care, Montfort Jones had been losing up to $3 million a year, and couldn’t meet payroll…
[A Kosciusko physician] led me down a darkened hallway [in the Montfort Hospital Emergency Department] and pushed open the doors to the ICU. It looked as though the nurses, doctors and janitors had just gotten up and left. Scanning the bay of ghostly patient rooms, Alford said mordantly, “This is a state-of-the-art ICU.” Now, patients with pneumonia, blood clots or infections are sent 70 miles away to Jackson.
Montfort Jones Hospital in Kosciusko, MS.
If you are unfamiliar with rural poverty, be aware that not everyone has a car, public transportation is not always available, and ambulances are unaffordable if you are uninsured. If you are concerned about health care efficiency, think about making the capital investment required to build an ICU and then just letting it depreciate unused.
So how should we think about the ACA in the light of Mississippi? First, let’s acknowledge that many people’s views about the ACA reflect their philosophical commitments. For Michael Cannon, who may have influenced Mississippi’s decision making, freedom itself was at stake. Progressives disagreed: Bill argued that the ACA expands human freedom. Progressives also emphasized the importance of equal access to health care.
However, conservatives and progressives also disagreed on the likely outcomes of the ACA for the health system. The data are now coming in and they pose important questions for each side.
For conservatives: Many opponents believed both that the implementation of the ACA would fail and that it would result in worse rather than better health care. But after a rough start, the ACA is largely meeting its implementation goals. It is significantly reducing the number of the uninsured. And whereas health outcomes are improving in Massachusetts, the first adopter of ACA-like reform, Varney’s article suggests that in Mississippi the health care system is showing signs of collapse. It’s hard to look at Mississippi and conclude conservatives were right, for the ACA was hardly given a chance in that state, as Varney details.
Yet Mississippi, though it would be better off in many respects, would probably never be like Massachusetts even if it fully embraced the ACA. Massachusetts started with a strong economy, a well-educated citizenry, and first-world rather than developing-world population health. As Varney shows, part of what has happened to Mississippi is likely the result of specific policy choices by elected officials. But a lot of it was baked into a terrible history.
“The opposition to [Obamacare] was really either political or ideological,” Ohio Gov. John Kasich (R) told the Associated Press earlier this month. “I don’t think that holds water against real flesh and blood, and real improvements in people’s lives.”
Principles matter, but how do you weigh principles that dictate opposition to access to basic health insurance against the costs borne by Mississippi’s rural poor?
For progressives: The Supreme Court’s ruling increased the power of states to decide their own health policies. Progressives must ask: How important is it that Medicaid expansion proceed in Mississippi according to the standards in the ACA? Would you be willing to grant greater flexibility for Mississippi (and other red states) to expand the program with features you don’t like? Granting flexibility looks like complicity in allowing unequal standards of health and health care, depending on the accident of where people live. But the differences between Mississippi and Massachusetts are already vast. For worse or (in our view) for better, we have a federal system. What’s happening in Mississippi reflects the (fully legal) choices politicians have made at all levels. If refusing to compromise leads to a political blockage of reform, we end up with outcomes like Mississippi’s. Is the fight for equal standards worth that cost?
Mississippi should prompt both conservatives and progressives to assess the cost of their principles. How much are they worth in “flesh and blood, and real improvements in people’s lives”?
The following originally appeared on The Upshot (copyright 2014, The New York Times Company) and is coauthored by Austin Frakt and Aaron Carroll.
Most news coverage of the new Kaiser Family Foundation annual survey on employer-sponsored health plans has focused on the fact that growth in premiums in 2013 was as low as it has ever been in the 16 years of the survey. But buried in the details of the report are some interesting insights into how employers think about controlling health care costs. One example is that they’re very fond of workplace wellness programs. This is surprising, because while such programs sound great, research shows they rarely work as advertised.
Wellness programs aim to encourage workers to be more healthy. Many use financial incentives to motivate workers to monitor and improve their health, sometimes through lifestyle-modification programs aimed at lowering cholesterol or blood pressure, for instance. Some programs offer a carrot, like discounts on health insurance to employees who complete health-risk assessments. Others use a stick, penalizing poor performance, or charging people more for smoking or having a high body mass index, for example.
Wellness programs are popular among employers. An analysis by the RAND Corporation found that half of all organizations with 50 or more employees have them. The new survey by the Kaiser Family Foundation found that 36 percent of firms with more than 200 workers, and 18 percent of firms over all, use financial incentives tied to health objectives like weight loss and smoking cessation. Even more large firms — 51 percent of those with 200 workers or more — offer incentives for employees to complete health risk assessments, intended to identify health issues.
Medium-to-large employers spent an average of $521 per employee on wellness programs last year, double the amount they spent five years ago, according to a February report by Fidelity Investments and the National Business Group on Health. The programs are generally offered not directly by insurance companies, but by specialist firms that tell employers they will reduce spending on employees’ care by encouraging the employees to take better care of their health.
Wellness programs have grown into a $6 billion industry because employers believe this. In fact, asked which programs are most effective at reducing costs, more firms picked wellness programs than any other approach. The Kaiser survey found that 71 percent of all firms think such programs are “very” or “somewhat” effective, compared with only 47 percent for greater employee cost sharing or 33 percent for tighter networks. (Recent research on public employee plans in Massachusetts found that tighter networks were associated with large savings.)
What research exists on wellness programs does not support this optimism. That is, in part, because most studies of wellness programs are of poor quality, relying on weak methods that suggest wellness programs are associated with savings but cannot establish causation. Or they consider only short-term effects that aren’t likely to be sustained. Many such studies are written by the wellness industry itself. More rigorous studies tend to find that wellness programs don’t save money and, with few exceptions, do not appreciably improve health. This is often because additional health screenings built into the programs encourage overuse of unnecessary care, pushing spending higher without improving health.
However, this doesn’t mean that employers aren’t right, in a way. Wellness programs can achieve cost savings — for employers — by shifting higher costs of care onto workers. In particular, workers who don’t meet the demands and goals of wellness programs (whether by not participating at all, or by failing to meet benchmarks like a reduction in body mass index) end up paying more. Financial incentives to get healthier sometimes simply become financial penalties on workers who resist participation or who aren’t as fit. Some believe this can be a form of discrimination.
The Affordable Care Act encourages this approach. It raises the legal limit on penalties that employers can charge for health-contingent wellness programs to 30 percent of total premium costs. Employers can also charge tobacco users up to 50 percent more in premiums. Needless to say, this strikes some people as unfair and has led to objections by workers at some organizations, as well as lawsuits.
Another way that wellness programs can help employers is by putting a more palatable gloss on other changes in health coverage. For instance, workers might complain if a company tries to reduce costs through higher cost sharing or narrower networks that limit doctor and hospital choice. But if these are quietly phased in at the same time as a wellness program that’s marketed as helping people become healthier, a company might be able to achieve those cost reductions with less grumbling.
At least one study has shown that a wellness program can achieve long-term savings. In 2003, PepsiCo introduced what was to become its Healthy Living program, which included lifestyle management (weight, nutrition and stress management along with smoking cessation and fitness) and disease management components (targeting participants with asthma, coronary artery disease, atrial fibrillation, congestive heart failure, stroke, hyperlipidemia, hypertension, diabetes, low back pain and chronic obstructive pulmonary disease). A study published in Health Affairs examined the outcomes of the program seven years after implementation, the longest such study of a wellness program to date.
Researchers found that participation in the PepsiCo program was associated with lower health care costs, but only after the third year, and all from the disease management components of the program. This suggests that wellness programs that target specific diseases that may drive employer costs could achieve savings, though perhaps only after several years. When more broadly implemented and focused on lifestyle management, as many wellness programs are, savings may not materialize, and certainly not in the short term.
Employers may misunderstand the research if they think that just any wellness program, by itself, is the surest route to reducing overall health care spending. That just isn’t the case. It may be true that, if designed well, some programs can save money for both the employer and employees in the long run, but not by focusing on lifestyle changes. Programs that merely do that may cut employer costs, but only by shifting them to employees. If firms wish to count that as a victory in the battle against health care costs, they may do so, but their employees may look at it differently.