Via Language Log:
Twice in the last few weeks I was all riled up and feeling the need to blast out posts on how everyone needed to stop freaking out and pay attention to real risks and not the scream du jour. But before I could even get to it, there was Christopher Ingraham in the Washington Post, doing it for me.
He saved me the trouble, and made it much easier to write this week’s Healthcare Triage News:
For those of you who want to read more, here you go:
In 2012, NEJM published a randomized study of how physicians use financial conflicts of interest (COI) disclosures, by Aaron Kesselheim and eight others.
From the paper’s introduction/background and that which it cites, we learn the following:
- Echoing my struggles in this area, COI disclosure to clinical trial participants may not help them because they cannot evaluate its relevance, among other reasons.
- A systematic review of researchers’ attitudes about COI, published in 2005, concluded that COIs act subconsciously and that their disclosure neither eliminates bias nor changes the quality of research.
- Most physicians say they would discount study findings from sources viewed as “conflicted,” but in one study COI information didn’t affect self-reported likelihood of prescribing a new drug.
- Several other studies found the opposite, that COI disclosure leads physicians to discount trial findings.
- Other studies found that doctors’ COI disclosure can enhance trustworthiness of patients in their doctors.
We should pause and recognize that this collection of work suggests that COI disclosure can have completely different effects on confidence in doctors and findings, depending on the study and, in particular, the population of focus (physicians vs. patients). Such disclosure may be like a box of chocolates, in the Forrest Gumpian sense. As such, and in light of point 2, claims that disclosure comes anywhere near systematically addressing the bias that such conflicts may create, or our interpretation of them, are suspect.
The Kesselheim study is most relevant to points 3 and 4, and its findings support the latter. The researchers provided 269 American Board of Internal Medicine-certified physicians with abstracts of hypothetical research on made-up drugs for hyperlipidemia (“lampytinib”), diabetes (“bondaglutaraz”), and angina (“provasinab”), said to have been recently FDA approved.* The abstracts varied in drug, funding source (no mention, NIH, or one of the top 12 global pharmaceutical companies), and three levels of methodological rigor. Each participant received three abstracts at random but such that they varied across the full ranges of the last two of these dimensions.
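To make the design concrete, here is a minimal sketch of one way to generate such assignments. This is my illustration, not the authors’ actual procedure; only the fictitious drug names and the three-level funding/rigor dimensions come from the study description above.

```python
import random

# Dimensions described in the study (the assignment logic below is my
# hypothetical reconstruction, not the paper's actual algorithm).
DRUGS = ["lampytinib", "bondaglutaraz", "provasinab"]
FUNDING = ["no funding mentioned", "NIH", "industry"]
RIGOR = ["low", "medium", "high"]

def assign_abstracts(rng):
    """Give one participant three abstracts such that funding source and
    methodological rigor each span their full range across the set."""
    funding = rng.sample(FUNDING, 3)  # a random permutation: each level once
    rigor = rng.sample(RIGOR, 3)
    return list(zip(DRUGS, funding, rigor))

rng = random.Random(0)
abstracts = assign_abstracts(rng)
assert len(abstracts) == 3
assert {a[1] for a in abstracts} == set(FUNDING)  # all funding levels covered
assert {a[2] for a in abstracts} == set(RIGOR)    # all rigor levels covered
```

The point of the constraint is that every participant sees the full range of funding sources and rigor levels, so within-participant comparisons are possible.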
Since “methodological rigor” is vague, let’s be clear what the researchers meant by their three levels of it. This chart tells you all you need to know:
The study results show that the participants differentiated and understood the levels of methodological rigor. Controlling for rigor, industry-funded results were viewed as less credible and actionable than those funded by NIH or without indicated source of support. The following charts, with results adjusted for methodological rigor, tell the story.
Do these results reflect rational behavior? Here’s how they might: there are aspects of a study’s quality that, in today’s research reporting environment, are hard for a reader to assess. These include the withholding of critical data, failing to publish negative findings, or paper ghostwriting. The authors mention all these potential problems with citations to work documenting controversies in these areas with respect to industry-funded work: for data withholding see this, this, and this; for publication bias see this and this; for ghostwriting see this. To the extent that these problems are more pervasive in industry-sponsored work than that with other funding, it’s rational to downgrade the former accordingly (though precisely by how much is unclear).
However, we must acknowledge that citing some examples of problems with industry-sponsored work does not, itself, demonstrate that bias, on the whole, is more common under that kind of funding, let alone by how much. A key danger in assuming that other types of sponsorship are not accompanied by significant conflicts is that it could lead to the wrong policy solution. For instance, would more publicly-funded trials help? It’s possible, but can we say how much more credible they’d be, beyond our own intuition based on anecdotes? This is an important question. (Please understand, I am not against more publicly funded trials.)
This points directly to some other ways to alleviate the concerns COIs raise, and not just for industry-sponsored studies but for all studies: beef up research reporting. The authors conclude,
Financial disclosure is important, but more fundamental strategies, such as avoiding selective reporting of results in reports of [all] trials, ensuring protocol and data transparency, and providing an independent review of end points, will be needed to more effectively promote the translation of high-quality clinical trials — whatever their funding source — into practice. [Note: I substituted “all” for the authors’ “industry-sponsored” in this quote. Given how the quote ends, it seems more in keeping with what they intended.]
In the near future, Bill and I intend to write more about what we might do to beef up research reporting in these and other ways. We think it’s the right way to take COI seriously.
Addendum: For irony’s sake only, I looked at the study authors’ COI disclosures. Three of the nine authors reported having received compensation for speaking/appearances or data monitoring/analysis activities from Merck, Novartis, Astra Zeneca, Genzyme/Sanofi, or PhRMA. It is not my view that this in any way reduces the credibility of the study, which was supported by a grant from the Edmond J. Safra Center for Ethics at Harvard University, a career development award from the Agency for Healthcare Research and Quality, and a Robert Wood Johnson Foundation Investigator Award in Health Policy Research, a fellowship at the Petrie–Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and a grant from the National Cancer Institute.
* My main disappointment is that I was not invited to the meeting at which these fictitious drug names were cooked up.
Yesterday, Austin flagged two lawsuits that have been filed in California against Anthem Blue Cross over its refusal to pay for Harvoni to treat Hepatitis C. Setting aside the question of whether Harvoni is cost-effective—check out Allan Joseph on that—what are the lawsuits about? And are they likely to succeed?
Anthem denied Harvoni to the plaintiff in the lead California case on the ground that it wasn’t medically necessary for someone with her limited extent of liver damage. The rejection appears to be inconsistent with prominent clinical guidelines, which say that some kind of therapy should be provided to anyone with Hep C, although priority should be given to those with serious liver damage. The American Association for the Study of Liver Disease, for example, has said that successful treatment “is tantamount to virologic cure, and as such, is expected to benefit nearly all chronically infected persons.”
Nonetheless, Anthem refused to pay for Harvoni based on its own internal guidelines, which you can see here. (I have no idea what resources Anthem used to develop its guidelines; the guidelines themselves don’t say.) Although Harvoni is only one of three recommended therapies for someone with the plaintiff’s type of Hep C, it doesn’t look like Anthem would have paid for the two alternatives either.
Judging from the Wall Street Journal article about the lawsuit, Anthem thinks that Harvoni is still experimental for those Hep C patients with healthy-ish livers. As an Anthem spokesperson explained, “[b]roader use of these drugs and knowledge about the long term effects and potential harms and outcome of various alternative therapies are needed on those with limited effects of infection.”
I’m hesitant to draw strong conclusions without hearing more fully from Anthem, but this strikes me as a weak defense. Anthem is contractually bound to cover medically necessary care. Even if it’s reasonable in the abstract to think that some Hep C patients might not benefit from Harvoni, Anthem’s position appears to be, in the words of one California court, “significantly at variance with the medical standards of the community.” And if there’s anything clear in the law, it’s that an insurer’s idiosyncratic view of medical necessity is unlikely to carry the day.
I’ve used Slack a little bit over a span of a few weeks. That’s enough for me to know a few things about it I like and don’t, which may help you decide if you want to try it or help you think about how you might use it.
Here’s where I’d describe Slack, but I’m too lazy. Go watch the video. If you’re too lazy to do that, then think of Slack as a kind of substitute for email.
That makes this question the best place to start: What’s wrong with email?
Nothing or everything, depending on your preferences and how you use it. I like email. A lot. But there are several ways it doesn’t play nice with how I (and maybe you) think or live:
- Most emails are too long for several reasons. There’s no binding constraint on length. Contrast that to Twitter’s 140 characters per tweet. By email, it’s convention to write more than a few sentences,* perhaps for legacy reasons. (Back in the day, email came to be seen as a replacement for letters and memos.) People don’t self-impose the hard discipline of brevity.*
- Email demands a lot of mental overhead: opening messages, confronting different fonts, parsing who is saying what or finding the updates in a long thread, visually jumping over signature info you don’t need—these small cognitive taxes add up.
- Filing, tagging, de-attaching are all annoying, time-consuming chores, as is finding old emails and their attachments. You know what I mean.
- We use email as a to do list, which makes a mess of our inboxes.
- Reply all.
For all that, I’m not abandoning email, in part because most of the world runs on it and very little of the world is on Slack (network externalities). I have to use email a lot anyway. But I also like it. It’s easy to create, on the fly, a custom conversation, dedicated to a new topic and including just the people I want included. In this sense, it’s as private as I want it to be (ignoring NSA/hacking/forwarding issues). I’m not broadcasting my thoughts to more people than I care to, and hearing back from people I don’t want to hear from. (See Twitter.)
Google has a nice fix for the to do problem (number 4). It has a nice fix for the “get me out of this reply all madness” problem (number 5). In many ways, email is not bad, which is not to say it’s perfect.
That’s where Slack comes in. It addresses reasonably well at least problems 1-3. It offers Twitter-like efficient scannability, though without Twitter’s length constraint. The norm (such as I’ve seen it) is to write tersely. You don’t have to open each message. There are handles and no signatures, making it quick and easy to see who wrote what. Various files and links relevant to a thread are all collected in one place and in the cloud. (You can, in fact, get that last benefit, largely integrated with email, using Basecamp or something like it.)
Slack is fine, so long as you keep up with it. I don’t. It can be too social when I don’t want it to be, like Twitter. To consume the wisdom of your network on Twitter (or the Slack equivalent: invitees to a particular channel) you also have to weed through (or enjoy!) its other content. That’s not good or bad. It just is. I share both health policy and photos from my commute on Twitter, for example. Social media is social to a greater extent than email. If you’re like me, sometimes you’re into it. Sometimes you’re not.
Here’s where I’d lower the hammer on Slack, identifying its fatal flaw, or where I’d deliver the triumphant “the solution to all your communication problems has arrived” BS. That’s not how it is. Slack is good the way Twitter is good, the way email is good, the way blogs are good, the way Facebook is good, the way phone calls are good, the way meeting someone in person is good. It’s an incomplete goodness along with some stuff that doesn’t always work for you.
So, Slack? Maybe.
* What’s wrong with people!? Concision. Word. (Your mileage may vary.)
Everyone I know is tweeting or sending me this piece at io9 about a team that faked a study of chocolate and weight loss and fooled the world. I remember when this study came out, and I thankfully dismissed it because I thought it was crap.
Turns out it was. It was a hoax.
It’s an amazing read, and it’s certainly captivating. It’s also shockingly unethical. I don’t know whether to love it or hate it. I think the message is important (science journalism often SUCKS, see here and here and here and here and here and forget it just go here), but these guys knowingly perpetrated a fraud to make a point, on a huge and real scale. I can’t ignore that.
FOTB Brad Flansbaum sent me this piece. It’s about a former Congressman who is suing his Congressional doctors for malpractice. The piece is short, but here’s what you need to know:
According to the court filing, in early 2012, LaTourette, then still a congressman, went to George Washington University Hospital for an MRI after stomach pain and a diagnosis of mild pancreatitis.
The MRI revealed a 1.5-centimeter lesion on his pancreas. The doctor recommended follow-up imaging in six months. Because LaTourette was a congressman, his doctor was at the Office of the Attending Physician at the Capitol.
According to the filing, the hospital doctor told a Capitol doctor about the imaging the day it was taken and sent a report to the Capitol doctor’s office the following day.
The Capitol doctors never performed the follow-up screening six months later, the filing says, and LaTourette himself had not been informed of the need for a follow-up.
It was not until last year, after his retirement in 2013, that he felt abdominal pain once again and doctors at the Cleveland Clinic discovered that he had developed a cancerous mass on his pancreas.
I have no idea who is at “fault” here. There’s certainly plenty of potential blame to go around. But how many system failures do you need? Here’s what I can see off the top of my head:
- Was there a proper discussion of the differential diagnosis before or after the MRI?
- Why were they getting the MRI for stomach pain and mild pancreatitis?
- Was the initial lesion even what developed into the cancer? Or was it an incidentaloma?
- When they “told” the Capitol doctor about it, what information was passed?
- When they “sent” the records, were they electronic or paper? (I bet the latter!)
- Were the data in the form of a document, or in the form of actual data that could be used?
- Was there a flag to get the docs to remember to do the follow-up test?
- Even if there was, why didn’t the patient himself follow up? He had a mass on his pancreas!
- Was he really not informed of the need for follow-up? Did he forget?
The MRI might have been overkill. There’s a complete lack of communication. The many different parts of the system couldn’t pass data efficiently. There were no systemic efforts to make sure that follow-up occurred. No clinical decision support. No effective use of the data. And this happened to a Congressman, at some pretty impressive facilities. You think regular people have it better?
This is one way we try to do it:
Express Scripts Holding Co., a large manager of prescription-drug benefits for U.S. employers and insurers, is seeking deals with pharmaceutical companies that would set pricing for some cancer drugs based on how well they work. […]
Drug companies are countering with pricing models of their own, such as offering free doses during a trial period. […]
Express Scripts’ approach would be similar to that proposed by Peter Bach, director of the Center for Health Policy and Outcomes at Memorial Sloan Kettering Cancer Center.
In an article published last year in the Journal of the American Medical Association [link], he suggested that in an indication-specific arrangement, the monthly price for Eli Lilly & Co.’s cancer drug Erbitux would plummet from $10,320 a patient to about $470 a patient for its least effective use, treating recurrent or metastatic head and neck cancer.
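Bach’s figures imply something like proportional scaling of price to measured benefit by indication. Here’s a hedged sketch of that idea; only the $10,320 reference price and ~$470 result come from the article, and the relative-benefit number below is made up purely to reproduce the quoted price (it is not Bach’s actual input):

```python
# Hypothetical indication-specific pricing: scale a reference monthly
# price by the drug's relative benefit in each indication. The 0.0455
# relative-benefit figure is invented for illustration only.

REFERENCE_PRICE = 10_320  # Erbitux monthly price quoted in the article

def indication_price(benefit, reference_benefit):
    """Price proportional to benefit relative to the reference indication."""
    return REFERENCE_PRICE * benefit / reference_benefit

# If the least effective use delivered ~4.55% of the reference benefit,
# the monthly price would fall to roughly the ~$470 figure quoted above.
price = indication_price(benefit=0.0455, reference_benefit=1.0)
```

Whatever the actual formula, the design choice is the same: one drug, many prices, each keyed to how well the drug works for the condition it’s treating.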
This is one way it gets rolled back:
The controversy over the new crop of hepatitis C treatments has taken yet another turn as consumers are starting to file lawsuits against insurers that deny them access to the medicines. Over the past two weeks, two different women alleged that Anthem Blue Cross refused to pay for the Harvoni treatment sold by Gilead Sciences because it was not deemed “medically necessary.” […]
Both lawsuits claim the insurer denied coverage for Harvoni, one of two hepatitis C treatments sold by Gilead, because the amount of liver damage sustained by the women was insufficient to warrant payment for the drug. In both cases, the insurer decided that Harvoni was not medically necessary, according to the lawsuits.
Prior TIE coverage of “medically necessary” care and the law here and here. Maybe there’s more here somewhere, but I forget where.
It’s one of those things that I think we consider too rarely: paying patients to be healthier. After all, we have no problem penalizing them for being unhealthy (i.e., wellness programs). But the latter is totally accepted, and the former is often considered ridiculous. But here’s the NEJM with a study to change your mind. “Randomized Trial of Four Financial-Incentive Programs for Smoking Cessation“:
BACKGROUND: Financial incentives promote many health behaviors, but effective ways to deliver health incentives remain uncertain.
METHODS: We randomly assigned CVS Caremark employees and their relatives and friends to one of four incentive programs or to usual care for smoking cessation. Two of the incentive programs targeted individuals, and two targeted groups of six participants. One of the individual-oriented programs and one of the group-oriented programs entailed rewards of approximately $800 for smoking cessation; the others entailed refundable deposits of $150 plus $650 in reward payments for successful participants. Usual care included informational resources and free smoking-cessation aids.
So here’s the deal. Researchers randomly assigned employees, along with their relatives and friends, to one of four programs to help them quit, or to “usual care”. The randomization was stratified over two variables: whether they had full healthcare benefits through the employer and whether their annual household income was at least $60,000. This was to balance recruitment across those groups.
Two of the programs involved an individual incentive. The first was a straight payment system, with participants getting $200 at 14 days, 30 days, and 6 months, with a potential $200 bonus at the end of their enrollment if they were still not smoking. So they could get $800 total potentially. They were checked by laboratory testing to see if they were smoke free. The second individual program was the same, but required participants to pony up a refundable $150 at the start of the trial. They’d get that back if they didn’t smoke.
The other two groups were collective. The first was collaborative. Participants were enrolled in groups of six. At each time point, they all received $100 for each member that was still smoke free. In this way, they could earn up to $600 per check, with the $200 bonus still available. Thus, there was $2000 total potentially available, depending on how many in the group stuck with it. This was supposed to see if getting people incentivized to work together might help.
The last group was competitive, and also involved deposits. Everyone had to pony up $150. People were paid more if others failed. They could receive between $1200 and $2000 at each time period, with the $200 bonus at the end, for a potential $3800. Again, though, they’d get more money if fewer people quit. They were kept anonymous, though, so people couldn’t sabotage each other.
They got more than 1000 people to participate. Overall, people liked the rewards-based programs much more than they liked the deposit-based programs. About 90% agreed to participate in the rewards program, versus only about 14% agreeing to the deposit programs. In other words, they didn’t like the idea of risking their own money. But what we really care about was the different quit rates. In an intention-to-treat analysis, the quit rates were significantly higher with all of the incentive programs than with usual care, which had a quit rate of 6%.
At 6 months, the individual deposit program had a quit rate of 9.4%, and the competitive deposit program had a quit rate of 11.1%. The individual rewards program had a quit rate of 15.4%, and the collaborative rewards program had a quit rate of 16%. All much better than the 6% in usual care.
The sad news is that almost all of these pretty much halved at 12 months, but still – the programs were generally better than usual care.
And let’s not forget, more quitting is better. So how much did it cost for each 6-month quit? It was $122 in usual care, $1,058 in individual rewards, $1,193 in collaborative rewards, $542 in individual deposits, and $858 in competitive deposits. Is that worth it? Might be. We pay a lot more for things that do us a lot less good than quitting smoking would.
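One more number the post’s figures let you derive, as a back-of-the-envelope (my arithmetic, not the study’s): the incremental cost per *additional* quit relative to usual care, combining the quit rates and cost-per-quit figures quoted above.

```python
# Back-of-the-envelope: incremental cost per additional 6-month quit
# vs. usual care. Quit rates and cost-per-quit figures are the ones
# quoted in the post; the derivation is mine.

programs = {
    # name: (6-month quit rate, cost per 6-month quit)
    "usual care":            (0.060, 122),
    "individual rewards":    (0.154, 1058),
    "collaborative rewards": (0.160, 1193),
    "individual deposits":   (0.094, 542),
    "competitive deposits":  (0.111, 858),
}

def incremental_cost_per_quit(name):
    """Extra spending per extra quit, relative to usual care."""
    rate, cpq = programs[name]
    base_rate, base_cpq = programs["usual care"]
    # cost per participant = cost per quit x quit rate
    extra_cost = cpq * rate - base_cpq * base_rate
    extra_quits = rate - base_rate
    return extra_cost / extra_quits

for name in programs:
    if name != "usual care":
        print(f"{name}: ${incremental_cost_per_quit(name):,.0f} per additional quit")
```

By this rough math the incentive programs run in the low-to-mid four figures per additional quit, which is the right comparison to make against what we pay for other, less valuable interventions.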
The following originally appeared on The Upshot (copyright 2015, The New York Times Company).
Can hospitals provide better care for less money? The assumption that they can is baked into the Affordable Care Act.
Historically, hospital productivity has grown much more slowly than the overall economy, if at all. That’s true of health care in general. Productivity — in this case the provision of care per dollar and the improvements in health to which it leads — has never grown as quickly as would be required for hospitals to keep pace with scheduled cuts to reimbursements from Medicare.
But to finance coverage expansion, the Affordable Care Act made a big bet that hospitals could provide better care for less money from Medicare. Hospitals that cannot become more productive quickly enough will be forced to cut back. If the past is any guide, they may do so in ways that harm patients.
The Obamacare gamble that hospitals can become much more productive conflicts with a famous theory of why health care costs rise. William Baumol, a New York University economist, called it the “cost disease.” (He wrote a book about it by that title; I blogged on it as I read it if you’d like to quickly get the gist.)
This theory asserts that productivity growth in health care is inherently low for the same reason it is in education: Productivity-enhancing technologies cannot easily replace human doctors or teachers. In contrast with, say, manufacturing — a sector in which machines have rapidly taken over functions that workers used to do, and have done them better and more cheaply — there are, at least for the time being, far fewer machines that can step in and outperform doctors, nurses or other health sector jobs.
But a new study casts doubt on that theory and suggests Obamacare’s bet may indeed pay off. The study, published in Health Affairs by John Romley, Dana Goldman and Neeraj Sood, found that hospitals’ productivity has grown more rapidly in recent years than in prior ones. Hospitals are providing better care at a faster rate than growth in the payments they receive from Medicare, according to the study.
[Note: y-axis is cumulative percent increase in productivity, as defined in the chart’s footnote.]
This is both good news for patients and good news for the financing of the health reform law, which assumes hospitals will become significantly more productive. This bet is built into a schedule of reductions in the rate of growth in Medicare payments to hospitals. According to the law, those rates are to be reduced commensurate with the productivity growth of the overall economy. The only way for hospitals to keep up is if their productivity rises just as quickly.
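The arithmetic of that squeeze compounds over time. Here’s a minimal sketch with illustrative numbers of my choosing (not the statute’s actual update factors):

```python
# Illustrative only: if Medicare trims hospital payment updates by
# economy-wide productivity growth (~1%/yr assumed here) while a
# hospital's own productivity grows more slowly, the gap compounds.

def cumulative_gap(hospital_growth, economy_growth, years):
    """Cumulative shortfall a hospital faces when its productivity
    grows slower than the economy-wide adjustment to its payments."""
    annual_ratio = (1 + hospital_growth) / (1 + economy_growth)
    return 1 - annual_ratio ** years

# e.g., 0.4%/yr hospital productivity vs. a 1%/yr adjustment, over a decade:
shortfall = cumulative_gap(0.004, 0.010, 10)  # roughly a 6% cumulative gap
```

A few tenths of a percentage point per year looks small, but over a decade it becomes a meaningful share of a hospital’s revenue — which is why the productivity question matters so much for the law’s financing.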
The cost disease theory says it can’t be done. This, according to the theory, is what causes health care spending growth to outpace that of the overall economy.
Computers, cellphones, televisions — over the years they’ve all gotten better and cheaper. High productivity growth in such sectors — not mirrored in health care — leads to wage growth in those sectors. Higher wages provide more resources to spend on goods and services. Because health care is valuable, we use those resources to pay health care workers more, too, to keep them from doing something else. This helps explain why health care spending outpaces economic growth: We keep paying more for health care (through growing wages) without getting more (because of low productivity growth).
Not all economists find every detail of the cost disease theory compelling. Some have argued, for example, that it gives short shrift to ways in which the quality of care changes, along with its price. Heart attack treatment certainly costs more today than a decade ago. Perhaps it’s also better. The acceptance of inevitably low health care productivity growth also troubles some economists.
Amitabh Chandra, a Harvard economist, is one of them: “In Baumol’s view, as long as there is a steady stream of innovation in sectors other than health care — from cars to computers to everything on Amazon — we’ll be able to spend even more on health care, despite its jaundiced productivity growth. But if productivity in health care improves, too, then think about how much more health care we’ll be able to afford.”
If the cost disease theory’s premise of low health care productivity growth holds, then the idea of tying reductions in the growth of Medicare payments to hospitals to economic growth — as the Affordable Care Act does — spells trouble.
The findings by Mr. Romley and colleagues from the Schaeffer Center for Health Policy and Economics at the University of Southern California are a hopeful sign this need not happen. A strength of the study is it incorporated an aspect of the quality of care into its measure of productivity: whether the care received kept more patients alive and out of the hospital for at least 30 days. The findings were qualitatively similar for shorter (two weeks) or longer windows (one year). This distinguishes it from other approaches that measure productivity according to how many procedures a hospital can do per dollar, but not how well they do them.
According to the analysis, productivity fell for heart attack and heart failure patients between 2002 and 2005, after which it began to rise. For hospital care for all three conditions examined — heart attacks, heart failure and pneumonia — productivity growth accelerated after 2007. By 2011 it was more than 14 percent over the level it had been in 2002.
The source of the broadest optimism from the study: Hospital productivity increased in the most recent years faster than that of the overall economy.
Though the study is an important one, we should interpret it with some caution. It examined only one measure of productivity; it examined only three conditions in Medicare patients; and it examined data only through 2011. More studies like this one — but using different methods and more recent data — could confirm or refute these findings.
Nevertheless, for decades the conventional wisdom has been that hospitals — and the health care sector in general — could not become more productive, explaining its growing expense. This new study suggests that such a cost disease may not be as inherent as once believed — and that the health care law’s cuts to Medicare are not as risky a bet as they once seemed.