Overall, the FDA and Harvard Pilgrim say that they have access to prescription medication data on approximately 178 million people, with the routine accrual of medication data on 48 million currently enrolled or treated at the eighteen core partner organizations. Sentinel, they say, has 358 million person-years of data that include 4.0 billion prescriptions, 4.1 billion doctor or lab visits and hospital stays, and 42.0 million acute inpatient stays.
The data derive primarily from medical bills (claims), but a growing portion comes from EHRs or laboratory results (approximately 10 percent)—a portion expected to grow steadily in coming years. Here’s how the system works to proactively assess drug safety:
Prompted by a signal from FAERS [the FDA’s Adverse Event Reporting System], clinical trials, meta-analyses, case reports, or other regulatory bodies outside the United States showing a potential link between a prescription drug and an adverse event or safety risk, the FDA and Harvard Pilgrim staff and authorized researchers from collaborating institutions send a query via a secure portal to Sentinel’s network of data partners.
The data partners then conduct the query within their systems. All use the same analytical program. The partners are required to update their data sets periodically, with the largest data partners doing this quarterly and less frequent updates coming from smaller partners.
The findings are returned from each data partner through a secure portal. A team of data experts ensures the data quality before giving the FDA the findings.
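The steps above amount to a distributed-query architecture: patient-level data stay with each partner, a common analytic program runs locally, and only results flow back. Here is a minimal sketch of that pattern in Python. It is my illustration only, not Sentinel's actual software; the record fields and partner names are invented.

```python
# Illustrative sketch of a Sentinel-style distributed analysis: each data
# partner runs the SAME analytic routine against its own records and
# returns only aggregate counts, so patient-level data never leave the
# partner's system. All names and data here are hypothetical.

from dataclasses import dataclass


@dataclass
class ExposureRecord:
    patient_id: str
    took_drug: bool
    had_event: bool


def shared_query(records):
    """The common analytic program every partner runs locally."""
    exposed = [r for r in records if r.took_drug]
    events = sum(r.had_event for r in exposed)
    return {"n_exposed": len(exposed), "n_events": events}


# Hypothetical partners, each holding its own (private) data.
partners = {
    "partner_a": [ExposureRecord("a1", True, True),
                  ExposureRecord("a2", True, False),
                  ExposureRecord("a3", False, False)],
    "partner_b": [ExposureRecord("b1", True, False),
                  ExposureRecord("b2", True, True)],
}

# The coordinator pools only the aggregates returned through the "portal."
pooled = {"n_exposed": 0, "n_events": 0}
for name, records in partners.items():
    result = shared_query(records)  # runs inside the partner's system
    pooled["n_exposed"] += result["n_exposed"]
    pooled["n_events"] += result["n_events"]

print(pooled)  # {'n_exposed': 4, 'n_events': 2}
```

The design choice worth noticing is that `shared_query` is identical everywhere, which is what lets results be pooled without centralizing the underlying records.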
There’s lots more in the brief, including limitations, controversies, aspirations, and so forth.
“Avalere Health projects that 76 percent of beneficiaries in Medicaid and the related Children’s Health Insurance Program (CHIP) will be covered by private managed care by 2016.”
“Private insurers booked $115 billion in Medicaid revenue last year, according to data compiled from regulatory filings by Mark Farrah Associates and analyzed by Kaiser Health News.”
“Operating profit on those premiums came to $2.4 billion. Net profit, after accounting for taxes, depreciation and other expenses not directly connected to health coverage, would have been less.”
Among the new, proposed regulations of Medicaid managed care plans, “HHS now wants states to certify at least annually, perhaps based on direct queries to doctors, that enough caregivers are in the managed-care network and close enough to plan members to serve them.”
“[F]or nondisabled adults increased [Medicaid managed care or MMC] penetration is associated with increased probability of an emergency department visit, difficulty seeing a specialist, and unmet need for prescription drugs.” Some other work they cite, but not all of it, is consistent with these findings.
Consistent with prior work they cite (here’s some, the working paper version of which I mentioned here), Medicaid managed care penetration “is not associated with reduced expenditures. We find no association between penetration and health care outcomes for disabled adults.”
“[T]he primary gains from MMC may be administrative simplicity and budget predictability for states rather than reduced expenditures or improved access for individuals.”
Their analysis excluded Medicaid-Medicare dual enrollees and those with less than full year coverage. Separate analyses of the SSI population were generally not statistically significant, but the sample was smaller.
“[M]ore than half of all Medicaid beneficiaries are enrolled in risk-based managed care organizations (MCOs) through which they receive all or most of their care.”
“Not all state Medicaid programs contract with MCOs, but a large and growing number are doing so, and some states mandate that beneficiaries enroll in MCOs to receive Medicaid benefits.”
“In FY 2013, capitation payments to comprehensive MCOs accounted for about 28% of Medicaid spending nationally.”
“The federal regulations, last updated in 2002, set forth state responsibilities and requirements in areas including enrollee rights and protections, quality assessment and performance improvement (including provider access standards), external quality review, grievances and appeals, program integrity, and sanctions.”
Because I am hoping to see a new issue brief that summarizes what’s actually in the proposed regs, I’m not going to go through this brief in detail. Suffice it to say, as things to look for in the new regs, the brief covers availability and accessibility of plan information, enrollee appeal rights, provider network adequacy, quality of care, long-term care services and supports, actuarial soundness of capitation rates, medical loss ratio, encounter data, and program integrity (basically, auditing contractors).
“Many state Medicaid programs are expanding their reliance on MCOs. In a recent 50-state survey of Medicaid directors conducted by the Kaiser Commission on Medicaid and the Uninsured, half the states reported taking action in 2014 to enroll additional Medicaid eligibility groups in MCOs. These states include California, New York, Texas, Florida, and Illinois – the five states with the largest Medicaid populations. A smaller number of states expanded their managed care programs geographically and/or shifted from voluntary to mandatory MCO enrollment. Nearly half the states plan to expand their risk-based managed care programs in 2015 as well.”
“A number of large health insurance companies have a significant stake in the Medicaid managed care market. Currently, 16 firms own Medicaid MCOs in two or more states, including five firms – UnitedHealth Group, WellPoint, Centene, Aetna, and Molina – that have Medicaid MCOs in 10 or more states. Eleven of the 16 multi-state parent firms are publicly traded; eight of these 11, including the five just mentioned, are ranked in the Fortune 500. The other five multi-state parent firms are nonprofit companies.”
Also, from the tracker directly, of the 38 states (plus DC) with MCO contracts, 25 have five or fewer and 14 have three or fewer. Suffice it to say, competition isn’t robust in many states.
Noam Scheiber’s NYT piece today is devastating. About selecting papers to be most prominently featured at a top economics conference, David Card is quoted: “‘I choose papers that are going to be written up’ in the mainstream press. […] ‘It’s what the people want.'”
[T]he benefits to academics of generating media attention may be subtly skewing their research. “The pressure is tremendous,” said James Heckman, an economist at the University of Chicago and the winner of a Nobel Memorial Prize in Economic Science. “Many young economists realize that they win a MacArthur or the Clark prize, or both, by being featured in The Times.” […]
[P]opular media attention increasingly works in a candidate’s favor . For tenure decisions, “I’ve gotten letters,” Dr. Heckman said, “that ask me to assess the impact and visibility of a person’s work.”
Often the effect is indirect but no less pronounced. Many scholars said, for example, that a growing number of colleagues relied on nonprofit foundations to finance their research and that foundation administrators tended to be most excited when the work found its way into the news media.
“The grant-giver looks at this and says, ‘O.K., let’s fund this guy or this woman because we’re not just going to generate results that are read by 10 people,'” said Daniel Drezner, a political scientist at Tufts University’s Fletcher School of Law and Diplomacy. “It’s actually going to be talked about.” […]
All of this has led to a new model of disseminating social science research through the media.
The piece mentions—and is no doubt motivated by—the recent retraction by Science of the Michael LaCour study. It ends by reminding us of the Reinhart-Rogoff kerfuffle of 2013.
When I talk about promoting research via social and conventional media, I mention the problems Scheiber is getting at. Maybe I don’t emphasize them enough.
There is danger in the allure of attention and the rewards it can bring. There are incentives to cheat a little, if not a lot. But there are huge penalties too. The consequences of making a mistake, and the personal damages for outright fraud, are much higher when one leverages up one’s work and message through, say, New York Times reporting or column writing (or similar).
It’s tempting to say these are all financial incentives, of a type. A bigger name can command a better academic post, more book sales, higher speaking fees, and the like. But these are not financial incentives in the same way we perceive those of, say, industry-sponsored clinical trials. They’re not incentives to produce a specific result. They’re incentives to do something—anything—perceived as provocative, important, and timely (though, perhaps, still consistent with one’s tribal affiliations).
We most typically call these non-financial conflicts of interest. Scheiber has reminded us that they are strong. And they are dangerous. Yes, science can be self-correcting, but in the interim, we should be humble and cautious. We should guard against being fooled by a blockbuster new study that reverses previous conventional wisdom. We should be skeptical—and express caution, include caveats—until findings are vetted and replicated.
We should also guard against fooling ourselves. When there’s no direct money on the line, we are all still at risk of promoting ideas that science cannot and will not ultimately support.
To understand and manage Medicaid spending requires analysis of Medicaid enrollees with substance use disorder. A GAO report released in May implicitly makes the case.
The first thing one learns from the report is just how skewed the Medicaid distribution of spending is. For instance, the top 5% most expensive Medicaid enrollees* account for nearly half of all Medicaid spending. (Really, we knew this already. All health care spending is similarly skewed, whether Medicaid or not.)
On page 11 of the report we learn that the most costly Medicaid enrollees disproportionately have a substance use disorder: Of the 5% most costly Medicaid enrollees, one in five has one, whereas across all Medicaid enrollees fewer than one in 20 do.
On page 13 we learn that substance use is intimately related to mental health conditions, but even more so for the most costly enrollees. Among the 5% most costly enrollees, 71% of those with a substance use disorder also have a mental health condition. Medicaid-wide, the figure is 51%.
I doubt the basic story changes much if one examines the top 10% most expensive enrollees (who account for about 65% of spending) or the top 20% (accounting for about 80% of spending). Substance use disorder is key.
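The concentration figures above (top 5% of enrollees accounting for roughly half of spending, top 10% about 65%, top 20% about 80%) are computed the same way: rank enrollees by spending and sum the top slice. A short sketch of that arithmetic, using made-up numbers (the spending values below are illustrative, not GAO data):

```python
# How spending-concentration figures like the GAO's are computed:
# sort enrollees by spending and ask what share of total spending
# the top X% account for. The data below are invented for illustration.

def top_share(spending, fraction):
    """Share of total spending accounted for by the top `fraction` of spenders."""
    ranked = sorted(spending, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)


# Hypothetical, deliberately skewed annual spending for 20 enrollees (dollars).
spending = [90_000, 40_000, 25_000, 15_000, 10_000,
            5_000, 4_000, 3_000, 2_000, 2_000,
            1_000, 1_000, 1_000, 500, 500,
            500, 500, 250, 250, 250]

for f in (0.05, 0.10, 0.20):
    print(f"top {f:.0%} of enrollees -> {top_share(spending, f):.0%} of spending")
```

With this toy distribution the top 5% (one enrollee) account for roughly 45% of spending, echoing the skew the report describes.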
We should pause and recognize that this collection of work suggests that COI disclosure can have completely different effects on confidence in doctors and findings, depending on the study and, in particular, the population of focus (physicians vs. patients). Such disclosure may be like a box of chocolates, in the Forrest Gumpian sense. As such, and in light of point 2, claims that disclosure comes anywhere near systematically addressing the bias that such conflicts may create, or our interpretation of them, are suspect.
The Kesselheim study is most relevant to points 3 and 4, and its findings support the latter. The researchers provided 269 American Board of Internal Medicine-certified physicians with abstracts of hypothetical research on made-up drugs for hyperlipidemia (“lampytinib”), diabetes (“bondaglutaraz”), and angina (“provasinab”), said to have been recently FDA approved.* The abstracts varied along three dimensions: drug, funding source (no mention, NIH, or one of the top 12 global pharmaceutical companies), and methodological rigor (three levels). Each participant received three abstracts at random, arranged so that funding source and rigor each varied across their full range.
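One way to read that design: each physician saw one abstract per drug, with the three funding sources and the three rigor levels each appearing exactly once. Here is a sketch of such an assignment. This is my reconstruction of the logic, not the authors' actual randomization code, and the level labels are my own shorthand.

```python
# Sketch of a Kesselheim-style factorial assignment (my reconstruction,
# not the study's actual code): each participant sees three abstracts,
# one per drug, with every funding source and every rigor level
# appearing exactly once.

import random

DRUGS = ["lampytinib", "bondaglutaraz", "provasinab"]
FUNDING = ["none_listed", "NIH", "industry"]
RIGOR = ["low", "medium", "high"]


def assign_abstracts(rng):
    """Return three (drug, funding, rigor) abstracts for one participant."""
    funding = FUNDING[:]
    rigor = RIGOR[:]
    rng.shuffle(funding)
    rng.shuffle(rigor)
    # Pair the shuffled levels with the three drugs, so each level of
    # each factor appears exactly once per participant.
    return list(zip(DRUGS, funding, rigor))


rng = random.Random(42)
for drug, fund, rig in assign_abstracts(rng):
    print(f"{drug}: funding={fund}, rigor={rig}")
```

Balancing within participant this way means each physician serves as his or her own control across funding sources, which is what lets the study compare credibility ratings while holding rigor constant.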
Since “methodological rigor” is vague, let’s be clear what the researchers meant by their three levels of it. This chart tells you all you need to know:
The study results show that the participants differentiated and understood the levels of methodological rigor. Controlling for rigor, industry-funded results were viewed as less credible and actionable than those funded by NIH or without indicated source of support. The following charts, with results adjusted for methodological rigor, tell the story.
Do these results reflect rational behavior? Here’s how they might: there are aspects of a study’s quality that, in today’s research reporting environment, are hard for a reader to assess. These include the withholding of critical data, failing to publish negative findings, or paper ghostwriting. The authors mention all these potential problems with citations to work documenting controversies in these areas with respect to industry-funded work: for data withholding see this, this, and this; for publication bias see this and this; for ghostwriting see this. To the extent that these problems are more pervasive in industry-sponsored work than that with other funding, it’s rational to downgrade the former accordingly (though precisely by how much is unclear).
However, we must acknowledge that citing some examples of problems with industry-sponsored work does not, itself, demonstrate that bias, on the whole, is more common under that kind of funding, let alone by how much. A key danger in assuming that other types of sponsorship are not accompanied by significant conflicts is that it could lead to the wrong policy solution. For instance, would more publicly funded trials help? It’s possible, but can we say how much more credible they’d be, beyond our own intuition based on anecdotes? This is an important question. (Please understand, I am not against more publicly funded trials.)
This points directly to some other ways to alleviate the concerns COIs raise, and not just for industry-sponsored studies but for all studies: beef up research reporting. The authors conclude,
Financial disclosure is important, but more fundamental strategies, such as avoiding selective reporting of results in reports of [all] trials, ensuring protocol and data transparency, and providing an independent review of end points, will be needed to more effectively promote the translation of high-quality clinical trials — whatever their funding source — into practice. [Note: I substituted “all” for the authors’ “industry-sponsored” in this quote. Given how the quote ends, it seems more in keeping with what they intended.]
In the near future, Bill and I intend to write more about what we might do to beef up research reporting in these and other ways. We think it’s the right way to take COI seriously.
Addendum: For irony’s sake only, I looked at the study authors’ COI disclosures. Three of the nine authors reported having received compensation for speaking/appearances or data monitoring/analysis activities from Merck, Novartis, AstraZeneca, Genzyme/Sanofi, or PhRMA. It is not my view that this in any way reduces the credibility of the study, which was supported by a grant from the Edmond J. Safra Center for Ethics at Harvard University, a career development award from the Agency for Healthcare Research and Quality, a Robert Wood Johnson Foundation Investigator Award in Health Policy Research, a fellowship at the Petrie–Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and a grant from the National Cancer Institute.
* My main disappointment is that I was not invited to the meeting at which these fictitious drug names were cooked up.
I’ve used Slack a little bit over a span of a few weeks. That’s enough for me to know a few things about it I like and don’t, which may help you decide if you want to try it or help you think about how you might use it.
Here’s where I’d describe Slack, but I’m too lazy. Go watch the video. If you’re too lazy to do that, then think of Slack as a kind of substitute for email.
That makes this question the best place to start: What’s wrong with email?
Nothing or everything, depending on your preferences and how you use it. I like email. A lot. But there are several ways it doesn’t play nice with how I (and maybe you) think or live:
1. Most emails are too long, for several reasons. There’s no binding constraint on length. Contrast that with Twitter’s 140 characters per tweet. By email, it’s convention to write more than a few sentences,* perhaps for legacy reasons. (Back in the day, email came to be seen as a replacement for letters and memos.) People don’t self-impose the hard discipline of brevity.*
2. Email demands a lot of mental overhead: opening messages, confronting different fonts, parsing who is saying what or finding the updates in a long thread, visually jumping over signature info you don’t need—these small cognitive taxes add up.
3. Filing, tagging, and de-attaching are all annoying, time-consuming chores, as is finding old emails and their attachments. You know what I mean.
4. We use email as a to-do list, which makes a mess of our inboxes.
For all that, I’m not abandoning email, in part because most of the world runs on it and very little of the world is on Slack (network externalities). I have to use email a lot anyway. But I also like it. It’s easy to create, on the fly, a custom conversation, dedicated to a new topic and including just the people I want included. In this sense, it’s as private as I want it to be (ignoring NSA/hacking/forwarding issues). I’m not broadcasting my thoughts to more people than I care to, and hearing back from people I don’t want to hear from. (See Twitter.)
Google has a nice fix for the to-do problem (number 4). It has a nice fix for the “get me out of this reply-all madness” problem (number 5). In many ways, email is not bad, which is not to say it’s perfect.
That’s where Slack comes in. It addresses reasonably well at least problems 1-3. It offers Twitter-like efficient scannability, though without Twitter’s length constraint. The norm (such as I’ve seen it) is to write tersely. You don’t have to open each message. There are handles and no signatures, making it quick and easy to see who wrote what. Various files and links relevant to a thread are all collected in one place and in the cloud. (You can, in fact, get that last benefit, largely integrated with email, using Basecamp or something like it.)
Slack is fine, so long as you keep up with it. I don’t. It can be too social when I don’t want it to be, like Twitter. To consume the wisdom of your network on Twitter (or the Slack equivalent: invitees to a particular channel) you also have to weed through (or enjoy!) its other content. That’s not good or bad. It just is. I share both health policy and photos from my commute on Twitter, for example. Social media is social to a greater extent than email. If you’re like me, sometimes you’re into it. Sometimes you’re not.
Here’s where I’d lower the hammer on Slack, identifying its fatal flaw, or where I’d deliver the triumphant “the solution to all your communication problems has arrived” BS. That’s not how it is. Slack is good the way Twitter is good, the way email is good, the way blogs are good, the way Facebook is good, the way phone calls are good, the way meeting someone in person is good. It’s an incomplete goodness along with some stuff that doesn’t always work for you.
Express Scripts Holding Co., a large manager of prescription-drug benefits for U.S. employers and insurers, is seeking deals with pharmaceutical companies that would set pricing for some cancer drugs based on how well they work. […]
Drug companies are countering with pricing models of their own, such as offering free doses during a trial period. […]
Express Scripts’ approach would be similar to that proposed by Peter Bach, director of the Center for Health Policy and Outcomes at Memorial Sloan Kettering Cancer Center.
In an article published last year in the Journal of the American Medical Association [link], he suggested that in an indication-specific arrangement, the monthly price for Eli Lilly & Co.’s cancer drug Erbitux would plummet from $10,320 a patient to about $470 a patient for its least effective use, treating recurrent or metastatic head and neck cancer.
The controversy over the new crop of hepatitis C treatments has taken yet another turn as consumers are starting to file lawsuits against insurers that deny them access to the medicines. Over the past two weeks, two different women alleged that Anthem Blue Cross refused to pay for the Harvoni treatment sold by Gilead Sciences because it was not deemed “medically necessary.” […]
Both lawsuits claim the insurer denied coverage for Harvoni, one of two hepatitis C treatments sold by Gilead, because the amount of liver damage sustained by the women was insufficient to warrant payment for the drug. In both cases, the insurer decided that Harvoni was not medically necessary, according to the lawsuits.
Prior TIE coverage of “medically necessary” care and the law here and here. Maybe there’s more here somewhere, but I forget where.
The following originally appeared on The Upshot (copyright 2015, The New York Times Company).
Can hospitals provide better care for less money? The assumption that they can is baked into the Affordable Care Act.
Historically, hospital productivity has grown much more slowly than the overall economy, if at all. That’s true of health care in general. Productivity — in this case the provision of care per dollar and the improvements in health to which it leads — has never grown as quickly as would be required for hospitals to keep pace with scheduled cuts to reimbursements from Medicare.
But to finance coverage expansion, the Affordable Care Act made a big bet that hospitals could provide better care for less money from Medicare. Hospitals that cannot become more productive quickly enough will be forced to cut back. If the past is any guide, they may do so in ways that harm patients.
The Obamacare gamble that hospitals can become much more productive conflicts with a famous theory of why health care costs rise. William Baumol, a New York University economist, called it the “cost disease.” (He wrote a book about it by that title; I blogged on it as I read it if you’d like to quickly get the gist.)
This theory asserts that productivity growth in health care is inherently low for the same reason it is in education: Productivity-enhancing technologies cannot easily replace human doctors or teachers. In contrast with, say, manufacturing — a sector in which machines have rapidly taken over functions that workers used to do, and have done them better and more cheaply — there are, at least for the time being, far fewer machines that can step in and outperform doctors, nurses, and other health care workers.
But a new study casts doubt on that theory and suggests Obamacare’s bet may indeed pay off. The study, published in Health Affairs by John Romley, Dana Goldman and Neeraj Sood, found that hospitals’ productivity has grown more rapidly in recent years than in prior ones. Hospitals are providing better care at a faster rate than growth in the payments they receive from Medicare, according to the study.
[Note: y-axis is cumulative percent increase in productivity, as defined in the chart’s footnote.]
This is both good news for patients and good news for the financing of the health reform law, which assumes hospitals will become significantly more productive. This bet is built into a schedule of reductions in the rate of growth in Medicare payments to hospitals. According to the law, those rates are to be reduced commensurate with the productivity growth of the overall economy. The only way for hospitals to keep up is if their productivity rises just as quickly.
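The arithmetic of that schedule can be sketched in a few lines. The rates below are assumed for illustration only (they are not actual Medicare market-basket or productivity figures): the annual payment update equals input-price growth minus economy-wide productivity growth, so a hospital whose own productivity lags the economy's loses ground each year.

```python
# Back-of-the-envelope arithmetic for an ACA-style productivity
# adjustment. All rates are assumed for illustration, not actual
# Medicare update factors.

market_basket = 0.027          # assumed 2.7%/yr growth in hospital input prices
economy_productivity = 0.010   # assumed 1.0%/yr economy-wide productivity growth

# Annual payment update: input-price growth minus economy-wide productivity.
payment_update = market_basket - economy_productivity


def real_margin_after(years, hospital_productivity):
    """Cumulative payment growth relative to cost growth after `years`,
    if the hospital offsets input-price growth only through its own
    productivity gains."""
    payments = (1 + payment_update) ** years
    costs = (1 + market_basket - hospital_productivity) ** years
    return payments / costs - 1


# A hospital matching the economy's productivity growth holds even:
print(f"{real_margin_after(10, 0.010):+.1%}")  # +0.0%
# A hospital with zero productivity growth falls steadily behind:
print(f"{real_margin_after(10, 0.0):+.1%}")
```

Under these assumed rates, a hospital with no productivity growth would see payments fall roughly 9% short of costs over a decade, which is the squeeze the post describes.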
The cost disease theory says it can’t be done. This, according to the theory, is what causes health care spending growth to outpace that of the overall economy.
Computers, cellphones, televisions — over the years they’ve all gotten better and cheaper. High productivity growth in such sectors — not mirrored in health care — leads to wage growth in those sectors. Higher wages provide more resources to spend on goods and services. Because health care is valuable, we use those resources to pay health care workers more, too, to keep them from doing something else. This helps explain why health care spending outpaces economic growth: We keep paying more for health care (through growing wages) without getting more (because of low productivity growth).
Not all economists find every detail of the cost disease theory compelling. Some have argued, for example, that it gives short shrift to ways in which the quality of care changes, along with its price. Heart attack treatment certainly costs more today than a decade ago. Perhaps it’s also better. The acceptance of inevitably low health care productivity growth also troubles some economists.
Amitabh Chandra, a Harvard economist, is one of them: “In Baumol’s view, as long as there is a steady stream of innovation in sectors other than health care — from cars to computers to everything on Amazon — we’ll be able to spend even more on health care, despite its jaundiced productivity growth. But if productivity in health care improves, too, then think about how much more health care we’ll be able to afford.”
If the cost disease theory’s premise of low health care productivity growth holds, then the idea of tying reductions in the growth of Medicare payments to hospitals to economic growth — as the Affordable Care Act does — spells trouble.
The findings by Mr. Romley and colleagues from the Schaeffer Center for Health Policy and Economics at the University of Southern California are a hopeful sign this need not happen. A strength of the study is that it incorporated an aspect of the quality of care into its measure of productivity: whether the care received kept more patients alive and out of the hospital for at least 30 days. The findings were qualitatively similar for shorter (two-week) and longer (one-year) windows. This distinguishes it from other approaches that measure productivity according to how many procedures a hospital can do per dollar, but not how well the hospital does them.
According to the analysis, productivity fell for heart attack and heart failure patients between 2002 and 2005, after which it began to rise. For hospital care for all three conditions examined — heart attacks, heart failure and pneumonia — productivity growth accelerated after 2007. By 2011 it was more than 14 percent over the level it had been in 2002.
The source of the broadest optimism from the study: Hospital productivity increased in the most recent years faster than that of the overall economy.
Though the study is an important one, we should interpret it with some caution. It examined only one measure of productivity; it examined only three conditions in Medicare patients; and it examined data only through 2011. More studies like this one — but using different methods and more recent data — could confirm or refute these findings.
Nevertheless, for decades the conventional wisdom has been that hospitals — and the health care sector in general — could not become more productive, explaining its growing expense. This new study suggests that such a cost disease may not be as inherent as once believed — and that the health care law’s cuts to Medicare are not as risky a bet as they once seemed.
I want to flag something meta about my Upshot post today, in which I describe a study that suggests hospital productivity has increased in recent years (through 2011). The study findings surprised me. Based on prior work and history, I am highly skeptical hospitals can maintain the high productivity growth it suggests.
Put another way, writing about the study by John Romley, Dana Goldman and Neeraj Sood the way I did was counter to confirmation bias. I’ve posted about the hospital—or health care—productivity problem many times on TIE, as I linked to in the piece. It would have been easy to cling fast to the view that hospitals can never become substantially more productive (the cost disease) and to discount the Romley et al. study for any number of reasons. (I mention caveats at the end of the piece; more have been suggested to me on Twitter.) I find it more interesting and rewarding to take the study at face value—to challenge and update my own priors, if even provisionally.
I suspect some will read the piece as Obamacare boosterism. That’s a mistake. I don’t do that. The ACA really did make a big and risky bet that hospitals could increase productivity. I’ve worried about it for years. I hope it’ll pay off, as the study suggests. We should be prepared for the possibility it won’t. While we wait, we should be brave enough to assimilate new evidence independent of what it implies about the ACA.