Via CGP Grey: Chihuahua or muffin?
Via karen zak: Sheepdog or mop?
Via Christopher Ingraham: Puppy or bagel?
Last week, I wrote that the latest evidence on prescription drug monitoring programs suggests that they don’t work so well. But a recent Health Affairs study from a team at Vanderbilt—a study that I’d initially overlooked—makes me wonder if I spoke too soon. By comparing national mortality data to state-by-state variations in prescription drug monitoring programs, the study finds that the adoption of high-quality programs is associated with a modest reduction in opioid-related mortality.
Methodologically, the study looks solid. It’s not immediately obvious why it found different results from the NEJM study I discussed in the last post. One possibility is that the researchers teased apart differences in how the programs were implemented across the states. It may be that how states implement these programs matters more than whether they adopt them at all.
Whatever the case may be, the authors estimate that the adoption of high-quality prescription drug monitoring programs in every state would save about 600 lives in 2016, or about 2% of overall deaths from opioid overdose. That’s not going to end the opioid epidemic anytime soon, but it’s much better than nothing.
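As a rough back-of-the-envelope check (the arithmetic below is mine, not the study’s), those two numbers imply a baseline for annual opioid-overdose deaths:

```python
# If ~600 averted deaths is ~2% of all opioid-overdose deaths,
# the implied annual baseline is straightforward division.
# These inputs come from the estimates quoted above, not new data.
averted_deaths = 600
share_of_total = 0.02  # "about 2% of overall deaths"

implied_total_deaths = averted_deaths / share_of_total
print(round(implied_total_deaths))  # 30000
```

An implied baseline of roughly 30,000 deaths a year is in the right ballpark for opioid-overdose mortality in the mid-2010s, which is a reassuring sanity check on the estimate.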
Update your priors accordingly; I’ve updated mine.
So often, when we implement new policy, I wish we had better ways to capture its effects so that we could expand our knowledge of how decisions change health and health care. The Oregon Health Insurance Experiment, and its older brother the RAND HIE, were RCTs designed to look at how insurance affected utilization and health. While these were impressive studies, they had their flaws.
RCTs are hard to do, though; they’re also expensive. Sometimes, other designs are necessary. Recently, in Annals of Internal Medicine, Laura Wherry and Sarah Miller looked at how the Medicaid expansion has changed things. Let’s discuss. This is Healthcare Triage News.
This was adapted from a post I wrote for AcademyHealth.
6 Things That Happened in Health Policy This Week is produced by a mix of research assistants from the Healthcare Quality & Outcomes (HQO) Initiative at the Harvard T.H. Chan School of Public Health. In each edition we feature a variety of news articles, reports, and studies focused on U.S. health policy and health services research. This week’s edition is from Zoe Lyon (@zoemarklyon), Kim Reimold (@kimreimold), and Anthony Moccia (@anthony_moccia).
Loren Adler and Paul Ginsburg, both of the Brookings Institution, have published a fascinating new post on the Health Affairs blog, examining the impact of the Affordable Care Act on individual market premiums. Loren’s a personal friend, so I happen to know that this isn’t a typical blog post; this analysis has been in process for several months.
Their central finding is that average premiums in the individual market would likely be higher absent the health reform law.
To address this issue, we draw on Congressional Budget Office estimates of average individual market premiums in 2009 (the most recent pre-ACA year for which CBO provides an estimate), largely based on data from the Medical Expenditure Panel Survey (MEPS) and adjusted for insights from their Health Insurance Simulation Model and observed data. We then adjust this estimate downward to account for it being based on pre-2009 data, by the ratio of actual 2009 employer-provided plan premiums for single coverage (from MEPS) to CBO’s predictions at the time.
Therefore, we estimate that the average annual premium in the individual market in 2009 was $3,480 (or $290 per month), which was for a plan that on average covered roughly 60 percent of an enrollee’s covered health expenses — an actuarial value of 60 percent.
By comparison, the average premium in 2014 for the second-lowest-cost silver (SLS) plan was $3,800, according to CBO, only 9 percent higher despite the passage of five years. Adjusting for the difference in actuarial value, this premium was actually lower in nominal dollars than that in 2009.
Moreover, by any measure, individual market premiums had grown enough by 2013 that the $3,800 average SLS plan premium in 2014 represented a sharp drop from the previous year, despite covering a higher percentage of enrollee costs and offering a broader set of health benefits.
Another key takeaway is that average post-ACA premiums are lower than we would expect average individual market premiums to be in the land of counterfactuals, where the ACA was never enacted. This remains true even under conservative assumptions.
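A minimal sketch of the actuarial-value adjustment the authors describe. The 70 percent figure is the ACA’s statutory actuarial-value target for silver plans; treating premiums as roughly proportional to actuarial value is my simplification, not necessarily the authors’ exact method:

```python
# Rescale the 2014 SLS premium to a 60%-actuarial-value equivalent,
# so it can be compared head-to-head with the 2009 estimate.
premium_2009 = 3480      # est. avg. individual-market premium, 2009 (AV ~60%)
premium_2014_sls = 3800  # avg. second-lowest silver premium, 2014 (AV 70%)

premium_2014_at_60_av = premium_2014_sls * (60 / 70)
print(round(premium_2014_at_60_av))  # 3257 - below the 2009 figure
```

Under this crude proportionality assumption, the AV-adjusted 2014 premium comes in below the 2009 average in nominal dollars, consistent with the post’s conclusion.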
I strongly recommend reading the post in full, especially given how counterintuitive the findings seem; conventional wisdom has long held that the ACA increased individual market premiums simply as a function of benefit generosity and guaranteed issue. Importantly, their analysis uses up-front premium estimates, before accounting for premium tax credits and cost-sharing reductions.
For those who remain skeptical, Loren walked through some plausible explanations on Twitter (click through, as this goes on for several more tweets):
1) The individual health insurance "market" was an absolute mess before the ACA, w/ massive discrimination & selection issues.
— Loren Adler (@LorenAdler) July 21, 2016
To be clear, the post deals in averages; there is variation around the mean, and that variation would have been much higher pre-ACA than post, because of the new community rating rules and 3:1 age bands. The available data also couldn’t be used to evaluate changes to cost-sharing design (like deductible size) or provider networks. Data on pre-ACA individual market coverage is notoriously difficult to come by; this analysis uses CBO data (which hasn’t been widely used outside CBO for analyses like this), supplemented by information from MEPS.
Even with these caveats, the analysis offers surprising, new, and important information. Go read it!
I made a conscious decision not to watch any of the Republican National Convention this week. According to my Twitter feed, however, the theme of last night appears to have been “The Sky Is Falling.”
It really isn’t. On almost any metric you can pick, we’re doing really, really well. This has been a regular beat of mine both here at the blog, and on Healthcare Triage. Therefore, I offer some counterprogramming to this myth.
I get a lot of emails, tweets, etc. asking me to tell people whether the health articles they’re reading are to be believed. This week, a lot of them were about prostate cancer. I was traveling, though, and couldn’t look right away.
They were reacting to a fairly large number of articles reporting on a study that found the rate of diagnoses of advanced prostate cancer was rising, and that this rise was correlated with the reduction in screening recommended in recent years.
As always, I refused to comment or answer them until I had read the study. I have now. It’s “Increasing incidence of metastatic prostate cancer in the United States (2004–2013)” and it was published in the journal Prostate Cancer and Prostatic Diseases. The methods:
Methods: We identified all men diagnosed with prostate cancer in the National Cancer Data Base (2004–2013) at 1089 different health-care facilities in the United States. Joinpoint regressions were used to model annual percentage changes (APCs) in the incidence of prostate cancer based on stage relative to that of 2004.
Results: The annual incidence of metastatic prostate cancer increased from 2007 to 2013 (Joinpoint regression: APC: 7.1%, P<0.05) and in 2013 was 72% more than that of 2004. The incidence of low-risk prostate cancer decreased from years 2007 to 2013 (APC: −9.3%, P<0.05) to 37% less than that of 2004. The greatest increase in metastatic prostate cancer was seen in men aged 55–69 years (92% increase from 2004 to 2013).
Can you spot the problem here? Denise Grady at the NYT did:
In the study, the doctors examined the records of 767,550 men with prostate cancer diagnosed from 2004 to 2013. Using the number of cases of metastatic disease in 2004 (1,685) and 2013 (2,890), they reported an alarming increase of 72 percent.
But for the United States population, that percentage could be meaningless. On the cancer society website, Dr. Brawley said that to measure whether a disease was becoming more common, researchers could not rely on just the absolute number of cases. They need to calculate rates, meaning the number of cases per a certain number of people.
You can’t just look at the numbers of cases. You also have to look at the numbers of people who might have been diagnosed. You have to look at the rates.
This is epidemiology 101. It’s bread and butter.
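To make the point concrete, here is a hypothetical illustration using the study’s case counts. The denominators are invented; the study did not report how its catchment population changed, which is exactly the problem:

```python
# Raw counts from the study: metastatic prostate cancer cases
# in the National Cancer Data Base.
cases_2004, cases_2013 = 1685, 2890

raw_increase = cases_2013 / cases_2004 - 1
print(f"{raw_increase:.0%}")  # 72% - the headline number

# But if the population captured by those facilities also grew ~72%
# (hypothetical denominators below), the *rate* per 100,000 is flat.
pop_2004, pop_2013 = 1_000_000, 1_715_000
rate_2004 = cases_2004 / pop_2004 * 100_000
rate_2013 = cases_2013 / pop_2013 * 100_000
print(round(rate_2004, 1), round(rate_2013, 1))  # 168.5 168.5
```

Same counts, same 72 percent headline, no actual change in incidence. That is why rates, not raw counts, are the right measure.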
There are other issues, too. One of the reasons that more aggressive disease is being found is that we’ve become better at finding it. The number of diagnoses started rising even before we started seeing recommendations to reduce screening. We don’t know if the number of people treated at these hospitals changed. The bottom line is that the coverage, and likely even the press release, went further than this study warranted.
The big question here is: What should the media do about this? I include myself in that. There are so many publications, and press releases sometimes say things that aren’t correct. I can’t expect every journalist to be able to read and parse a paper on its methods, can I? I also can’t expect every news organization to have its own peer review system to judge papers on its own.
You’re going to have a hard time getting people who have the scientific expertise to do this to be full-time journalists. The number of people with this skill set who are even willing to do what Austin and I do (WHICH IS A LOT) is very, very limited. There are only so many of us. Most of us have other full-time jobs.
So what’s the solution? I’m seriously wondering. This kind of stuff hurts science and the credibility of the media.
What is this post about? Look here.
My latest AcademyHealth post is something of a literature review of the mixed (or poor) evidence that integrated delivery systems provide superior care at lower costs.
The following originally appeared on The Upshot (copyright 2016, The New York Times Company).
I’ve become somewhat known for medical myth-busting (having been a co-author of three books on the subject), so a fairly large number of emails sent to me are from people with articles or studies that they think prove me wrong.
This week, as a few of us sniffle with summer colds, the emails are all about a new study that they think proves that cold weather makes you more likely to catch a cold.
I’m sorry to say that this continues to be a myth. Research doesn’t support it.
This latest study, published in the Proceedings of the National Academy of Sciences, is complicated research on cells in laboratory conditions. The researchers showed that cells kept at 37 degrees Celsius were more likely to undergo apoptosis (basically, cell suicide) than cells kept at 33 degrees Celsius. Apoptosis is a way that we protect ourselves from infection. If infected cells kill themselves, there are fewer chances for the viruses that infect them to replicate.
This study has led a number of news sources, and many people who email me, to argue that this is more evidence that you’re more likely to get sick in cold weather. If your immune system can’t function as well when it’s cold, then infections must take advantage of cold weather, right?
That’s just not as clear as it looks. First of all, 33 degrees Celsius is not cold weather. It’s 91.4 degrees Fahrenheit. And 37 degrees Celsius is 98.6 degrees Fahrenheit. In other words, 37 degrees Celsius is close to a body’s core temperature, and 33 degrees is closer to what it might be in your nostrils. I have no trouble believing that some viruses, like cold viruses, do better in your nose than deep in your body because that’s where they first attack, and also where they often set up shop.
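For reference, the temperature comparisons above are just the standard conversion formula:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(round(c_to_f(33), 1))  # 91.4 - roughly nasal-passage temperature
print(round(c_to_f(37), 1))  # 98.6 - core body temperature
```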
This isn’t the first time this lab has produced work that has been interpreted this way. About a year and a half ago, it published another study, this one in mouse cells, that showed that rhinovirus replicates better at 33 degrees Celsius than at 37 degrees. Again, I have no trouble believing that — but it doesn’t prove that cold weather makes you more likely to get a cold. Nonetheless, many interpreted it that way.
There are better ways to try proving that. Those studies have actually been done.
Back in 1958, researchers conducted a randomized controlled trial of people to see if being cold made them more likely to get sick. They had one group sit in a room that was 10 degrees Fahrenheit dressed in street clothes, overcoats, hats and gloves. They had another group sit in their underwear in a room that was 60 degrees Fahrenheit. A third group sat in 80 degrees Fahrenheit, also in their underwear. All of them were “inoculated” with the mucus of a sick person in their noses and then followed to see if they became ill.
Don’t ask me who volunteers for such things. But thanks to them, we know that the temperature didn’t seem to have any effect on their chances of getting sick.
In 1968, another study was published in The New England Journal of Medicine. This one gathered volunteers from federal or state penitentiaries in Texas and subjected them to a variety of conditions, with temperatures ranging from 4 to 10 degrees Celsius. They even submerged them in water baths at 32 degrees Celsius. As in previous studies, they inoculated the subjects with rhinovirus and then followed them clinically and with many cellular and antibody studies.
They found no differences, based on exposure to cold, in whether people became infected, how they reacted if they were infected, or how they recovered.
People’s feelings about colds, like a lot of medical myths, become entrenched. It seems that no matter how hard you push back, people refuse to change their minds. It doesn’t matter that some research shows that being exposed to the cold actually stimulates the immune system rather than impairing it.
It may also be, as a 2005 study in The Journal of Family Practice showed, that people who are exposed to cold are more likely to report symptoms, even if they aren’t actually infected more often. Perception, and even potentially a belief in this explanation, may contribute to its longevity.
Viruses are also seasonal. Some are more likely to get you in the winter than the summer. Additionally, our response to cold weather may be more to blame than the cold weather itself. It has been postulated that when it’s cold, people tend to congregate inside. This behavior makes it easier, of course, for viruses to be spread from person to person. My kids are much more likely to sneeze on me when we’re cooped up in the winter than when they’re running around outside in the summer.
To be fair, you can point to papers that seem to disagree with me. A review article published in The International Journal of Tuberculosis and Lung Disease in 2007 argues that exposure to cold can result in responses in the body that could leave one more susceptible to infection. I’d argue, however, that the focus of this paper was more on extended exposure to extreme cold (think potential hypothermia) than on the usual “cold weather leads to colds” argument.
These recent studies are also of cells, not all of them human, in the lab, under controlled conditions. We can’t make an easy leap to how bodies, let alone people, might be similar or different in real-world situations.
None of this is to fault the science or the methods of these experiments. The studies from this research group do seem to confirm findings that various viruses seem to thrive in the relatively warm temperatures of the nose rather than the slightly hotter temperatures found deeper in our bodies. They provide an explanation for why viruses affect some parts of the body more than others.
That’s not, however, the same thing as proving that cold weather makes you more likely to get a cold.