I’m giving him full credit right up there in the title. Twice in the last two weeks I was all riled up and feeling the need to blast out posts on how everyone needed to stop freaking out and pay attention to real risks and not the scream du jour. But before I could even get to it, there was Christopher Ingraham in the Washington Post, doing it for me.
First up was the horrific train accident on the East Coast. Let’s acknowledge that it’s a horrible tragedy, ok? It’s also totally reasonable that it captured our attention. I can’t even fault people for being concerned that our rail infrastructure might need some updating, although I don’t think it’s clear yet that this was the cause of the crash.
But then I started hearing from people complaining that rail travel was unsafe, period. Or at least unsafe compared to other forms of travel. You hear the same sort of thing whenever there’s a plane crash, even though that’s like the safest way to travel. And you all know that I hate when people ignore that car travel is pretty much the unsafest way to go, especially since accidents are the number one killer of children.
So I planned to make a chart on how all of these things compared to each other, but there was Christopher Ingraham, on the case already:
Yes, trains are less safe than planes, buses, or subways, but still WAY safer than driving. So deciding to cancel that 150-mile train trip and drive instead would not be rational. Thanks, Chris!
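If you want to see roughly how big the gap is, here's a back-of-the-envelope sketch. The fatality rates below are rough approximations of commonly cited U.S. figures (deaths per billion passenger-miles), not Ingraham's actual data, so treat the numbers as illustrative only:

```python
# Rough, illustrative U.S. fatality rates per billion passenger-miles.
# These are approximations for demonstration, not Ingraham's chart data.
rates_per_billion_miles = {
    "car": 7.3,
    "train": 0.43,
    "bus": 0.11,
    "plane": 0.07,
}

trip_miles = 150  # the hypothetical trip from the post

# Expected fatalities for one such trip, safest mode first
for mode, rate in sorted(rates_per_billion_miles.items(), key=lambda kv: kv[1]):
    risk = rate * trip_miles / 1e9
    print(f"{mode:>5}: ~{risk:.2e} expected fatalities for a {trip_miles}-mile trip")

# Under these assumed rates, driving that trip is roughly 17x riskier than the train.
ratio = rates_per_billion_miles["car"] / rates_per_billion_miles["train"]
print(f"car vs. train risk ratio: ~{ratio:.0f}x")
```

Even if the exact rates are off by a factor of two, the ordering and the rough size of the car-versus-train gap hold up.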
And then, this week, he took on laundry pods: those little prepackaged detergent packets for the dishwasher or washing machine. There were news stories in the fall about how kids were going to the ER in droves because they were eating them. The usual panic buttons got pushed. But, again, I wanted more information. How many is “droves”? How does this compare to other panics?
I was reminded of a bit I wrote about Plan B not too long ago, when people “worried” that it would be taken inappropriately and people would overdose:
All drugs, when improperly used, carry significant effects. In 2009, there were over 70,000 calls to poison control centers for concerns about acetaminophen and more than 88,000 for ibuprofen. More than 30,000 calls were made for diphenhydramine, and 4 of those cases resulted in deaths. Just looking at kids 5 years of age and under, there were more than 130,000 calls for analgesics, 53,000 for vitamins, 48,000 for antihistamines, and 45,000 for cough and cold preparations. And yet, no one seems to be too concerned that these medications could be purchased “alongside bubble gum and batteries”. And, for the record, battery ingestions killed 4 kids in that age group that year.
It’s all about context. So I planned to write a post on how calls to poison control for laundry pods compared to other things. But there was Christopher Ingraham, on the case already:
And, of the 11,000 laundry pod calls in 2013, only 54 resulted in a major injury and only 2 resulted in death. In fact, only 29 kids aged 1-4 died of ALL accidental poisonings in 2013. Guns and assaults killed way more. Car accidents killed 454 (see above).
We need to keep these things in perspective. Chris is helping.
Yesterday, the Government Accountability Office (GAO) released a withering report on how Medicare sets the fee schedule for paying physicians.
The American Medical Association/Specialty Society Relative Value Scale Update Committee (RUC) has a process in place to regularly review Medicare physicians’ services’ work relative values (which reflect the time and intensity needed to perform a service). Its recommendations to [CMS], though, may not be accurate due to process and data-related weaknesses. First, the RUC’s process for developing relative value recommendations relies on the input of physicians who may have potential conflicts of interest with respect to the outcomes of CMS’s process. . . . . Second, GAO found weaknesses with the RUC’s survey data, including that some of the RUC’s survey data had low response rates, low total number of responses, and large ranges in responses, all of which may undermine the accuracy of the RUC’s recommendations. For example, while GAO found that the median number of responses to surveys for payment year 2015 was 52, the median response rate was only 2.2 percent, and 23 of the 231 surveys had under 30 respondents.
. . . [T]he evidence suggests—and CMS officials acknowledge—that the agency relies heavily on RUC recommendations when establishing relative values. For example, GAO found that, in the majority of cases, CMS accepts the RUC’s recommendations and participation by other stakeholders is limited. Given the process and data-related weaknesses associated with the RUC’s recommendations, such heavy reliance on the RUC could result in inaccurate Medicare payment rates.
This isn’t the first time the RUC has come in for serious criticism. Nor will it be the last. Rife with conflicts of interest and not especially transparent, the RUC is a specialist-dominated committee that “donates” more than $8 million of its own services each year to Medicare, presumably out of the goodness of its heart.
The RUC’s job is to tell CMS how much time and effort it takes to provide medical services in the hopes of influencing how Medicare pays physicians. Since CMS has been starved of the resources necessary to independently review physician services, the agency has little choice but to rubber-stamp most of the RUC’s recommendations.
In recent years, Congress has taken modest steps to fix the problem. The Protecting Access to Medicare Act of 2014, for example, appropriates $2 million each year to enable CMS to collect information directly from physicians about the relative value of their services. But CMS doesn’t have a plan about how it will spend that money, and in any event $2 million won’t go far when it comes to reviewing thousands of physician services.
Doing the job right would cost real money, but it’d be a pittance when compared to the $70 billion spent on physician payments in 2013. If we insist on running Medicare on a shoestring, we shouldn’t be surprised when it doesn’t work very well. Sometimes you get what you pay for.
I recommend Lisa Rosenbaum’s three-part NEJM series on financial conflicts of interest (links: part 1, part 2, and part 3). Though it is thought provoking throughout, this single sentence was enough to occupy my mind for several hours:
Once moral intuitions enter the picture, the need to rationally weigh trade-offs is often eclipsed by unexamined convictions about right and wrong.
It is now commonplace for authors to disclose potential financial conflicts of interest (COI) to journals and institutional review boards (IRBs) before paper publication and initiation of research, respectively. You can most easily find COI statements at the end of many published papers, or accompanying them online. Here’s just a part of one COI disclosure for a paper I pulled at random from the NEJM archives:
The paper is about a drug (bevacizumab) manufactured by Genentech (as Avastin), so this particular COI disclosure for this particular author is relevant. (This author is one of 18 or so on the paper. Most of the others have no such disclosed COI, though some do.)
If I’ve ever read any COI disclosures as part of reading or evaluating a published study, it’s only been a few times. I have purposefully avoided them for many years. Why?
I worry about bias: my own. I simply don’t know what to make of COI disclosures. It’s easy to detect a potential or appearance of a COI. It’s much harder to decide how to weigh that when evaluating a study. Sure, it’s a data point that could be meaningful. So could a myriad of “irregularities” that might show up in a full body MRI on a patient with no symptoms of disease. I worry about false positives and emotional harm. How does this author’s prior financial relationship with Genentech affect the published research? Does it affect my head even more?
I do not want to worry about COI (or worry about my worry about it) when evaluating a paper’s methods.
Several years ago I received an email encouraging me to consider the work of a certain author. The work was relevant to whatever I was blogging about at the time. But I knew that author had substantial industry funding for his work, and decided I wasn’t going to read or consider his work on that basis. I emailed back as much.
I regret that decision and that email. I should have considered the work on its merits. My assessment that it could not have been worthwhile was a biased one. I don’t read COI disclosures because I want to protect myself from that bias, acknowledging that I might be blinding myself to the authors’ own biases. There’s no way to win here.
For the same reason, for years I didn’t read authors’ bios. With respect to the quality of the work, why should their institution, titles, or other credentials matter? Either their study is sound or it isn’t. If I can’t assess that from a paper’s text and figures alone (as a blinded reviewer would), then that’s a problem, but it’s not one that can be resolved by knowing an author’s pedigree any more than it can be resolved by knowing her skin color.
In fact, for years I didn’t even read authors’ names on papers. I barely knew who wrote what, until it came time to cite stuff. Then I had to know names. Over the years I came to recognize some, got to know scholars across the country.
Now I’m friends with and colleagues of many. I know where they work. I know their credentials. I consider bylines along with article titles when deciding what to read. There are some authors whose work I never want to miss. Is this a bias? Time being finite, it certainly crowds out reading others’ work.
All this metadata—names, affiliations, degrees, potential COI—can bias. Once it enters my head, I cannot tell the extent to which it does. I could argue that I’m merely being Bayesian when I use prior knowledge of the authors’ work or their institutions. (This one has a well-earned reputation for good work; this other one is from a “lesser” institution widely thought to have an ideological perspective.) And maybe that’s right. But I could also argue that I’m using—even subconsciously—this metadata to unfairly evaluate the work.
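The Bayesian framing in the paragraph above can be made concrete with a toy calculation. Every number here is invented purely for illustration: a prior belief that a given study is methodologically sound, updated by an assumed likelihood ratio attached to a piece of author metadata (reputation, disclosed funding):

```python
def posterior(prior, likelihood_ratio):
    """Bayesian update on P(study is sound), done in odds form.

    prior: P(sound) before seeing any author metadata.
    likelihood_ratio: P(metadata | sound) / P(metadata | not sound).
    """
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical numbers, purely illustrative:
base_rate = 0.5           # prior: half of studies are sound
strong_reputation = 3.0   # assumed LR for an author with a good track record
industry_funding = 0.5    # assumed LR for a disclosed industry COI

print(posterior(base_rate, strong_reputation))                    # 0.75
print(posterior(base_rate, industry_funding))                     # ~0.33
print(posterior(base_rate, strong_reputation * industry_funding)) # 0.6
```

The mechanics are unobjectionable; the worry in the text is about where those likelihood ratios come from. If they reflect prejudice rather than evidence, the update is just bias with a formula attached.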
Lisa is right that once intuitions—moral and otherwise—like these enter the picture, we’re already in difficult terrain. Problems arise from unexamined convictions, she wrote. But, for me, problems arise from examined ones as well. I do think money influences, as do relationships and beliefs. But when I examine my own feelings about these, I’m no closer to understanding the extent to which I use them in my own biased way, if at all.
Once I gather the metadata, what should I do with it? What have I already done?
Some economists have disputed the claim that low reimbursement rates paid to healthcare providers by public programs (including both Medicaid and Medicare) result in cost shifts to commercial insurance payers. They assert that the rates charged to commercial insurers by a hospital are affected primarily by market factors that are independent of the rates paid by public programs. That is, a hospital generally seeks to maximize net revenues, regardless of the mix of its commercially insured and publicly funded patients. The extent to which hospitals increase or decrease prices charged to commercial insurers is dependent upon their market power in relation to those insurers and competing hospitals. By contrast, the idea that a hospital charges higher rates to commercial insurers in response to lower public program reimbursement rates implies that the hospital has market power over commercial insurers that it would not otherwise exercise in the absence of low public program reimbursement rates. In support of this view, these economists cite evidence that suggests that hospitals either reduce costs in response to constrained revenues from public programs, or attempt to attract a larger pool of commercially insured patients by reducing the price charged to commercial insurers.
To improve the quality of care we need accountability, and one way to get it is documenting what clinicians do. This stuff matters: Checklists have saved thousands of lives.
But you can have too much of a good thing. On EconTalk, Russ Roberts interviews Leonard Wong, a retired military officer and a professor at the US Army War College. Wong and Stephen Gerras have written a paper describing the moral challenge experienced by military officers faced with high burdens of regulatory compliance. Like medicine, the military has a complex mission with life and death stakes, using tools that change constantly and are inherently unsafe. The military cultivates intensive safety disciplines that have, for example, reduced aviation fatalities per mile flown by many orders of magnitude.
To build an effective military and protect servicemen and women, the military sets many requirements. Officers must ensure that their units meet these requirements, and they have to certify conformity with these requirements with a signed report or checklist.
The problem is that there are now so many requirements that it is impossible to meet them. Moreover, the Army is in effect a zero-defects culture; failure to complete requirements disqualifies you from promotion.
Wong argues that as a result, officers are falsely documenting completion of requirements that either have not been met or that they have never inspected. This means that potentially important tasks are not getting done, and that nobody knows which things are not getting done. Perhaps more importantly, the professional norm of integrity is being undermined.
People in health care should study this in detail. My view is that the burden of documentation and compliance for health care providers is comparable to the burden on the military. Here are David Blumenthal and J. Michael McGinnis:
The budding enthusiasm for performance measurement, however, has begun to create serious problems for public health and for health care. Not only are many measures imperfect, but they are proliferating at an astonishing rate, increasing the burden and blurring the ability to focus on issues most important to better health and health care. Measures of the same phenomenon also vary in specification and application, leading to confusion and inefficiency that make health care more expensive and undermine the very purpose of measurement, namely, to facilitate improvement. Not uncommonly, a health care organization delivering primary care to a typical population is asked to report and collect hundreds of measures aimed at dozens of conditions.
In some ways, the situation in health care is worse than in the military. The military has a single chain of command, but medicine has many bodies issuing standards, many of them redundant. Blumenthal and McGinnis illustrate their point with this Figure:
The Proliferation of Measurements.
So what do we do? What we shouldn’t do is give up the concepts of accountability or documentation.
But critics of excessive standardization have an important point. Clinicians are not automatons following recipes. They have to optimize their time by prioritizing from an indefinitely long list of tasks that could potentially benefit patients. Therefore any requirement that constrains clinician choice has an opportunity cost in terms of the other things they might do for their patients. Ideally, before setting a standard we should compare the benefit that can be achieved by implementing the standard against that opportunity cost.
My sense is that most people engaged in quality research and improvement are aware of this tradeoff. But it isn’t addressed in any serious way.
We can’t even begin to address this problem unless the people setting standards look at the total burden of documentation and compliance. Blumenthal and McGinnis argue that what we need are “core metrics”, defined as
a parsimonious set [of measures] that provides “a quantitative indication of current status on the most important elements in a given field, and that can be used as a standardized and accurate tool for informing, comparing, focusing, monitoring, and reporting change.” [Core metrics should be] outcomes oriented, reflective of system performance, and meaningful and have utility at multiple levels of the health care system.
We can’t measure everything about medical practice, so we need consensus on a minimum set of requirements that are feasible to measure and maximally affect practice.
Specifying that minimum set is an enormous and challenging task. But it’s critical for the long run success of quality improvement and the preservation of the virtue of integrity in medicine.
Below is a video of me, Adrianna, and Nicholas trying to convince an audience of (mostly) researchers at the University of Michigan’s Institute for Healthcare Policy and Innovation to use social media to promote the scholarly work we all do. “A press release is not enough” was the title of the seminar.
The end of my talk includes an answer to the question people ask me most often: How much time do I spend blogging? Watch the video to find out. (Hint: The question and my answer are at the 27 minute mark. Hint 2: That link takes you right to that spot in the video.)
At the 48 minute mark, Nicholas begins to make what I think is an impressive case that what we’ve done at TIE makes a difference. That he mentioned the three sentence rule made my day. Watch that, if nothing else.
The following originally appeared on The Upshot (copyright 2015, The New York Times Company).
In [a recent] article, I reviewed the evidence behind coffee consumption and health in an effort to put to rest the idea that coffee is a “vice” or something we all need to cut back on.
We received many comments and questions from readers. In fact, we received so many that we thought it might be useful to respond to some of the most frequently discussed ones.
Are the same beneficial relationships seen with decaffeinated coffee?
Most studies did not include data on decaffeinated coffee, either because too few people drank it or because data were not available. The few studies that did, though, had differing results. With respect to cardiovascular disease, decaffeinated coffee did not seem to have the same protective effects as regular coffee. With respect to the one stroke meta-analysis, it seemed to be just as protective as regular coffee. In two breast cancer analyses, decaffeinated had the same nonrelationship as regular coffee. Decaffeinated coffee was also protective against lung cancer, not as protective against Parkinson’s disease, and protective against diabetes and overall mortality, but perhaps to a lesser extent than regular coffee.
But for most studies, there just aren’t data available. The conclusion to take away: There’s less evidence overall for a potential benefit, but still, there’s no evidence of harmful associations.
What constitutes a cup of coffee?
Pretty much all studies defined a cup of coffee as an 8-ounce serving. That’s smaller than what I imagine most people drink. A grande-size coffee at Starbucks (what is called simply “large” at most other coffee houses) is 16 ounces.
Are the same benefits seen with tea?
The literature on tea is about the same size as that for coffee, and reviewing it thoroughly would take more time than is appropriate for this column. However, a number of studies I reviewed did include tea in analyses, and those I can present here. People who drank more tea had a lower risk of Parkinson’s disease and of cognitive decline. Black tea had a potential protective effect against diabetes, but it was not statistically significant. Green tea had no relationship to the development of diabetes.
If we think there’s enough interest in tea, though, we could devote a future column to the evidence on that beverage.
Is the benefit from caffeine or from some other element in coffee?
It’s not known. I also don’t think it’s necessarily the same protective effect in each disease. I think that for many of the neurological issues, it could be caffeine acting as a stimulant in the brain. This hypothesis is supported by the fact that decaffeinated coffee doesn’t seem to be as protective, yet tea is. In some of the other diseases, though, the same benefits aren’t seen from other caffeine-containing beverages. No one is arguing that diet soda consumption is associated with less of a chance of getting cancer. Additionally, some protective effects are seen with decaffeinated coffee as well. It’s likely, therefore, that something else could be at work. We don’t know what, though.
A 2005 meta-analysis found that in randomized controlled trials caffeine was associated with an increase in blood pressure. When that caffeine was from coffee, however, the blood pressure effect was small. A 2011 study found that caffeine intake could raise blood pressure for at least three hours. Again, though, there wasn’t a significant relationship between long-term coffee consumption and higher blood pressure. A 2012 meta-analysis of 10 randomized controlled trials and five cohort studies could find no significant effect of coffee consumption on blood pressure or hypertension.
High blood pressure and high cholesterol would be of concern because they can lead to heart disease or death. Drinking coffee is associated with better outcomes in those areas, and that’s what really matters.
Some readers were upset that I neglected to mention some of the deleterious effects of caffeine. What about jitteriness and mood changes?
I want to reiterate that the point of the piece was not to tell people to drink coffee. As I said in my recent article on food recommendations, I don’t think there is much value in preaching or judging what others eat or drink. Moreover, this evidence is epidemiologic, that is, based on observations of patterns. I don’t want to fall prey to the mistake of recommending we change our eating behavior without evidence from randomized controlled trials.
The point of the article was to show that there’s no evidence that coffee is bad for the average person. Data do not support the idea that we are drinking “too much.” Coffee does not appear to be associated with poor health outcomes — the opposite is true. In light of this, we should stop telling everyone to avoid it, or judging others for drinking it. We should also stop feeling guilty or feeling that we need to consume less.
That is, unless it’s not making you feel well. As I also said before, individual trial and error is likely necessary when it comes to nutrition. Some people need to avoid caffeine for medical reasons, and they should. If coffee makes you feel bad, or makes it hard for you to sleep, or renders you a less likable person — then by all means feel free to cut back or stop.
The following originally appeared on The Upshot (copyright 2015, The New York Times Company). I answer readers’ questions about this article in a follow-up here.
When I was a kid, my parents refused to let me drink coffee because they believed it would “stunt my growth.” It turns out, of course, that this is a myth. Studies have failed, again and again, to show that coffee or caffeine consumption is related to reduced bone mass or how tall people are.
Coffee has long had a reputation as being unhealthy. But in almost every single respect that reputation is backward. The potential health benefits are surprisingly large.
When I set out to look at the research on coffee and health, I thought I’d see it being associated with some good outcomes and some bad ones, mirroring the contradictory reports you can often find in the news media. This didn’t turn out to be the case.
Just last year, a systematic review and meta-analysis of studies looking at long-term consumption of coffee and the risk of cardiovascular disease was published. The researchers found 36 studies involving more than 1,270,000 participants. The combined data showed that those who consumed a moderate amount of coffee, about three to five cups a day, were at the lowest risk for problems. Those who consumed five or more cups a day had no higher risk than those who consumed none.
Back to the studies. Years earlier, a meta-analysis — a study of studies, in which data are pooled and analyzed together — was published looking at how coffee consumption might be associated with stroke. Eleven studies were found, including almost 480,000 participants. As with the prior studies, consumption of two to six cups of coffee a day was associated with a lower risk of disease, compared with those who drank none. Another meta-analysis published a year later confirmed these findings.
Rounding out concerns about the effect of coffee on your heart, another meta-analysis examined how drinking coffee might be associated with heart failure. Again, moderate consumption was associated with a lower risk, with the lowest risk among those who consumed four servings a day. Consumption had to get up to about 10 cups a day before any bad associations were seen.
No one is suggesting you drink more coffee for your health. But drinking moderate amounts of coffee is linked to lower rates of pretty much all cardiovascular disease, contrary to what many might have heard about the dangers of coffee or caffeine. Even consumers on the very high end of the spectrum appear to have minimal, if any, ill effects.
But let’s not cherry-pick. There are outcomes outside of heart health that matter. Many believe that coffee might be associated with an increased risk of cancer. Certainly, individual studies have found that to be the case, and these are sometimes highlighted by the news media. But in the aggregate, most of these negative outcomes disappear.
The same holds true for breast cancer, where associations were statistically not significant. It’s true that the data on lung cancer show an increased risk with more coffee consumed, but that’s only among people who smoke. Drinking coffee may be protective in those who don’t. Regardless, the authors of that study hedge their results and warn that they should be interpreted with caution because of the confounding (and most likely overwhelming) effects of smoking.
A study looking at all cancers suggested that coffee might be associated with reduced overall cancer incidence and that the more you drank, the more protection was seen.
Drinking coffee is associated with better laboratory values in those at risk for liver disease. In patients who already have liver disease, it’s associated with a decreased progression to cirrhosis. In patients who already have cirrhosis, it’s associated with a lower risk of death and a lower risk of developing liver cancer. It’s associated with improved responses to antiviral therapy in patients with hepatitis C and better outcomes in patients with nonalcoholic fatty liver disease. The authors of the systematic review argue that daily coffee consumption should be encouraged in patients with chronic liver disease.
A systematic review published in 2005 found that regular coffee consumption was associated with a significantly reduced risk of developing Type 2 diabetes, with the lowest relative risks (about a third reduction) seen in those who drank at least six or seven cups a day. The latest study, published in 2014, used updated data and included 28 studies and more than 1.1 million participants. Again, the more coffee you drank, the less likely you were to have diabetes. This included both caffeinated and decaffeinated coffee.
Is coffee associated with the risk of death from all causes? There have been two meta-analyses published within the last year or so. The first reviewed 20 studies, including almost a million people, and the second included 17 studies containing more than a million people. Both found that drinking coffee was associated with a significantly reduced chance of death. I can’t think of any other product that has this much positive epidemiologic evidence going for it.
I grant you that pretty much none of the research I’m citing above contains randomized controlled trials. It’s important to remember that we usually conduct those trials to see if what we are observing in epidemiologic studies holds up. Most of us aren’t drinking coffee because we think it will protect us, though. Most of us are worrying that it might be hurting us. There’s almost no evidence for that at all.
If any other modifiable risk factor had this kind of positive association across the board, the media would be all over it. We’d be pushing it on everyone. Whole interventions would be built up around it. For far too long, though, coffee has been considered a vice, not something that might be healthy.
That may change soon. The newest scientific report for the U.S.D.A. nutritional guidelines, which I’ve discussed before, says that coffee is not only O.K. — it agrees that it might be good for you. This was the first time the dietary guideline advisory committee reviewed the effects of coffee on health.
There’s always a danger in going too far in the other direction. I’m not suggesting that we start serving coffee to little kids. Caffeine still has a number of effects parents might want to avoid for their children. Some people don’t like the way caffeine can make them jittery. Guidelines also suggest that pregnant women not drink more than two cups a day.
I’m also not suggesting that people start drinking coffee by the gallon. Too much of anything can be bad. Finally, while the coffee may be healthy, that’s not necessarily true of the added sugar and fat that many people put into coffee-based beverages.
But it’s way past time that we stopped viewing coffee as something we all need to cut back on. It’s a completely reasonable addition to a healthy diet, with more potential benefits seen in research than almost any other beverage we’re consuming. It’s time we started treating it as such.
A couple of weeks ago, I gave a noontime talk on cost shifting at the University of Wisconsin School of Medicine and Public Health. You can watch the video here. (I recommend it at 1.5x normal speed.) You don’t have to watch too long to notice I use a lemonade stand to illustrate some cost shifting concepts.
I gave the same talk that morning in a Wisconsin state capitol briefing. Sadly, the video for the morning event failed. Too bad, because it was better than the noontime talk; I had more energy, and it included a response from Brian Potter of the Wisconsin Hospital Association.
Brian did not like my lemonade stand metaphor. He was quoted by Wisconsin Health News (no link available) as saying,
“Hospitals accept all payers or patients regardless of their ability to pay, which is different from a lemonade stand because you don’t have to sell lemonade to everybody,” Potter said. “Healthcare is a need whereas lemonade is an optional service. When you’re having a heart attack, your price sensitivity and your consumerism and things that happen in normal markets don’t necessarily happen in healthcare.”
I didn’t get an opportunity to respond to this. If I had, here’s what I would have said: First of all, a hospital is not obligated to participate in every payer’s network. Second, the entire point of my talk was that most empirical studies of hospitals don’t support cost shifting. (See also this post and that to which it links for that evidence.) As such, it hardly matters what metaphor I use. The conclusion is the same.
The point of the lemonade stand was to help a lay audience understand what cost shifting is and why most of the empirical studies don’t find it. Of course, all models are wrong, but some are useful. Judging from written feedback, most of the audience thought my hypothetical lemonade stand model was useful, even if Brian didn’t.