The following originally appeared on The Upshot (copyright 2019, The New York Times Company)
Researchers are in another fight about food.
This week the Annals of Internal Medicine published studies arguing that eating red meat poses minimal health risks for most people, and that even our certainty about that link is weak. With these conclusions in hand, the authors offered a set of recommendations that most people continue their current levels of meat consumption.
Although not involved in the research, I co-authored an editorial for the journal summarizing the findings, arguing that our messaging about the harms of red meat may be falling on deaf ears, and then pointing out other messages that might work better to reduce consumption of it.
The conclusions and the guideline recommendations, made by an international team led by Bradley Johnston, an epidemiologist at Dalhousie University, run counter to many made by established health authorities. This week, a number of nutrition researchers wrote me to say they vehemently disagree with the publication of these papers, and feel that they could do real harm.
They believe that red meat and processed meat consumption poses a health hazard to people, and that if people don’t reduce their consumption, they are putting themselves, and the planet, at risk.
I agree with them on the environmental argument for eating less meat. You can read about that here.
But readers may well be confused about the health risks. How can experts disagree so strongly?
The following questions may help you understand why even researchers of good faith can land on different sides of a debate.
Just how good can nutrition research be?
Part of the problem lies in the difficulty in doing research in this area. It’s almost impossible (and some would say unethical) to do the most rigorous type of experiment — a randomized controlled trial — in areas like red meat consumption. Because of that, we must rely on observational data; we ask people what they are eating and correlate that with outcomes.
Others, like John Ioannidis, an expert in research design and analysis at Stanford, argue that we can and perhaps should do long-term randomized controlled trials of dietary patterns before we make proclamations.
Is the ‘best we can get’ good enough?
Even observational studies are hard to do well. Most major health setbacks are pretty rare. It’s hard to see big differences in death, cancer and heart attacks even in large groups of people, unless you follow them over long periods. But quantifying what people are eating over long periods is challenging, too, because people often don’t remember.
Such studies are also difficult to interpret because of what are called confounding factors. Maybe people who eat more meat are poorer. Maybe they smoke, drink too much alcohol or don’t exercise. Those things would also lead to bad outcomes, and it’s hard to tease out individual components over time.
If you do trials of people at higher risk — those who have already had heart attacks, for example — it’s easier to see if changes matter. The Predimed trial, for instance, which studied the Mediterranean diet, focused on people who already had diabetes or a number of traits placing them at high risk for heart disease. But these people aren’t necessarily representative of the general public, for whom dietary recommendations are written.
All of this means that observational evidence, which is easier to obtain, will be rated “low quality” by some researchers. Others will argue it’s the best we can get, and therefore we should apply different standards to such research.
Should we care about signals like blood pressure, or only major events like heart attacks?
Because big outcomes are rare, research sometimes looks at intermediate measures. Those, like weight, blood pressure, cholesterol levels and more, can change in shorter periods. Some will point to studies in these domains and say that they prove that meat reduction has significant health effects. High blood pressure or cholesterol levels are widely believed to be major risk factors for adverse events. Others will disagree as to how much we should rely on intermediate measures. These new studies focused only on those end-stage outcomes.
If experts are uncertain, should they recommend anything?
Critics of the new meat studies argue that given the authors’ low certainty about their findings, they should have issued no recommendations at all. That’s not unreasonable. When the U.S. Preventive Services Task Force lacks sufficient evidence to publish recommendations on prevention, it gives recommendations an “I” rating, and says the current evidence is insufficient to assess the balance of benefits and harms. That’s all. Maybe that was preferable here, instead of publishing recommendations that people continue their current meat consumption.
Should we look at the individual, or the population?
Even in studies that find statistically significant effects, the absolute benefits in most studies are small. I’ve written about this before. Many will argue, however, that even if there is a small individual benefit, the benefits to the population can be large.
They are not wrong. Let’s say that the absolute risk reduction with respect to colon cancer is 0.5 percent. That would mean for every 200 people who reduced their meat consumption, one would see a benefit; 199 would not. To an individual, that might not seem like a big deal.
But it also means that if two million people made that change, 10,000 would see a benefit, which from a population standpoint is great. It also means, of course, that 1,990,000 would see no benefit.
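For readers who want to check the arithmetic, here is a minimal Python sketch of the example above; the 0.5 percent risk reduction and the two million people are the illustrative numbers from the text, not results from the studies.

```python
# Worked arithmetic for the hypothetical absolute risk reduction (ARR) above.
arr = 0.005  # absolute risk reduction: 0.5 percent

# Number needed to treat: how many people must change their diet
# for one of them to see a benefit.
nnt = 1 / arr
print(f"1 benefits for every {nnt:.0f} who change")  # 200

# The same effect viewed at population scale.
population = 2_000_000
beneficiaries = int(population * arr)
print(f"{beneficiaries:,} people benefit")               # 10,000
print(f"{population - beneficiaries:,} see no benefit")  # 1,990,000
```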
There are many things that might make a difference at a population level for which people won’t make changes at an individual one. People accept major risks every day: to drive, to ski and more. Why? Because those activities bring benefits that individuals value over the harms. Should we care more about the individual or the population in recommendations?
Should we let people decide for themselves?
Some believe we shouldn’t bring preferences into play when we write guidelines: Focus on the health benefits alone, and not on other factors such as how much people like meat. After all, we don’t care if people “like” to smoke when we tell them not to.
Others might counter that a study in the International Journal of Cancer in 2012 found that men smoking more than 30 cigarettes a day had a 10,250 percent increased risk for developing squamous cell carcinoma. That’s huge. An increase of 18 percent (relative risk of 1.18) for processed meat consumption is not the same, and therefore it might be reasonable to think about how much joy people derive from their current diets.
Relative risk refers to the percentage change in one’s absolute (overall) risk as a result of some change in behavior. A relative risk of 1.18, for example, is an 18 percent increase over 1.0, where 1.0 represents a baseline of no difference in risk between an experimental group and a control group.
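To make that definition concrete, here is a short Python sketch that converts a relative risk into a percent change and applies it to a baseline absolute risk. The 5 percent baseline is a made-up number for illustration, not a figure from either study.

```python
def percent_increase(relative_risk: float) -> float:
    """Convert a relative risk into a percent change from the 1.0 baseline."""
    return (relative_risk - 1.0) * 100

print(f"{percent_increase(1.18):.0f}%")   # 18%    -> the processed-meat figure
print(f"{percent_increase(103.5):.0f}%")  # 10250% -> the heavy-smoking figure

# Applying the processed-meat relative risk to a hypothetical 5 percent
# baseline absolute risk (an illustrative number, not one from the studies):
baseline = 0.05
print(f"{baseline * 1.18:.3f}")  # 0.059: absolute risk rises by less than one point
```

The point the comparison makes: an 18 percent relative increase moves a small absolute risk only slightly, while a 10,250 percent relative increase transforms it.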
Because these questions don’t have black and white answers, researchers can look at the same sets of data and come to very different conclusions. Should recommendations be more concerned with populations or individuals? How large must a risk be to matter? Should personal preferences be considered? What should we say in the face of less than optimal evidence? Nothing? Play it safe?
Unfortunately, too many of these arguments on meat consumption devolve into tribalism.
On the other hand, there are points on which I’m not seeing disagreement. Eating beef, for example, is a major problem for the environment. Eating less to improve the long-term outlook for climate change could make a huge difference, and would be something on which a majority of those involved in these debates might agree.