Childhood Obesity Is a Major Problem. Research Isn’t Helping.

The following originally appeared on The Upshot (copyright 2020, The New York Times Company)

 

Childhood obesity is a major public health problem, and has been for some time. Almost 20 percent of American children are affected by obesity, as well as about 40 percent of adults. Over all, this costs the United States around $150 billion in health care spending each year.

Pediatricians like me, and many other health professionals, know it’s a problem, and yet we’ve been relatively unsuccessful in tackling it. About six years ago, some reports seemed to show that rates had stabilized in children and even decreased in those ages 2 to 5. Later studies showed this trend to be an illusion. If anything, things have gotten worse.

Efforts to help can backfire. People on diets often gain weight. Although individual studies have pointed to potential interventions and solutions, these have not yet translated into actual improvements. Part of the problem may be flawed research.

A recent paper in Pediatric Obesity provided a guide on how to do better. Its suggestions fall into five general themes.

 

1. When things look better, it’s critical to ask “compared to what?”

In short, you need a control group. Over time, changes in behaviors or measurements often follow a pattern known as regression toward the mean: outliers (in this case, those who are more overweight) tend to move toward the average. Thus, interventions might look as if they’re working when they’re not. Control groups — participants who don’t receive the intervention — can help ensure that we’re seeing real effectiveness.
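
To see the mechanics, here is a small simulation sketch (every number is made up, and it assumes NumPy is installed). Children are selected for a hypothetical study because of a high baseline BMI measurement, nothing at all is done to them, and their follow-up measurements still drift back toward the average:

```python
# Illustrative only: regression toward the mean with no intervention at all.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_bmi = rng.normal(22, 3, n)               # each child's underlying BMI (hypothetical)
baseline = true_bmi + rng.normal(0, 1.5, n)   # noisy measurement at enrollment
followup = true_bmi + rng.normal(0, 1.5, n)   # noisy measurement later; nothing was done

enrolled = baseline > 26                      # the study recruits the heaviest-looking children
print(f"baseline mean BMI of enrolled children:  {baseline[enrolled].mean():.2f}")
print(f"follow-up mean BMI of enrolled children: {followup[enrolled].mean():.2f}")
# The follow-up average is lower even though there was no intervention.
```

An intervention studied without a control group would get credit for that entire drop.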

Even then, things can get tricky. In a randomized controlled trial, it’s important to keep the comparisons directly between the intervention and control groups. A common mistake is comparing each group after the intervention with the same group before the intervention. In other words, people could compare a dieting group to itself, before and after, and compare the control group to itself, before and after, to see if the dieting group achieved a significant decrease.

This is known as a “differences in nominal significance” error. Doing this can make an intervention look as if it achieved a significant change against a baseline measurement when it probably did not against the control group.
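
A toy simulation makes the problem concrete (the drift, sample size and BMI values are invented, and it assumes NumPy and SciPy). Both arms drift down a little for reasons that have nothing to do with the intervention, so testing the intervention arm against its own baseline can come out “significant” even when the correct comparison of changes between arms does not:

```python
# Illustrative only: the "differences in nominal significance" (DINS) error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60          # children per arm (hypothetical)
drift = -0.3    # both arms drift down slightly; the intervention adds nothing

def simulate_arm():
    before = rng.normal(25, 3, n)
    after = before + drift + rng.normal(0, 1, n)
    return before, after

int_before, int_after = simulate_arm()   # intervention arm
ctl_before, ctl_after = simulate_arm()   # control arm

# Wrong analysis: the intervention arm against its own baseline.
print("within-group p:", stats.ttest_rel(int_after, int_before).pvalue)
# Right analysis: change in the intervention arm against change in the control arm.
print("between-group p:", stats.ttest_ind(int_after - int_before,
                                          ctl_after - ctl_before).pvalue)
```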

Creating and studying large obesity interventions is hard and expensive. It’s only natural that researchers want them to work. But if your well-designed study doesn’t result in significant improvements in an intervention group over a control group, you can’t then fall back on claims that those who received the intervention still lost weight. Control groups are there for a reason. You can’t dismiss them after the fact.

2. Don’t change the analysis plan.

Before a study begins, its expected primary outcome should be clearly defined. For most obesity studies, that’s going to be a decrease in body mass index. You can’t later add in other outcomes that might show results even if the main outcome does not.

Sometimes, to get statistically significant results, researchers will adjust analyses in ways that achieve them. This is called p-hacking. Changing outcomes can result in different numbers of patients “qualifying” through inclusion and exclusion criteria in such a way as to change the actual groups being studied.
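
One way to see why this matters: if a trial measures enough outcomes, something will look significant by chance alone. Here is a rough sketch (a hypothetical trial with ten unrelated outcomes, no true effect on any of them, assuming NumPy and SciPy):

```python
# Illustrative only: testing many outcomes inflates the chance of a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n_outcomes, n_kids = 2_000, 10, 50
hit_somewhere = 0
for _ in range(n_trials):
    pvals = [
        stats.ttest_ind(rng.normal(0, 1, n_kids), rng.normal(0, 1, n_kids)).pvalue
        for _ in range(n_outcomes)       # ten unrelated outcomes, none truly affected
    ]
    hit_somewhere += min(pvals) < 0.05
print(f"trials with at least one 'significant' outcome: {hit_somewhere / n_trials:.0%}")
# With ten independent outcomes, that rate is roughly 1 - 0.95**10, or about 40 percent.
```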

3. Be careful when designing studies and picking outcomes.

Too often, when trying to prove that subjects changed their diet or exercise habits, we simply ask them if they did. This risks getting results influenced by self-report bias. If a study’s focus is an educational intervention that tells students they should walk more and watch less TV, we shouldn’t be surprised that they say they did, even when there’s no change in body fat percentage.

Because interventions tend to be delivered to groups (randomized by class or school), it’s important that we analyze results by group as well. There are only as many “participants” as there are groups. Too often, researchers run the statistics on the individual children, and when they see improvements, it may be because of differences between the groups, not because of the intervention.
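
Here is a crude sketch of why that matters (six hypothetical schools per arm, no true effect of the intervention, assuming NumPy and SciPy). Treating every child as an independent data point produces far more than the expected 5 percent of false positives; comparing school-level averages does not:

```python
# Illustrative only: cluster-randomized trials should be analyzed at the cluster level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_trials, schools_per_arm, kids_per_school = 1_000, 6, 40

def simulate_arm():
    # Schools differ from one another; children cluster within their school.
    school_effects = rng.normal(0, 1, schools_per_arm)
    return school_effects[:, None] + rng.normal(0, 1, (schools_per_arm, kids_per_school))

naive_hits = cluster_hits = 0
for _ in range(n_trials):
    treated, control = simulate_arm(), simulate_arm()    # no true intervention effect
    # Wrong: every child treated as an independent observation.
    naive_hits += stats.ttest_ind(treated.ravel(), control.ravel()).pvalue < 0.05
    # Better: one average per school, compared across arms.
    cluster_hits += stats.ttest_ind(treated.mean(axis=1), control.mean(axis=1)).pvalue < 0.05

print(f"false-positive rate, child-level analysis:  {naive_hits / n_trials:.0%}")
print(f"false-positive rate, school-level analysis: {cluster_hits / n_trials:.0%}")
```

In practice researchers use mixed models and other cluster-aware methods rather than simple school averages, but the unit-of-analysis point is the same.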

4. Not significant is not significant.

Negative results — those that do not back up the hypothesis of the researcher — should not be spun as positive. Researchers are often tempted to argue that these results are clinically significant, or that they have “promise.”

Sometimes, researchers want to test one intervention against an already proven one. If they find that there’s no difference, they conclude that the two are equally effective. This can be a mistake: a study too small to detect a real difference will often report “no significant difference” even when one intervention is clearly inferior.
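
A quick sketch shows why (the effect sizes are invented, assuming NumPy and SciPy): the new intervention below is truly weaker, but with only 20 children per arm most simulated trials still come back with “no significant difference”:

```python
# Illustrative only: "no significant difference" in a small trial is not equivalence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_trials, n_per_arm = 2_000, 20
no_difference_found = 0
for _ in range(n_trials):
    proven = rng.normal(-1.0, 2.0, n_per_arm)   # BMI change under the proven intervention
    newer  = rng.normal(-0.3, 2.0, n_per_arm)   # truly weaker effect under the new one
    no_difference_found += stats.ttest_ind(proven, newer).pvalue >= 0.05
print(f"underpowered trials concluding 'no difference': {no_difference_found / n_trials:.0%}")
# Proper equivalence or non-inferiority designs require pre-specified margins and
# much larger samples than a failed test for a difference.
```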

5. Don’t assume that an intervention is better than nothing.

Most studies conduct a two-sided analysis. This means they look at whether an intervention is better or worse than the comparison, and consider the result significant if the p-value is less than 0.05. In some studies, though, researchers assume that interventions can only help people lose weight, not gain it. They therefore conduct a one-sided test, which makes it twice as easy to declare a favorable result significant. Results that would not have been significant become so.
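
Here is a simple null simulation of that effect (all numbers invented, assuming NumPy and SciPy; the alternative="less" argument to SciPy’s ttest_ind requests the one-sided version). With no true effect at all, the one-sided test declares a favorable “success” about twice as often as the two-sided test:

```python
# Illustrative only: a one-sided test doubles the chance of a favorable false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_trials, n_per_arm = 5_000, 40
two_sided_wins = one_sided_wins = 0
for _ in range(n_trials):
    intervention = rng.normal(0, 2, n_per_arm)   # BMI change; no real effect
    control = rng.normal(0, 2, n_per_arm)
    # Two-sided criterion: significant and in the favorable (weight-loss) direction.
    res = stats.ttest_ind(intervention, control)
    two_sided_wins += (res.pvalue < 0.05) and (intervention.mean() < control.mean())
    # One-sided criterion: only the favorable direction is ever considered.
    one_sided_wins += stats.ttest_ind(intervention, control, alternative="less").pvalue < 0.05

print(f"'success' rate, two-sided test: {two_sided_wins / n_trials:.1%}")   # about 2.5%
print(f"'success' rate, one-sided test: {one_sided_wins / n_trials:.1%}")   # about 5%
```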

 

Some of these rules are technical. Others involve not overreaching on the results. And some acknowledge that researchers are human beings who are predisposed to want to get positive results. All of this is certainly true of obesity research, but it’s true of almost all health research.

To be effective, interventions and policies need to be built upon solid data. There are no assurances that interventions can only do good. It’s possible that interventions — almost all of which are done on a small scale — may not be the solution. Michelle Obama’s “Let’s Move” initiative — which was done on a large scale — was often credited with helping to slow or reverse childhood obesity, but there’s no evidence that’s true.

Processed food, and the advertising and marketing of it, is one driver of the problem. So is a lack of effort and resources put toward maintaining a healthy lifestyle. (If there are no sidewalks, you may be unlikely to walk to the store or to school, for example.)

Major problems like poverty can’t be overcome with a couple of workshops in a school or a doctor’s visit. Obesity is a major societal problem that probably requires a major societal response. We can’t allow our desire to make things better to lead us to accept lower-quality research that might convince us otherwise.

@aaronecarroll 

 

 
