• The Evidence on Salt

    Thanks go to BradF for some pointers to the evidence and debate over salt’s effect on health. He directed me to Marion Nestle of the blog Food Politics, who links to a dozen or so articles and commentaries on the issue. Disclosure: I have not read those articles and commentaries.

    I have read the 5 Feb. 2009 NY Times opinion piece in which Michael Alderman argues that a randomized controlled trial is necessary to determine salt’s effect on health. (Alderman is also the author of a 3 Feb. 2010 JAMA commentary that makes the same points and cites the academic literature.) In his NY Times column, Alderman writes,

    The best available evidence on how salt consumption affects our health comes from observational studies. … Nine such studies … have had mixed results. In four of them, reduced dietary salt was associated with an increased incidence of death and disability from heart attacks and strokes. In one that focused on obese people, more salt was associated with increased cardiovascular mortality. And in the remaining four, no association between salt and health was seen.

    People who advocate curtailing salt consumption typically prefer to discuss two other observational studies from Finland and Japan, where salt consumption is generally higher than in the United States. In both of these, more salt was associated with more cardiovascular problems.

    But observational studies do not demonstrate causality. …

    Nevertheless, the research on salt intake can help identify questions to address in randomized clinical trials, the most rigorous kind of medical research.

    Since I haven’t read the underlying literature, I’m not going to comment on the evidence pertaining to a salt-health connection. I will make two points about Alderman’s view. First, I agree that observational studies are helpful in identifying hypotheses and issues to be explored in future studies, including randomized trials. So, even when observational studies don’t provide conclusive evidence, they can make scientific contributions.

    Second, I disagree with Alderman’s statement that observational studies don’t demonstrate causality. That’s what most people seem to think, but it is far too broad. As I’ve been writing about for weeks, research designs that exploit randomness, whether purposeful (as in a randomized trial) or not (as in natural experiments), permit causal inference. Observational studies that exploit random factors that affect treatment but not potential outcomes do demonstrate causality. They differ from randomized trials only in degree.

    One possible advantage of cities imposing salt-reduction regulations on restaurants is that such regulations provide the type of randomness one can exploit to infer the causal effect of salt on health. I’m not necessarily advocating such regulations. Nor am I saying there should not be a randomized trial on salt. I’m just saying that we may be able to learn a few things even without a randomized trial. In particular, I’m saying, screaming, don’t give up on observational studies. Only some of them are uninformative about causal effects. Properly designed ones can reveal more than Alderman and others may think.
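
    What I have in mind is easiest to see in a toy simulation. The sketch below is not an analysis of any real data; the variables (a city “policy,” individual salt intake, blood pressure, a “health consciousness” confounder) and every effect size are invented for illustration, and the policy is assumed to shift salt intake while being unrelated to the confounder. Under those assumptions, a naive regression of the outcome on salt is biased, but two-stage least squares using the policy as an instrument recovers (approximately) the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: a city-level salt rule ("policy") shifts individual salt
# intake but is assumed unrelated to an unobserved confounder ("health
# consciousness") that also affects blood pressure. The true causal effect of
# salt on blood pressure is set to 0.5. All names and numbers are invented.
policy = rng.integers(0, 2, size=n)      # 1 if the person's city restricts salt
conf = rng.normal(size=n)                # unobserved health consciousness
salt = 2.0 - 1.0 * policy - 0.8 * conf + rng.normal(size=n)
bp = 0.5 * salt - 1.2 * conf + rng.normal(size=n)

# Naive regression of blood pressure on salt is biased by the confounder.
X = np.column_stack([np.ones(n), salt])
naive = np.linalg.lstsq(X, bp, rcond=None)[0]

# Two-stage least squares, using the policy as an instrument for salt intake.
Z = np.column_stack([np.ones(n), policy])
salt_hat = Z @ np.linalg.lstsq(Z, salt, rcond=None)[0]        # first stage
second = np.column_stack([np.ones(n), salt_hat])              # second stage
iv = np.linalg.lstsq(second, bp, rcond=None)[0]

print("true effect 0.5 | naive OLS:", round(naive[1], 3),
      "| IV/2SLS:", round(iv[1], 3))
```

    Everything hangs on the exogeneity assumption baked into the simulation. If the policy were itself correlated with the confounder, the instrumented estimate would be no more trustworthy than the naive one, which is why the design of an observational study matters so much.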

    • I’m not convinced by your arguments. Without true random assignment, how can you know that other extraneous variables aren’t causative? For example, let’s say New York City limits levels of salt. But that action itself might be caused by the Bloomberg administration, which also banned trans fats recently. So, is any reduction in heart disease due to the salt law or the trans fat law? Or, more subtly, is the presence of the Bloomberg administration itself indicative of a mood in NYC rooted in the well-being of the city, such that whatever is causing that mood is what is actually related to reductions in heart disease?

      The point is, with natural experiments the degree of control is so reduced that they are not just quantitatively different but qualitatively different. I don’t think anyone is saying “Give up on observational studies.” Not at all. They are very important, and they can suggest trends and the need for follow-up with good controlled studies. But in the end, won’t we want to bring to bear the full force of double-blinding, placebos, random assignment, counterbalancing, statistical significance, and all such tools before we can even get close to saying we *know* that such and such a factor was causative? Anything less than that is a correlational study.

      • @cm – Your argument is that the instrument I proposed is not exogenous to outcomes. One way this could be so is if outcomes and the instrument are both related to something that is effectively unobservable (or practically unmodelable). Political regime might be such a thing, but I don’t think it quite dooms the approach (more below).

        Note that exogeneity of the instrument is the key requirement (its relevance, that regulation actually shifts salt intake, is the other, and it isn’t what’s in dispute here). It doesn’t matter if other factors also influence outcomes, so long as the instrument isn’t related to them (if they are unobservable) or they are themselves in the model (if they are observable). Thus, just because one instrument is flawed doesn’t condemn the whole enterprise.

        It is possible there is no good instrument in the case of salt. But if many cities, not just NYC, had their own salt regulations, I do think that would be a good instrument. One can also include city fixed effects and exploit variation over time, not just across geography, thereby controlling for other time-invariant effects of the city or political regime. A simulated sketch of that design follows.
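
        Here is a rough simulated sketch of that design; every city, year, effect size, and variable name is made up for illustration. Cities adopt salt regulation in different years, adoption is deliberately made to depend on an unobserved city trait (think local “health mood” or political regime), and two-stage least squares with city fixed effects is used to estimate the effect of salt on the outcome.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cities, n_years = 200, 10

# Hypothetical panel, all numbers invented: each city has a fixed unobserved
# trait (local "health mood", politics) that drives both how early it adopts
# salt regulation and its cardiovascular outcomes. True effect of salt is 0.5.
trait = rng.normal(size=n_cities)
adopt_year = np.clip(np.round(5 - 2 * trait).astype(int), 1, n_years - 1)

city = np.repeat(np.arange(n_cities), n_years)
year = np.tile(np.arange(n_years), n_cities)
regulated = (year >= adopt_year[city]).astype(float)   # the instrument
salt = 3.0 - 1.0 * regulated - 0.5 * trait[city] + rng.normal(size=city.size)
outcome = 0.5 * salt + 1.5 * trait[city] + rng.normal(size=city.size)

# City dummies absorb anything time-invariant about a city (regime, mood, diet).
fe = (city[:, None] == np.arange(n_cities)).astype(float)

# 2SLS with city fixed effects: first stage, then outcome on predicted salt.
Z = np.column_stack([regulated, fe])
salt_hat = Z @ np.linalg.lstsq(Z, salt, rcond=None)[0]
beta = np.linalg.lstsq(np.column_stack([salt_hat, fe]), outcome, rcond=None)[0]

# Same instrument without fixed effects: contaminated, because adoption timing
# itself depends on the unobserved city trait.
Z0 = np.column_stack([np.ones(city.size), regulated])
sh0 = Z0 @ np.linalg.lstsq(Z0, salt, rcond=None)[0]
b0 = np.linalg.lstsq(np.column_stack([np.ones(city.size), sh0]), outcome,
                     rcond=None)[0]

print("true effect 0.5 | 2SLS with city FE:", round(beta[0], 3),
      "| without FE:", round(b0[1], 3))
```

        The contrast in the final print is the point: the same instrument without fixed effects is contaminated by the city trait, which is essentially your Bloomberg concern, while the fixed effects absorb anything time-invariant about each city and leave only the within-city timing of regulation to do the identifying work.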

        Your final sentence is false. Randomized trials are stronger, but natural experiments are (or can be) more than correlational. There’s no reason to promote randomized trials by stepping all over observational studies that exploit natural randomness in sound ways. Unfortunately, too few observational studies in health care research do so because the methods haven’t been adopted widely in the field yet. Bad-mouthing them doesn’t help.

    • If you find this interesting, you should consider reading the book Good Calories, Bad Calories. Its main focus is on nutritional science, especially related to obesity, metabolic syndrome, and heart disease. It has plenty of arguments about nutritional research and politics as well, and a whole chapter focusing on the politics and research of salt (the author’s conclusion is that the current science suggests salt plays a minor role at best).

      Mike

    • Keep up this crusade. The cost of a prospective, double-blind study is prohibitive. I suspect that cost further increases our reliance upon industry to perform research, which leads to bias. See Angell.

      Steve