• Dartmouth Atlas researchers respond

    Just in case you missed it, Jonathan Skinner responded in the ongoing fight between Dartmouth Atlas researchers and NY Times reporters (links to prior rounds here). The meta-meme in his post is that Abelson and Harris don’t understand research or how to interpret research papers.

    Ms. Abelson and Mr. Harris are correct to note that the studies conclusively rule out the null hypothesis that more spending is associated with better outcomes – the major point of the paper. But anyone who reads both articles will come away with more than that — a finding that outcome measures are worse on average in high-cost regions. We did a quick tabulation of the large number of outcome measures in the article – a total of 42 different measures (these are reported in more detail in our background paper). Of the total, 23 showed significantly worse outcomes in high-spending regions, 14 showed no significant effects, and just 5 showed significant positive effects in high-spending regions.

    In other words, if one were to construct an index of quality, it would show nearly five significantly negative measures of quality of care in high-cost regions for every one positive measure.

    So when Abelson and Harris claim that Fisher and others are overstating the results of the papers, they are wrong. Perhaps this simply reflects a lack of experience in reading and interpreting scientific papers. But that is no excuse for making unfounded accusations against us.

    It is important that journalists pay some attention to research in science and social science. Without broader dissemination through the media, the hard work of researchers has almost no chance of having a significant impact. But understanding journal articles and the research process isn’t easy. When the same data or issues are examined using different methodologies, it can appear as if findings are contradictory. Anyone who wishes to find a seeming inconsistency in a body of work can do so. The truth can be made to appear false. When it’s about an issue as important as inefficient health care spending, such a distortion is a tremendous disservice.
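For the record, the tally in the quote above takes only a couple of lines to reproduce; the 23/14/5 split comes straight from Skinner’s post, and the arithmetic is all this sketch does:

```python
# Counts reported by Skinner: of 42 outcome measures in the article,
# 23 were significantly worse, 14 showed no significant effect, and
# 5 were significantly better in high-spending regions.
worse, null, better = 23, 14, 5

assert worse + null + better == 42
ratio = worse / better
print(f"{ratio:.1f} significantly worse measures per positive one")  # prints 4.6
```

That 23-to-5 split is where the “nearly five to one” figure comes from.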

    • Can I quibble with their implicit assumption, in the quote above, that “equal weighting” in a composite measure is the way to assemble those disparate outcome measures? For one thing, we know that equal weighting gives the greatest relative weight to the rarest, least reliable measures and the least relative weight to the most common, most reliable outcome measures.
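The commenter’s point can be illustrated with a small sketch comparing an equal-weight composite to an inverse-variance-weighted one. All effect sizes and standard errors below are made up for illustration; they are not from the Dartmouth papers:

```python
# Hypothetical illustration: combining disparate outcome measures into
# a composite. A rare, noisy measure (large standard error) is listed last.
effects = [0.10, 0.08, -0.02, 0.50]    # standardized effect per measure (invented)
std_errors = [0.02, 0.03, 0.04, 0.40]  # rare measure -> large SE (invented)

# Equal weighting: every measure counts the same, so the noisy, rare
# measure moves the composite as much as a precisely estimated one.
equal_weight = sum(effects) / len(effects)

# Inverse-variance weighting: each measure is weighted by 1/SE^2,
# so the common, reliably estimated measures dominate.
weights = [1 / se**2 for se in std_errors]
inv_var = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

print(f"equal-weight composite:     {equal_weight:.3f}")
print(f"inverse-variance composite: {inv_var:.3f}")
```

In this invented example the equal-weight composite is pulled up mainly by the single noisy measure, while the inverse-variance composite discounts it — which is the commenter’s worry about equal weighting in one line of arithmetic.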