• The problem with survival rates

    It’s one of those topics that I think can’t be stressed enough. Survival rates are really problematic when you’re trying to discuss improvements in mortality, or compare systems. We’ve even got a tag for the topic here on the blog. The BMJ has a piece that adds to the discussion (emphasis mine):

    Why is an increase in survival from 44% to 82% not evidence that screening saves lives? For two reasons. The first is lead time bias. Earlier detection implies that the time of diagnosis is earlier; this alone leads to higher survival at five years even when patients do not live any longer. The second is overdiagnosis. Screening detects abnormalities that meet the pathological definition of cancer but that will never progress to cause symptoms or death (non-progressive or slow growing cancers). The higher the number of overdiagnosed patients, the higher the survival rate. In the US a larger proportion of men are screened by prostate specific antigen testing than in the UK, contributing to the US’s higher survival rate.

    The important thing to understand is that the correlation between differences in survival rates and mortality rates is zero (r = 0.0 for the 20 most common solid tumours over the past 50 years). Thus the message is clear: the benefit of screening needs to be communicated in mortality rates, not survival rates.
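The two mechanisms in the quoted passage lend themselves to a toy calculation. The sketch below (Python, with entirely hypothetical numbers and a made-up `five_year_survival` helper, not anything from the BMJ piece) shows how moving the diagnosis date earlier flips five-year survival from 0% to 100% even though no one lives a day longer, and how overdiagnosed cases inflate the pooled rate:

```python
# Toy illustration of lead time bias and overdiagnosis.
# All ages and counts are hypothetical, chosen only to show the mechanism.

def five_year_survival(diagnosis_ages, death_ages):
    """Fraction of patients still alive 5+ years after diagnosis."""
    alive = [death - dx >= 5 for dx, death in zip(diagnosis_ages, death_ages)]
    return sum(alive) / len(alive)

death_ages = [70] * 100  # every patient dies at age 70 regardless of care

# Without screening: cancers found symptomatically at age 67.
# Survival past diagnosis is 3 years, so 5-year survival is 0%.
no_screen = five_year_survival([67] * 100, death_ages)

# With screening: the SAME cancers found at age 60.
# Survival past diagnosis is now 10 years, so 5-year survival is 100% --
# yet mortality is identical: everyone still dies at 70.
screened = five_year_survival([60] * 100, death_ages)

print(no_screen)  # 0.0
print(screened)   # 1.0

# Overdiagnosis works the same way: add 50 screen-detected patients whose
# non-progressive "cancers" would never have killed them (all survive 5
# years), and the pooled survival rate rises with zero deaths prevented.
pooled = (100 * no_screen + 50 * 1.0) / 150
print(round(pooled, 2))  # 0.33
```

The point of the sketch is that the survival statistic changed twice while the number and timing of deaths never changed at all, which is exactly why the authors insist on mortality rates.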

    I’ll be the first to admit that we’re failing to get this message across adequately. I’m sure that many of you will still argue with me in the comments below. The authors of this piece have some suggestions:

    Firstly, risk communication needs to become a central skill in medical education. For decades medical schools have failed to teach students statistical thinking (biostatistics does not seem to help much). The basic structure for such a teaching programme already exists.

    Secondly, organisations responsible for continuing medical education and recertification programmes should ensure that doctors are trained in understanding evidence and in risk communication.

    Finally, journal editors and reviewers should no longer allow misleading statistics such as five year survival to be reported as evidence for screening. Editors should enforce transparent reporting of evidence, for the benefit of their readers and of healthcare in general.

    I think I should get credit for #1. I work on that here all the time, and I teach it to residents whom I train. I think #2 is a solid idea. But neither will do as much good as #3. Editors need to be more selective and take a stronger hand in making sure that facts and statistics are discussed appropriately in the manuscripts they publish.

    @aaronecarroll

    • Dr. Carroll: how do you teach better communication of risk? I’m entering med school this year and would love to know more.

    • Survival rates are really problematic when you’re trying to discuss improvements in mortality, or compare systems.

      Very true, and it is also true that life expectancy and infant mortality are really problematic when you’re trying to compare systems.

      It is funny that one side likes survival rates but does not like life expectancy and infant mortality, while the other side takes the opposite view. It is no wonder that we cannot agree on changes to improve healthcare.

      • No, one side doesn’t like survival rates and admits that infant mortality and life expectancy are imperfect metrics. I’ve spent many posts discussing the latter. But, please, keep coming up with straw men.