• Erectile dysfunction 15 years after prostate cancer treatment

    From “Long-Term Functional Outcomes after Treatment for Localized Prostate Cancer,” by Matthew Resnick and colleagues (NEJM, 2013):

    At 15 years, the prevalence of erectile dysfunction was nearly universal, affecting 87.0% of men in the prostatectomy group and 93.9% of those in the radiotherapy group. Nonetheless, only 43.5% of men in the prostatectomy group and 37.7% of those in the radiotherapy group reported being bothered with respect to sexual symptoms. The possible reasons for the second finding include declining sexual interest with age, acceptance of sexual dysfunction over time, or both. Despite some evidence of stabilization or improvement of urinary and sexual symptoms from 2 to 5 years, long-term follow-up reveals consistent functional declines after 5 years. It remains unknown whether this continued decline is due to prostate cancer and its treatment, the normal aging process, or a combination of factors.

    For what it’s worth, by age 70 (roughly the low end of the age range in the study quoted above), one published estimate is that half of men experience moderate to complete erectile dysfunction; other studies cited therein report similar rates for older cohorts, and some estimates are higher. So my best guess, and it is only that, is that prostatectomy or external-beam radiation therapy is associated with an absolute increase in the risk of erectile dysfunction of, ballpark, 30 percentage points. Also relevant: to save one life after 11 years of follow-up, 37 cancers need to be detected and 23 of them* treated for prostate cancer.

    Thus, blithely combining the information above (which requires a few hidden assumptions**), I estimate that about 7 men are made impotent by a treatment that does not extend their life. (I emphasize that this is a guesstimate based on a quick look at the erectile dysfunction article to which I link above.) We can each make our own decision about the risk trade-offs in situations like these. Just saying, these are the sorts of numbers treatment candidates might like to know.

    * I have not yet confirmed from source documents what is in the comment to which I’ve linked.

    ** The mortality study is after 11 years of follow-up. The erectile dysfunction one is after 15. And I’m approximating a guess of the increase in erectile dysfunction risk due to treatment.
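
    To make the back-of-envelope arithmetic above explicit, here is a rough Python sketch. All of the inputs are the guesstimates from the post (the ~50% baseline, the ballpark 30-percentage-point excess, and the 23-treatments-per-life-saved figure), not precise estimates from the underlying studies, and the variable names are mine.

        # Guesstimates from the post above, not study-grade numbers
        ed_after_prostatectomy = 0.87   # 15-year ED prevalence, prostatectomy group (NEJM 2013)
        ed_baseline_by_age_70 = 0.50    # one published estimate of moderate-to-complete ED by age 70

        # The post rounds the ~37-point raw difference down to a ballpark 30-point excess risk
        excess_ed_risk = 0.30

        treated_per_life_saved = 23     # treatments per prostate cancer death averted (11-year follow-up)
        ed_cases_per_life_saved = treated_per_life_saved * excess_ed_risk
        print(round(ed_cases_per_life_saved))   # ~7 men made impotent per life saved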

    @afrakt

    • Austin:

      The 37 to 1 ratio you cite is somewhat overstated for the purpose for which you are using it, to calculate ratios of side effects to lives saved from prostate cancer screening and treatment. It is based on the difference between the two groups in prostate cancers DETECTED.

      But because the screening group has prostate cancer detected earlier, a higher percentage of men in this group who are diagnosed end up receiving only watchful waiting. Examine Table 9B in the Supplement to the European study you are citing. After subtracting out the men who receive ONLY watchful waiting in the screening group and the control group, 7.4% of men in the screening group receive some treatment other than watchful waiting, compared with 5.1% in the control group. The difference in “treatments that might have some serious side-effects” is 2.3 percentage points. Because the screening group has a 0.1 percentage point lower chance of dying from prostate cancer, the ratio of “men treated with some treatment that might cause serious side-effects” to “lives saved from prostate cancer death” is 23 to 1, not 37 to 1.

      If the serious side-effect probability is 30%, as you assume, then the ratio of serious side effects to lives saved is 7 to 1, not 11 to 1. Or to put it another way, if you’re treated, there is a 30% chance of serious side-effects, and a 4 or 5% chance it will save your life. Your numbers yield only a 2 or 3% chance that the treatment will save your life. I think these adjustments make at least some difference to how to think about the relative risks.
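
      To spell out that arithmetic, here is a rough sketch using the percentages quoted above from Table 9B (the supplement itself should be checked for the exact figures; the variable names are mine):

          # Figures quoted above from Table 9B of the European study's supplement
          treated_screening = 0.074   # treated beyond watchful waiting, screening group
          treated_control = 0.051     # treated beyond watchful waiting, control group
          mortality_benefit = 0.001   # absolute reduction in prostate cancer death with screening

          excess_treated = treated_screening - treated_control           # 0.023
          treated_per_life_saved = excess_treated / mortality_benefit    # ~23, not 37

          side_effect_prob = 0.30     # as assumed in the post
          side_effects_per_life_saved = treated_per_life_saved * side_effect_prob   # ~7, not ~11

          # For a treated man, the chance that treatment saves his life
          print(1 / treated_per_life_saved)   # ~0.04-0.05 at 23:1, versus ~0.02-0.03 at 37:1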

      Also relevant: the effect of treatment on prostate cancer mortality is likely to grow somewhat after 11 years. There have been some attempts to extrapolate this from limited data.

      • Thank you. I was vaguely aware that something wasn’t quite right about 37, but I couldn’t nail it down at the time I posted. I will try to confirm the numbers you’re citing in the source document/appendix later. For now, I’ve updated the post to flag the issue.

        • I should also note that the U.S. Preventive Services Task Force, in its various PR materials and reports on prostate cancer screening, states a ratio of 40 serious side-effects to at most 1 life saved due to prostate cancer screening. (This is in their final Recommendation statement, their fact sheet on prostate cancer screening statistics, and in the op-ed by their chair, Dr. Moyer.) However, their analysis seems to be comparing apples and oranges.

          The 40 serious side effects figure is based on assuming that about 110 out of 1000 men in the screening group (11%) will be diagnosed with prostate cancer, of whom around 90% will be treated (around 100 men), leading to 30 or 40 men with serious side-effects. But the implicit comparison here is to a control group without screening that has ZERO prostate cancer diagnoses and treatment. We know, however, that many people in the group without PSA screening will be diagnosed and treated for prostate cancer, so this comparison has no real-world relevance.

          Furthermore, the 1 life saved out of 1000 is based on a 0.1% reduction in the risk of prostate cancer death in the screening group versus the control group, conditional on both groups having some diagnoses and treatment for prostate cancer, just at different rates and timing. So the “1 life saved” figure and the “40 side effects” figure in the USPSTF PR are based on comparing the screening group to two dramatically different control groups.

          I think it's important to get the relative risks right.
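
          To make the apples-and-oranges problem concrete, here is a rough sketch of how the Task Force's 40-to-1 figure appears to be built. The inputs are the approximate numbers cited above, and the 35% serious-side-effect rate is simply a midpoint of the 30-40% range, not a figure from the Task Force.

              # Approximate inputs cited above for the USPSTF-style calculation
              diagnosed_per_1000 = 110                        # ~11% of the screening group diagnosed
              treated_per_1000 = diagnosed_per_1000 * 0.90    # ~100 treated
              serious_side_effects_per_1000 = treated_per_1000 * 0.35   # ~30-40 men

              lives_saved_per_1000 = 1    # from the 0.1% screening-vs-control mortality difference
              ratio = serious_side_effects_per_1000 / lives_saved_per_1000
              print(ratio)                # ~35 with these inputs; the Task Force cites up to ~40

              # The numerator implicitly compares against a control group with ZERO diagnoses and
              # treatment, while the denominator compares against a control group that is also
              # diagnosed and treated, just later and at lower rates; hence two very different comparators.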

    • Some careful analyses of possible long-term risk tradeoffs in prostate cancer screening, based on the European study, have been done by James Hanley and by Ruth Etzioni and her colleagues:
      James Hanley, “Mortality reductions produced by sustained prostate cancer screening have been underestimated.” J Med Screen 2010;17:147–151.

      Ruth Etzioni and many others, “Long-term projections of the harm-benefit trade-off in prostate cancer screening are more favorable than previous short-term estimates.” Journal of Clinical Epidemiology 2011;64:1412–1417.

      All of this assumes that the European screening study is the truth. If the U.S. study is the truth, then PSA screening doesn’t make any sense.

      As for the USPSTF, I have submitted comments to the Task Force about their numbers and the issues with them, but have never received a response. I haven’t seen anyone in the medical literature go into the problems with their statistics in great detail. When their recommendation statement was published in Annals of Internal Medicine in July of 2012, the published response by Catalona, D’Amico, Walsh, and others mentioned that the Task Force “overlooked the fact that diagnostic procedures and related complications occur in unscreened populations as well, and at a later stage of cancer discovery” (Ann Intern Med. 17 July 2012;157(2):137-138). But they didn’t really go after the specific numbers in the Task Force report. Maybe someone in the scholarly literature has done a detailed critique, but I have not seen it.

    • A just-published paper by Etzioni and various co-authors, in Annals of Internal Medicine, simulates the “Number Needed to Detect” to eliminate 1 death from prostate cancer under various screening strategies. I think this provides useful evidence relative to this debate over the likely magnitude of various outcomes from different screening strategies. “Comparative Effectiveness of Alternative Prostate-Specific Antigen-Based Prostate Cancer Screening Strategies,” by Etzioni, Gulati, and Gore, Annals of Internal Medicine 2013;158:145–153.