These results are not causally interpretable, prostate cancer edition. Tell a friend.

A new paper in The Lancet Oncology examines complications from prostate cancer treatment other than incontinence and impotence. (About those, look here.) The opening paragraph includes,

Patients want to know the frequencies and severities of various complications associated with different treatments.

Indeed they do. But for those frequencies and severities to be relevant, they have to be causally driven by treatment, not merely associated with it. This is crucial.

With a “population-based retrospective cohort study” that included the records of over 32,000 patients (i.e., nothing like a randomized controlled trial and not in any way exploiting natural randomness), the authors found that

Patients who were given radiotherapy had higher incidence of complications for hospital admissions, rectal or anal procedures, open surgical procedures, and secondary malignancies at 5 years than did those who underwent surgery (adjusted hazard ratios 2.08–10.8, p<0.0001). However, the number of urological procedures was lower in the radiotherapy than in the surgery group (adjusted hazard ratio 0.66, 95% CI 0.63–0.69; p<0.0001).

So, if you were deciding between radiotherapy and radical prostatectomy, could you use this information to guide your choice? I don’t see how. Given the study design, we cannot reasonably attribute the results to treatment with any certainty. They could be driven largely by who chose each type of treatment (i.e., selection). This is true even if the results are adjusted for age and comorbidities (they were). There’s plenty of room for omitted variable bias here, including bias from factors the researchers cannot observe in administrative data (disease severity, for example, among countless others).

This is precisely why exploiting randomness, whether purposeful or natural, is so important in research design. If people aren’t randomly assigned (again, on purpose or through some fluke of nature or institutions), there’s no good basis for a causal interpretation. Random assignment balances confounders, observed and unobserved alike, across treatment groups, which is what shuts down omitted variable bias. Lacking anything like random assignment, the results of this study cannot reasonably be interpreted causally.
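To see the mechanics, here’s a minimal, entirely hypothetical simulation sketch (made-up numbers, not the paper’s data): severity is unobserved, it drives both who ends up with radiotherapy and who has complications, and the treatment itself does nothing. The naive observational comparison still makes radiotherapy look harmful; a coin-flip assignment does not.

```python
# Hypothetical sketch of selection bias vs. random assignment.
# Numbers and variable names are illustrative assumptions, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved severity: affects both treatment choice and complication risk.
severity = rng.normal(size=n)

# Observational world: sicker patients are more likely to get radiotherapy.
p_radio = 1 / (1 + np.exp(-severity))      # selection on unobserved severity
radio_obs = rng.binomial(1, p_radio)

# True model: treatment has NO effect; only severity drives complications.
def complication(treatment, severity):
    p = 1 / (1 + np.exp(-(-2 + 1.5 * severity + 0.0 * treatment)))
    return rng.binomial(1, p)

y_obs = complication(radio_obs, severity)

# Naive observational comparison: risk "associated with" radiotherapy.
risk_radio = y_obs[radio_obs == 1].mean()
risk_surg = y_obs[radio_obs == 0].mean()
print(f"Observational risk ratio: {risk_radio / risk_surg:.2f}")  # well above 1

# Randomized world: coin-flip assignment balances severity by design.
radio_rct = rng.binomial(1, 0.5, size=n)
y_rct = complication(radio_rct, severity)
risk_radio_rct = y_rct[radio_rct == 1].mean()
risk_surg_rct = y_rct[radio_rct == 0].mean()
print(f"Randomized risk ratio:    {risk_radio_rct / risk_surg_rct:.2f}")  # ~1
```

Note that adjusting the observational arm for age and comorbidities wouldn’t rescue it here, because severity never appears in the data. That is the omitted variable problem in miniature.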

The authors subtly make this point in their concluding discussion. In an accompanying commentary, Michael Eble makes it three times. Regardless of how explicitly it’s made in a paper or commentary, among the first questions you should ask about any study are: “What is the basis for interpreting these results causally? What are the threats to doing so?” Tell a friend.

@afrakt
