No, physicians don’t understand screening statistics

Awesome new study out in Annals of Internal Medicine. It’s entitled, “Do Physicians Understand Cancer Screening Statistics? A National Survey of Primary Care Physicians in the United States”. First, some background. Longtime readers of the blog will know that I’m particularly sensitive to the difference between survival rates and mortality rates. I have blogged on this many, many, many, many, many times. If you won’t read all those, at least read this one.

So here we go:

Background: Unlike reduced mortality rates, improved survival rates and increased early detection do not prove that cancer screening tests save lives. Nevertheless, these 2 statistics are often used to promote screening.

Objective: To learn whether primary care physicians understand which statistics provide evidence about whether screening saves lives.

Design: Parallel-group, randomized trial (randomization controlled for order effect only), conducted by Internet survey. (ClinicalTrials.gov registration number: NCT00981019)

Setting: National sample of U.S. primary care physicians from a research panel maintained by Harris Interactive (79% cooperation rate).

Participants: 297 physicians who practiced both inpatient and outpatient medicine were surveyed in 2010, and 115 physicians who practiced exclusively outpatient medicine were surveyed in 2011.

Intervention: Physicians received scenarios about the effect of 2 hypothetical screening tests: The effect was described as improved 5-year survival and increased early detection in one scenario and as decreased cancer mortality and increased incidence in the other.

Measurements: Physicians’ recommendation of screening and perception of its benefit in the scenarios and general knowledge of screening statistics.

Let’s start with the theoretical problem. Almost half of the surveyed docs said that finding more cancer cases in screened people than in unscreened people is proof that the screening test saves lives. That’s wrong. Screening can turn up plenty of cancers that would never have caused symptoms or death, so detecting more cases tells you nothing by itself; preventing deaths in those people is what saves lives. But the physicians didn’t understand the difference. About as many thought that improved 5-year survival rates prove that screening saves lives as thought that reduced mortality rates do (76% versus 81%).
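To see why a better 5-year survival rate can be a statistical illusion, think about lead-time bias. If screening only moves the date of diagnosis earlier, every patient still dies on the same day they would have anyway, yet more of them are alive five years after diagnosis. Here’s a minimal Python sketch of that arithmetic; the ages are made up purely for illustration:

```python
# Lead-time bias: earlier diagnosis inflates 5-year survival even when
# nobody lives a day longer. All numbers below are hypothetical.

death_age = 70  # the patient dies of cancer at age 70 either way

# Without screening, symptoms lead to diagnosis at age 67.
# With screening, the same cancer is found three years earlier, at age 64.
diagnosis_age_unscreened = 67
diagnosis_age_screened = 64

def five_year_survival(diagnosis_age, death_age):
    """True if the patient is still alive 5 years after diagnosis."""
    return (death_age - diagnosis_age) >= 5

print(five_year_survival(diagnosis_age_unscreened, death_age))  # False -> counts as a 5-year "failure"
print(five_year_survival(diagnosis_age_screened, death_age))    # True  -> counts as a 5-year "success"

# Either way the patient dies at 70: mortality is unchanged,
# but the 5-year survival statistic looks far better with screening.
```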

But when confronted with data, their judgment got worse. The researchers provided two scenarios. In one, a screening test increased the 5-year survival rate from 68% to 99%. In the other, screening dropped cancer mortality from 2 per 1000 persons to 1.6 per 1000 persons. Physicians were then asked about those two tests. Given the improved 5-year survival (which is irrelevant to whether lives are saved), 69% of docs said they would “definitely recommend” the test. Given the mortality data (which is the relevant evidence), only 23% would “definitely recommend” the test.
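The mortality numbers, by contrast, can be converted directly into a statement about benefit. Here’s a quick back-of-the-envelope sketch in Python using the scenario’s figures (the benefit applies over whatever period those rates cover; I’m just doing the conversion):

```python
# Absolute risk reduction and number needed to screen, using the
# mortality numbers from the scenario (2 vs. 1.6 deaths per 1000 people).
deaths_per_1000_unscreened = 2.0
deaths_per_1000_screened = 1.6

arr = (deaths_per_1000_unscreened - deaths_per_1000_screened) / 1000  # 0.0004
nns = 1 / arr  # number needed to screen to prevent one cancer death

print(f"Absolute risk reduction: {arr * 1000:.1f} per 1000")  # 0.4 per 1000
print(f"Number needed to screen: {nns:.0f}")                  # about 2500
```

That “screen about 2500 people to prevent one death” figure is something a doc can actually weigh against the harms of screening. A jump in 5-year survival can’t be converted into anything like it.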

So basically, when it comes to saving lives, docs are three times more likely to recommend a screening test based on irrelevant data than they are to recommend it based on relevant data. I’m bracing myself for the hate mail, but this is part of the reason why I’m skeptical that just providing docs with more evidence will change the way they practice. Most docs just aren’t trained to understand this stuff.
