• No, physicians don’t understand screening statistics

    Awesome new study out in Annals of Internal Medicine. It’s entitled, “Do Physicians Understand Cancer Screening Statistics? A National Survey of Primary Care Physicians in the United States”. First, some background. Longtime readers of the blog will know that I’m particularly sensitive to the difference between survival rates and mortality rates. I have blogged on this many, many, many, many, many times. If you won’t read all those, at least read this one.

    So here we go:

    Background: Unlike reduced mortality rates, improved survival rates and increased early detection do not prove that cancer screening tests save lives. Nevertheless, these 2 statistics are often used to promote screening.

    Objective: To learn whether primary care physicians understand which statistics provide evidence about whether screening saves lives.

    Design: Parallel-group, randomized trial (randomization controlled for order effect only), conducted by Internet survey. (ClinicalTrials.gov registration number: NCT00981019)

    Setting: National sample of U.S. primary care physicians from a research panel maintained by Harris Interactive (79% cooperation rate).

    Participants: 297 physicians who practiced both inpatient and outpatient medicine were surveyed in 2010, and 115 physicians who practiced exclusively outpatient medicine were surveyed in 2011.

    Intervention: Physicians received scenarios about the effect of 2 hypothetical screening tests: The effect was described as improved 5-year survival and increased early detection in one scenario and as decreased cancer mortality and increased incidence in the other.

    Measurements: Physicians’ recommendation of screening and perception of its benefit in the scenarios and general knowledge of screening statistics.

    Let’s start with the theoretical problem. Almost half of the surveyed docs said that finding more cancer cases in screened people than in unscreened people proves that the screening test saves lives. That’s wrong. Preventing death in those people is what saves lives. But the physicians didn’t understand the difference. And roughly as many physicians thought that, in general, 5-year survival rates prove that screening saves lives as thought mortality rates do (76% versus 81%).

    But when confronted with data, their judgment got worse. The researchers provided two scenarios. In one, a screening test increased the five-year survival rate from 68% to 99%. In the other, mortality dropped from 2 per 1000 persons to 1.6 per 1000 persons. Physicians were then asked questions about those two tests. When given the data on increased 5-year survival rates (which are irrelevant), 69% of docs said they would “definitely recommend” the test. When given the mortality data (which are relevant), only 23% would “definitely recommend” it.
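    To make concrete how a five-year survival rate can balloon while mortality stays flat, here is a toy lead-time-bias sketch. The ages and numbers are my hypothetical illustration, not the study’s scenario:

```python
# Lead-time bias in miniature: every patient in this toy cohort dies of the
# cancer at age 70, screened or not. Screening only moves diagnosis earlier.
DEATH_AGE = 70

def five_year_survival(diagnosis_age):
    """Fraction of patients alive five years after diagnosis
    (0.0 or 1.0 here, since every toy patient is identical)."""
    return 1.0 if DEATH_AGE - diagnosis_age >= 5 else 0.0

# Unscreened: tumor found from symptoms at 67 -> dead 3 years later.
# Screened: tumor found at 60 -> "survives" 10 years after diagnosis.
print(five_year_survival(67))  # 0.0 -> 0% five-year survival
print(five_year_survival(60))  # 1.0 -> 100% five-year survival
# Mortality is identical in both groups: everyone still dies at age 70.
```

    Survival jumps from 0% to 100%, yet not a single death is prevented or delayed. That is exactly why a better survival rate, by itself, can’t prove that screening saves lives.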

    So basically, when it comes to saving lives, docs are three times more likely to recommend a screening test based on irrelevant data than they are to recommend it based on relevant data. I’m bracing myself for the hate mail, but this is part of the reason why I’m skeptical that just providing docs with more evidence will change the way they practice. Most docs just aren’t trained to understand this stuff.

    • Considering patient centered care, SDM, or even “free market HC,” “where decisions are between docs and patients”….

      The public and physicians themselves overestimate their abilities to tease out correct courses of action based on data. I don’t see the cycle breaking. If this was a household, the child would be going to mom or dad for help. For the profession, that would be AHRQ or PCORI.

      I wrote on a similar study, instead looking at absolute vs. relative risk. Docs don’t get it.
      http://blogs.hospitalmedicine.org/SHMPracticeManagementBlog/?p=2796

      Brad

    • No one understands statistics. What’s worse here is the doctors neither understand statistics nor even understand basic definitions.

      Do they teach any of this in medical school?

      • Yes, they do teach this in med school, but it doesn’t get reinforced nearly enough. What’s worse is that very little time in clinical education goes into teaching preventive health measures such as screening. Most time is spent on disease management and acute medical problems at tertiary care medical centers, where every dreadful disease winds up. If your training doesn’t happen in the real world, you’re not ready for the real world.

    • Dear Aaron,

      I’m not so sure that looking at 5-year survival rates (or survival times) is as “irrelevant” as you make it out to be. I agree that under the null hypothesis of no treatment effect, screening will inflate survival times, making it appear that patients who have been screened live longer than those who have not been screened, simply because the screening started the clock earlier. (The technical jargon for this is “lead-time bias”.) But it can also be the case that the treatment really does have an effect, and people live longer! The problem is that often both the null and the alternative hypotheses can plausibly explain the observed differences in survival. But that just means that interpretation of the data is problematic, not that it’s completely “irrelevant”. And of course this is not a problem with survival rates per se, but rather with the non-controlled study design — if there was no screening, there would be no lead-time bias, and the interpretation of the survival rates would be straightforward.

      I don’t agree that mortality rates are a superior measure. Suppose a treatment does not cure a disease, but does prolong life (this is the case with a lot of cancer). In that case, most people would perceive that this was beneficial — sure, they’d like a cure, but if that’s not available, they will settle for a little more life. But in this case the mortality rate would not show any improvement for patients who receive the treatment, since everyone eventually dies from the disease. This is a real defect for mortality rates — they are simply insensitive to such a benefit, and this isn’t because of some defect in the study design. What does capture this effect? Survival rates!
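      That argument fits in a two-line sketch (hypothetical numbers: a treatment that delays cancer death by three years but cures no one):

```python
# A treatment that postpones death without preventing it: lifetime
# disease-specific mortality stays 100%, but five-year survival improves.
def outcomes(years_from_diagnosis_to_death):
    five_year_survival = 1.0 if years_from_diagnosis_to_death >= 5 else 0.0
    lifetime_disease_mortality = 1.0  # everyone still dies of the disease
    return five_year_survival, lifetime_disease_mortality

print(outcomes(4))      # untreated: (0.0, 1.0)
print(outcomes(4 + 3))  # treated:   (1.0, 1.0) -- a real benefit the
                        # lifetime mortality figure never registers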

      Where I think we both agree is that evidence-based medicine is messy, difficult, and often unclear. That doesn’t mean that we shouldn’t do it — of course we should base clinical practice on empirical data! — but it’s not as simple as some naive proponents make it out to be. And working M.D.s in particular are ill-equipped to critique these issues.

      • First of all “irrelevant” was the word the authors used, so I used it, too. You are also adding in information to the study that wasn’t asked. This wasn’t about improving quality of life. It was “[t]o learn whether primary care physicians understand which statistics provide evidence about whether screening saves lives” (emphasis mine). Survival rates do not do that like mortality rates do. I’ve written on this pretty extensively. Follow the links!

        • Dear Aaron —

          I did read your links.

          At some point this becomes a meaningless exercise in semantics — what exactly does it mean to “save a life”? Everybody dies eventually of something, so the ultimate mortality rate is always 1. Usually, we use the term “saving a life” to mean that we prevented *immediate* death; for instance, if someone is drowning, then if we pull them out of the water, we’ve saved their life, even if next week they are killed in a car crash. So in that sense, yes, extending a life is indeed “saving” it: if Suzie Q has breast cancer and would die very soon without treatment, but after receiving treatment her cancer goes into remission, most people would say that the treatment “saved” her life. Whether or not the cancer recurs 5 years later, this time fatally, wouldn’t affect the judgement for most people that the first time around her life had indeed been “saved”.

          How exactly am I “adding in information to the study that wasn’t asked”?

    • I’d think you’d appreciate the following, from Ezra Klein’s blog, as well:

      http://www.washingtonpost.com/blogs/ezra-klein/post/the-risk-of-mortality-for-everyone–prophets-included–is-10/2011/08/25/gIQAogN1uR_blog.html

      Chi Pang Wen and colleagues claim that exercising for 15 min per day results in a 14% reduced risk of all-cause mortality (0.86, 95% CI 0.81–0.91)… This cannot be true, however, since the risk of mortality is an absolute.
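      For what it’s worth, the 14% is a relative reduction in the death rate over the follow-up period, not in anyone’s lifetime risk of dying (which is indeed 1). A quick sketch of the relative-versus-absolute distinction, assuming a 1% baseline annual death rate (my number, not theirs):

```python
# Translating a hazard ratio into relative and absolute terms.
baseline_annual_risk = 0.01   # assumed: 1% chance of dying in a given year
hazard_ratio = 0.86           # Wen et al.'s estimate for 15 min/day exercisers

exerciser_risk = baseline_annual_risk * hazard_ratio
relative_reduction = 1 - hazard_ratio                       # 0.14 -> "14%"
absolute_reduction = baseline_annual_risk - exerciser_risk  # 0.0014

print(f"{relative_reduction:.0%} relative = {absolute_reduction:.2%} absolute per year")
```

      The same 14% relative figure is a 0.14-percentage-point absolute change per year under that assumed baseline, which is why relative numbers alone so often mislead.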

    • This should be pretty easy to correct. The editorial boards of the major journals could address this in a series of editorials. It would probably also require that we address how our studies are funded.

      Steve

    • steve – You’re assuming that doctors read editorials in journals; unfortunately that’s not necessarily the case. Too many physicians function in isolation without reading journals or getting great continuing education.

      foonish – It’s not really about understanding statistics; it’s about understanding some basic epidemiologic principles and, as you say, definitions. And they don’t teach this stuff well in medical school. Though there is lip service to teaching ‘clinical reasoning’, ‘evidence based medicine’, and ‘statistics’, these all tend to get pretty superficial treatment. Often, instead of being taught important principles like these, students get too much formal statistics that they don’t need to know and forget almost instantly.

      And since I’m being pedantic: bulldog – remember that mortality is age specific. So there’s a reduction in mortality at any specific age. This effectively means that death is delayed.

    • Hey Aaron—

      It’s always interesting to read your thoughts. I will freely admit to being woefully undereducated with regard to statistical analysis. Every board exam I have ever taken will have the obligatory “positive predictive value” question which will evoke a guess on my part before I move on.

      Clinical docs are always battling to get accurate information. Our patients are reading the lay press, and pharma will always help to create a need where one didn’t exist before. We are now experiencing this with “low-T”: the pharmaceutical industry creating a market which it can then supply. But I digress.

      The main problem, in my opinion, is a major lack of consensus recommendations. In an ideal world we could all do our own due diligence, but busy clinical docs need a reliable source of information–data that has already been distilled. Or maybe I should just call you…

      Who gets a PSA and DRE? Who should I ask? The American Urological Association would have me doing those on healthy 40-year-olds.

      Just my thoughts. Hope you’re well.

      Jon

      • Well, of course I should always be the source of truth! :)

        It’s a fair point, and one we discuss all the time. I tend to support those groups with maximum transparency and explanation, like the USPSTF.

    • Hi Aaron,
      Yes, it’s deplorable how innumerate doctors are. One has to have a basic understanding of just a few numbers and numerical concepts in order to practice medicine. Unfortunately, those numbers and concepts are nearly all from the world of statistics. But let’s not go crazy. The stuff doctors need is not difficult; it just needs to be taught well and reiterated at every step of medical education.
      I know because that’s what I do for a living. I have taught and directed an entire pre-clinical curriculum in evidence-based medicine including the basic numbers one has to understand, and I do the same now at the residency level. Doctors get suckered into believing in interventions and doing them on patients by “thought leaders” and drug salesfolks alike, almost always based on disease-marker numbers rather than real patient-oriented outcomes and based upon relative rather than absolute patient benefit numbers. I think the newer generation of doctors is getting formal training in medical numeracy and should be better at this stuff, although not perfect.
      — Josh S.
      PS I went to pull the actual article from the Annals of Internal Medicine, but only the abstract is available. That’s another important tenet of evidence based medicine: when something matters, read it yourself and do your own thinking.

    • The researchers should give the same survey to a group of purported physician experts who do systematic reviews or make recommendations, and see whether their understanding is any better or worse than that of the general population of physicians.

    • Awesome post. No hate mail from me. Just help me out if ever (or when) I mess up.

      Thanks for sharing your awesome brain with us all here.