• The rise and fall of bone marrow transplantation for breast cancer, a tragic success story

    In 1992, as an Applied and Engineering Physics major at Cornell, I had barely given any consideration to health care or the US health system. If you had asked me at the time what proportion of health care delivered is grounded in evidence, I am sure I’d have said “most” or “nearly all” or possibly even “all”. I would bet the vast majority of Americans think that today. If so, they’re wrong, just as I would have been in 1992.

    Though some areas of medicine had experienced significant, evidence-driven strides by 1992 (e.g., the treatment of cardiovascular disease), by that year the idea of systematically applying evidence in practice was still a relatively novel one. In a JAMA article that year, Guyatt et al. wrote,

    A new paradigm for medical practice is emerging. Evidence-based medicine de-emphasizes intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research. [...]

    We call the new paradigm “evidence-based medicine.”

    Twenty years on, it’s astonishing how little progress we’ve made. Could we have done better? Should we have? Even our relative successes at improving the efficiency and effectiveness of US medical practice through research leave a lot of room for improvement.

    Consider, for example, autologous bone marrow transplantation (ABMT) for treating breast cancer. In a scholarly article as riveting as they get, Welch and Mogielnicki (BMJ, 2002) characterize it as

    a story of young women dying from aggressive disease, well meaning physicians trying to be equally aggressive in treating it, and lawyers arguing that insurers should pay the bill. It is also a story of professional interests, weak research, financial gain, politics, and fraud.

    ABMT was used to “rescue” breast cancer patients after they had been treated with otherwise lethal doses of chemotherapy or radiation. By the late 1980s, it was being touted by some as an effective and important new breast cancer therapy. However, researchers’ enthusiastic public comments to the media were not supported by the evidence, since the studies conducted by that time lacked control groups.

    The media reports had a major impact on patients, however. Armed not with compelling evidence but with the hope those reports inspired, patients demanded that insurers cover ABMT for breast cancer.

    [In 1990] Pamela Pirozzi had been advised that her “best chance of surviving more than a year” was a transplant, but her insurance company had refused coverage, stating the procedure was still “experimental.” Armed with a “list of insurance companies in other states that, when challenged, have agreed to pay for the procedure,” the Pirozzis sued.

    A federal judge ruled in her favour: “To require that the plaintiff or other plan members wait until somebody chooses to present statistical proof . . . that would satisfy all the experts means that plan members would be doomed to receive medical procedures that are not state of the art.” The same month another federal judge ordered a Massachusetts insurer to pay for a Boston woman to receive a transplant in North Carolina. [...]

    By 1990 one group of insurers had been sued by over a dozen patients and consumer advocacy groups to cover the treatment.

    One plaintiff won the largest award ($8.1M) “ever levied against an insurance company for refusing to provide health coverage benefits.” In 1994, under pressure from Congress, the US Office of Personnel Management ordered all plans offering health benefits to federal employees to include ABMT for breast cancer among the services covered.

    In 1992 a JAMA article examined the cost effectiveness of ABMT for breast cancer even though its effectiveness had not yet been established. The authors reported the cost as $115,800 per life year, noting that the relatively high price tag might be too much for society to bear. Still, the fact that costs were being debated before effectiveness was established gave the impression that coverage was more a matter of how to pay for the therapy than whether it should be offered at all.

    Finally, in 1995, WR Bezwoda and colleagues reported the results of the first randomized trial. They were impressive. Over half the women receiving ABMT had no subsequent evidence of tumor, compared to only 4% in the control group. Survival time was double (90 weeks vs 45) for those in the treatment group. However, four years later, only Bezwoda’s group could reproduce these results. Four other clinical trials contradicted them. A review team identified problems with Bezwoda’s protocol and was denied access to control patients. “The vice president of the society made the logical inference to the Guardian (London) ‘You could conclude that they might not exist.'”

    By 2000, in light of clinical trials showing it to be no better than alternatives, ABMT for breast cancer was regarded as ineffective and worthy of abandonment. Welch and Mogielnicki conclude,

    For over 10 years desperately ill women had sought bone marrow transplantation as their best chance for survival. Many physicians encouraged this judgment. Fearing bad publicity and lawsuits insurers reluctantly agreed to pay the considerable charges. A strong presumption of benefit and equally strong financial interests impeded progress towards finding an answer.

    The obvious lesson from these events was articulated in the New York Times by two of the treatment’s most visible critics. “As a society we have to accept that rigorous evaluation of a new treatment is essential . . . Skipping this step may seem like a compassionate act, but it can have devastating consequences.”

    There are other lessons to be drawn from the ABMT experience. Welch and Mogielnicki list them in their conclusion:

    • “It is premature to raise the question of cost effectiveness when effectiveness is unknown.”
    • “Establishing what is ‘experimental’ is an important role for government.”
    • “Public officials should not mandate coverage in the absence of clear data.”
    • “The news media watchdog role should be extended to health care.”

    To these I would add an additional lesson. As tragic as the ABMT/breast cancer tale is, it’s actually a relative success compared with other harmful aspects of health care delivery that continue despite the fact that we know them to be harmful. Some of these are very simple, like the lack of proper hand washing, the failure to administer aspirin and a beta-blocker to acute myocardial infarction (AMI) patients, or the over-provision of tests that leads to unnecessary and dangerous procedures. In other cases, we are doing and paying for things whose benefits we do not know. Some of them are likely harmful and many are costly, like providing angioplasty to heart attack patients more than a day after onset.

    In the case of ABMT for breast cancer, at least we did the science and, eventually, science worked. It worked imperfectly and too slowly, and for that many were harmed. Yet, ultimately, science prevailed. That’s more than we can say for many other therapies. Evidence-based medicine may be 20 years old, but it is still in its infancy. I would have been shocked to know this in 1992, and, quite frankly, I’m shocked to know it today. For what we spend on health care and for what we expect it to do, we ought to be employing every weapon at our disposal to do it better. That we are not is one of the great tragedies of our time.

    AF

    • Austin, meet Aaron:

      “We cannot, as a [medical] profession continue to think that we are immune to conflicts of interest (and yet want to hide them). We cannot, as a profession, not be truthful with those who entrust us with their care.”

      The juxtaposition of these posts was accidental, right?

    • I’m a little puzzled by this statement:

      “Evidence-based medicine may be 20 years old, but it is still in its infancy.”

      Huh? The results from the first randomized clinical trial were published in 1948, and physicians have been informally (and very imperfectly) trying to justify their procedures empirically for hundreds of years. What exactly was it about 1992 that constituted the start of evidence-based medicine?

      • I did not take it to mean that there has been no evidence in medicine. That would be silly. Rather, it’s a question of how the field is oriented. Is it an eminence- or evidence-based orientation? Though I cannot speak with first-hand knowledge, I’m willing to believe that 20 and more years ago it was far more eminence-based. Given what I’ve seen today, I’m willing to believe it is still true, but that evidence can and is gradually making a difference.

        Any docs out there want to comment? (Not to imply Theodore is not one, but I cannot tell from his comment. I have good evidence I am not one.)

        • From my POV (I am a doc), Austin is largely correct. An awful lot of medicine was practiced based upon a combination of tradition and institutional preference. It is changing, and I think it is probably better now than Austin states. Recommendations for therapies now come with very carefully stated levels of evidence. There is more emphasis on following guidelines, rather than intuition or personal experience. Below is the system generally used to rate the level of evidence we see in journals today.

          Level I: Evidence obtained from at least one properly designed randomized controlled trial.
          Level II-1: Evidence obtained from well-designed controlled trials without randomization.
          Level II-2: Evidence obtained from well-designed cohort or case-control analytic studies, preferably from more than one center or research group.
          Level II-3: Evidence obtained from multiple time series with or without the intervention. Dramatic results in uncontrolled trials might also be regarded as this type of evidence.
          Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.

          Steve

          • I do hope it is better than I state, only I’m not stating it so much as reporting and reacting to the evidence I have, for example, here: http://theincidentaleconomist.com/wordpress/dearth-of-evidence/

            Perhaps that’s dramatically better than decades ago. I’m very willing to believe it is. But it gives the impression that perhaps only half of medical care is driven by a solid body of evidence.

            What would be objective measures of progress on this front? What about reduction over time in variation in utilization patterns, controlling for health risk? Is there evidence of that? I confess I don’t know, but I know who to ask.

              Good point. I am thinking back to the 70s and 80s. I also think it is good that we are much more likely to know when we are working without especially good evidence. When guidelines and protocols are published now, they nearly all include an assessment of the quality of the evidence used to make them. We are more likely to know when we are practicing with good evidence and when we are not. It is also easier to gain access to our evidence, since people publish books that deal exclusively with evidence-based medicine.

              Steve

            • On geographic variation over time, check the charts in this PDF http://www.cbo.gov/ftpdocs/89xx/doc8972/02-15-GeogHealth.pdf . In time, I’ll take a closer look at it to see if we can conclude anything from it.

    • 1) There’s a move to identify low-value services and procedures. So far oncologists have identified five changes in attitudes and five changes in behaviors they can make to stop offering services that are not backed by evidence. Link to the initial article is here http://managinghealthcarecosts.blogspot.com/2011/06/changes-oncologists-could-make-that.html. Internists identified 37 diagnostic procedures that, based on evidence, should NOT be provided. Some details are here http://managinghealthcarecosts.blogspot.com/2012/01/internists-step-up-to-plate-and.html although the full article is behind a paywall. The impact of the internists’ recommendations on practice will be easier to measure.

      2) Austin – when you review variation, I hope you’ll review the illuminating (and entertaining) exchange between Richard Cooper and Dartmouth researchers in Health Affairs.
      http://content.healthaffairs.org/content/28/1/w91.abstract
      http://content.healthaffairs.org/content/28/1/w103.abstract
      http://content.healthaffairs.org/content/28/1/w116.abstract
      http://content.healthaffairs.org/content/28/1/w119.abstract
      http://content.healthaffairs.org/content/28/1/w124.extract

      There’s also a good review of why Medicare data and non-Medicare data diverge
      http://content.healthaffairs.org/content/29/12/2302.abstract

      And last winter the IOM weighed in
      My comments: http://managinghealthcarecosts.blogspot.com/2011/03/variation-redux-iom-weighs-in.html
      The IOM report: http://www.iom.edu/Activities/HealthServices/GeographicVariation/Data-Resources.aspx

    • I am a lucky recipient of an ABMT in 1992. I had recurrent breast cancer and my Dr. fought for me to have this done. It was done at Wilford Hall on Lackland Air Force Base. I was followed via questionnaire for many years, but I have not heard anything for 4-5 years. I would like to know how others from that period of time have done. Also curious if others have had the side effects that I am experiencing. Anyone know what direction to point me to?

      any help is appreciated,

      Laura

    • Something concerns me a lot about EBM: the pharmaceutical industry is the only party capable of paying for a big multicenter, randomized, well-controlled trial. But do they have their own agenda? Is their design unbiased? Do they make all of the research public? Do the researchers and consultants receive financial support from the industry? Is there no chance that we are building a well-elaborated but biased body of “evidence”?