Cancer Journal: WTF, I have a lung tumour?

The Cancer Journal is the story of my experience as a throat cancer patient during the COVID-19 pandemic. I finished my radiation sessions back on September 18, 2020. Well, I hear you ask, are you cured?

Before treatment started, I had imagined that I would get regular computerized tomography (CT) scans. But my first follow-up CT was scheduled for December 21, 2020, more than two months after my last radiation session. It turns out that it is not as easy as you’d think to determine whether radiation has succeeded in destroying a tumour. The reason is that radiation traumatizes your flesh, leaving it burned and swollen. It’s meant to kill the tumour while not quite killing the tissue surrounding it. The problem is that the swelling of the burned tissue makes CT images difficult to read. So you have to wait for the flesh to cool before you take the image.

December 21 came, finally, and I had the image taken. I didn’t hear anything for a considerable time. But, first, it was the holidays, and second, COVID has put exceptional stress on provincial hospitals. So I get it: this is not a time when you can expect quick responses.

On January 11, the CT report appeared in MyChart, the patient portal to the hospital’s electronic health record (EHR) system.

[Image: the login page for MyChart, the patient portal to the EHR at my hospital.]

Here’s how the report begins:

CT SCAN OF THE NECK

CLINICAL HISTORY: base of tongue cancer post XRT [x-ray therapy] response…

COMPARISON: Compared to prior CT dated July 3, 2020.

TECHNIQUE: 2.5 mm helical sections through the neck with administration of intravenous contrast.

FINDINGS: There is [sic] extensive posttreatment changes noted in the neck soft tissue. There is almost complete resolution of the primary right lung base tumor with small residual hyperdense area measuring 10 x 12 mm…

Wait a freaking minute. Let’s read that again.

There is almost complete resolution of the primary right lung base tumor…

I’ve been in treatment for throat cancer. Who said anything about a LUNG tumour? But the report referred twice to a lung tumour.

(By the way, the above was how I reacted to this report. My wife’s reaction — she is a physician — triggered seismographs across Eastern Canada. In a few years, alien astrophysicists on nearby stars will be perplexed by tremors in the fabric of space-time that register in their gravitational wave detectors.)

I wrote and called my oncologist. No response. Eventually, I called Patient Relations, the people who used to be called Patient Advocates. They got hold of the radiologist who had read the image. He was, the patient advocate told me, deeply sorry. What he had seen was a residual tumour mass in my tongue and throat. Apparently, the speech recognition application that transcribed his dictated report heard ‘tongue’ as ‘lung.’ I do not have a lung tumour.

A day later, I got a call from my oncologist. He, too, apologized. Read correctly, the CT was mostly good news, although it was not clear enough to rule out residual cancer. He promised to schedule a positron emission tomography (PET) scan to better determine whether the remaining tumour tissue is dead, or alive and still dangerous.

What can we learn from this misadventure? It confirmed my impression that the health care system has yet to establish an effective way for caregivers and patients to communicate other than through in-person, video, or telephone visits. I have not been successful in getting questions answered through the Cancer Centre’s Patient Support Line. And so far, MyChart has mostly wasted my time or misled me.

However, global criticisms like those aren’t that helpful. Part of building an effective caregiver-patient communication system is identifying specific problems and fixing them. How do we do this?

First, let’s acknowledge that the mistranscription of my CT report was a significant error. It stressed the hell out of us and, worse, it might have misled a caregiver who needed to learn about my health from my EHR.

But errors happen, and a ‘lung’ for ‘tongue’ confusion is understandable. One can assume that continued progress in speech recognition will reduce such errors. Still, some errors are inevitable, so what processes can be put in place to catch them?

There’s a literature on transcription errors in radiology reports and an even larger one on medication errors in prescriptions. I hope those literatures describe ways to prevent and correct errors without forcing radiologists or oncologists to spend additional time on documentation. Hiring scores of human proofreaders is not a solution that would scale, either.

The long-term solution may be an automated system that can efficiently screen medical communications for logical coherence and consistency with data in the EHR. I’m struck that when I read my CT report, I saw immediately that the reference to ‘lung’ was anomalous. If a layperson can see an anomaly, could we train an AI to catch one? Don’t dismiss the thought. I certainly don’t want a robot that autocorrects CT reports. But I do want one that can register surprise when something unexpected happens.
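To make that concrete, here is a minimal sketch, in Python, of the kind of consistency check I have in mind. Everything in it is a stand-in: the tiny site-term dictionary, the sample problem list, and the flag_anomalies function are illustrations of the idea, not anything a real hospital runs.

# Hypothetical sketch: flag anatomical terms in a dictated report that do not
# match any site on the patient's problem list in the EHR.

# Toy dictionary mapping anatomical sites to words that might appear in a report.
SITE_TERMS = {
    "base of tongue": {"tongue", "oropharynx", "oropharyngeal"},
    "lung": {"lung", "pulmonary", "lobe"},
    "larynx": {"larynx", "laryngeal", "glottic"},
}

def flag_anomalies(report_text, problem_list):
    """Return messages for anatomical terms whose site is absent from the problem list."""
    words = {w.strip(".,;:").lower() for w in report_text.split()}
    expected_sites = {site for site in problem_list if site in SITE_TERMS}
    flags = []
    for site, terms in SITE_TERMS.items():
        mentioned = words & terms
        if mentioned and site not in expected_sites:
            flags.append(
                f"report mentions {sorted(mentioned)} but '{site}' is not on the problem list"
            )
    return flags

if __name__ == "__main__":
    report = ("There is almost complete resolution of the primary right lung "
              "base tumor with small residual hyperdense area.")
    problems = ["base of tongue"]  # what the EHR says this patient is being treated for
    for flag in flag_anomalies(report, problems):
        print("SURPRISE:", flag)   # surface the anomaly for review; never autocorrect

A production version would need a real clinical vocabulary such as SNOMED CT instead of a toy word list, and it should only surface the discrepancy for a human to review. That is the point: register surprise, don’t autocorrect.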


  • To read the Cancer Journal from the start, please begin here.
  • The next post, on what ‘health’ means, is here.
  • A table of contents for the Cancer Journal is here.
  • To get the Cancer Journal in email, go here.

@Bill_Gardner
