The following originally appeared on The Upshot (copyright 2018, The New York Times Company).
Promising health studies often don’t pan out in reality. The reasons are many. Research participants often differ from typical patients; their treatment doesn’t match real-world practice; and researchers can devote resources unavailable in most physicians’ offices.
Moreover, most studies, even the gold standard of randomized controlled trials, focus squarely on causality. They are set up to see if a treatment will work in optimal conditions, what scientists call efficacy. They’re “explanatory.”
Efficacy is important. But what we also need are studies that test whether a treatment will work in the real world — whether it has effectiveness.
Pragmatic trial design was described more than 50 years ago in the Journal of Chronic Diseases (in a paper reprinted nine years ago in the Journal of Clinical Epidemiology).
A pragmatic trial seeks to determine if, and how, an intervention might work in practice, where decisions are more complicated than in a strictly controlled clinical trial.
Studies are almost never purely pragmatic or explanatory: They fall on a continuum. A recent tool, known as PRECIS-2, can help researchers devise trials that lean one way or the other. It scores a trial on nine domains — eligibility criteria, recruitment, setting, organization, flexibility (delivery), flexibility (adherence), follow-up, primary outcome and primary analysis — each on a scale from 1 (explanatory, or “ideal conditions”) to 5 (pragmatic, or “real world”).
Why do we need all this? Let’s take chronic pain as an example. Those who suffer from it want relief, and they want it now. Because people know that opioids exist, it’s hard to get them into a trial where they might take less powerful pain medications, like acetaminophen or ibuprofen. It’s also hard to do the long-term studies we need, because patients often want to try other options if the first one doesn’t work.
In a purely explanatory randomized controlled trial, patients would be assigned to one medication or the other, in a blinded fashion. They would be required to take their assigned option (and most likely only their assigned option) for the entire study period. If they failed to stay on protocol, they might be deemed a “failure” in the study.
Under these conditions, it’s hard to get patients to participate, and doctors as well. They know that opioids are thought to be stronger, and many already prescribe them.
The Strategies for Prescribing Analgesics Comparative Effectiveness (SPACE) trial sought to overcome these barriers with a pragmatic design. To be eligible for this study, patients had to have chronic back, knee or hip pain. They also needed to be on at least one analgesic drug already (and not be “all better” on that drug), so that opioid therapy might be considered an appropriate option. But patients could not already have been prescribed opioids for chronic use.
Participants were randomly assigned to one of two arms. Both involved stepwise progression from less to more potent medications. One arm involved opioid medications (a progression from hydrocodone/acetaminophen to sustained release morphine to fentanyl patches, for example) and the other involved non-opioid medications (a progression from ibuprofen to nortriptyline to tramadol, for example).
The medications were adjusted based on patient preferences and responses. Providers could switch patients to different drugs at the same level; change the dose or frequency of doses; add other drugs to manage side effects; and move patients up or down levels of intensity. They were also allowed to use any nonpharmacological pain therapies they liked.
That’s how actual care occurs. This way, you can measure how treating someone with opioids might compare with treating someone without opioids for a sustained period.
The study followed 240 patients for 12 months. Pain-related function, or how much pain affected their activity, was no different between the two groups. Pain intensity was actually better in the non-opioid group, and adverse symptoms were lower in that group as well.
Purists will argue, correctly, that this doesn’t prove that any one medication is better than another, or that some specific opioid might not outperform some specific non-opioid drug for certain conditions in certain settings for certain patients. They’re right. But what this study does prove is that you can treat patients with non-opioid analgesics in clinical practice just as well as you can with opioids to control pain and improve function.
As a side benefit, patients don’t become addicted to opioids.
Other pragmatic trials have also been illuminating. A trial published in the Lancet in 2012 showed that although antimicrobial catheters reduced bacterial contamination of urine (efficacy), they failed to reduce the rates of symptomatic catheter-associated urinary tract infections in routine practice (effectiveness).
A 2011 trial published in The New England Journal of Medicine showed that a variety of strategies to treat asthma were roughly equivalent, so decisions could be tailored to patient preferences. A very recent study published in JAMA showed that a home-based electrocardiogram patch led to higher rates of diagnosis of atrial fibrillation in a high-risk population.
Pragmatic trials aren’t perfect for every research question. They’re also hard to do. One challenge is funding. Although drug companies are willing to pay for randomized controlled trials to prove efficacy, it’s not clear who is going to finance studies like these. They use lots of different drugs — which is what happens in the real world — and no company wants to foot the bill for other companies’ products to be evaluated. Certainly no opioid-related companies would want to pay for this trial.
Moreover, studies like these are difficult to design and expensive. They’re logistically much more complicated than simple head-to-head trials. They involve complex protocols, significant education of providers, and lots of oversight.
Randomized controlled trials are great for certain things. They absolutely have their place in determining efficacy and causality. But sometimes pragmatic trials are better. If we want to see improvements in care in the real world, not just the lab, we may need to push for more of them.