Blindly applying cost-effectiveness to coverage decisions is dumb. That’s why nobody does it.

As Aaron wrote in an Upshot post,

In the 1990s, Oregon’s Medicaid program began using a system in which 688 procedures were ranked according to their cost effectiveness, and only the first 568 were covered. Doing so freed up enough money to cover many more people who were previously uninsured.

But the plan hit a snag in 2008 when a woman with recurrent lung cancer was denied a drug that cost $4,000 a month because the proven benefits were not enough to warrant the costs. The national backlash to this illuminated our collective difficulty in discussing the fact that some treatments might not be worth the money. The Oregon health plan made things worse in this case, however, by offering to cover drugs for the woman’s physician-assisted suicide, if she wanted it. Even supporters of the plan found the optics of this decision difficult to accept.

This exemplifies why cost-effectiveness shouldn’t be the sole arbiter of coverage decisions. Despite how this story played out, it’s not the sole arbiter in Oregon. And, contrary to what people think and say, that isn’t the case in the UK either. Nor is it how the Institute for Clinical and Economic Review (ICER) in the US operates. And, it’s not what Amitabh Chandra, Nick Bagley, and I advocated in our paper.

[A cost-effectiveness] threshold need not be hard and fast across treatments. The clinical needs of particular subgroups, together with other ethical considerations—such as whether the treatment is for an underserved population or in an emerging, high-need area—might counsel for higher or lower thresholds in particular cases.

But back to Oregon: a 2001 paper by Jonathan Oberlander, Theodore Marmor, and Lawrence Jacobs explains what the state actually implemented.

Through a process of community meetings, public opinion surveys on quality of life preferences, cost–benefits analyses and medical outcomes research, the commission then ranked these condition/treatment pairs according to their “net benefit.” These rankings were intended to reflect community priorities regarding different medical conditions and services, physicians’ opinions on the value of clinical procedures and objective data on the effectiveness of various treatment outcomes. The list itself was meant to create an objective and scientific vehicle for setting priorities for medical spending. The initial incarnation of the rankings was generated by a mathematical formula that integrated the data from clinicians, the public and outcomes research. Future reorderings and additions of services were to be incorporated into the list on the basis of that formula. The Oregon approach to rationing, which simultaneously drew on public preferences and cost–benefit analyses, thus represented an unusual marriage of health services research and deliberative democracy.

(More about Oregon’s approach and its evolution here.)

So, yes, the idea was to come up with a list and to draw a line, covering only more highly valued services “above the line” and not covering those “below the line.” This application of a “mathematical formula” that “integrated data” sounds very cold and bureaucratic. But the process included pathways for other criteria to influence coverage decisions too: public input that solicited community priorities and physicians’ opinions, for example.
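To make that mechanism concrete, here’s a minimal, purely illustrative sketch in Python. It is not Oregon’s actual formula; the Pair class, the draw_the_line function, and all of the scores, costs, and budget figures are hypothetical. It just shows the general idea: rank condition/treatment pairs by a composite net-benefit score, cover from the top of the list until the budget runs out, and leave room for some pairs to be moved over the line by hand.

```python
# Illustrative sketch only -- not Oregon's actual formula. All names,
# scores, costs, and the budget are hypothetical.
from dataclasses import dataclass


@dataclass
class Pair:
    name: str           # condition/treatment pair
    net_benefit: float  # composite score (outcomes data, public input, clinician opinion)
    cost: float         # expected cost of covering the pair


def draw_the_line(pairs, budget, hand_adjusted=frozenset()):
    """Cover pairs from the top of the ranking until the budget runs out.
    Pairs named in hand_adjusted are covered regardless of where they rank,
    which is what makes the line 'fuzzy'."""
    ranked = sorted(pairs, key=lambda p: p.net_benefit, reverse=True)
    covered, spent = [], 0.0
    for p in ranked:
        if p.name in hand_adjusted or spent + p.cost <= budget:
            covered.append(p)
            spent += p.cost
    return covered


# Hypothetical example: with a budget of 200, the low-scoring, high-cost
# pair falls below the line unless it is moved over by hand.
pairs = [
    Pair("appendicitis / appendectomy", net_benefit=9.5, cost=120.0),
    Pair("strep throat / antibiotics", net_benefit=8.7, cost=15.0),
    Pair("recurrent cancer / third-line drug", net_benefit=2.1, cost=400.0),
]
print([p.name for p in draw_the_line(pairs, budget=200.0)])
```

Again, this is a stand-in for the general mechanism, not the commission’s methodology; the real ranking drew on the survey, clinical, and outcomes inputs described in the quote above.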

Guess what? Ultimately, every coverage decision process in America includes such pathways. And every coverage decision ends up in the same place: either something is covered or it is not. Every process by which an organization arrives at a coverage decision can, in hindsight, be harshly critiqued for arriving at the “wrong” one in this case or that. It always seems cold and bureaucratic in the end. Every process, even the warmest, most patient-centered, and least bureaucratic, has flaws and limitations. Mistakes, like the one Aaron wrote about, always arise.

Oberlander et al. wrote that, in fact, Oregon Medicaid ended up excluding very few services. It covered more services under its new system than it had previously, and it saved very little (2%). Even though Oregon did draw a line, of sorts, it was a “fuzzy” one. Lots of things got covered that, by the formula, shouldn’t have been. To avoid or resolve controversies and ethical issues, some services were moved over the line “by hand.” What started as objective and formula-driven ended up with a large, subjective component.

This is as it should be. Mature calls for more consideration of cost-effectiveness in coverage decisions are purposefully not calls for cost-effectiveness to be the only consideration. Those who make them understand the limitations of cost-effectiveness analysis. Apart from the obvious fact that the public would, with good reason, reject pure, data-driven coverage determinations, it’s clear that such a process cannot and does not accommodate fairness and other ethical considerations. These must, somehow, be added to the mix, and organizations like ICER, the UK’s NICE, and Oregon’s Health Services Commission do so.

Oregon’s experience, though different from what many may think, is still a cautionary tale. But it cautions against a trap that I think we’re unlikely to stumble into. Cost-effectiveness is absolutely worth bringing to bear on coverage decisions, but not to the exclusion of other criteria. Few think otherwise, or at least few enough not to matter much. If any public or private payer in the US makes coverage decisions entirely on the basis of cost-effectiveness analysis, I’ll freely admit I was wrong.

@afrakt
