Health care predictive analytics: Who is making choices, to what end?

Health care predictive analytics applies statistical modeling to massive data sets to develop tools to predict health outcomes for individual patients. Having tools that can accurately predict medical outcomes could greatly improve medical decision making. But this is a complex, powerful, and possibly risky technology, as discussed in a great article by I. Glenn Cohen and colleagues in the current Health Affairs. I will survey some of these concerns by discussing three possible use cases for predictive analytics.

First, a doctor might use a prediction tool to optimize treatment for a patient. Suppose she wants to prescribe pain medication for a patient with severe osteoarthritis. Suppose further that there’s a choice here between standard or extended release preparations. Taken as prescribed, the extended release drug is likely to be safer and less burdensome for the patient. But if a patient accidentally takes more than one pill, or takes a broken or crushed pill, the resulting massive dose could be fatal. So, given the patient’s age, gender, cognitive status, other medications, prior history with medications, and so on, which preparation is likely to work best?

Treatment optimization is the kind of highly multivariate problem in which statistical decision rules routinely outperform human intuition. So there is great interest in assembling masses of data on patients’ experiences with medications to build predictive tools that could inform doctors writing prescriptions.
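To make this concrete, here is a minimal sketch, assuming a hypothetical data set, of the kind of model such a tool might wrap: a logistic regression fit to records of past patients’ experience with the extended-release preparation, used to estimate a new patient’s risk of a serious adverse event. The feature names and numbers below are invented for illustration.

```python
# Minimal sketch (hypothetical features, toy data) of a prescribing-support
# risk model: estimate the probability of a serious adverse event on the
# extended-release preparation from patient characteristics.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per past patient: [age, cognitive_score, num_other_meds, prior_med_problem]
X = np.array([
    [82, 21, 7, 1],
    [67, 28, 3, 0],
    [74, 25, 5, 0],
    [88, 18, 9, 1],
    [59, 30, 2, 0],
    [71, 26, 4, 0],
])
# Outcome: 1 = serious adverse event on the extended-release drug
y = np.array([1, 0, 0, 1, 0, 0])

model = LogisticRegression().fit(X, y)

# Score the patient the doctor is about to write the prescription for.
new_patient = np.array([[76, 23, 6, 0]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated adverse-event risk on extended release: {risk:.2f}")
```

A real tool would be trained on far more patients and far more variables; the point of the sketch is only that the prescribing decision becomes a scored prediction rather than an unaided judgment.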

Treatment optimization is also a case in which the use of predictive analytics seems to be all upside for patients. However, as Cohen et al. point out, assembling a big data set may put patients’ privacy at risk. Testing the prediction rule may challenge current norms about consent. These are serious problems, and they apply to all the predictive analytic use cases that I discuss here.

Cohen et al., however, focus on a different predictive analytic use case, which I will call resource allocation.

Imagine a physician who is trying to decide whether to send a patient… to the intensive care unit (ICU). The patient might benefit from a stay in the ICU, but other patients might benefit more, and ICU beds are limited… Now imagine that there is a technology that could ascertain the risk accurately for a thousand separate patients and continuously update that evaluation every second to help a physician decide whom to send to the ICU… A model may recommend withholding a potentially beneficial intervention from some patients with a given condition because there is a significantly lower probability that they will benefit, while offering the intervention to others who are more likely to benefit.
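As a toy illustration of the mechanics this quote describes, an allocation rule might simply re-score every patient whenever new data arrive and rank them by predicted benefit when a bed opens. The sketch below invents all names and scores; it is not Cohen et al.’s model.

```python
# Toy sketch (invented patients and scores): rank patients by a model's
# predicted probability of benefiting from an ICU bed, re-run as scores update.
import heapq
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    predicted_benefit: float  # model output, refreshed as new vitals arrive

def recommend_for_icu(patients, open_beds):
    """Return the patients the rule would recommend for the open beds."""
    return heapq.nlargest(open_beds, patients, key=lambda p: p.predicted_benefit)

patients = [
    Patient("A", 0.41),
    Patient("B", 0.87),
    Patient("C", 0.12),
    Patient("D", 0.65),
]

for p in recommend_for_icu(patients, open_beds=2):
    print(p.name, p.predicted_benefit)
```

The ranking loop itself is trivial; the contested questions sit upstream of it, in how “predicted benefit” is defined and who decided to rank on it.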

Resource allocation is a controversial application, but not because of issues concerning statistical learning or big data. You might be willing to accept a situation where a predictive rule says that someone else should get an ICU bed instead of you, if the data make a convincing case that the other person needs it more. You might even accept it when the rule denies you admission to an ICU with an open bed, because the rule predicts that a sicker patient is likely to arrive soon. ICU admissions are often made this way now. By and large, we think it is fair that ICU beds should be allocated to the patients who need them most.

But that’s because it’s obvious that ICU beds are scarce resources. Are we comfortable with doctors making similar resource allocation decisions in the rest of medicine? What if we were discussing whether a doctor should write a prescription for an expensive drug, or whether a patient should have to wait for elective surgery? Would you be comfortable if the doctor withheld these treatments from you because a predictive rule told her that others were likely to need them more than you?

There is a traditional view that the doctor-patient relationship requires the doctor to advocate for her patient, rather than allocating care based on the optimal use of medical resources for the population as a whole. This traditional view contrasts with the ethics of parsimonious care recently advocated by the American College of Physicians (and discussed by Aaron here).

Physicians have a responsibility to practice effective and efficient health care and to use health care resources responsibly. Parsimonious care that utilizes the most efficient means to effectively diagnose a condition and treat a patient respects the need to use resources wisely and to help ensure that resources are equitably available. In making recommendations to patients, designing practice guidelines and formularies, and making decisions on medical benefits review boards, physicians’ considered judgments should reflect the best available evidence in the biomedical literature, including data on the cost-effectiveness of different clinical approaches.

Predictive analytics is essential if we are going to practice parsimonious care. However, we don’t yet have a social consensus about whether parsimonious care is a good thing.

Finally, consider a third use case: patient selection. Insurers don’t want to cover patients who will need a lot of care. Hospitals want to care for patients who will reliably pay their bills. The New York Times had a recent article about a company that uses predictive analytics to help health care firms select those patients.

MedSeek, a software and analytics company in Birmingham, Ala., offers services intended to help hospitals “virtually influence” the behavior of current and would-be patients. According to MedSeek.com, the company offers a “21st-century tool kit” that can refine health care marketing pitches based on sex, age, race, income, risk assessment, culture, religious beliefs and family status. One client, Trinity Health System in Michigan, used MedSeek’s services “to scientifically identify well-insured prospects,” among others, and encourage them to schedule screening tests and doctor visits, a company case study said…. Bill Andrae, MedSeek’s vice president for client strategy, described other techniques… Hospitals could send birthday messages “to all high-value men and women,” he wrote, or notify “profitable individuals 18 and above” about special round-the-clock health care call-in lines staffed by nurses, and encourage those revenue-generating patients to schedule medical tests or appointments.

MedSeek uses predictive analytics to steer care away from those who need it and toward those who can afford it. No one should be comfortable with this.

These use cases show that the essential problem isn’t predictive analytics per se. The problem, when there is one, is in the choice that the predictive tool is meant to inform. Cohen et al. call this choice architecture: who is making what decision, for what end, and are they doing it transparently? They make a critical recommendation about who designs the choice architecture:

[predictive analytic] developers [should] implement governance structures that include patients and other stakeholders starting in the earliest phases of development.

Predictive analytics are tools for better decisions. The most important questions about predictive analytics are not about the tools themselves, but about the purposes they are used for.

@Bill_Gardner

Adrianna interviews Glenn Cohen here.
