Why Not Another RAND Health Insurance Experiment?

I believe randomized experiments are often to be preferred over observational studies. I just don’t believe that is always so, and in at least one case I can identify why.

According to Robin Hanson’s petition, there is a (hypothetical) half-billion dollars on the table for research. He argues it should be used for a ten-year repeat of the RAND health insurance experiment (HIE), which concluded in 1982 and tested the effect of health insurance generosity on health care utilization and outcomes. I wrote in a prior post that the money would be better spent on the approximately 1,000 observational studies a half-billion dollars could fund.

Hanson agrees with me that many observational studies could in principle be more valuable than one huge randomized study. But in practice he doesn’t think it would work out that way. His point, which is a good one, is that observational studies are potentially biased. In his words:

No doubt a thousand “well-conceived” observational studies, neutrally executed and interpreted, could in principle give more total info than one big experiment. But since a great many funders, researchers, publishers, and meta-analysts seem much more willing to accept pro than anti-medicine results, then having a thousand varied studies would give many thousands of opportunities for such biases to skew their results.

But the potential for bias in general does not necessarily mean that this randomized study in particular should be preferred over 1,000 observational studies. There are enough problems with the idea of a ten-year RAND HIE repeat that it is reasonable to advocate rejecting it when the opportunity cost is so high. (Never mind a third option: fund fewer than 1,000 observational studies and use the balance of the funds to set up some kind of review process by which bias is revealed and reduced.)

First, even experimental studies can suffer from bias introduced by the human researchers who carry them out or by the behavior of the participants under study. Contamination of experimental arms, attrition, and other flaws in the implementation of randomization can and do affect randomized experiments. Bias can also be introduced in statistical corrections for such flaws or in the selective reporting of results. Moreover, experimental studies frequently suffer from limited generalizability. Even the original RAND HIE has a few imperfections that critics exploit (and investigators defend). It is not at all a given that a second RAND HIE would be superior to a large number of observational studies in these respects.

Second, ten years is a very long time in health care. By the time a second RAND HIE is complete, the practice of medicine and the design of insurance will be quite different from those it studied. With generalizability so easily threatened, it isn’t self-evident that the undertaking is worthwhile. Within a few years the new results will be stale, and there will be calls for a third half-billion-dollar study. Pretty soon we’ll be talking real money!

Meanwhile, the dramatically shorter turnaround time of observational studies increases the chance that their results are relevant to the world to which they’re delivered. No doubt the results of 1,000 such studies would not be unanimous, and some would be biased. But I am certain there would be a general consensus on some questions among studies widely regarded as well designed, soundly implemented, repeatable, and even unbiased. To be sure, there would be room for debate over results from such studies, just as there is in the case of the RAND HIE and as there would be with another one.

And that is really my main point. No study, or collection of studies, can ever be the definitive word on a subject. There will always be debate between those who are persuaded by a study’s strengths and those who focus on its flaws and biases. Whether studies are randomized experiments or observational, the best we can hope for is that they inform debate, not that they settle it. Spending a fortune on a single randomized study that delivers results to a world different from the one it investigated isn’t likely to be any more informative or decisive than spending it on three orders of magnitude more observational studies would be. Sometimes, and in this instance, a randomized experiment is not the optimal mode of scientific inquiry.
