The only time I had lunch with Jon Gruber he told me the biggest problem in policy debates is a lack of clear understanding of the counterfactual. Since then, I’ve been paying more attention, and he’s right that it is not well understood. Since I used the concept repeatedly in the comments on a post earlier today, I thought I should probably explain what I (and Gruber) mean.
When you want to know the causal effect of an intervention (policy change, medical treatment, whatever) on something, you need to compare two states of the world: the world in which the intervention occurred and the world in which it did not. The latter is the counterfactual world. Since most of us only get to live in one world (most of the time), observing the counterfactual is a rather tricky thing to do. Of course, there are various worthy techniques.
One technique that is usually pretty bad, but is probably the one people’s minds turn to most often, is a comparison of the world after the intervention to the world before it, a pre-post analysis (with the “pre” serving as the counterfactual, a stand-in for how the world would be in the absence of the intervention). Quick, what would happen if we offered tax subsidies for cell phone purchases? The natural presumption is that cell phone sales would go up, and I am sure they would. But they’re probably going up anyway. So a pre-post comparison of annual sales figures would not reveal the true effect of the policy. There’s an underlying trend that has to be accounted for.
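To see how badly a pre-post comparison can mislead, here is a toy simulation. All the numbers (baseline sales, trend, true effect) are made up for illustration; the point is only the logic.

```python
import random

random.seed(0)

# Hypothetical setup: monthly cell phone sales grow by an underlying
# trend of 50 units/month regardless of policy; the subsidy (starting
# at month 12) adds a true effect of 100 units.
TREND = 50
TRUE_EFFECT = 100

def sales(month, subsidized):
    noise = random.gauss(0, 10)
    return 1000 + TREND * month + (TRUE_EFFECT if subsidized else 0) + noise

pre = [sales(m, subsidized=False) for m in range(0, 12)]
post = [sales(m, subsidized=True) for m in range(12, 24)]

def mean(xs):
    return sum(xs) / len(xs)

# Naive pre-post estimate: attributes the entire underlying trend
# to the policy, not just the policy's true effect.
pre_post_estimate = mean(post) - mean(pre)
print(round(pre_post_estimate))  # several times the true effect of 100
```

The pre-post difference bundles the trend together with the policy effect, which is exactly why the “pre” period is a poor counterfactual here.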
What we really want to know is how the world is different due to the intervention and only the intervention. A randomized controlled (or experimental) design is the gold standard approach. The assumption in that case is that, statistically, the two parts of the world (the treatment group and the control group — the counterfactual) are similar enough in all respects other than the intervention that comparing them gives you the true effect of the intervention. You’ve constructed a plausible counterfactual world. Good trick!
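The randomized comparison can be sketched the same way. Again, every number is invented for illustration: random assignment makes the treated and control groups similar in everything except the intervention, so their difference in means recovers the true effect.

```python
import random

random.seed(2)

# Hypothetical RCT sketch: each unit has a baseline outcome; a coin
# flip assigns it to treatment or control, so the groups differ, on
# average, only in the intervention.
TRUE_EFFECT = 100

baselines = [random.gauss(1000, 50) for _ in range(10000)]

treated, control = [], []
for baseline in baselines:
    if random.random() < 0.5:
        treated.append(baseline + TRUE_EFFECT)  # receives the intervention
    else:
        control.append(baseline)                # the constructed counterfactual

def mean(xs):
    return sum(xs) / len(xs)

estimate = mean(treated) - mean(control)
print(round(estimate))  # close to the true effect of 100
```

The control group stands in for the counterfactual world, which is what makes the simple difference in means a valid causal estimate here.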
Sometimes we don’t have the luxury of an experimental design. In that case, we have to exploit something special about the world: say, the intervention occurred in one region but not another, and we can control for all the meaningful differences between the regions. Suffice it to say, this requires some assumptions (that you’ve controlled for all the meaningful differences), as do other approaches. Still, often this is the best we can do.
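One common version of this regional comparison is a difference-in-differences design: the untreated region’s pre-to-post change serves as the counterfactual change for the treated region. A toy sketch, with invented numbers and the (strong) assumption that both regions share the same underlying trend:

```python
import random

random.seed(1)

# Hypothetical two-region setup: both regions share the same underlying
# trend of 50 units/month but start at different baselines; only region
# A gets the intervention (from month 12 on), with a true effect of 100.
TREND = 50
TRUE_EFFECT = 100

def outcome(month, base, treated):
    return base + TREND * month + (TRUE_EFFECT if treated else 0) + random.gauss(0, 10)

pre_a  = [outcome(m, 1000, treated=False) for m in range(12)]
post_a = [outcome(m, 1000, treated=True)  for m in range(12, 24)]
pre_b  = [outcome(m, 800,  treated=False) for m in range(12)]
post_b = [outcome(m, 800,  treated=False) for m in range(12, 24)]

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: region B's change absorbs the shared trend,
# so subtracting it isolates the intervention's effect in region A.
did = (mean(post_a) - mean(pre_a)) - (mean(post_b) - mean(pre_b))
print(round(did))  # close to the true effect of 100
```

Note that the estimate is only as good as the shared-trend assumption; if the regions had been trending differently for reasons unrelated to the policy, the comparison would be biased.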
The most important point is that almost nobody is explicit about this in policy debates, even when the policy is crucially important. Will health reform cause employers to drop coverage or not? Well, we only have one world. The counterfactual needs to be constructed. It can’t simply be assumed to be the pre-reform world, because employers have been dropping coverage for years. There’s a trend. Other things may change (the economy, the nature of the labor market, etc.), so we’d want to control for those. And so forth.
This is worth thinking about. Next time you’re involved in a policy debate, ask your opponent what (s)he is taking as the counterfactual. If (s)he doesn’t even know what you’re talking about, you’ve already won, even if (s)he won’t admit it.