I thank the readers who suggested publications that might include the concept I blogged about a few days ago, the extent to which IV estimation diminishes precision (power) relative to a straight-up RCT estimate.

Though it is possible some of that literature implied the result Steve had obtained, most of it seemed either not quite on target or more complex than necessary. So, I took the trouble of ~~writing up the proof more carefully (PDF) and confirming it for myself~~ writing it up and subsequently refining it (see update in a later post).

I still think this must be a known result, and if you are aware of a publication that includes a proof in a simple form, please let me know.

by Sam Richardson on May 11th, 2013 at 15:12

I’ve been at a small labor econ conference at UT the last couple days, and no one I’ve talked to has heard of this result. I think it’s because economists rarely have a reason to do power calculations, since we usually rely on natural experiments (which, by definition, we don’t get to design).

by Austin Frakt on May 11th, 2013 at 16:51

Actually, this result pertains directly to the precision of the coefficient estimate. It must also relate to power. But, as derived, it would seem to be of great importance to economists.

OTOH, economists believe that there is one right model. Either you’ve got it (or a decent approximation) or you don’t. Then, whatever precision you get, you get.

by Jared on May 12th, 2013 at 09:09

Speaking of the “right” model, these results evidently assume additive, homogeneous treatment effects. We should be worried about that too, no? (Or maybe it’s moot if the sample size is too small by a factor of 16….)

by adam on May 11th, 2013 at 20:42

This discussion of how IV affects power has been really helpful, but when I went through the calculations after your first post, I found some of the results to be slightly off because X and Z are treated as vectors even though they are matrices (due to the constant term).

I demeaned everything to get rid of the constants and ended up with a slightly different formula that also implied the variance of the Oregon Health Study estimates was about 16 times larger due to imperfect compliance. This matches some of the commenters in the first post.

I set up the equations as:

outcome = alpha + beta*enroll + eps

enroll = delta + gamma*lottery + nu

I found:

Var(Beta IV) = sigma^2 / [ N*(gamma^2)*p*(1-p) ]

Where N is the total number of observations in the entire trial (lottery winners + lottery losers) and p is the proportion winning the lottery. gamma is the increase in enrollment due to winning.

When lottery=enroll, this is a randomized trial with perfect compliance, and OLS and IV are equivalent. Then gamma=1, so:

Var(Beta OLS) = sigma^2 / [ N*p*(1-p) ]

Where N is the total number of observations and p is the proportion winning the lottery or, equivalently, the proportion enrolling.

Winning raised take-up by 25 percentage points, so gamma=0.25. To replicate the power of a randomized trial with perfect compliance, the study must be 1/gamma^2 times bigger, that is, 16 times bigger here.
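[A quick check one can run: the formula above is easy to verify by simulation. Here is a minimal Monte Carlo sketch; the true effect beta, the baseline enrollment rate delta, and the sample sizes are all made-up illustrative values, not numbers from the Oregon study. It draws many replications of a lottery design with imperfect compliance and compares the empirical variance of the Wald/IV estimate to sigma^2 / [ N*(gamma^2)*p*(1-p) ]:]

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, gamma, sigma = 20_000, 0.5, 0.25, 1.0
beta, delta = 2.0, 0.4  # hypothetical true effect and baseline enrollment rate

iv_estimates = []
for _ in range(2_000):
    lottery = rng.binomial(1, p, N)                      # instrument: winning the lottery
    enroll = rng.binomial(1, delta + gamma * lottery)    # first stage: winning raises enrollment by gamma
    outcome = beta * enroll + rng.normal(0.0, sigma, N)  # outcome equation with error sigma
    # Wald/IV estimate: reduced-form difference over first-stage difference
    num = outcome[lottery == 1].mean() - outcome[lottery == 0].mean()
    den = enroll[lottery == 1].mean() - enroll[lottery == 0].mean()
    iv_estimates.append(num / den)

empirical_var = np.var(iv_estimates)
formula_var = sigma**2 / (N * gamma**2 * p * (1 - p))
print(empirical_var, formula_var)  # the two should be close
print(1 / gamma**2)                # variance inflation vs. perfect compliance: 16
```

[With gamma=1 (perfect compliance) the same formula collapses to the OLS variance, so the inflation factor is exactly 1/gamma^2.]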

by Austin Frakt on May 11th, 2013 at 21:29

It’s not clear to me why we don’t get the same result. I played around to see if the two results really are the same in disguise, but I couldn’t make it work. If you can be more specific about a problem with what I posted, I’d be grateful. What if we assume everything is zero mean?

by adam on May 12th, 2013 at 00:44

No problem. If you assume everything is zero mean (or just de-mean everything) then you can get a little farther.

The trouble is between (8) and (9). The de-meaned X’X is not equal to n (the number of observations in the treatment group) but rather N*t*(1-t) = n*(1-t), where N is the total number of study participants and t is the fraction of all study participants who enroll in Medicaid (i.e., X=1).

So in your proof the denominator of Var(BetaIV) should be n*(1-t)*R^2. You can show that n*(1-t)*R^2=N*t*(1-t)*R^2 equals the denominator I gave before of N*(gamma^2)*p*(1-p) because these are the left and right-hand sides, respectively, of (8) when X is demeaned.

Also, the denominator of Var(BetaOLS) becomes n*(1-t)=N*t*(1-t) which matches what I gave before, since in perfect compliance t=p.
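[One quick way to see that the two denominators agree is to simulate a first stage and compare N*t*(1-t)*R^2 against N*(gamma^2)*p*(1-p) directly. A sketch with made-up values for delta, gamma, and p:]

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, gamma, delta = 200_000, 0.5, 0.25, 0.4  # hypothetical first-stage parameters
lottery = rng.binomial(1, p, N)                    # instrument
enroll = rng.binomial(1, delta + gamma * lottery)  # enrollment (X)

t = enroll.mean()                              # overall enrollment share
r2 = np.corrcoef(lottery, enroll)[0, 1] ** 2   # first-stage R^2 (squared correlation)
lhs = N * t * (1 - t) * r2        # denominator from the de-meaned proof
rhs = N * gamma**2 * p * (1 - p)  # denominator from the closed-form expression
print(lhs, rhs)                   # should agree up to sampling noise
```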

by Austin Frakt on May 12th, 2013 at 10:38

Thanks. I am very appreciative of your careful read and response. I will go through this again carefully with Steve soon (hopefully Monday) and post an update.

by Austin Frakt on May 12th, 2013 at 12:03

X is the vector of treatment indicators. So, X’X = n = Np. Therefore, the mean of X’X is X’X/N = p. So why isn’t the de-meaned version Np - p?

by adam on May 12th, 2013 at 12:36

The issue is that X has to be de-meaned before the interaction X’X. If you define demeaned X as Xd = X-X_bar where X_bar is a vector of the mean of X=p, then

Xd’Xd = X’X - 2X’X_bar + X_bar’X_bar

= n - 2pn + np

= n - np

= n(1-p) = N*p*(1-p)

Another way to think about this is to see the x_i as iid Bernoulli with success probability p. So (1/N)X’X -> E[ x_i^2 ] = p, whereas (1/N)Xd’Xd -> Var[ x_i ] = E[ x_i^2 ] - E[ x_i ]^2 = p*(1-p).
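[The Bernoulli argument is easy to confirm numerically; a sketch with an arbitrary illustrative p:]

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 100_000, 0.3  # arbitrary illustrative values
X = rng.binomial(1, p, N)  # iid Bernoulli(p) treatment indicators

raw = (X @ X) / N            # converges to E[x_i^2] = p
Xd = X - X.mean()            # de-meaned X
demeaned = (Xd @ Xd) / N     # converges to Var[x_i] = p*(1-p)
print(raw, p)                # close to each other
print(demeaned, p * (1 - p)) # close to each other
```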

by Austin Frakt on May 12th, 2013 at 12:54

Perfect. Thanks.

by Gabriel Odom on May 13th, 2013 at 13:34

Dr. Frakt, I am a graduate student at Baylor University studying Biostatistics. I would like to thank you for these posts; they have greatly helped me understand the Oregon Medicaid study.

Concerning the variance discussed above, do you mind explaining to me why the sample variances of the non-Medicaid enrollees and Medicaid enrollees would be different? I assumed that – with respect to health and socioeconomic factors – the variances of the two groups should be very similar to one another.

by Austin Frakt on May 13th, 2013 at 14:10

Where did I suggest differences in sample variances?

Anyway, there will be an update to all this soon. I’ve continued to work on and refine it.

by Gabriel Odom on May 13th, 2013 at 14:51

My mistake, it seems that I misread adam’s first post.