Last week, in Inquiry, my latest paper with Steve Pizer and Roger Feldman was published. An ungated working paper version is also available. I wrote a bit about a portion of it in a prior post, though that post doesn't describe what the paper is about. I'll write more about the results in another post. If you can't wait, click through for the abstract. For now, I want to focus on another technical detail, which is likely to interest all of five readers. You know who you are from the title of the post.
Until fairly recently, my colleagues and I thought overidentification tests of instruments were worth doing. We no longer feel that way. Still, when a reviewer demands them, we have little choice but to run them if we want to be published, even though we don't find them very valuable.
Though these are typically discussed as tests of excludability, they are, in fact, joint tests of excludability and homogeneity of treatment effects (Angrist 2010). Consequently, instruments that are excludable may still be rejected simply because different instruments identify different local average treatment effects.
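Here's a minimal, simulated sketch of that point, for the handful of readers who want to see it in code. It uses Python's linearmodels package and entirely made-up data (nothing from our paper): two instruments that are excludable by construction, but whose compliers have different treatment effects, so the Sargan overid test rejects anyway.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(42)
n = 50_000

# Two instruments, randomly assigned and excludable by construction
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)

# w drives both treatment-effect heterogeneity and z2's first-stage strength
w = rng.normal(size=n)
u = rng.normal(size=n)                 # confounder: makes x endogenous
x = z1 + (1 + w) * z2 + u + rng.normal(size=n)

beta_i = 1 + w                         # heterogeneous treatment effects
y = beta_i * x + u + rng.normal(size=n)

df = pd.DataFrame({"y": y, "x": x, "z1": z1, "z2": z2, "const": 1.0})

# Just-identified IV with each instrument separately: different LATEs (about 1 vs. 2)
for z in ("z1", "z2"):
    fit = IV2SLS(df["y"], df[["const"]], df[["x"]], df[[z]]).fit()
    print(z, fit.params["x"])

# Overidentified 2SLS with both instruments: the Sargan test rejects,
# even though both instruments are excludable by construction
res = IV2SLS(df["y"], df[["const"]], df[["x"]], df[["z1", "z2"]]).fit()
print(res.sargan)
```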
Passing an overid test may convince some reviewers that one's instruments are excludable from the second-stage model, but it shouldn't. Failing one doesn't prove they are not. That's a rather weak case for the tests' scientific value. Many papers in top economics journals using IV methods do not include overid tests. That's just fine.
“Angrist 2010” is a personal communication with Josh Angrist.
@afrakt