Josh Angrist and Jörn-Steffen Pischke sent me a copy of their modestly titled book Mostly Harmless Econometrics: An Empiricist’s Companion (let’s call it MHE for short). If the title sounds slightly familiar, you’re probably a Douglas Adams fan–he wrote a book called Mostly Harmless too–and you’d be correct in assuming that MHE is not your ordinary econometrics text.

Angrist and Pischke claim their style has a “certain lack of gravitas.” With an emphasis on the practical and intuitive use of the most common, widely applicable, and relatively simple econometric models, they provide a far less intimidating tour than most texts of techniques for evaluating social experiments, whether artificially or naturally randomized. Nevertheless, this book has math, more than I cared to study closely on first read, particularly in the later chapters covering more advanced material.

Yet, the writing style is far less stodgy than typical academic texts. The fun begins in Chapter 1 (Questions about Questions), in which Angrist and Pischke write,

Research questions that cannot be answered by any experiment are FUQs: fundamentally unidentified questions. What exactly does a FUQ look like? …

Suppose we are interested in whether children do better in school by virtue of having started school [at age 7 instead of 6]. … To be concrete, let’s look at test scores in first grade.

The problem with this question … is that the group that started school at age 7 is older. And older kids tend to do better on tests, a pure maturation effect. … The problem here is that for students, start age equals current age minus time in school. … [T]he effect of start age on elementary school test scores is impossible to interpret even in a randomized trial, and therefore, in a word, FUQed.

Putting aside the FUQed, Angrist and Pischke explain the essentials of causal analysis for observational studies, beginning with a gentle introduction to the selection problem and regression in Chapter 2 (The Experimental Ideal). One can gain tremendous insight with little heavy lifting by reading that brief, 12-page chapter alone.

The real guts of the subject are presented in Chapter 3 (Making Regression Make Sense) and Chapter 4 (Instrumental Variables in Action). Slightly more advanced material is found in the final four chapters, covering fixed effects, differences-in-differences, regression discontinuity, quantile regression, and standard error estimation. I skimmed those final chapters only closely enough to know what’s there, for future reference. My main interest was in improving my understanding of IV basics, for which close reading beyond Chapter 4 is not necessary.

MHE is not only an econometrics reference and tutorial, it’s also a guide to a subset of the observational study literature that applies sound technique. Every method is motivated and illuminated by reference to or examples from published work. That’s particularly valuable to the publishing practitioner who needs to demonstrate adherence to proven methodology by reference to prior studies.

Thus, MHE is better than “mostly harmless,” and I recommend it highly, particularly to those who evaluate social programs or clinical trials, or who otherwise wish to estimate causal effects from experimental or observational data. Yet I can think of a few small ways MHE could be enhanced. My least important suggestion is an index of stylized facts. There are a handful of main points that the practitioner should carry around in his head, knowing he can look up the details when necessary. These might include, for example: that propensity scores only control for observable differences between treatment and control groups (pp. 86-87); that the independence of the instrument from potential outcomes is a different idea than an exclusion restriction (p. 155; this, by the way, is a mind-bender and took me some time to appreciate); that one shouldn’t include an outcome as one of the regressors (pp. 64-68); that non-linear models are very rarely necessary and very often lead to trouble (p. 190); among others.
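The bad-control point, in particular, is easy to see in a toy simulation. The following sketch is my own illustration, not an example from the book: treatment is randomly assigned, so a simple regression recovers the true effect, but “controlling” for a variable that is itself caused by the treatment destroys the estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
d = rng.binomial(1, 0.5, n).astype(float)  # randomly assigned treatment
u = rng.normal(size=n)                     # unobserved factor
m = d + u                                  # "bad control": itself moved by the treatment
y = d + u + rng.normal(size=n)             # true causal effect of d on y is 1

def ols(y, *cols):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_good = ols(y, d)     # y on d alone
b_bad = ols(y, d, m)   # y on d, "controlling" for the bad control m
print(b_good[1])       # near the true effect, 1
print(b_bad[1])        # pushed toward 0 by the bad control
```

Here y equals m plus pure noise, so once m is in the regression the treatment coefficient collapses toward zero even though the true effect is 1. The randomized-then-ruined flavor is exactly why the book warns against conditioning on outcomes.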

One problem with nonlinear models is that they generate biased results under two-stage prediction substitution, a fact Angrist and Pischke discuss in Chapter 4. What deserves mention, but goes unmentioned, is that one can obtain unbiased estimates of causal effects with nonlinear models using two-stage residual inclusion (2SRI), which is surprisingly simple and easy to implement (Terza, Basu and Rathouz, 2008). This only matters in the small subset of circumstances in which linear models won’t do. One such circumstance arises in my work, in which models are put to use for policy simulations. In that case, linear approximations that don’t reproduce crucial nonlinear features of a distribution can be a problem, if only in presentation (which is important).
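To show how simple 2SRI is, here is a minimal sketch on simulated data (my own hypothetical setup, not from the book or from Terza et al.): first stage, OLS of the endogenous regressor on the instrument; second stage, a nonlinear (here, logit) model of the outcome on the regressor plus the first-stage residual, rather than on a substituted fitted value.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: unobserved confounder c drives both the regressor x
# and the binary outcome y, so x is endogenous; z is a valid instrument.
rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(size=n)                     # instrument
c = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z + c                            # endogenous regressor
p = 1.0 / (1.0 + np.exp(-(0.5 * x + c)))   # true coefficient on x is 0.5
y = rng.binomial(1, p).astype(float)

# Stage 1: OLS of x on the instrument; keep the residuals.
Z = np.column_stack([np.ones(n), z])
pi_hat = np.linalg.lstsq(Z, x, rcond=None)[0]
resid = x - Z @ pi_hat

# Stage 2: logit of y on x AND the first-stage residual.
# Including the residual (not substituting fitted x) is what makes this 2SRI.
X = np.column_stack([np.ones(n), x, resid])

def negll(beta):
    eta = X @ beta
    # numerically stable logistic negative log-likelihood: log(1 + e^eta) - y*eta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

fit = minimize(negll, np.zeros(3), method="BFGS")
print(fit.x[1])  # estimate of the causal coefficient on x, near 0.5
```

The residual soaks up the part of the confounding that the instrument can’t explain, so the second-stage coefficient on x recovers the true 0.5, where naive substitution of fitted values into a logit would not.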

I’ll conclude by noting a large issue lurking in the background, one to which Angrist and Pischke only allude. That’s theory (by which I mean anything outside the data). What’s it for? Can one really conclude causality from data alone? The answer is “no,” but the reason is subtle. The topic comes up, almost in passing, twice: once in a discussion of how to decide whether a control variable is or is not an outcome variable. When one can’t use time to determine what can be the cause of what, then “clear reasoning about causal channels requires explicit assumptions about what happened first, or the assertion that none of the control variables are themselves caused by the regressor of interest.” (p. 68) That’s theory, folks.

Later, on page 156, the authors write, “There is nothing in IV formulas to explain *why* [treatment] affects [outcomes]; for that, you need a theory.” OK then! Theory has a role. In fact, its role is larger than these quotes imply. I assert that one can’t begin to understand if or when selection on observables or unobservables (or endogeneity in general) might occur without theory. Put another way, the model one chooses to estimate and the manner in which one does so come in part from theory, a point stressed by Andrew Gelman in his review of MHE (a review worth reading, by the way).

In many cases, that theory is our own intuition, not some formal mathematical model. We know something about the world, about what can affect what, that we bring to the data. Without those extra-data notions, we wouldn’t even know what to study or how, let alone how to interpret what we find. I think this is something applied economists should appreciate. The data can reveal the size of causal effects, but only after we have decided what can cause what. Without such ideas, finding potentially valid instruments would be next to impossible. If you don’t believe me, next time you approach your analysis, ask a colleague to rename all your variables v1, v2, v3, etc. (and not provide you with a crosswalk to their actual names). Good luck!

*Later*: See also the Mostly Harmless blog.

**References**

Terza JV, Basu A, Rathouz PJ. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling. Journal of Health Economics 2008;27(3):531–43.

by Jim Burgess on June 1st, 2010 at 06:40

Thanks, Austin, for a reference to a book I might have missed (and will get). I’m wondering if it has enough to be used as a supplementary text for the Health Economics course Ted and I teach to the PhD students (I think from you and Andrew Gelman it might).

You have of course read Judea Pearl’s book on Causality, right (that our PhD students really enjoyed… well, sort of)? I like Pearl because he hits that issue head on, even if he is too “computer sciencey” at times. But his book needs to be what it is for its applications to AI issues.

Jim

by Austin Frakt on June 1st, 2010 at 07:47

Thanks Jim. I’ll add Pearl’s book to my list. See also the Mostly Harmless blog and website: http://www.mostlyharmlesseconometrics.com/blog/

by A on June 3rd, 2010 at 01:23

Thanks for this. I’m in law school but I don’t want to forget my undergrad econometrics (slowly being replaced by statutes and provisions) so this is the book for me! I even prefer the lighter tone.