Leamer’s EconTalk Interview

Ed Leamer’s EconTalk conversation with Russ Roberts this week was among the more interesting episodes and well worth a listen. Much of it focused on issues raised in my summary of Angrist and Pischke’s response to Leamer’s 1983 American Economic Review paper, “Let’s Take the Con Out of Econometrics.” By way of quick review:

Leamer and others in the early 1980s were distressed by the failure to test the implications of assumptions about the specification and functional form of econometric models. His proposed solution was to analyze how results change under model variations (sensitivity analysis). Angrist and Pischke make a strong case that Leamer was correct in his diagnosis but not necessarily in his prescription. They argue that the “credibility revolution” in empirical microeconomics since Leamer’s critique is due principally to a greater focus on research design, not on sensitivity analysis.

Angrist and Pischke argue that methodological innovations that exploit purposeful or natural randomness, including instrumental variables (IV) methods, are responsible for taking the con out of econometrics.

That doesn’t mean sensitivity analysis has no role. On EconTalk, Leamer made a good argument that it is still relevant, important, and rare. He notes that the model published in an economics paper is just one of many the authors probably estimated. Did they report the only one that “worked,” or did many produce similar results? Is the result fragile or robust? These are important questions, and the reader cannot answer them from what is provided in a typical empirical economics paper. (See also Leamer’s written counterpoint to Angrist and Pischke and other responses found on MIT’s website.)
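
To make Leamer’s point concrete, here is a minimal sketch of the kind of sensitivity analysis he advocates. The data are simulated and the variable names are my own illustration, not from any paper; the question is simply whether the coefficient of interest moves much across plausible specifications.

```python
# Sensitivity check: re-estimate the coefficient on x under several
# plausible specifications and ask whether it is fragile or robust.
# Simulated data; variable names are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
z1, z2 = rng.normal(size=n), rng.normal(size=n)  # candidate controls
x = 0.5 * z1 + rng.normal(size=n)                # regressor of interest
y = 1.0 * x + 0.3 * z1 + rng.normal(size=n)      # true effect of x is 1.0

specs = {
    "no controls":    np.column_stack([x]),
    "control z1":     np.column_stack([x, z1]),
    "controls z1,z2": np.column_stack([x, z1, z2]),
}
for name, X in specs.items():
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    # Coefficient on x is in position 1 (the constant occupies position 0).
    print(f"{name:15s} beta_x = {fit.params[1]:.2f} (se {fit.bse[1]:.2f})")
```

Here the no-controls estimate is biased upward because z1 is omitted, so the coefficient shifts when z1 is added. That is exactly the fragility a reader cannot detect when only one specification is reported.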

Nevertheless, exploiting randomness does help make econometric models more credible because (1) it removes much of the ambiguity in making causal inferences and (2) it reduces the number of control variables needed. In a randomized design in which subjects are randomly assigned to treatment or control groups, leaving no room for selection bias, no controls are required. The simple difference in means estimates the average treatment effect (or, under imperfect compliance, the “intent to treat” effect). There are no other econometric specifications to explore to estimate this quantity, so the result is trivially robust to specification.
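
A minimal simulation, again my own illustration, shows why: under random assignment the simple difference in means recovers the treatment effect with no controls at all.

```python
# Randomized experiment: with coin-flip assignment there is no selection
# bias, so the difference in means estimates the average treatment effect.
# Simulated data; the true effect of 2.0 is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
baseline = rng.normal(50, 10, size=n)   # heterogeneous untreated outcomes
treated = rng.random(n) < 0.5           # random assignment
outcome = baseline + 2.0 * treated      # true average treatment effect = 2.0

ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(f"difference in means: {ate_hat:.2f}")  # close to 2.0
```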

Similarly, in IV estimation one can omit control variables without loss of validity, though with some loss of precision, except for any required for the conditional independence of the instrument and the potential outcomes (the set of “conditioning variables,” which could be empty). This fact automatically widens the range of robustness of the results if they are significant with only the conditioning variables included as controls. Of course, all of this relies on the validity of the instruments, which is either asserted (with a good argument) or tested (where possible). That turns much of the robustness exercise into stipulation of assumptions, which is far more compact and easier to assess.
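
Here is a sketch of the just-identified IV case with an empty conditioning set, under the assumptions above. The data are simulated, so the instrument’s validity holds by construction; in practice that validity is precisely what must be argued or tested.

```python
# IV with a valid instrument and an empty conditioning set. In the
# just-identified, no-controls case the IV estimate is the Wald ratio
# cov(z, y) / cov(z, x). Simulated data; names and coefficients illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
u = rng.normal(size=n)                       # unobserved confounder
z = rng.normal(size=n)                       # instrument, independent of u
x = 1.0 * z + u + rng.normal(size=n)         # treatment, confounded by u
y = 1.5 * x + 2.0 * u + rng.normal(size=n)   # true effect of x is 1.5

ols = np.cov(x, y)[0, 1] / np.cov(x, y)[0, 0]   # biased by the confounder
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]    # consistent despite omitting u

print(f"OLS: {ols:.2f} (biased)  IV: {iv:.2f} (close to 1.5)")
```

In this just-identified case two-stage least squares collapses to the Wald ratio, which is why no explicit first-stage regression appears in the sketch.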

I believe Leamer agrees with these points because, when asked on EconTalk whether exploiting randomness and IV help address robustness, he did not deny that they do. Instead he turned to the issue of generality, pointing out that the results of an IV study don’t necessarily generalize to other settings. That’s true, but it’s true of any study, even a randomized trial, and it isn’t necessarily related to the robustness issue. Leamer makes different points in his written response to Angrist and Pischke, focusing on the limitations of asymptotic results. He writes that Angrist and Pischke “persuasively argue [that] either purposefully randomized experiments or accidentally randomized ‘natural’ experiments can be extremely helpful, but [they] … overstate the potential benefits of the approach.”

All of these points, Leamer’s and Angrist and Pischke’s, are valid and important. They all relate to the key underlying question: when does a fact reveal a larger truth? Empirical results, no matter how obtained, provide some facts about the data. Can one apply those facts more broadly? Where’s the boundary of validity in doing so? Nobody can answer such questions generally. One hopes (or one should hope) that the range of truth revealed by econometric facts is broad enough to be of use. If not, we’re all in trouble.

Any attempt to increase the range of validity of econometric results should be applauded. Any assertion that all econometrics is not to be trusted is overly broad. Some econometrics may still include some degree of “con,” but with correct application of modern techniques a substantial amount of it can be driven out.
