Center for Studying Health System Change: (1) publications on insurance coverage and costs, (2) publications on health care markets, (3) all publications. That should be enough to keep you busy for a while.
Despite widely documented variations in health care outcomes by insurance status, few nationally representative studies have examined such disparities in the inpatient setting. [Our objective is to] determine whether there are insurance-related differences in hospital care for 3 common medical conditions. … For each diagnosis, we compared in-hospital mortality, length of stay (LOS), and cost per hospitalization for Medicaid and uninsured patients with the privately insured. Compared with the privately insured, in-hospital mortality among AMI and stroke patients was significantly higher for the uninsured (adjusted odds ratio [OR] 1.52, 95% confidence interval [CI] [1.24-1.85] for AMI and 1.49 [1.29-1.72] for stroke) and among pneumonia patients was significantly higher for Medicaid recipients (1.21 [1.01-1.45]). Excluding patients who died during hospitalization, LOS was consistently longer for Medicaid recipients for all 3 conditions (adjusted ratio 1.07, 95% CI [1.05-1.09] for AMI, 1.17 [1.14-1.20] for stroke, and 1.04 [1.03-1.06] for pneumonia), although costs were significantly higher for Medicaid recipients for only 2 of the 3 conditions (adjusted ratio 1.06, 95% CI [1.04-1.09] for stroke and 1.05 [1.04-1.07] for pneumonia). … [Among] Americans hospitalized for 3 common medical conditions, significantly lower in-hospital mortality was noted for privately insured patients compared with the uninsured or Medicaid recipients. Interventions to reduce insurance-related gaps in inpatient quality of care should be investigated.
This paper provides an introduction and “user guide” to Regression Discontinuity (RD) designs for empirical researchers. It presents the basic theory behind the research design, details when RD is likely to be valid or invalid given economic incentives, explains why it is considered a “quasi-experimental” design, and summarizes different ways (with their advantages and disadvantages) of estimating RD designs and the limitations of interpreting these estimates. Concepts are discussed using examples drawn from the growing body of empirical research using RD.
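The core mechanic of a sharp RD design can be illustrated with a minimal simulation. This is my own sketch, not code from the paper: it assumes a hypothetical running variable with a cutoff at zero, and estimates the discontinuity by fitting local linear regressions on each side of the cutoff within a bandwidth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sharp RD: treatment is assigned when the running variable
# x crosses the cutoff (0), with a true treatment effect of 2.0 there.
n = 5000
x = rng.uniform(-1, 1, n)
treated = (x >= 0).astype(float)
y = 1.0 + 0.5 * x + 2.0 * treated + rng.normal(0, 0.3, n)

def rd_estimate(x, y, cutoff=0.0, bandwidth=0.25):
    """Local linear RD: fit a line on each side of the cutoff within
    the bandwidth and take the difference of intercepts at the cutoff."""
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    fit = lambda mask: np.polyfit(x[mask] - cutoff, y[mask], 1)
    slope_r, intercept_r = fit(right)
    slope_l, intercept_l = fit(left)
    return intercept_r - intercept_l

print(rd_estimate(x, y))  # close to the true jump of 2.0
```

The bandwidth choice here is arbitrary; as the paper discusses, bandwidth selection and functional form are exactly the estimation choices whose advantages and disadvantages practitioners must weigh.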
This paper compares the structural approach to economic policy analysis with the program evaluation approach. It offers a third way to do policy analysis that combines the best features of both approaches. I illustrate the value of this alternative approach by making the implicit economics of LATE explicit, thereby extending the interpretability and range of policy questions that LATE can answer.
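The LATE logic Heckman discusses can be made concrete with a small simulation (my own illustration, not the paper's): a randomized binary instrument with imperfect compliance, where the Wald/IV estimator recovers the effect for compliers rather than the population average.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical setup: Z is a randomized offer; compliers take treatment
# iff offered, always-takers take it regardless. The treatment effect is
# 3.0 for compliers and 1.0 for everyone else.
z = rng.integers(0, 2, n)
complier = rng.random(n) < 0.5
always_taker = (~complier) & (rng.random(n) < 0.4)
d = np.where(complier, z, always_taker).astype(float)
effect = np.where(complier, 3.0, 1.0)
y = 0.5 + effect * d + rng.normal(0, 1, n)

# Wald / IV estimator: reduced-form difference over first-stage difference.
late = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(late)  # near 3.0: the compliers' effect, not the population mean effect
```

This is the sense in which LATE answers a narrower policy question than a structural model: it identifies the effect only for the subpopulation whose treatment status the instrument moves.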
Two recent papers, Deaton (2009) and Heckman and Urzua (2009), argue against what they see as an excessive and inappropriate use of experimental and quasi-experimental methods in empirical work in economics in the last decade. They specifically question the increased use of instrumental variables and natural experiments in labor economics and of randomized experiments in development economics. In these comments, I will make the case that this move toward shoring up the internal validity of estimates, and toward clarifying the description of the population these estimates are relevant for, has been important and beneficial in increasing the credibility of empirical work in economics. I also address some other concerns raised by the Deaton and Heckman-Urzua papers.
There is currently much debate about the effectiveness of foreign aid and about what kind of projects can engender economic development. There is skepticism about the ability of econometric analysis to resolve these issues, or of development agencies to learn from their own experience. In response, there is increasing use in development economics of randomized controlled trials (RCTs) to accumulate credible knowledge of what works, without overreliance on questionable theory or statistical methods. When RCTs are not possible, the proponents of these methods advocate quasi-randomization through instrumental variable (IV) techniques or natural experiments. I argue that many of these applications are unlikely to recover quantities that are useful for policy or understanding: two key issues are the misunderstanding of exogeneity and the handling of heterogeneity. I illustrate from the literature on aid and growth. Actual randomization faces the same problems as quasi-randomization, notwithstanding rhetoric to the contrary. I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority. I illustrate using prominent experiments in development and elsewhere. As with IV methods, RCT-based evaluation of projects, without guidance from an understanding of underlying mechanisms, is unlikely to lead to scientific progress in the understanding of economic development. I welcome recent trends in development experimentation away from the evaluation of projects and toward the evaluation of theoretical mechanisms.
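Deaton's heterogeneity point can be illustrated with a toy simulation (my own sketch, not from the paper): two hypothetical worlds with the same average treatment effect but radically different individual-level effects produce indistinguishable RCT estimates, so the difference in means alone cannot adjudicate between them.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
t = rng.integers(0, 2, n)  # randomized treatment assignment

# World A: everyone gains 1.0 from treatment.
effect_a = np.ones(n)
# World B: half gain 3.0, half lose 1.0 (mean effect still 1.0).
effect_b = np.where(rng.random(n) < 0.5, 3.0, -1.0)

ate = lambda y, t: y[t == 1].mean() - y[t == 0].mean()
y_a = rng.normal(0, 1, n) + effect_a * t
y_b = rng.normal(0, 1, n) + effect_b * t

print(ate(y_a, t), ate(y_b, t))  # both near 1.0
```

A policymaker who cares about who is harmed by the program needs more than the experimental mean; that is the kind of question Deaton argues requires an understanding of underlying mechanisms.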