The recent NBER paper by Janet Currie, W. Bentley MacLeod, and Jessica Van Parys is extremely well written. Few papers are as easy on the reader, and more should be.
I’m going to defer discussion of findings and just make note of some of the referenced claims in the background section, but here’s the abstract if you really must know:
> When a patient arrives at the Emergency Room with acute myocardial infarction (AMI), doctors must quickly decide whether the patient should be treated with clot-busting drugs, or with invasive surgery. Using Florida data on all such patients from 1992-2011, we decompose physician practice style into two components: The physician’s probability of conducting invasive surgery on the average patient, and the responsiveness of the physician’s choice of procedure to the patient’s condition. We show that practice style is persistent over time and that physicians whose responsiveness deviates significantly from the norm in teaching hospitals have significantly worse patient outcomes, including a 7% higher probability of death in hospitals among the patients who are least appropriate for the procedure. Our results suggest that a reallocation of invasive procedures from less appropriate to more appropriate patients could improve patient outcomes without increasing costs. Developing protocols to identify more and less appropriate patients could be a first step towards realizing this improvement.
As you can tell from the abstract, the paper is about the extent to which outcomes would improve if physician practice variation were constrained by protocols sensitive to patient characteristics, a sensitivity some physicians lack. Hence, the following claims from the background section, linked to references, are highly relevant:
- In a 1954 publication, Meehl compared clinicians’ predictions with those of simple statistical models and “argued that predictions based on these simple models were generally more accurate than those of [clinical psychologists]. A more recent meta-analysis of 136 studies in clinical psychology and medicine also found that algorithms tended to either outperform or to match the experts.”
- “The advantage of the algorithms arises mainly because the algorithms are more consistent than the experts.”
- There are many possible explanations for mistakes by experts in medicine, several of which the paper discounts: (1) defensive medicine, but “Baicker et al. (2007) argue that there is little connection between malpractice liability costs and physician treatment of Medicare patients”; (2) financial incentives, but “a recent national survey of general surgeons which used hypothetical clinical scenarios suggested that the decision to operate was largely independent of malpractice concerns and financial incentives”; (3) patient preferences, but Cutler et al. “conclude that patient demand is a relatively unimportant determinant of regional variations and that instead the main driver is physician beliefs about appropriate treatment that are often unsupported by clinical evidence.” (Bill wrote about this study here.) Work by Finkelstein et al. is consistent with this conclusion.
- That leaves (4) influence by peers: indeed, “knowledge spillovers are the main theoretical driver of small area variation in procedure use in” the model by Chandra and Staiger; and (5) “Doyle et al. (2010) suggest that some doctors may just be more competent than others.”
- The paper by Currie et al. builds on the work cited in (4) and (5). “Our main focus is on identifying doctors who, for whatever reason, are making poor use of the observable data about their patients when making treatment decisions. We will show that patients of these doctors tend to have worse outcomes than other comparable patients.”
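To make the abstract’s two-component decomposition a bit more concrete, here is a minimal sketch of how one might estimate something like it. This is my own illustration, not the authors’ estimation strategy: the standardized “appropriateness” score, the simulated data, and the per-physician logistic regressions are all assumptions for the sake of the example.

```python
# A minimal sketch (not the authors' method): decompose each physician's
# practice style into (a) a propensity to operate on the average patient
# and (b) responsiveness of that choice to the patient's condition.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated stand-in for discharge data: a hypothetical standardized
# "appropriateness" score for invasive surgery, plus the treatment choice.
n_patients, n_physicians = 5000, 50
df = pd.DataFrame({
    "physician_id": rng.integers(0, n_physicians, n_patients),
    "appropriateness": rng.normal(0.0, 1.0, n_patients),
})

# Each simulated physician has a true style: some respond strongly to the
# appropriateness score when deciding to operate, others barely at all.
true_slope = rng.uniform(0.0, 2.0, n_physicians)
logit = -0.5 + true_slope[df["physician_id"]] * df["appropriateness"]
df["surgery"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Recover each physician's style with a per-physician logistic regression:
# the intercept is the log-odds of operating on an average (score = 0)
# patient; the slope is how strongly the choice tracks patient condition.
styles = []
for pid, grp in df.groupby("physician_id"):
    fit = LogisticRegression().fit(grp[["appropriateness"]], grp["surgery"])
    styles.append({
        "physician_id": pid,
        "propensity": fit.intercept_[0],
        "responsiveness": fit.coef_[0, 0],
    })
styles = pd.DataFrame(styles)

# Physicians with responsiveness far below the norm are the analogue of
# those the paper flags as making poor use of observable patient data.
print(styles.sort_values("responsiveness").head())
```

In this toy version, the intercept plays the role of the physician’s propensity to operate on an average patient, and the slope plays the role of responsiveness; physicians whose slopes sit well below the norm are the analogue of those the paper identifies as having worse patient outcomes.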