If hospital readmission rates are used to measure and rank hospitals by quality, what are we to make of this?
Objective. To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid Services 30-day readmission (CMS).
Data Sources. 2000–2009 California Office of Statewide Health Planning and Development Patient Discharge Data Nonpublic file.
Study Design. We calculated 30-day readmission rates using the three metrics for three disease groups: heart failure (HF), acute myocardial infarction (AMI), and pneumonia. Using each metric, we calculated the absolute change in and correlation between hospital performance; the percent of hospitals remaining in extreme deciles and the level of agreement; and differences in longitudinal performance.
Principal Findings. Average hospital rates for HF patients and the CMS metric were generally higher than for other conditions and metrics. Correlations between the ACR and CMS metrics were highest (r = 0.67–0.84). Rates calculated using the PPR and either ACR or CMS metrics were moderately correlated (r = 0.50–0.67). Between 47 and 75 percent of hospitals in an extreme decile according to one metric remained when using a different metric. Correlations among metrics were modest when measuring hospital longitudinal change.
Conclusions. Different approaches to computing readmissions can produce different hospital rankings and impact pay-for-performance. Careful consideration should be given to the choice of readmission metric for these applications.
That’s the abstract of the new Health Services Research study by Sheryl Davies and colleagues. The paper includes a good literature review and thoughtful discussion in its introductory and concluding sections, covering some of the same work and themes found in prior TIE posts on hospital readmissions.
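For readers who want a concrete sense of the comparisons described in the study design, that is, correlating hospital rates across metrics and checking how many hospitals stay in an extreme decile, here is a minimal sketch in Python. It uses synthetic data and made-up metric labels; it is not the authors' analysis, just an illustration of the general approach under assumed column names.

```python
# Toy illustration (not the study's code): compare per-hospital readmission
# rates computed under two hypothetical metrics, via (1) correlation between
# rates and (2) agreement on membership in the worst decile.
# The data and the "acr"/"cms" labels are assumptions for illustration only.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_hospitals = 300

# Synthetic hospital-level rates: a shared underlying signal plus
# metric-specific noise.
base = rng.normal(0.20, 0.04, n_hospitals)
df = pd.DataFrame({
    "acr": base + rng.normal(0, 0.02, n_hospitals),
    "cms": base + rng.normal(0, 0.02, n_hospitals),
})

# 1. Correlation between hospital performance under the two metrics.
r, _ = pearsonr(df["acr"], df["cms"])

# 2. Extreme-decile agreement: of hospitals in the worst decile by one
#    metric, what share are also in the worst decile by the other?
df["acr_decile"] = pd.qcut(df["acr"], 10, labels=False)
df["cms_decile"] = pd.qcut(df["cms"], 10, labels=False)
worst_by_acr = df["acr_decile"] == 9
stay = (df.loc[worst_by_acr, "cms_decile"] == 9).mean()

print(f"correlation between metrics: r = {r:.2f}")
print(f"worst-decile hospitals (ACR) also worst-decile (CMS): {stay:.0%}")
```

The point of the sketch is simply that two metrics can be fairly well correlated overall and still disagree about which specific hospitals land in the tails, which is exactly where pay-for-performance penalties bite.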