Predicting hospital readmission rates from past rates

The current issue of Medical Care has several articles on hospital readmissions. One of them is by Jason Hockenberry and colleagues, who conclude,

Previous hospital readmission rates are poor predictors of readmission for future individual patients, therefore, policies using these measures to guide subsequent reimbursement are problematic for hospitals that are financially constrained. Our findings indicate current diagnosis related group payments would need to be raised by 10.0% for AMI, 11.5% for CAP, and 16.6% for CHF if these are to become 30-day bundled payments.

In an editorial, my colleague Steve Pizer questions a key methodological choice.

Unfortunately, Hockenberry and colleagues make a fundamental modeling error that renders their key finding invalid. […]

Hockenberry and colleagues estimate a linear probability model that relates individual probability of readmission to the hospital’s readmission rate in the previous quarter, the length of stay in the index hospitalization, a set of risk-adjustment variables, and a hospital fixed effect. […] The problem here is that the average hospital readmission rate over the 5-year dataset is absorbed by the hospital fixed effect. What is left in the previous quarter’s readmission rate is simply that quarter’s deviation from the 5-year mean. There is no reason to believe that last quarter’s deviation from the hospital’s long-term readmission rate should predict readmission risk today. Hockenberry and colleagues, perhaps inadvertently, test an irrelevant hypothesis.
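
To see the point concretely, here is the within transformation spelled out. The notation is mine, not the paper’s, and it is only a sketch of the setup Pizer describes. With a hospital fixed effect in the model, the lagged readmission rate splits into the hospital’s long-run mean, which the fixed effect absorbs, and the quarterly deviation from that mean, which is all that remains to identify the coefficient:

```latex
% Illustrative notation (mine, not the paper's):
%   y_{iht}   = readmission indicator for patient i at hospital h in quarter t
%   r_{h,t-1} = hospital h's readmission rate in the previous quarter
%   x_{iht}   = risk-adjustment covariates;  \alpha_h = hospital fixed effect
\begin{align*}
y_{iht} &= \beta\, r_{h,t-1} + \gamma' x_{iht} + \alpha_h + \varepsilon_{iht}\\[4pt]
r_{h,t-1} &= \bar{r}_h + \bigl(r_{h,t-1} - \bar{r}_h\bigr)
  \quad\text{(long-run mean plus quarterly deviation)}\\[4pt]
y_{iht} &= \beta\bigl(r_{h,t-1} - \bar{r}_h\bigr) + \gamma' x_{iht}
  + \underbrace{\bigl(\alpha_h + \beta\,\bar{r}_h\bigr)}_{\text{absorbed by the fixed effect}}
  + \varepsilon_{iht}
\end{align*}
```

Only the deviation term varies within a hospital, which is why Pizer reads the estimated coefficient as answering a question about quarter-to-quarter fluctuations rather than about persistently high- or low-readmission hospitals.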

Hockenberry et al. respond,

Dr Pizer is correct that fixed-effect modeling relies on within-hospital rather than between-hospital variation to estimate the impact of readmission in our model. This would be a matter of concern, as one of our original peer reviewers pointed out, if there was not much variation in quarterly readmission rates within a hospital. However, as noted in our original work, there is substantial variation in the quarterly readmission rates within hospitals, as the within-hospital standard deviation is of nearly the same magnitude as between-hospital variation in readmission for each condition.
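
Their within-versus-between comparison is straightforward to replicate in principle. Here is a minimal sketch in Python/pandas with hypothetical column and file names (hospital_id, quarter, readmit_rate), not the authors’ data or code:

```python
import pandas as pd

# One row per hospital-quarter; column and file names are assumptions for illustration.
df = pd.read_csv("quarterly_readmission_rates.csv")  # hypothetical file

# Between-hospital SD: how much hospitals' long-run mean rates differ from one another.
hospital_means = df.groupby("hospital_id")["readmit_rate"].mean()
between_sd = hospital_means.std()

# Within-hospital SD: how much each hospital's quarterly rate wanders around its own mean.
deviation = df["readmit_rate"] - df.groupby("hospital_id")["readmit_rate"].transform("mean")
within_sd = deviation.std()

print(f"Between-hospital SD: {between_sd:.3f}")
print(f"Within-hospital SD:  {within_sd:.3f}")
```

Comparable magnitudes for those two numbers are what Hockenberry et al. report; whether that within-hospital movement is signal or small-denominator noise is exactly what the next editorial takes up.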

Hockenberry et al. and Pizer go on to disagree over whether a random effects specification, rather than a fixed effects one, would have been more appropriate.

In another editorial, Claude Setodji and Michael Shwartz explain the differences between the two specification types and the implications.

If one believes that the causes of hospital readmission rate effects are closer to unmeasured time-invariant factors that can lead to serious bias, one should rely on fixed-effect models to take such confounders into account. This modeling strategy will come at a price: only within-hospital changes are studied. If within-cluster variation in predictors is largely random noise, there is no reason to expect the change in predictors to be associated with the outcome. Hockenberry and colleagues argue that within-hospital variation is meaningful, that is, nonrandom and systematic, because it is large and similar in magnitude to between-hospital variation. However, with small denominators, one might expect large random variation within hospitals. In contrast, if one collects enough covariates […] that can be used as proxies for unmeasured confounders, random-effect models can result in more efficient estimators and avoid the necessity of relying on inference based on within-hospital changes. In this case, between-hospital inference can be made.
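
For readers who think in code, here is what the two specifications look like side by side, both written as linear probability models in the spirit of the original paper. This is a minimal sketch using statsmodels with made-up variable and file names (readmit_30d, lag_hosp_rate, age as a stand-in risk adjuster, hospital), not the authors’ model or data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Patient-level rows with hypothetical columns:
#   readmit_30d (0/1), lag_hosp_rate (hospital's prior-quarter readmission rate),
#   age (stand-in for the risk-adjustment set), hospital (identifier).
df = pd.read_csv("patient_level_readmissions.csv")  # hypothetical file

# Fixed effects: hospital dummies absorb every time-invariant hospital factor,
# so the coefficient on lag_hosp_rate is identified only by within-hospital variation.
fe = smf.ols(
    "readmit_30d ~ lag_hosp_rate + age + C(hospital)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["hospital"]})

# Random effects (random hospital intercepts): uses both between- and within-hospital
# variation and is more efficient, but it is biased if the hospital intercepts are
# correlated with lag_hosp_rate.
re = smf.mixedlm(
    "readmit_30d ~ lag_hosp_rate + age", data=df, groups=df["hospital"]
).fit()

print("Fixed effects coefficient: ", fe.params["lag_hosp_rate"])
print("Random effects coefficient:", re.params["lag_hosp_rate"])
```

If the two coefficients diverge sharply, that is itself evidence that the hospital effects are correlated with the lagged rate, which is the standard Hausman-style argument for preferring fixed effects despite the efficiency cost.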

Who is right? Given my relationship with Steve, I’m as biased as a random effects specification for which time-invariant omitted variables are correlated with the predictor of interest. So, judge for yourself.

@afrakt
