• Chart of the day: Mortality and readmissions – ctd.

    Maybe I’m not crazy. A mortality-readmission trade-off has also been illustrated in a NEJM letter to the editor (lead author: Gorodeski).

    [Figure: risk-adjusted 30-day mortality vs. risk-adjusted readmission after hospitalization for heart failure]

    We examined the association between risk-adjusted readmission and risk-adjusted death within 30 days after hospitalization for heart failure among 3857 hospitals included in the CMS Hospital Compare public reporting database (www.hospitalcompare.hhs.gov) that had no missing data. We used linear regression analysis with restricted cubic splines (piecewise smoothing polynomials). […]

    Our findings suggest that readmissions could be “adversely” affected by a competing risk of death — a patient who dies during the index episode of care can never be readmitted. Hence, if a hospital has a lower mortality rate, then a greater proportion of its discharged patients are eligible for readmission. As such, to some extent, a higher readmission rate may be a consequence of successful care. Furthermore, planned readmissions for procedures or surgery may represent appropriate care that decreases the risk of death.
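    The competing-risk mechanism is easy to see in a toy simulation (all numbers are illustrative assumptions, not from the letter): two hypothetical hospitals treat identical patient mixes, but one saves more of its sickest patients, and those rescued patients are exactly the ones most likely to bounce back within 30 days.

```python
import random

def simulate(n, save_prob, seed=0):
    """Toy model: each patient has a severity in [0, 1]. The sickest
    patients die in-hospital unless 'saved'; saved patients survive
    to discharge but are at high risk of 30-day readmission.
    All thresholds and rates below are illustrative assumptions."""
    rng = random.Random(seed)
    deaths = readmits = discharges = 0
    for _ in range(n):
        severity = rng.random()
        if severity > 0.85:               # very sick: dies unless saved
            if rng.random() < save_prob:
                discharges += 1
                if rng.random() < 0.60:   # saved patients readmit often
                    readmits += 1
            else:
                deaths += 1
        else:
            discharges += 1
            if rng.random() < 0.20:       # baseline readmission risk
                readmits += 1
    return deaths / n, readmits / discharges

# Hospital A saves 20% of its sickest patients; Hospital B saves 80%.
mort_a, readmit_a = simulate(100_000, save_prob=0.2)
mort_b, readmit_b = simulate(100_000, save_prob=0.8)
print(f"A: mortality {mort_a:.1%}, readmission {readmit_a:.1%}")
print(f"B: mortality {mort_b:.1%}, readmission {readmit_b:.1%}")
```

    In this sketch the "better" hospital posts the higher readmission rate purely because more of its high-risk patients survive long enough to be at risk of readmission.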

    An exceedingly blunt way to put it is: if we encourage hospitals to reduce readmission rates, are we encouraging them to kill people?

    UPDATE: I had suggested that this letter to the editor was peer-reviewed. That may not be the case, though I do not know NEJM’s policy on this.

    UPDATE 2: I’ve gotten some push-back on this by email, so let me explain a bit more of my thinking. I’m not saying that hospitals will kill people in order to reduce rehospitalization rates. What I’m saying is that a hospital or health system might do some things that improve mortality but make readmission rates go up. Some interventions might actually find people who NEED to be readmitted; without readmission, they would die.

    It is also true that a patient who dies within 30 days of the index admission cannot be readmitted within that window. That’s a bias in some of the research that seems to be rarely acknowledged.

    Finally, the real fix here is to examine potentially preventable readmissions (PPRs). I haven’t read deeply yet about those, but the concept and measurement of them exist. A dead person can’t have a potentially preventable readmission, so he’d be excluded from both the numerator and denominator. That makes sense to me. Almost nobody is using PPRs as the dependent variable, though, so I still claim a lot of work is biased in a way we don’t want.

    For all that, as much as I’ve read, I’ve still only scratched the surface of the readmissions literature. Maybe I’m off base on some of my thinking. That’s where you come in. If you’ve got some expertise in this area, keep me honest, please!

    @afrakt

     
    • I wondered about this.
      The work-around would be to somehow track not just readmissions but also deaths, no?

    • Hi Austin,

      I’ll withhold comments on your ‘craziness’ at least until we meet.

      The relationship illustrated in the figure is likely to be confounded by the effect of supply variables such as the number of beds, the proportion of specialists, and measures of care intensity.

      Start here, a classic take on the relationship between hospital utilization and mortality:

      http://www.nejm.org/doi/pdf/10.1056/NEJM198910263211706

      Thom

    • It’s hard, isn’t it? Reduce spending, improve outcomes, measure and reward just the right things, none of the wrong things, build a system of incentives and penalties one piecemeal part at a time. Ah, if we would all just do the right thing.

    Statistically, why restrict the cubic spline? With the size of your sample, seeing the full range would better illustrate the relationship (rescale x); there is much error in the tails with the restricted transformation. Also, after transformation the marginal effect would need to be depicted by plotting d(mortality)/d(readmission) on the y axis. In addition, remember both variables are risk-adjusted, as imperfect patient-level controls for potential confounders.

      Substantively, only unplanned readmissions are in the analyses. Also, your observation of the inverse relationship is correct, but your causal inference is wrong. Specifically, you’ve conditioned on a collider (selection bias) by conditioning on a common effect of the exposure and the outcome. The actual effect of the mortality rate on the readmission rate over a fixed time period is likely negligible. Finally, this is not to say that ‘preventable’ readmissions are unrelated to hospital care, but that better care should reduce readmissions, controlling for other risk factors.
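      The collider point can be illustrated with a small simulation (hypothetical variables; a sketch of the general phenomenon, not of the CMS data): care quality and patient robustness are generated independently, but both raise the chance of surviving to discharge. Restricting attention to survivors, the only patients who can be readmitted, induces a spurious negative association between the two.

```python
import random

rng = random.Random(42)
n = 200_000
quality, robust, survived = [], [], []
for _ in range(n):
    q = rng.gauss(0, 1)                  # hospital care quality (hypothetical)
    r = rng.gauss(0, 1)                  # patient robustness, independent of q
    s = (q + r + rng.gauss(0, 1)) > 0    # survival: a common effect of both
    quality.append(q); robust.append(r); survived.append(s)

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / m
    vx = sum((x - mx) ** 2 for x in xs) / m
    vy = sum((y - my) ** 2 for y in ys) / m
    return cov / (vx * vy) ** 0.5

full = corr(quality, robust)
sel = corr([q for q, s in zip(quality, survived) if s],
           [r for r, s in zip(robust, survived) if s])
print(f"correlation overall:         {full:+.3f}")   # near zero by construction
print(f"correlation among survivors: {sel:+.3f}")    # clearly negative
```

      Among survivors, high-quality hospitals are left holding frailer patients, so quality can appear to "cause" readmission even when the two are unrelated in the full population.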

        • Hi Austin,

          I was making some armchair observations on the calculation and illustration of the analysis. I, admittedly, have received the special issue on hospital readmissions from JAMA, but have not taken a close look at the primary studies; there are likely statistical and methodological strengths and challenges across them.

          I read Suissa (2008) on immortal time bias, and it is an interesting design and measurement flaw that is probably quite prevalent. I thought it interesting that they referred to it as, in a sense, an inherent selection bias. I wanted to call your attention to a similar selection-bias phenomenon (stratification bias, or conditioning on a collider):

          http://ije.oxfordjournals.org/content/39/2/417.full

          Two brief notes from Suissa (2008):
          On the surface, I understand that those deceased after discharge could not be readmitted (the outcome), but:

          1) As an event-based cohort study, the follow-up period for everyone starts at time 0 (individual-level data), and there is no ‘qualifying’ period for exposed versus unexposed. So there really is no comparison across groups being made, and there is also a fixed time period of observation for all (except the deceased). Another option may be to collect risk-adjusted non-hospital mortality data from public records and apply Suissa’s (2008) correction formula.

          2) Please note that the magnitude of the bias depends on the ratio of ‘unexposed’ to ‘exposed’ (k) in Suissa (2008), tending toward 0. Considering that the odds ratio diverges from the RR depending on (a) the magnitude of the OR and (b) the base rate of the event, I’m not sure I’d wager that the mortality rate outside hospitals is greater than that within. In other words, if the readmission data are not corrected for those deceased after discharge, they would also need to be counter-corrected for the mortality rate on admission (RR).
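          On point (b), the gap between the OR and the RR at a given base rate can be made concrete with the standard conversion formula (Zhang and Yu, JAMA 1998); the rates below are illustrative, not from any of the studies discussed:

```python
def or_to_rr(odds_ratio, p0):
    """Approximate the risk ratio implied by an odds ratio,
    given the baseline event rate p0 (Zhang & Yu, JAMA 1998)."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# The OR tracks the RR only when the outcome is rare; with a common
# outcome such as 30-day readmission, the OR overstates the RR.
for p0 in (0.01, 0.10, 0.25):
    print(f"base rate {p0:.2f}: OR=2.0 -> RR={or_to_rr(2.0, p0):.2f}")
```

          With a rare outcome the two are nearly interchangeable; at a base rate in the range of typical 30-day readmission rates (around 0.25), an OR of 2.0 corresponds to an RR of only about 1.6.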

          Just some thoughts.

          Thanks for pointing me to this provocative phenomenon – probably needs much more attention in continuing observational studies.

          Best,
          Matt