• Identifying “good” and “bad” hospitals

    A new paper in JAMA Internal Medicine by McCrum, Joynt, Orav, Gawande, and Jha delivers good news about hospital quality ratings. Of course, it’s wonky good news, so come with me into the weeds and I’ll explain.

    In an accompanying commentary, Smith and Shannon begin with Los Angeles County’s experience with publicly reported restaurant quality ratings. Hospitalizations for food-related illnesses fell 20% after LA restaurants were required to display a quality-based letter grade in their front windows. Could the same happen for hospitals?

    There are many reasons to be skeptical. One of them is that health care is far more complex and varied than the delivery of a restaurant meal. (For one, the nature of the outcome depends on far more consumer (patient) factors.) As a consequence, even if consumers were motivated to use quality information, they might reasonably worry that the few measures available aren’t applicable to their condition. As Smith and Shannon write, “Does it matter how a hospital does on cardiac surgery when you are going in for hip replacement surgery?”

    The study by McCrum et al. suggests that it does matter. Using Medicare data on discharges from 2,322 hospitals during 2008-2009, they examined the extent to which hospital performance on the three publicly reported, 30-day mortality measures — for heart attack, heart failure, and pneumonia discharges — predicts mortality for other medically and surgically treated conditions. This gets at the question of whether top performing hospitals are optimizing over what’s being measured and observed (akin to teaching to the test) or whether high quality on a few mortality measures is indicative of high quality more generally.

    Or, put more simply, can “good” and “bad” hospitals be identified with a few mortality measures? The following two figures from the paper illustrate that they can. In Figure 1, hospitals are grouped by quartile of a statistic that blends the three publicly reported mortality measures into one. Separately for all conditions, medically treated conditions, and surgically treated conditions, the figure shows mortality by these quartiles. High performing hospitals — those in the lowest quartile of mortality based on reported measures — also have low mortality rates overall and for medically and surgically treated conditions. Vice versa for low performing hospitals. (Controls for patient comorbidities are included in the analysis.)

    [Figure 1: mortality rates, overall and for medical and surgical conditions, by quartile of performance on the publicly reported measures]
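
    To make the quartile construction concrete, here is a minimal sketch of how one might blend three hospital-level mortality rates into a single composite and group hospitals by quartile. The column names, the blending rule (a simple unweighted mean), and the numbers are illustrative assumptions, not the paper’s actual method or data.

        # A minimal sketch (assumed, not the authors' code) of blending three
        # hospital-level mortality measures into one composite and grouping
        # hospitals by quartile of that composite.
        import pandas as pd

        # Hypothetical risk-adjusted 30-day mortality rates for the three
        # publicly reported conditions, one row per hospital.
        df = pd.DataFrame({
            "hospital_id": range(1, 9),
            "ami_mort": [0.14, 0.16, 0.15, 0.18, 0.13, 0.17, 0.15, 0.19],
            "chf_mort": [0.10, 0.12, 0.11, 0.13, 0.09, 0.12, 0.11, 0.14],
            "pna_mort": [0.11, 0.12, 0.12, 0.14, 0.10, 0.13, 0.11, 0.15],
        })

        # Blend the three rates into one composite: a simple unweighted mean
        # here; the paper's exact blending may differ.
        df["composite"] = df[["ami_mort", "chf_mort", "pna_mort"]].mean(axis=1)

        # Quartile 1 = lowest composite mortality = "top performers."
        df["quartile"] = pd.qcut(df["composite"], 4, labels=[1, 2, 3, 4])

        print(df.groupby("quartile", observed=True)["composite"].mean())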

    Figure 2 shows that top performing hospitals (those in the lowest quartile of mortality based on the publicly reported measures) have over five times better odds of being top performers on overall mortality and over eight times better odds of being top performers on medical mortality. The results aren’t as dramatic for surgical mortality, for which a publicly reported top performing hospital has 2.7 times better odds of being a top performer. Of note, the signal from the publicly reported mortality rates is stronger than that from large or teaching hospital status, which consumers might otherwise take as indicators of quality.

    [Figure 2: odds of being a top performer on overall, medical, and surgical mortality, by publicly reported performance and by hospital size and teaching status]
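
    For readers less used to odds ratios, here is a small sketch of how a figure like the reported 5-fold odds is computed from a two-by-two table of hospital counts. The counts below are invented for illustration; they are not from the paper.

        # Odds ratio from a 2x2 table of hospital counts. All counts below
        # are illustrative assumptions, not figures from McCrum et al.

        def odds_ratio(a, b, c, d):
            """Odds ratio for the table:
                                     top overall   not top overall
            top on reported:              a               b
            not top on reported:          c               d
            """
            return (a / b) / (c / d)

        # Suppose 300 of ~580 top-quartile (reported) hospitals were also top
        # quartile overall, vs. 280 of the ~1,742 remaining hospitals.
        print(round(odds_ratio(300, 280, 280, 1462), 1))  # ~5.6

    The paper itself summarizes the findings: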

    A hospital’s 30-day mortality rates for Medicare’s 3 publicly reported conditions—acute myocardial infarction, congestive heart failure, and pneumonia—were correlated with overall hospital mortality rates, even in clinically dissimilar fields. Hospitals in the top quartile of performance on the publicly reported mortality rates had greater than 5-fold higher odds of being top performers for a combined metric across 19 common medical and surgical conditions, translating into absolute overall mortality rates that were 3.6% lower for the top performers than for the poor performers. Finally, performance on the publicly reported conditions far outperformed 2 other widely used markers of quality: size and teaching status. [...]

    Our results suggest that there may be substantial value in efforts to engage and empower patients to use publicly reported hospital performance to make informed choices regarding where to seek care, irrespective of the condition that brings them to the hospital.

    Of course it is hard to engage and empower patients to pay attention to clinical quality. However, to the extent we can, the results of this study are good news. They suggest there is a genuinely valuable signal in the reported mortality measures, one that discriminates hospital quality more broadly than just the dimensions those measures specifically capture.

    @afrakt

    • It seems to me that league tables would need an absolute measure of acceptable mortality or complications. Otherwise, like schools, you might end up with the highly organized and motivated patients picking the best hospitals, while those who are less organized or motivated (and, therefore, less likely to comply with any medication or physical therapy instructions) end up at the lower quality hospitals, making it very difficult for those hospitals to ever catch up.

    • My understanding is that USNews ranks hospitals based on where doctors are most likely to refer their most serious patients. I’m a fan of this method because it happens to rank my department as best in the country, though I suspect that it heavily biases the results in favor of large hospitals, not just high-quality hospitals. http://health.usnews.com/best-hospitals/pediatric-rankings

    • Interesting data, but as you say, how do you empower patients to use this data? The conditions you say are used to develop these metrics (MI, CHF, pneumonia) are mostly acute conditions – particularly MI, where the patient is more apt to end up in the closest hospital, wherever the ambulance takes them, not necessarily the one they would choose. In mostly rural areas, I would suspect there is no choice.

      Even for elective procedures like hip replacement, my experience is that patients tend to pick the doctor/surgeon, and go to whatever hospital he/she has admitting rights to. Again not a direct choice by the patient.

      • I agree that patients are unlikely to choose a provider based on their performance on some of these quality metrics, especially for acute conditions like MI. However, patients still benefit from these measures because hospitals do not like bad press. Most hospitals do not want to be categorized as poor performers, and it is amazing to see how much more effort and money is being put into hospitals’ quality of care departments in reaction to the increased use of these measures. We are seeing readmission and mortality rates drop as the use of these measures increases, and while this may simply be correlation, I do think there is a causal link.

      • The choice of doctor is often driven by network/coverage factors. It may be just because I’ve spent my entire adult life in an HMO, but where to go to do whatever needs to be done – deliver a baby, have a biopsy, make an ER visit – has always been dictated by where my coverage would work. Can’t wait to get to Medicare age so I can experience all this choice everybody keeps talking about!

    • “Of course it is hard to engage and empower patients to pay attention to clinical quality.”

      I found this sentence of your article to be a bit condescending to patients/healthcare consumers. If the real details about quality and safety are visible to patients, i.e., visible report cards for each hospital and/or provider in the media, in hospital lobbies, in advertisements, on TV PSAs, etc., they certainly will look at them and make choices based on appropriate information. The problem is that there is very little accurate and detailed data. The data is generally manipulated and often countered by glowing advertising and media blitzes. When we give patients opportunities to publicly report and talk about their healthcare experiences, both good and bad, we will have more accurate data. I certainly would have appreciated more candor on infection rates when I was choosing which hospital and doctor to use for uterine cancer surgery.

    • Your helpful statistical analysis relating relatively easily accessible mortality data in hospitals to hospital performance in general is a welcome contribution to the hospital ratings discussion – and I thank you for it. However, your closing comment is mystifying and insulting to civilians like me who want desperately to know the true safety of our local hospitals.

      “Of course it is hard to engage and empower patients to pay attention to clinical quality.”

      Your comment fails to appreciate the reality of barriers, some of which are both real and intentional, to keep the general public as far away as possible from information – like yours – which might drive hospital use decisions.

      My 36-year-old daughter died – probably unnecessarily – in a hospital that had 16 Plans of Correction in place on the day she was admitted. The hospital had had 72 citations against it in the five years preceding her admission. We couldn’t know that for reasons I note below.

      The hospital in question is part of a hospital system that holds approximately 50% of the beds in my metropolitan area, is located in a wealthy suburb with a new state-of-the-art ER and ICU (new when my daughter was admitted) and is one of the largest private employers in our economically-stressed region.

      Although my state’s (New York’s) department of health publishes notices of citations, fines, and various other hospital ratings, the posted hospital information is generally very far behind – between one and two years behind in most cases.

      Existence of this information is not well known to the public. Service stories about hospital ratings have only recently been published in our one-city newspaper, where the hospital system often runs glittering ads. Stories published in 2013 may help future patients, but those stories were not being published in 2009, when my daughter became ill.

      In my area, as is true throughout the country, quality of care reports are never shared with the public.

      I would love to have known the true status of the hospital where my daughter died prior to her admission. My daughter would love to have known.

      We didn’t and couldn’t have known because current information was not available when she died in 2009 in a hospital where Medicare’s Hospital Compare now notes that mortality rates are higher than expected for treatable conditions. Even so, there is a catch. Most rating systems identify hospitals by their Medicare participation number. “My” hospital’s participation number includes five hospitals. For most measures, I have no way of knowing an individual hospital’s score – even in 2013.

      It would be significantly helpful if, instead of blaming patients for not knowing hospital hazards, you would focus your efforts on pressing hospitals and state health departments to be forthcoming with what they know, and if you encouraged public disclosure of accreditation surveys (the top-to-bottom surveys conducted every two or three years), quality of care issues, and infection and mortality rates in more places where potential patients (all of us) could find them.