The following originally appeared on The Upshot (copyright 2017, The New York Times Company).
The relatively recent movements toward transparency and quality in health care have collided to produce dozens of publicly available hospital quality metrics. You might consider studying them in advance of your next hospital visit. But how do you know if the metrics actually mean anything?
There are valid reasons to be suspicious of measurements of hospital quality. One longstanding concern is that some hospitals may disproportionately attract sicker patients, who are more likely to have worse health outcomes. That could cause those hospitals to appear less effective than they actually are. Statistical techniques can mitigate but not completely eliminate this bias.
A related problem is that measurement of the quality of a hospital can be biased if it doesn’t take into account the socioeconomic status of the population it serves — and many such metrics do not. For example, a hospital in a wealthy region serves patients with more resources, relative to a hospital in a poorer region. If greater patient resources translate into better health — and a lot of research suggests they do — the hospital in the wealthy region may appear to be of higher quality. But that isn’t necessarily because of the care it delivers.
Because of issues like these, one study found that approaches to rating hospitals don’t agree on which hospitals are high or low in quality. “We have a vast number of quality measures,” said Dr. Ashish Jha, a co-author of the study and a scholar of health care quality at the Harvard T.H. Chan School of Public Health, “but which are signal and which are noise? It can be incredibly tricky to sort out.”
A recent study, however, shows that there is at least a bit of signal within the noise. The study, by health economists at M.I.T. and Vanderbilt, found that hospitals that score better on certain metrics, patient satisfaction scores among them, have lower mortality rates.
“We found that hospitals’ patient satisfaction scores are useful signals of quality, which surprised me to some extent,” said Joseph Doyle, an economist at M.I.T. and one of the study’s authors. “Hospitals with more satisfied patients have lower mortality rates, as well as lower readmission rates.”
According to the study, a hospital with a satisfaction score that is 10 percentage points higher — 70 percent of patients satisfied versus 60 percent, for example — has a mortality rate that is 2.8 percentage points lower and a 30-day readmission rate that is 1.9 percentage points lower. This is consistent with earlier work, described by my colleague Aaron Carroll, that found an association between better Yelp ratings of hospitals and lower mortality rates and readmission rates for certain conditions.
Mr. Doyle’s study, published as a National Bureau of Economic Research working paper, is exceedingly clever in its design. The ideal study would be to randomly assign patients needing hospital care to facilities with high or low quality. Then, this ideal study would see what happened to those two groups of patients: Did the group randomized to more highly rated hospitals live longer and stay out of the hospital longer? If so, the metrics are, in fact, providing useful guidance.
For ethical as well as practical reasons, we cannot randomly assign patients to hospitals. But it turns out that in emergency situations, like heart attacks, which ambulance service picks up patients who live in the same neighborhood is effectively random in many cases.
In some locations, patients are assigned to ambulance services in an orderly rotation. In others, services compete to see which can reach a patient first. In others still, it's the ambulance that happens to be closest to the patient that gets the business. In all of these cases, exactly which ambulance picks up a given patient with a given condition is effectively random. It also turns out that ambulance companies have preferences for certain hospitals, and the random assignment of ambulance companies to patients leads to an effectively random selection of the hospital at which those patients receive care.
The authors exploited this randomness as a natural experiment to test how different kinds of hospital quality measures predicted mortality and readmissions. Using data from 2008 to 2012, they compared Medicare patients needing emergency care who lived in the same ZIP code but were served by different ambulance companies and, therefore, tended to be delivered to different hospitals with different quality scores. The approach was validated in earlier research that showed that higher-cost hospitals have lower mortality rates than lower-cost ones.
In addition to testing the predictive ability of satisfaction scores, Mr. Doyle’s study examined indicators of high-quality care — things that a hospital does that are believed to improve outcomes, like the rate at which a hospital gives heart attack patients aspirin upon arrival.
Here, too, hospitals that performed better on such indicators had lower mortality and readmission rates. For example, the very best hospitals by these measures reduce the odds of death within a year by 14 percent relative to the very worst hospitals.
“Though hospital quality measures are not perfect, our work provides some reasons to be optimistic about some of them,” Mr. Doyle said. “Hospitals that score well on patient satisfaction, follow good processes of care and record lower hospital mortality rates over the prior three years do seem to keep patients alive and out of the hospital longer.”