• Readmissions revisited: A response from the authors

    The following is a guest post from Steven H Sheingold, Director, Division of Health Financing Policy, Office of Health Policy, Office of the Assistant Secretary for Planning and Evaluation, Department of Health and Human Services; Rachael Zuckerman, Economist, Office of Health Policy, Office of the Assistant Secretary for Planning and Evaluation, Department of Health and Human Services; and Adele Shartzer, Research Associate, Health Policy Center, The Urban Institute.

    On January 27, Garret Johnson and Zoe Lyon, research assistants to Dr. Ashish Jha, provided a guest post titled Readmissions Revisited concerning our recent Health Affairs paper. We thank them for their excellent summary of the article and also want to recognize Professor Jha’s contributions to our understanding of readmission differences among hospitals.

    There has been concern since the implementation of Medicare’s Hospital Readmissions Reduction Program (HRRP) that safety net hospitals would be unfairly penalized. Whether or not to account for socioeconomic factors is an important and controversial policy issue for the HRRP and for all other health care payment systems that are based on quality indicators. Therefore, we thought it useful to clarify a few issues raised by Johnson and Lyon.

    First, they raise the issue that our analyses compared the safety net hospitals (the top 20% of hospitals based on their disproportionate share ratios) to all other hospitals rather than to the bottom 20% of hospitals. While the latter might be an interesting comparison, it is not fully relevant to the purposes of our paper. The HRRP penalties are not based on differences between the best and worst performers. Instead, the measure used to determine the HRRP’s penalties — called excess readmission ratio — compares each hospital to an adjusted national average.

    Second, Johnson and Lyon were concerned about clustering of patients within hospitals, which would make the model appear to have more data than it truly does, meaning that the standard errors are smaller than they should be. While we did not make this explicit in the paper, all of the models were estimated using generalized estimating equations (GEE) with exchangeable correlation structures. These models do account for correlation within hospitals.

    Johnson and Lyon seem to infer that our objective was to slow and refocus the policy debate on this issue. In contrast, our paper provides some answers to move the discussion forward, albeit not as quickly as Johnson and Lyon would prefer. Our recommendations are in line with the evidence-driven approach Congress took under the IMPACT Act of 2014 by mandating and funding extensive research on the relationships between socioeconomic factors, quality, and payment. This research, which is now well underway at the Department of Health and Human Services, will better inform policy development in the near future.

    Johnson and Lyon advocate immediate implementation of an adjustment using the socioeconomic factors we already have in administrative data since our research shows these factors explain 25% of the difference in readmission rates between safety net and other hospitals, after accounting for the HRRP’s risk adjustment factors. This position misses some key issues policy makers might consider.

    First, it presumes that the active debate over whether to adjust quality indicators for socioeconomic factors in payment systems has been resolved in that direction. We do not believe it has.

    Second, they presume that the differences in readmission rates translate directly to penalties. In fact, as we noted in comparing penalties between the safety net and other hospitals, the current method of calculating excess readmissions has already eliminated a substantial share of the differential. Therefore, simply adding readily available socioeconomic factors to the current risk adjustor would not affect existing penalties appreciably — even after accounting for the more vulnerable financial position of safety net hospitals. Thus, additional consideration might be given to the costs of the regulatory and systems changes needed to implement such payment modification relative to the potentially very small impact they would have.

    We share Johnson and Lyon’s concern for safety net hospitals. Our paper is clear that the plight of providers that treat the most vulnerable patients must be carefully evaluated as we move forward with a greater number of quality-based payment mechanisms. In addition to the results of our statistical models, we simply point out what the data show — safety net and other providers face penalties of about the same size despite the wide difference in raw readmission rates. At this point, we have not judged this result as fair or unfair, as Johnson and Lyon suggest — policy makers must make that call after they are well informed by research and policy analysis.

     
  • Readmissions revisited

    The following is a guest post by Garret Johnson and Zoe Lyon, both research assistants for Dr. Ashish Jha at the Harvard T.H. Chan School of Public Health. Garret graduated from Brown University in 2014 with a degree in French. Zoe graduated from Kenyon College in 2015 with a degree in Religious Studies. Find them on twitter @garretjohnson22 and @zoemarklyon.

    On January 27, CMS released a new guide to preventing readmissions among diverse populations, as part of its “Equity Plan for Improving Quality in Medicare.” This latest initiative represents a new chapter in the controversy over the hospital readmissions reduction program (HRRP), one of the many initiatives introduced by the ACA to improve U.S. healthcare quality. In effect since 2012, the HRRP aims to reduce costly readmissions by financially penalizing hospitals with “excess readmissions” by up to 3 percent of their total base DRG payments.

    While the HRRP adjusts for age, sex and comorbidity differences across hospitals, there is now substantial evidence that high readmission rates — especially for medical conditions, as opposed to surgical ones — are driven more by patient factors outside of hospitals’ control (e.g. poverty, lack of social supports) than by hospitals’ quality of care and discharge planning. Studies show that patients who are poorer, sicker, and members of racial minority groups are readmitted at higher rates than other patients, which raises concern that the HRRP is simply punishing hospitals for the groups of patients they serve. Advocates for the HRRP argue that the program removes the perverse incentive wherein hospitals get more money if patients are readmitted than if they are not, and that tying payment to readmissions is an effective way to hold hospitals accountable for discharge planning and care transitions. Though blunt, the penalties are forcing hospitals to think beyond their walls to ensure that patients receive effective care throughout their interaction with the healthcare system.

    With its new guide to preventing readmissions in diverse populations, CMS seems to be acknowledging the issues with the readmission measure while attempting to take a leadership role in improving care for patients of low socioeconomic status. Accomplishing that and eliminating disparities in readmission rates is far better than adjusting them away. But, we should at least agree that disparities exist, and that patient factors matter.

    Enter the latest paper on the subject, written by Steven Sheingold (director of the Division of Health Financing Policy at the Department of Health and Human Services) and colleagues in Health Affairs. Using Medicare data, the authors attempt to determine how much of the difference in readmission rates between safety-net (which they define as hospitals in the top 20 percent of disproportionate share or DSH ratio[1]) and non-safety-net hospitals (defined as all other hospitals) is due to observable hospital, patient and geographic area characteristics as opposed to unobservable factors, like the ever-elusive and complex “quality of care.” Sheingold et al. also present the level of penalties actually incurred under the HRRP by safety-net status, which they say “has received little attention.”

    Without adjustment, they find that patients admitted to safety-net hospitals are 16 to 17 percent more likely to be readmitted than those admitted to other hospitals. Then, they sequentially adjust a logistic model of readmissions on safety-net status for:

    • Age, sex, and comorbidities (the only covariates in the actual HRRP risk adjustment scheme)
    • Socioeconomic status (race, dual eligibility, rural residence, traveling to an urban hospital)
    • Admission characteristics (length of stay, discharge destination)
    • Hospital characteristics (e.g. teaching status)
    • Area characteristics (e.g. local unemployment rate)

    After these adjustments, the differential between safety-net hospitals and other hospitals drops by 10 percentage points to 6 to 7 percent. As the authors put it, the observed factors listed above accounted for 60 percent of the increased likelihood of readmission at safety-net hospitals. It is important to note that the HRRP only adjusts for age, sex and comorbidities, meaning that a whole host of observable socioeconomic and other characteristics associated with increased readmission risk is not accounted for.
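A toy version of this sequential-adjustment exercise can be sketched with simulated data (all variable names and coefficients below are invented; only `dual_eligible` is built in as a confounder, so the safety-net odds ratio shrinks toward 1 as covariate blocks are added):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20_000
df = pd.DataFrame({
    "safety_net": rng.integers(0, 2, n),
    "age": rng.normal(75, 8, n),
})
# Dual eligibility is made more common at safety-net hospitals and raises
# readmission risk, so it confounds the unadjusted safety-net coefficient.
df["dual_eligible"] = (rng.random(n) < 0.1 + 0.5 * df["safety_net"]).astype(int)
logit = -2.0 + 1.0 * df["dual_eligible"] + 0.01 * (df["age"] - 75)
df["readmit"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Add covariate blocks one at a time, as in the paper's approach.
for formula in [
    "readmit ~ safety_net",                        # unadjusted
    "readmit ~ safety_net + age",                  # HRRP-style adjusters
    "readmit ~ safety_net + age + dual_eligible",  # + socioeconomic factors
]:
    fit = smf.logit(formula, data=df).fit(disp=False)
    print(f"{formula:45s} OR = {np.exp(fit.params['safety_net']):.2f}")
```

In the simulation, the fully adjusted odds ratio falls back toward 1 because safety-net status acts only through dual eligibility; in the real data, a substantial gap remains after adjustment.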

    Using these adjusters, but this time applied to separate models for safety-net and non-safety-net hospitals, the authors find that even if safety-net hospitals treated the “easier” patients (those that are wealthier, less sick, and more likely to be white) that currently seek care at other hospitals, they would still have higher readmission rates. In other words, it’s not just the patient characteristics that are driving readmissions. Some underlying characteristic or characteristics (unobservable to us using these data sources) are causing higher readmission rates at hospitals that serve poor patients. The authors suggest that “quality of care” could be a key driving factor.

    Finally, the authors show that the HRRP methodology for penalties has not resulted in dramatic differences in actual penalties incurred:

    …few hospitals in either group received penalties of more than 1 percent in fiscal years 2013 or 2014. Moreover, the difference in mean penalty between the groups was 0.1 percent in [2013 and 2014] and 0.03 percent in fiscal year 2015.


    We have a few concerns about this study. First, its definition of safety-net hospitals as the top 20 percent of DSH payment ratio and non-safety-net hospitals as all others likely washes out some key differences that might exist between high- and low-DSH payment ratio hospitals. We’d like to see the study pit the top 20 percent of DSH ratio hospitals against the bottom 20 percent. It also doesn’t seem to account for clustering of patients at the hospital level. In other words, the authors treat patients who visit different hospitals as all coming from one general population of patients, when in reality we know that certain sub-types of patients tend to cluster at certain hospitals (e.g. more complex patients generally end up at teaching hospitals). This effectively makes the model look like it has more data than it truly does, meaning that the standard errors are smaller than they should be.
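The clustering concern can be demonstrated with a small simulation (hypothetical hospitals and variable names): when outcomes are correlated within hospitals and the predictor varies only at the hospital level, naive logistic-regression standard errors understate uncertainty relative to cluster-robust ones.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_hosp, per = 60, 50
hospital = np.repeat(np.arange(n_hosp), per)
safety_net = np.repeat(rng.integers(0, 2, n_hosp), per)   # hospital-level
hosp_effect = np.repeat(rng.normal(0, 1.0, n_hosp), per)  # shared hospital shock
p = 1 / (1 + np.exp(-(-1.5 + 0.3 * safety_net + hosp_effect)))
df = pd.DataFrame({
    "hospital": hospital,
    "safety_net": safety_net,
    "readmit": rng.binomial(1, p),
})

# Naive fit treats all 3,000 patients as independent observations.
naive = smf.logit("readmit ~ safety_net", data=df).fit(disp=False)
# Cluster-robust fit allows arbitrary correlation within each hospital.
robust = smf.logit("readmit ~ safety_net", data=df).fit(
    disp=False, cov_type="cluster", cov_kwds={"groups": df["hospital"]}
)
print("naive SE:  ", round(naive.bse["safety_net"], 3))
print("cluster SE:", round(robust.bse["safety_net"], 3))
```

The cluster-robust standard error comes out substantially larger, which is exactly the sense in which the naive model "looks like it has more data than it truly does."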

    More broadly, to the extent that this paper was meant to slow down and refocus discussions about adjusting the HRRP, we’re not sure that it is effective. The authors themselves note that “socioeconomic status as measured here explained approximately one-quarter of the difference in the odds of readmission that remained after accounting for the risk-adjustment factors that are a part of the HRRP.” If we know that these measurable markers of socioeconomic status are associated with higher readmission rates, no matter what the hospital does, why not adjust for them? Obviously we should aim to more effectively measure socioeconomic status (Medicare claims data are very disappointing in this regard) as the authors note in their discussion, but they do not put forward a good argument against adjusting for what we can today. The authors argue that the relative disparity in actual penalties is small, but they neglect to mention their finding that safety-net hospitals had about half the financial cushion (as measured by margins) of non-safety-net hospitals (+2.0% vs. +3.9%; see their Appendix). A small cut to Medicare payments might make a big difference to a struggling safety-net hospital, especially given that private payers are following CMS’ lead as they implement their own readmission penalties.

    Essentially, we don’t really know the precise balance of hospital quality and patient factors that cause readmissions. On one hand, the federal government’s new guide to preventing readmissions among diverse patients suggests that differences in patient mix can seriously affect both readmission rates and prevention strategies. Yet this paper from Sheingold and colleagues makes the case that the current form of the HRRP is not as unfair to safety-net providers as it seems. We’re far less certain that’s the case.

    [1] The DSH ratio is “based on the proportion of Medicare inpatient days attributable to patients eligible for Supplemental Security Income (SSI) and the proportion of total days attributable to Medicaid patients.”

     
  • The hidden financial incentives behind your shorter hospital stay

    The following originally appeared on The Upshot (copyright 2016, The New York Times Company). It also appeared on page A3 of the print edition on January 5, 2016. I thank Jennifer Gilbert for provision of research assistance for this post.

    After one of her operations, my sister-in-law left the hospital so quickly that she couldn’t eat for days; after other stays, she wasn’t discharged until she felt physically and mentally prepared. Five days after his triple heart bypass surgery, my stepfather felt well enough to go home, but the hospital didn’t discharge him for several more days.

    You undoubtedly have similar stories. Patients are often left wondering whether they have been discharged from the hospital too soon or too late. They also wonder what criteria doctors use to assess whether a patient is ready to leave.

    “It’s complicated and depends on more than clinical factors,” said Dr. Ashish Jha, a Harvard physician who sees patients at a Boston Veterans Affairs hospital. “Sometimes doctors overestimate how much support is available at home and discharge a patient too soon; sometimes we underestimate and discharge too late.”

    Changing economic incentives — which are not always evident in individual cases — have also played a role in how long patients tend to stay. Recent changes to how hospitals are paid appear to be affecting which patients are admitted and how frequently they are readmitted.

    What is clear is that hospital stays used to be a lot longer. In 1980, the average in the United States was 7.3 days. Today it’s closer to 4.5. The difference isn’t because hospitalized patients are becoming younger and healthier; by and large, today’s patients are older and sicker. Yet they’re being discharged earlier.

    One big reason for the change came in the early 1980s. Medicare stopped paying hospitals whatever they claimed their costs were and phased in a payment system that paid them a predetermined rate tied to each patient’s diagnosis. This “prospective payment system,” as it is called, shifted the financial risk of patients’ hospitalization from Medicare to the hospital, encouraging the institutions to economize.

    One way to economize is to get patients out of the hospital sooner. The prospective payment system pays a hospital the same amount whether a Medicare patient stays five days or four. But that extra day adds costs that hit the hospital’s bottom line.

    So it’s in a hospital’s financial interest to encourage doctors to discharge patients sooner. A physician who practices at a Boston-area teaching hospital told me that hospital administrators exert social pressure on doctors by informing them that their patients’ stays are longer than those of their peers. It’s now easier for doctors to discharge patients sooner to a skilled nursing facility — where they’ll be monitored and professionally cared for — because so many more of them have been built in recent years.

    Almost since the prospective payment system started, experts have raised concerns that it would lead to higher rates of readmissions. After all, patients discharged more quickly may tend to be sicker, more prone to complications or require a level of care that’s harder to provide outside the hospital. It seems logical, therefore, that more of them would need to return to the hospital. Evidence backs this logic. In the United States and other nations, when lengths of stay decline, readmissions rise.

    Until recently, hospitals did not suffer financially when a patient was readmitted, so long as it was more than 24 hours after discharge. Indeed, readmission represented only additional revenue. If reducing lengths of stay increased readmissions while decreasing costs of each stay, hospitals benefited financially on both ends of the equation.

    But Medicare and private insurance companies picking up the tab lose money when a patient is readmitted. In some cases, a longer initial hospital stay that avoids a readmission is worth the additional upfront investment.

    The federal government has created several new programs that penalize hospitals for readmissions. Under Medicare’s Hospital Readmissions Reduction Program, hospitals now lose up to 3 percent of their total Medicare payments for high rates of patients readmitted within 30 days of discharge. This fiscal year — the fourth one of the program — Medicare will collect $420 million from 2,592 hospitals that had readmission rates higher than deemed appropriate.
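The penalty arithmetic described above is simple to illustrate. The dollar figures below are hypothetical; only the structure (a penalty factor, capped at 3 percent, applied to base Medicare inpatient payments) comes from the program description.

```python
# Hypothetical hospital: annual base DRG payments and an assumed penalty factor.
base_drg_payments = 50_000_000   # hypothetical base Medicare inpatient revenue
penalty_rate = 0.012             # hypothetical 1.2% excess-readmission penalty

# HRRP penalties are capped at 3% of base DRG payments.
penalty = base_drg_payments * min(penalty_rate, 0.03)
print(f"Penalty: ${penalty:,.0f}")  # -> Penalty: $600,000
```

Even a penalty well under the 3 percent cap can run to hundreds of thousands of dollars for a mid-sized hospital.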

    Since 2010, when almost one in five Medicare hospital patients returned within 30 days, hospital readmissions have fallen considerably. Though this fact was highlighted by the Obama administration, some people are seeing evidence that hospitals are gaming the metric. For instance, patients who are placed under “observation status” are not counted in the readmissions metric even though they may receive the same care as patients formally admitted to the hospital. Likewise, patients treated in the emergency room and not admitted to the hospital do not affect the readmissions metric either. As readmissions have fallen, observation status stays and returns to the emergency department after a discharge have risen.

    “When asked by hospital administrators to keep patients in observation status, many physicians comply,” Dr. Jha told me. “Some hospitals’ electronic medical systems will alert emergency physicians when a patient has been recently discharged, and they’re encouraged to keep them in the emergency department and not readmit them.”

    The influence of hospital financing is hardly perceptible to an individual patient. But the record is clear: Financing matters, and it affects both hospital admission and discharge decisions.

    @afrakt

     

     
  • Hospital readmissions and length of stay

    The following is a guest post by Jennifer Gilbert, a Clinical Research Coordinator at Massachusetts General Hospital. She graduated from Boston University in 2014. You can follow her on Twitter: @jenmgilbert.

    Since the introduction of Medicare’s prospective payment system (PPS) in 1983, which pays hospitals a fixed price per admission diagnosis, U.S. hospitals have been financially incentivized to reduce inpatient length of stay (LOS). Consequently, average LOS has decreased dramatically according to studies of the National Hospital Discharge Survey: even as average patient age and complexity have increased, average LOS dropped from 7.3 days in 1980 to 4.8 days in 2003.

    One could imagine that the financial pressures to reduce LOS could lead to poorer patient outcomes, but past studies have shown mixed data on whether the two are correlated.

    Shorter LOS has been associated with higher risk of readmission (more on this below), and mortality resulting from pulmonary embolism complications, though not with harm associated with AMI (acute myocardial infarction) or CABG (coronary artery bypass graft). Unfortunately, many of these studies have been confounded by patient-level factors, particularly severity of disease—sicker patients tend to stay longer in the hospital, which can be difficult to separate statistically from any potential adverse effects of a shorter LOS.

    One such study, by Southern et al., controlled for these patient-level factors and found that short LOS was significantly correlated with all-cause 30-day mortality. Researchers compared mortality rates at a single medical center for admitted patients who were assigned to physicians who tended to have long versus short LOS admissions. In a sample of over 12,000 admissions, patients receiving care from short-LOS physicians had a significantly increased risk of 30-day mortality relative to propensity-score-matched patients receiving care from long-LOS physicians. This suggests that policies incentivizing shorter lengths of stay may be associated with worse patient outcomes.

    Shorter LOS has also been correlated with a significantly higher risk of readmissions in multiple studies, including a study of LOS in Norway and another comparing 27 different countries. Similarly, studies of patient outcomes suggest an increase in readmissions following the implementation of PPS.

    Importantly, there is a tradeoff between the cost of each additional day of a hospital stay and the cost of any readmissions that a shortened stay might cause. A cost-savings study by Dr. Kathleen Carey found that some of the cost of an additional day of stay for a heart attack patient would be offset by expected savings from a reduced risk of readmission. Though her estimate of the offset varies (due to different model specifications), she found that 15%-65% of the cost of an additional day of stay is effectively recouped through the reduction in readmission risk.
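The tradeoff is easy to make concrete. The dollar figures and risk reduction below are invented for illustration, not taken from Dr. Carey’s paper; they just show how an offset percentage of the kind she reports would be computed.

```python
# Hypothetical inputs for the extra-day-vs-readmission tradeoff.
extra_day_cost = 1500.0     # assumed cost of one additional inpatient day
readmission_cost = 12000.0  # assumed cost of one readmission
risk_reduction = 0.04       # assumed drop in readmission probability per extra day

# Expected savings from the avoided readmissions, as a share of the day's cost.
expected_savings = risk_reduction * readmission_cost
offset_share = expected_savings / extra_day_cost
print(f"Offset: {offset_share:.0%} of the extra day's cost")
```

With these made-up inputs the extra day recoups about a third of its cost, which happens to fall inside the 15%-65% range of estimates cited above.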

    Dr. Carey’s study accounts for the comparison of cost without factoring in outside readmissions penalties from government programs. However, the Hospital Readmissions Reduction Program (HRRP) and Shared Savings program could be tipping the scales toward longer LOS by adding different payment incentives. Hospital leaders may be faced with a new calculus as increasing financial pressures are put in place to reduce readmissions.

    CMS uses a variety of payment programs to incentivize hospitals to reduce readmissions, which may offset the financial incentives to reduce LOS even further. In the HRRP, hospitals can now lose up to 3% of their Medicare payments for high rates of 30-day readmissions for patients with one of five conditions (chronic heart failure, heart attack, pneumonia, chronic lung problems, and elective hip or knee replacements). In the first year alone, over 2000 hospitals received penalties, costing an average of $125,000 per hospital.

    The Shared Savings and Pioneer ACO programs push ACOs to reduce their readmissions by allowing them to share the savings they create by lowering readmissions. ACOs are also penalized if their readmission rates rise above what CMS predicts. This incentivizes hospitals to coordinate care and lower readmissions.

    Another program, the Bundled Payments for Care Improvement (BPCI) Initiative, financially incentivizes hospitals to coordinate care. Hospitals are given a set amount of money for episodes of hospitalization that fit into one of the four BPCI payment models, and then must use this sum of money as efficiently as possible to care for the patient. This links the payments for the multiple services a beneficiary might receive during an episode under each of the four models, and supports less fragmented care. If an additional day of stay predictably avoids the cost of a readmission, this should lead to increasing LOS.

    It is worth mentioning that the readmission rate is a very complex quality metric, and does not capture all of the nuances in care. Hospitals can “game the system” and artificially lower their readmission rates by placing many patients under observation status. These visits are technically considered outpatient, and thus do not count as hospital readmissions. However, the care these patients receive is often indistinguishable from inpatient care.

    This is also true of Emergency Department (ED) visits—patients who are treated in the ED when they return to the hospital, but not ultimately readmitted, do not affect the statistic. A study in Annals of Emergency Medicine found that over 50% of returns to the hospital from January to June of 2010 did not result in an admission, and thus did not contribute to hospitals’ readmission rates.

    Since the HRRP began, there has been an increase in observation status stays along with a decrease in readmissions. These factors may make the link between LOS and readmission rates more challenging to unravel. However, a recent study in Health Affairs that looked into this phenomenon found that, at least in New York from 2008 to 2012, the HRRP in general did not lead to many of the unintended consequences above.

    Depending on the relative costs of an additional hospital day versus the costs of a readmission plus any penalty for it under the new programs described above, it may become cost-effective to increase LOS, countering PPS’s incentive to decrease it. To my knowledge, there have not been any studies examining whether LOS has increased as readmission rates have decreased in recent years.

     
  • The potential impact of community-based HIEs

    My latest at the AcademyHealth blog:

    I’m a huge supporter of information technology. I’ve spent almost my entire career creating systems in order to improve outcomes for pediatric patients. But that doesn’t mean I don’t carry a healthy skepticism for how much of a difference HIT is actually making in practice. The evidence for how much health information exchanges are impacting care is somewhat equivocal as well. A new study is being discussed as showing HIEs can significantly reduce hospital admissions. It’s worth reviewing in detail. “The potential for community-based health information exchange systems to reduce hospital readmissions”…

    Go read the whole thing!

    @aaronecarroll

     
  • What are the effects of making residents work fewer hours?

    When my dad was a resident, he regularly spent every other night in the hospital. I was usually on every fourth, sometimes every third. These days, residents are required to work many fewer hours in the interest of patient safety.

    Some physicians argue that this is depriving them of necessary educational opportunities. They also argue that more handoffs from one doc to the next lead to worse outcomes. So what’s right? Two papers on point in this week’s JAMA. First, “Association of the 2011 ACGME Resident Duty Hour Reform With General Surgery Patient Outcomes and With Resident Examination Performance“:

    IMPORTANCE In 2011, the Accreditation Council for Graduate Medical Education (ACGME) restricted resident duty hour requirements beyond those established in 2003, leading to concerns about the effects on patient care and resident training.

    OBJECTIVE To determine if the 2011 ACGME duty hour reform was associated with a change in general surgery patient outcomes or in resident examination performance.

    DESIGN, SETTING, AND PARTICIPANTS Quasi-experimental study of general surgery patient outcomes 2 years before (academic years 2009-2010) and after (academic years 2012-2013) the 2011 duty hour reform. Teaching and nonteaching hospitals were compared using a difference-in-differences approach adjusted for procedural mix, patient comorbidities, and time trends. Teaching hospitals were defined based on the proportion of cases at which residents were present intraoperatively. Patients were those undergoing surgery at hospitals participating in the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP). General surgery resident performance on the annual in-training, written board, and oral board examinations was assessed for this same period.

    EXPOSURES National implementation of revised resident duty hour requirements on July 1, 2011, in all ACGME accredited residency programs.

    MAIN OUTCOMES AND MEASURES Primary outcome was a composite of death or serious morbidity; secondary outcomes were other postoperative complications and resident examination performance.

    RESULTS In the main analysis, 204 641 patients were identified from 23 teaching (n = 102 525) and 31 nonteaching (n = 102 116) hospitals. The unadjusted rate of death or serious morbidity improved during the study period in both teaching (11.6% [95% CI, 11.3%-12.0%] to 9.4% [95% CI, 9.1%-9.8%], P < .001) and nonteaching hospitals (8.7% [95% CI, 8.3%-9.0%] to 7.1% [95% CI, 6.8%-7.5%], P < .001). In adjusted analyses, the 2011 ACGME duty hour reform was not associated with a significant change in death or serious morbidity in either postreform year 1 (OR, 1.12; 95% CI, 0.98-1.28) or postreform year 2 (OR, 1.00; 95% CI, 0.86-1.17) or when both postreform years were combined (OR, 1.06; 95% CI, 0.93-1.20). There was no association between duty hour reform and any other postoperative adverse outcome. Mean (SD) in-training examination scores did not significantly change from 2010 to 2013 for first-year residents (499.7 [ 85.2] to 500.5 [84.2], P = .99), for residents from other postgraduate years, or for first-time examinees taking the written or oral board examinations during this period.

    CONCLUSIONS AND RELEVANCE Implementation of the 2011 ACGME duty hour reform was not associated with a change in general surgery patient outcomes or differences in resident examination performance. The implications of these findings should be considered when evaluating the merit of the 2011 ACGME duty hour reform and revising related policies in the future.

    So the changes did not lead to differences in patient outcomes, nor did they lead to residents performing worse on testing. Granted, neither of these is a perfect measure, but they point to the changes not having obvious deleterious effects.
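The difference-in-differences design described in the abstract above can be sketched with simulated data. Everything below is invented: by construction both hospital types improve over time and there is no additional reform effect at teaching hospitals, so the interaction term (the difference-in-differences estimate) should be near zero.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 8000
df = pd.DataFrame({
    "teaching": rng.integers(0, 2, n),  # 1 = teaching hospital
    "post": rng.integers(0, 2, n),      # 1 = after the 2011 reform
})
# Simulated outcome: a baseline gap between hospital types and a common
# improvement over time, but no teaching-specific reform effect.
p = 1 / (1 + np.exp(-(-2.0 + 0.2 * df["teaching"] - 0.3 * df["post"])))
df["complication"] = rng.binomial(1, p)

# The teaching:post interaction is the difference-in-differences estimate.
did = smf.logit("complication ~ teaching * post", data=df).fit(disp=False)
print("DiD (interaction) coefficient:", round(did.params["teaching:post"], 3))
```

A null interaction like the one simulated here corresponds to the studies’ finding of no significant postreform change at teaching relative to nonteaching hospitals.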

    Next up, “Association of the 2011 ACGME Resident Duty Hour Reforms With Mortality and Readmissions Among Hospitalized Medicare Patients“:

    IMPORTANCE Patient outcomes associated with the 2011 Accreditation Council for Graduate Medical Education (ACGME) duty hour reforms have not been evaluated at a national level.

    OBJECTIVE To evaluate the association of the 2011 ACGME duty hour reforms with mortality and readmissions.

    DESIGN, SETTING, AND PARTICIPANTS Observational study of Medicare patient admissions (6 384 273 admissions from 2 790 356 patients) to short-term, acute care, nonfederal hospitals (n = 3104) with principal medical diagnoses of acute myocardial infarction, stroke, gastrointestinal bleeding, or congestive heart failure or a Diagnosis Related Group classification of general, orthopedic, or vascular surgery. Of the hospitals, 96 (3.1%) were very major teaching, 138 (4.4%) major teaching, 442 (14.2%) minor teaching, 443 (14.3%) very minor teaching, and 1985 (64.0%) nonteaching.

    EXPOSURE Resident-to-bed ratio as a continuous measure of hospital teaching intensity.

    MAIN OUTCOMES AND MEASURES Change in 30-day all-location mortality and 30-day all-cause readmission, comparing patients in more intensive relative to less intensive teaching hospitals before (July 1, 2009–June 30, 2011) and after (July 1, 2011–June 30, 2012) duty hour reforms, adjusting for patient comorbidities, time trends, and hospital site.

    RESULTS In the 2 years before duty hour reforms, there were 4 325 854 admissions with 288 422 deaths and 602 380 readmissions. In the first year after the reforms, accounting for teaching hospital intensity, there were 2 058 419 admissions with 133 547 deaths and 272 938 readmissions. There were no significant postreform differences in mortality accounting for teaching hospital intensity for combined medical conditions (odds ratio [OR], 1.00; 95% CI, 0.96-1.03), combined surgical categories (OR, 0.99; 95% CI, 0.94-1.04), or any of the individual medical conditions or surgical categories. There were no significant postreform differences in readmissions for combined medical conditions (OR, 1.00; 95% CI, 0.97-1.02) or combined surgical categories (OR, 1.00; 95% CI, 0.98-1.03). For the medical condition of stroke, there were higher odds of readmissions in the postreform period (OR, 1.06; 95% CI, 1.001-1.13). However, this finding was not supported by sensitivity analyses and there were no significant postreform differences for readmissions for any other individual medical condition or surgical category.

    CONCLUSIONS AND RELEVANCE Among Medicare beneficiaries, there were no significant differences in the change in 30-day mortality rates or 30-day all-cause readmission rates for those hospitalized in more intensive relative to less intensive teaching hospitals in the year after implementation of the 2011 ACGME duty hour reforms compared with those hospitalized in the 2 years before implementation.

    This study looked at whether duty hour changes were associated with 30-day mortality or readmission. And, once again, there were no significant differences between hospitals with more or less intensive schedules before and after reforms were put in place.

    The accompanying editorial concludes:

    First, with regard to potential short-term policy decisions on duty hour requirements, is it important to decide whether a null association with safety and education metrics is a positive or negative finding? In our roles as residency review committee chairs, we think this is the wrong question to ask because there was no justification for making the rules more complex or restrictive, as occurred in 2011.

    Second, in the absence of improvement in patient outcomes in these 2 studies, how should the 2011 duty hour revisions be judged? … Many program directors have expressed great concern about the potential negative effects of this second set of changes, including effects on resident education, preparedness for senior roles, patient safety, and continuity of care. Thus, in the absence of clear data demonstrating benefit, the concerns of the educational community should be given credence and not be dismissed as mere perceptions.

    Third, although high-quality observational studies such as these are very helpful, randomized data are lacking. Recognizing this gap in research, the educational community has proposed 2 randomized trials on duty hour requirements in medical and surgical residents that may provide more definitive information.

    Discuss.

    @aaronecarroll

  • Do falsification tests. (Day of week as an instrument for length of stay.)

    In an ideal health care system, you’d get the same (very good) care whether you were admitted to a hospital on a Monday, Wednesday, Friday, or Sunday. We don’t have an ideal health care system, and it turns out that day of admission matters. A new paper by Ann Bartel, Carri Chan, and Song-Hee Kim illustrates this fact, and then exploits it as an instrumental variable (IV) in an analysis of mortality and hospital readmissions.

    Prior work by Varnava et al. (2002) and Wong et al. (2009) showed that hospitals would rather not keep patients over the weekend if they can discharge them on a Friday. Examining three hospitals in the UK, Varnava et al. found that discharges were most common on Fridays. Considering a hospital in Toronto, Wong et al. found that “[w]eekend discharge rate was more than 50% lower compared with reference rates whereas Friday rates were 24% higher. Holiday Monday discharge rates were 65% lower than regular Mondays, with an increase in pre-holiday discharge rates.”

    Bartel, Chan, and Kim found something similar among US Medicare patients hospitalized for heart failure (HF), pneumonia (PNE), or acute myocardial infarction (AMI) in 2008-2011. The following chart from their paper plots the logarithm of length-of-stay (LOS) versus admission day-of-week for HF patients, controlling for age, gender, race, comorbidities, receipt of surgery, enrollment in Medicare Advantage, seasonality, and hospital fixed effects. (That’s why the figure’s caption calls this a “residual.”) As shown, HF patients admitted on Sunday-Tuesday have shorter lengths of stay than those admitted on a Wednesday-Saturday. A similar pattern exists for PNE and AMI patients.

    [Figure: residual log length-of-stay by admission day of week, HF patients]
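    To make the figure's "residual" concrete, here is a minimal sketch: demean log LOS within each hospital (a simple stand-in for the paper's full covariate and hospital fixed-effects adjustment), then average the residuals by admission day. The records below are invented toy data, not the paper's:

```python
# Sketch of the residual log length-of-stay computation. Hospital-specific
# demeaning stands in for fixed effects; a real analysis would also adjust
# for age, comorbidities, seasonality, etc.
import math
from collections import defaultdict

# Toy records: (hospital_id, admission_day, length_of_stay_in_days).
records = [
    ("A", "Mon", 3), ("A", "Wed", 5), ("A", "Fri", 6), ("A", "Sun", 3),
    ("B", "Mon", 4), ("B", "Wed", 7), ("B", "Fri", 8), ("B", "Sun", 4),
]

def residual_log_los_by_day(records):
    """Mean within-hospital residual of log(LOS), by admission day."""
    # Step 1: hospital-specific mean of log LOS (the "fixed effect").
    by_hosp = defaultdict(list)
    for hosp, day, los in records:
        by_hosp[hosp].append(math.log(los))
    hosp_mean = {h: sum(v) / len(v) for h, v in by_hosp.items()}

    # Step 2: residual = log LOS minus the hospital mean; average by day.
    by_day = defaultdict(list)
    for hosp, day, los in records:
        by_day[day].append(math.log(los) - hosp_mean[hosp])
    return {d: sum(v) / len(v) for d, v in by_day.items()}

resid = residual_log_los_by_day(records)
# In this toy data, Sunday/Monday admissions have negative residuals
# (shorter stays) and Wednesday/Friday admissions positive ones,
# mirroring the pattern in the paper's figure.
```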

    Why? The hypothesis is that there is an incentive to get patients out of the hospital before the weekend, unless it’s pretty clear they’ll need to stay through the weekend. This could be due to patient demand (e.g., they want, or their family wants them, to be home on weekends). Or it could be due to provider factors (e.g., reduced weekend staffing makes it harder for the hospital to provide care or plan discharges). Also, under the diagnosis-based payment system Medicare uses, an avoidable extra day is all cost and no additional revenue.

    Whatever the reason, if admission day is random with respect to outcomes, it could be a good instrument: a way to estimate a causal relationship between length of stay and things like mortality or hospital readmissions. If admission day is a good instrument, stratifying by it should balance observable factors, like comorbidities. If, for example, patients admitted earlier in the week are also sicker, then their outcomes could be worse not because they are discharged earlier (before the weekend) but because of their more severe illnesses, invalidating the instrument.

    In principle it isn’t absolutely necessary that observable factors like comorbidities be balanced across values of the instrument, because they can be controlled for. However, if there is not balance among observable factors across instrument strata, it should reduce our confidence that there is balance among unobservable factors, which is the key hypothesis for IV. So, checking balance on observables, like comorbidities, is a falsification test, something every IV study should include. (If one’s theory suggests that there ought not be balance on some specific observables, then we might forgive that, and the analysis should control for them. But there must be some observables for which balance occurs, or else why should we believe it does so for all unobservables correlated with outcomes?)

    This falsification test is a direct analog of the typical “Table 1” in a publication of results from an RCT. A standard Table 1 shows balance of observable factors across treatment/control arms. If you ever saw an unbalanced Table 1, you’d suspect a breakdown in the randomization. The study would be fatally flawed. Well, one can and should do this type of test with IV too.
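    A minimal version of such a balance check can be sketched in a few lines: compare the rate of one comorbidity between the instrument's strata (Sunday/Monday admissions versus all other days). The counts below are invented, and a real analysis would repeat this for every covariate in Table 1:

```python
# Balance check in the spirit of an RCT's "Table 1": a two-proportion
# z-test for one comorbidity across instrument strata.
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: diabetes among Sun/Mon admissions (x1 of n1)
# versus admissions on any other day (x2 of n2).
z = two_proportion_z(x1=240, n1=1000, x2=250, n2=1000)

# |z| < 1.96 -> no detectable imbalance at the 5% level, so the
# instrument passes this particular falsification test for this covariate.
balanced = abs(z) < 1.96
```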

    Considering HF patients, Bartel, Chan, and Kim do find balance of comorbidities when stratified by Sunday/Monday admissions versus admissions on any other day, but only for those with greater severity of disease. The reason could be that day of admission is more random for high severity patients; they may have less control over when they enter the hospital than other, less severely ill patients, the relatively sicker* of whom seem disproportionately to be admitted on Sundays and Mondays. Therefore, their instrument is probably not valid for less severe HF patients. A similar falsification test did not reject the validity of the instrument for the AMI and PNE study cohorts.

    Main lesson: Do falsification tests. Adjust analysis accordingly.

    The paper’s principal results are as follows:

    • “For HF patients with high severity, one more hospital day decreases readmission risk by 7%. This relationship between LOS and readmissions does not exist for PNE or AMI patients, but we show that longer LOS can reduce their mortality risks by 22% and 7% respectively.”
    • “Keeping all FFS [Medicare fee for service] PNE patients in the hospital for one more day would save 19,063 lives [over four years].”
    • “Keeping all FFS AMI patients in the hospital for one more day saves 2,577 lives [over four years].”
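    For intuition about where figures of this kind come from, the rough arithmetic is: lives saved ≈ number of patients × baseline 30-day mortality × relative risk reduction. The inputs below are invented for illustration; the paper's actual estimates come from its fitted IV models, not this calculation:

```python
# Back-of-envelope arithmetic behind a "lives saved" estimate.
# All inputs are hypothetical except the 22% relative reduction,
# which is the paper's estimate for PNE patients.
n_patients = 1_000_000        # hypothetical FFS admissions over the period
baseline_mortality = 0.10     # hypothetical 30-day mortality risk
relative_reduction = 0.22     # paper's estimated reduction from one more day

lives_saved = n_patients * baseline_mortality * relative_reduction
# roughly 22,000 with these invented inputs
```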

    These results suggest that discharges timed to avoid weekends, which shorten LOS, harm patients, as does shorter LOS in general (at the margin examined). However, we should only believe this to the extent we believe the instrument. The falsification tests in the paper should increase our confidence in the validity of the findings.

    * To help you parse this: I’m talking about the relatively sicker among the less severely ill subset. This is bloggerrifically vague, but details are in the paper.

    @afrakt

  • Hospital readmissions are down, but are they appropriately measured?

    The Department of Health and Human Services (HHS) released some news that suggests patients are receiving better care from hospitals:*

    The data in this report shows a substantial nine percent decrease in harms experienced by patients in hospitals in 2012 compared to the 2010 baseline, and an eight percent decrease in Medicare Fee-for-Service (FFS) 30-day readmissions. National reductions in adverse drug events, falls, infections and other forms of harm are estimated to have prevented nearly 15,000 deaths in hospitals, and saved $4.1 billion in costs, and prevented 560,000 patient harms in 2011 and 2012.

    Hospital readmission rates continue to fall, as HHS’s figure shows:

    [Figure: Medicare FFS 30-day readmission rate trend]

    You can read more about this in coverage by Jordan Rau.

    Among the remaining issues pertaining to hospital readmissions is the extent to which they should be adjusted for sociodemographic factors. The National Quality Forum recently wrestled with this. New research from two sets of investigators, published in Health Affairs, adds to the debate. Some key passages follow.

    Elna Nagasako and colleagues:

    [W]e compared results for hospitals in Missouri under two types of models. The first type of model is currently used by the Centers for Medicare and Medicaid Services for public reporting of condition-specific hospital readmission rates of Medicare patients. The second type of model is an “enriched” version of the first type of model with census tract–level socioeconomic data, such as poverty rate, educational attainment, and housing vacancy rate. We found that the inclusion of these factors had a pronounced effect on calculated hospital readmission rates for patients admitted with acute myocardial infarction, heart failure, and pneumonia. Specifically, the models including socioeconomic data narrowed the range of observed variation in readmission rates for the above conditions, in percentage points, from 6.5 to 1.8, 14.0 to 7.4, and 7.4 to 3.7, respectively. […]

    These findings have led to some controversy and a great deal of public debate on whether the readmissions measures used by CMS to penalize hospitals should control for socioeconomic factors. On one side of the debate are supporters of the existing policy to exclude socioeconomic factors from risk-adjustment models in order to maintain the visibility of differences in health outcomes for groups with different socioeconomic characteristics. The opposing argument supports controlling for socioeconomic factors to avoid disproportionately penalizing hospitals that care for a large number of patients from disadvantaged backgrounds and communities. The question underlying the debate is centered on whether the quality of care received in the hospital can influence the portion of the patient’s risk of readmission that is attributable to his or her socioeconomic circumstances.
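    The mechanics of the comparison Nagasako and colleagues describe can be sketched simply: compute the spread of hospital readmission rates before and after adjusting for a neighborhood-level socioeconomic covariate. The rates, poverty levels, and the linear adjustment below are all invented for illustration; CMS's actual models are hierarchical logistic regressions:

```python
# Illustration of how adjusting for a census-tract poverty rate can
# narrow the observed spread of hospital readmission rates.
# (hospital, raw 30-day readmission rate, mean tract poverty rate)
hospitals = [
    ("H1", 0.26, 0.40),
    ("H2", 0.22, 0.30),
    ("H3", 0.18, 0.15),
    ("H4", 0.16, 0.10),
]

# Hypothetical coefficient: extra readmission risk per point of tract
# poverty. A real analysis would estimate this from patient-level data.
poverty_effect = 0.20
mean_poverty = sum(p for _, _, p in hospitals) / len(hospitals)

# Shift each hospital's rate toward what it would be at average poverty.
adjusted = {
    h: rate - poverty_effect * (pov - mean_poverty)
    for h, rate, pov in hospitals
}

raw_range = max(r for _, r, _ in hospitals) - min(r for _, r, _ in hospitals)
adj_range = max(adjusted.values()) - min(adjusted.values())
# Adjustment narrows the spread across hospitals, as in the paper's
# comparison of the standard and SES-"enriched" models.
```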

    Jianhui Hu, Meredith Gonsahn, and David Nerenz:

    Patients living in high-poverty neighborhoods were 24 percent more likely than others to be readmitted, after demographic characteristics and clinical conditions were adjusted for. Married patients were at significantly reduced risk of readmission, which suggests that they had more social support than unmarried patients. These and previous findings that document socioeconomic disparities in readmission raise the question of whether CMS’s readmission measures and associated financial penalties should be adjusted for the effects of factors beyond hospital influence at the individual or neighborhood level, such as poverty and lack of social support. […]

    CMS’s rationale for not adjusting for patients’ socioeconomic characteristics is that differences in the quality of care received by groups of patients of different socioeconomic status can contribute to readmissions. Therefore, hospitals should not be held to different standards of care based on the demographic characteristics of their patients, and specifically should not be held to lower standards for socioeconomically disadvantaged populations. […]

    However, some stakeholder groups and scholars have expressed concern that the current CMS policy would disproportionately affect hospitals that provide care to patients of low socioeconomic status. They argue that the policy assumes that readmissions are a result of poor quality care, but instead readmissions are driven largely by patients’ circumstances after discharge, such as lack of social support at home or in the community, and are therefore outside the control of hospitals.

    Also, Hu, Gonsahn, and Nerenz demonstrate exceptional taste and brilliance** in their references, with two citations to this blog and one to Brad Flansbaum’s:

    [Image: Health Affairs reference list, with citations to this blog and Brad Flansbaum’s]

    * Caution is warranted. Just because a few indicators look better doesn’t mean care is better overall.

    ** Words deliberately chosen to reflect an actual or appearance of bias.

    @afrakt

  • Hospital readmissions after surgery

    Yesterday I wrote about a study of emergency department visits within 30 days of a surgical procedure. Last night NEJM published a study, by Thomas Tsai and colleagues, on 30-day hospital readmissions after surgery. This surgery focus is motivated by the fact that the Medicare program is considering imposing penalties on hospitals with high surgical readmission rates.

    The investigators examined 2009-2010 Medicare data for patients who had received one of the following surgical procedures: coronary-artery bypass grafting (CABG), pulmonary lobectomy, endovascular repair of abdominal aortic aneurysm, open repair of abdominal aortic aneurysm, colectomy, and hip replacement.

    The paper is packed with findings. In brief, from the abstract:

    The median risk-adjusted composite readmission rate at 30 days was 13.1% (interquartile range, 9.9 to 17.1). In a multivariate model adjusting for hospital characteristics, we found that hospitals in the highest quartile for surgical volume had a significantly lower composite readmission rate than hospitals in the lowest quartile (12.7% vs. 16.8%, P<0.001), and hospitals with the lowest surgical mortality rates had a significantly lower readmission rate than hospitals with the highest mortality rates (13.3% vs. 14.2%, P<0.001). High adherence to reported surgical process measures was only marginally associated with reduced readmission rates (highest quartile vs. lowest quartile, 13.1% vs. 13.6%; P=0.02).

    So, hospitals with higher volume, lower mortality rates, and better surgical process measures (all traditional indicators of quality) had lower readmission rates. With respect to the volume findings, here’s a pretty picture:

    [Figure: readmission rates by hospital surgical volume]

    As readers might recall, hospital readmission rates for heart attacks, heart failure, and pneumonia are not as strongly aligned with mortality rates and other measures of quality. So, why the difference for surgical readmission rates? Tsai et al. explain:

    The reasons that bring surgical patients back to the hospital soon after discharge are probably different from those that bring medical patients back. Whereas medical patients may come back because of poor social support at home, inability to access primary care, or general poor health, surgical patients are more likely to return as a consequence of complications arising from the surgery.

    In other words, surgical readmission rates more closely measure hospital (surgical) quality, as opposed to the nature of care beyond hospital walls. That being the case, it makes sense that they’d be more highly correlated with other measures of hospital (surgical) quality.

    @afrakt

  • Emergency department visits after surgery

    The recent paper about emergency department (ED) visits by Medicare beneficiaries within 30 days of discharge after surgery by Keith Kocher and colleagues in Health Affairs is interesting. It’s interesting for what’s in it, and it’s interesting for what’s not.

    What’s in it: using 2005-2007 Medicare data, a national analysis of ED visits within 30 days of discharge for percutaneous coronary intervention, coronary artery bypass grafting, elective abdominal aortic aneurysm repair, back surgery, hip fracture repair, or colectomy.

    Across all procedures, 17.3 percent of patients had at least one ED visit within thirty days of hospital discharge, and 4.4 percent of patients had multiple ED visits. Among those patients who were readmitted, more than half were readmitted during an ED visit. In addition, use of the ED and related readmissions were associated with substantial variability—as much as fourfold—across hospitals.

    [Figure: ED visits after surgery]

    The authors correctly point out that many such ED visits likely reflect poor coordination of care and insufficient outpatient follow-up. They are also correct that EDs can play a role in preventing additional hospital readmissions, though unnecessary trips to the ED are bad enough. It’s also true that nearly half of readmissions come from sources other than the ED: office, clinic, nursing home, etc.

    What’s not in the paper: an analysis of 30-day follow-up ED use after discharges for conditions that are subject to Medicare’s mortality and hospital readmissions quality measures: heart attacks, heart failure, and pneumonia. Given the possibility that ED use might be serving as a substitute for readmissions, this would be a direction worth exploring. Relatedly, it’d be nice to see the analysis conducted again for a period during which readmissions have fallen, after 2011.

    This is not a criticism of the paper. I’m happy that there’s more work for researchers to do!

    @afrakt
