• Oregon Medicaid – Power problems are important

    This is a joint post by Aaron Carroll and Austin Frakt. This is part of our continuing coverage of the new Oregon Medicaid study paper. Prior posts are here: Post 1, Post 2, Post 3. More are forthcoming.

    People who assume we’re partisan hacks are going to take the following as a nitpicky defense, or obfuscating, or dissembling. It’s not, and we’re not. This is about the proper interpretation of research. The reason we did not have the discussion below after the initial round of Oregon Health Study results rolled out was that the debate was clearer. The results were significant, but people disagreed on their real-world importance. That’s a debate worth having, but not a technical research problem. This time, we’re disagreeing on the interpretation of the analysis, and that’s more weedy and jargony.

    You see, the study did not prove Medicaid hurts people. Nor did it prove that Medicaid doesn’t help people. It failed to prove that Medicaid improved some metrics, like A1C, by a certain predetermined amount. But what was that predetermined amount? That question is vitally important, because the study found that more people on Medicaid did improve their A1C, just not “enough”. Is that because the study was underpowered (had an insufficient number of participants)? We think that may be the case. But that question should be answerable…

    So an eagle-eyed reader pointed us to the Supplementary Appendix. We’re in the weeds here, yes, but Table S14c, on page 45, is “Mean Values and Absolute Change in Clinical Measures and Health Outcomes: prerandomization specific diagnoses”.

    Before randomization, there were 2225 people with hypertension. If we assume that half got randomized to each arm, and then take the 24.1 percentage point increase in coverage the study reports, that means there were only about 270 people with hypertension who got Medicaid, and who could be studied for this outcome. Further, those people had a baseline average blood pressure of 130/83. That’s remarkably well controlled! So there’s not nearly the room for improvement that you might assume.

    There’s a similar story for diabetes. Before randomization, there were 872 patients with diabetes. Half to each group, and then the 24.1 percentage points who actually gained new Medicaid coverage, and you’ve got only about 105 patients with diabetes in the Medicaid group. And again, their average baseline A1C was 6.7%, which is pretty well controlled. How much could Medicaid do? With respect to the percentage of patients with an A1C ≥ 6.5%, there is so much imprecision in the estimate that the 95% confidence interval includes the possibility that Medicaid got every single person with diabetes under control: the baseline percentage with A1C ≥ 6.5% was 54.0, and the reduction in the Medicaid group was −27.0 percentage points (95% CI, −71.91 to 17.92).

    But let’s say that these numbers were artificially low because people were undiagnosed. They still would have given us pause. With so few participants with disease, it’s hard to believe that you’d eventually amass enough people to detect a clinically significant difference. And when you look at the actual numbers in Table 2, concerns still exist. Take diabetes, for instance. Only 5.1% of the control group had an A1C>=6.5 (diabetes). Let’s assume that the starting prevalence was the same in the intervention group. That means that only 624 people (312 in each group) actually had a high A1C in the study. It appears they may also have been relatively well controlled. (Aside: With such low rates of poor health, by these measures, how generalizable are the results? We’ll consider that question another time.)

    This same discussion holds for the other metrics. This smacks of being underpowered.

    It appears that uncontrolled diabetes was not, in fact, especially prevalent in this population. That being the case, we’re not sure what effect, if any, you could expect Medicaid to have on this population with respect to A1C. Can we agree that if there are relatively few people in the study with diabetes, and that those who have it are relatively well controlled, then the study itself probably can’t detect a clinically significant change? This is a BIG difference from saying Medicaid could have had a significant impact, but didn’t.

    It should be possible to say something like the following, only with the numbers filled in: “We believed that there would be about X people in the study who would have diabetes, and that Y% of them would have A1Cs greater than 6.5. We believed that a clinically important reduction in this percentage would be Z, and given the variability in A1C levels, the study was powered to detect that change.” We’ve reached out to the study authors to try to fill in those numbers.
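    To make the arithmetic concrete, here is a rough sketch of that ex ante power calculation in Python (standard library only). The numbers are ours, not the authors’: we take the roughly 105 affected patients with diabetes implied by Table S14c (872 pre-randomization diabetics, half per arm, a 24.1 percentage point coverage gain), and we assume, purely for illustration, that a 10 percentage point drop in the share with A1C ≥ 6.5% would count as clinically important.

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ≈ 1.96 for a two-sided test
    z_b = NormalDist().inv_cdf(power)           # ≈ 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# 872 pre-randomization diabetics, half per arm, 24.1 pp coverage gain:
available = 872 / 2 * 0.241   # ≈ 105 patients who actually gained Medicaid

# Hypothetical target (our assumption, not the study's): cut the share with
# A1C >= 6.5% from 54% to 44%, i.e. a 10 percentage point reduction.
needed = n_per_arm(0.54, 0.44)
print(f"available ≈ {available:.0f}, needed ≈ {needed:.0f} per arm")
```

    On these assumptions, the standard two-proportion formula calls for roughly 390 patients per arm, several times the roughly 105 actually available. That gap is exactly the underpowering worry.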

    Our concern remains that it appears unlikely that there were enough people with uncontrolled diabetes that you could detect a clinically significant change with statistical significance. If we’re wrong, we’re more than happy to be proven so. But if we’re right, then it has some pretty big implications for how this study should be interpreted. If we’re right, then it’s not possible that Medicaid could have achieved a statistically significant difference. The deck was stacked against the program.

    Lots of people are claiming this is a smoke screen and that the ex post confidence intervals are enough of a power calculation. They’re not. Let us put this another way: Our problem with the ex post way many are talking about the study is that the analysis did show improvements. And no one is claiming that the improvements aren’t good enough clinically. (See, for example, the annotated table at the end of this Kevin Drum post.) They’re only claiming they aren’t statistically significant.

    So was it that the improvements weren’t big enough, or that the sample was too small? We can clearly see from the confidence intervals how much bigger the improvements would have to be in order for them to be statistically significant with the sample available. But it’s also true that if the study had been larger, by some amount, the same point estimates would have been statistically significant, and we’d not be having this conversation. With a big enough sample, even the smallest differences are statistically significant, yet the study’s point estimates aren’t small. Was this study capable of finding clinically and statistically significant effects of reasonable size? This is what an ex ante power calculation is for. It informs the researcher as to what is even worth trying to examine.

    We understand there are people who will claim we’re changing our tune by now questioning things. We’re not. The design of the study is fantastic. The choice of these specific outcomes and this specific analysis is what we now question.

    More to come.

    @aaronecarroll and @afrakt

    Comments closed
  • Oregon and Medicaid and Evidence and CHILL, PEOPLE!

    This is a joint post by Aaron Carroll and Austin Frakt. Relevant to this post, recently we have published three papers arguing for expansion of Medicaid, not relative to all possible other reforms, but relative to the status quo.

    First of all, we’re somewhat annoyed that the NEJM sent out press releases and the study to journalists, but not people like us, because we now have to rebut the gazillion stories that have already been written on a study we just found out about an hour ago. Maybe they should let some knowledgeable people see it early, too. Or just wait until it goes live to tell everyone. But we digress. Let’s get into it.

    To recap: Oregon ran an RCT of Medicaid, because of a lack of funds to expand it fully. Early results showed some promising evidence that Medicaid improved process measures, self-reported health, and enhanced financial protection. This update, at 2 years, was intended to give us some harder outcomes. The results are “mixed”:

    We found no significant effect of Medicaid coverage on the prevalence or diagnosis of hypertension or high cholesterol levels or on the use of medication for these conditions. Medicaid coverage significantly increased the probability of a diagnosis of diabetes and the use of diabetes medication, but we observed no significant effect on average glycated hemoglobin levels or on the percentage of participants with levels of 6.5% or higher. Medicaid coverage decreased the probability of a positive screening for depression (−9.15 percentage points; 95% confidence interval, −16.70 to −1.60; P = 0.02), increased the use of many preventive services, and nearly eliminated catastrophic out-of-pocket medical expenditures.

    Let’s review. The good: Medicaid improved rates of diagnosis of depression, increased the use of preventive services, and improved the financial outlook for enrollees. The bad: It did not significantly affect the A1C levels of people with diabetes or levels of hypertension or cholesterol.

    This has led many to declare (and we’re not linking to them) that the ACA is now a failed promise, that Medicaid is bad, and that anyone who disagrees is a “Medicaid denier”. How many people saying that are ready to give up insurance for themselves or their family? If they are arguing that Medicaid needs to be reformed in some way, we’re open to that. If they’re arguing that insurance coverage shouldn’t be accessible to poor Americans in any form, we don’t agree. Medicaid may not be perfect, but we don’t think being uninsured is better. This new study supports this view, though certainly not as strongly as it might have.

    From our full reading of the paper, let us add the following to the conversation:

    1) Improvements in mental health are still improvements in health outcomes. The rate of positive screens for depression dropped from 30% to 21% in the Medicaid group. The rate of medication use for depression went from 16.8% to 22.3%. That increase wasn’t statistically significant (though it was close, p=0.07), but that doesn’t mean Medicaid failed. Which leads us to…

    2) Non-statistical significance does not mean failure. It means that either (a) there is no treatment effect or (b) the study is underpowered. Since there does not seem to be a power calculation, we can’t tell which. How much of a difference would there need to be to achieve statistical significance? We can’t tell. But just because this difference wasn’t significant with the sample studied doesn’t mean it wouldn’t be significant with a larger sample. Indeed, the authors note this in the discussion:

    Hypertension, high cholesterol levels, diabetes, and depression are only a subgroup of the set of health outcomes potentially affected by Medicaid coverage. We chose these conditions because they are important contributors to morbidity and mortality, feasible to measure, prevalent in the low-income population in our study, and plausibly modifiable by effective treatment within a 2-year time frame. Nonetheless, our power to detect changes in health was limited by the relatively small numbers of patients with these conditions; indeed, the only condition in which we detected improvements was depression, which was by far the most prevalent of the four conditions examined. The 95% confidence intervals for many of the estimates of effects on individual physical health measures were wide enough to include changes that would be considered clinically significant — such as a 7.16-percentage-point reduction in the prevalence of hypertension. Moreover, although we did not find a significant change in glycated hemoglobin levels, the point estimate of the decrease we observed is consistent with that which would be expected on the basis of our estimated increase in the use of medication for diabetes.

    This is important, because the point estimates show that blood pressure did fall in Medicaid. Sure, it was a small amount. Medicaid lowered the percentage of people with elevated blood pressure from 16.3% to 15% (p=0.65). It also increased the chance of being on meds from 13.9% to 14.6%. Remember that A1C “failure”? The percent of people with diabetes with a high A1C went from 5.1% off Medicaid to 4.2% (p=0.61). The percent of people with high total cholesterol went from 14.1% to 11.7% (p=0.45). In all of these, Medicaid improved the numbers, but not in a statistically significant manner. Was it powered to detect these differences? Moreover, what should we expect? Which brings us to…
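    The interplay between point estimates and sample size is easy to demonstrate. Below is a quick Python sketch that holds the study’s 16.3% vs. 15.0% elevated blood pressure point estimates fixed but uses made-up group sizes of our choosing; it is an illustration of the statistics, not a reanalysis of the study.

```python
from statistics import NormalDist
from math import sqrt

def two_prop_p(p1, p2, n):
    """Two-sided p-value for a two-proportion z-test, n subjects per group."""
    pooled = (p1 + p2) / 2
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    z = abs(p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(z))

# Elevated blood pressure point estimates: 16.3% control vs 15.0% Medicaid.
# Group sizes below are hypothetical, chosen only to show the effect of n.
print(two_prop_p(0.163, 0.150, 1000))    # not significant at 1,000 per group
print(two_prop_p(0.163, 0.150, 20000))   # highly significant at 20,000 per group
```

    The identical 1.3 percentage point difference is nowhere near significant at 1,000 per group but overwhelmingly significant at 20,000 per group, which is why “not statistically significant” cannot be read as “no effect” without knowing the power.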

    3) What is reasonable to expect? How much does private insurance affect these values? Do we know? No. There is no RCT of private insurance vs. no insurance. No one claims we have to have one. We just “know” private insurance works. The RAND HIE did not compare insurance to no insurance. It just looked at cost-sharing of insurance. That’s not the same.

    There has never been an RCT of Medicare vs. no insurance either, though we could point to some suggestive observational work (admitting that is not the same thing).

    So Medicaid, and Medicaid only, needs an RCT to prove it works. Never mind that it’s just intuitive that easier access to health care (through any form of insurance) improves your chances of getting care, and of getting it when you need it to stay healthy, if not alive.

    4) Financial hardship matters. Here Medicaid shined. It hugely reduced out of pocket spending, catastrophic expenditures, medical debt, and the need to borrow money or skip payments.

    5) Preventive care matters. We’ve been cautious about the ability of prevention to save money. But some preventive care improves outcomes. More people on Medicaid got colonoscopies, cholesterol screenings, and prostate cancer screens (whether or not you support them). The percent of women over 50 who got mammograms more than doubled, from 28.9% to 58.6%. Results once again weren’t always “statistically significant”, so people can claim “Medicaid failed”. But colonoscopies in people over 50 went from 10.4% to 14.6% (p=0.33). Failure?

    6) Health insurance is necessary, but not sufficient to improve health. It’s just the first step. We have never claimed that quality would go up just because of the ACA. Access will improve. We need to do a lot more work to improve quality. And, yes, maybe that will require a change to how Medicaid operates, but will quality improve if more poor people don’t have access to the means to afford care? We don’t see how.

    7) Most of these measures are still process measures. A1C is a marker. So is cholesterol. Did real outcomes change? Patient centered ones, like health related quality of life, did. Did mortality? Did morbidity? We still don’t know. That would take more time to see.

    So chill, people. This is another piece of evidence. It shows that some things improved for people who got Medicaid. For others, changes weren’t statistically significant, which isn’t the same thing as certainty of no effect. For still others, the jury is still out. But it didn’t show that Medicaid harms people, or that the ACA is a failure, or that anything supporters of Medicaid have said is a lie. Moreover, it certainly didn’t show that private insurance or Medicare succeeds in ways that Medicaid fails.

    People claiming otherwise need to go read the study and rebut these points.

    @aaronecarroll and @afrakt

  • Harlan Krumholz on hospital readmissions

    Harlan M. Krumholz, MD, SM, is a cardiologist and the Harold H. Hines, Jr. Professor of Medicine and Epidemiology and Public Health at Yale University School of Medicine. He is the Director of the Yale-New Haven Hospital Center for Outcomes Research and Evaluation (CORE) and Director of the Robert Wood Johnson Clinical Scholars Program at Yale. His research has directly led to improvements in the use of guideline-based medications, the timeliness of care for acute myocardial infarction, public reporting of outcomes measures, and the current national focus on reducing the risk of readmission. He led the work that underlies the risk adjustment model for Medicare’s Hospital Readmission Reduction Program (HRRP). We (Aaron and Austin) asked him some questions about the HRRP via email. Our exchange is below.

    Note: Karen Joynt offers a different perspective in response to a similar set of questions.

    1. Based on our reading of the literature, it seems like the purpose and motivation of Medicare’s Hospital Readmissions Reduction Program (HRRP) is to use financial penalties and rewards to motivate hospitals to improve discharge planning and transitions of care. Readmissions would not be the first metric for this purpose I would think of. How did the idea to use them arise? In what ways are they well and not so well designed for this role?

    First, it is important to know that we focused on readmission as an important marker of quality of care, knowing that there were many deficiencies in the care of patients in the transition from inpatient to outpatient. Anyone who has been in the hospital or had a family member or friend in the hospital has experienced the lack of communication and coordination that occurs – and the stressors in the hospital – that make it difficult to have a successful recovery.

    We have studied readmissions and interventions to reduce risk after discharge for two decades. We did not develop the measures for a particular policy – but to highlight our performance in this area. We felt this was a very patient-centered measure that focused on a neglected area of care. The HRRP seized on these measures so there would be an incentive for hospitals and health care systems to invest in improving care for patients. Before this policy readmissions were simply more revenue for a hospital and there was no reason to invest in reducing the risk of these individuals who were entering a very hazardous period.

    For patients, readmissions are often a marker of a catastrophic adverse health event that has occurred within a short period after hospitalization. I have described the period of generalized risk that patients seem to have after a hospitalization as “Post-Hospital Syndrome” – to focus our attention on the fact that after leaving the hospital people appear susceptible to a wide range of ailments, the majority of which are different from what brought them into the hospital in the first place. It is a dangerous period for them as they are vulnerable to all sorts of health problems. These events are unwelcome, disruptive, and often life-threatening. Interestingly, we never learned in medical school about the risks that patients face soon after leaving the hospital. Our textbooks lack chapters on this topic. Most of us learned about inpatient and outpatient care – but somehow the transition from inpatient to outpatient was lost.

    In our work with CMS [Centers for Medicare & Medicaid Services] we have tried to bring this period into bright relief, a period of great vulnerability for patients. Patients do not experience the hospitalization as a singular event; they experience an episode of illness that spans physical locations. If we reflect on our care processes we see so many deficiencies in our transitional care – errors in communication, collaboration, and cooperation among health care providers. We do not recognize the disabilities that a patient acquires as a result of their illness and the hospitalization. We spend little time seeking to mitigate the stressors of the hospitalization (physical, psychological, social). And until recently there was no financial incentive to do so – in fact, readmissions generated revenue. A sad chapter in medicine is that many grant-funded programs that showed reductions in readmissions were discontinued when the funding ended, likely because the health care system did not see a sustainable business model for them.

    We selected readmission because success in reducing rates holds the possibility of improving patient outcomes and decreasing costs. We chose it because there seemed to be so little attention on ensuring that patients did well in the transition from inpatient to outpatient that there would be ample opportunities for improvement. We have pushed readmissions because for too long the medical profession has ignored this extraordinarily dangerous time for patients – in part, we believe, because health care professionals were unaware of how high the rates were. In a patient-centered health care system, we would be looking out for patients at the time that they are at greatest risk – and generating strategies that can lower their risk and help them be safe in this period.

    I also want to be clear, I did not write the law. Our group sought to focus attention on readmission, developed the measure in collaboration with experts at CMS, and encouraged the public reporting of it. I like the idea that there is now an incentive for hospitals to work on this problem, though I might have had a different approach to the policy.

    2. It’s not likely that we want all readmissions to be avoided. Some are probably appropriate and unavoidable. How will the HRRP account for these, or differentiate them from bad readmissions?

    This effort should be about reducing the risk of unplanned readmissions. We are improving the measures by removing readmissions that seem likely to be planned. But this measure is not about a single readmission; it is about a pattern of performance. The idea is that we should be able to reduce risk in the post-hospitalization period and have the need for fewer readmissions. In any effort we need to watch other measures to be sure that there are no unintended adverse consequences for patients as a result of actions that are not in their best interests. We need to track mortality, for example. But it would be a mistake to try to assess the need for each readmission – whether it was truly preventable or not. If we lower the risk generally, then the rate will drop. And the goal of a health care system and its clinicians should be to prevent a patient from getting to the point of needing to be readmitted. We do not expect the rate to ever be zero – but do people really think that a 20% rate of return within 30 days is the best we can do for our patients, especially given the evidence of the gaps in care?

    3. Is there a realistic danger that the HRRP could encourage hospitals to resist readmissions, even if that practice is to the detriment of patients? Might hospitals dump patients to alternate facilities instead of readmitting them? Are there mechanisms in place to monitor or prevent such practice?

    I would hope that health care professionals would not seek to excel on a measure at the expense of a patient. I do not believe that the vast majority of individuals or institutions would ever consider such an act. It is impossible to develop measures that are resistant to those who would disregard the best interests of patients. What I hope will occur is a realization that our current practices are not serving patients well and that care can be improved – better coordinated – and that we can recognize in our patient populations what most conspires against their success in this dangerous period – and help them through it. As I said, it will be important to monitor other outcomes, such as mortality, to ensure that the policy is not resulting in harm.

    4. You’ve argued that hospital readmission rates aren’t very sensitive to socioeconomic status. What’s the evidence for this?

    Overall, the effects have tended to be small at the patient level. Race seems to be a stronger factor than SES. We have a series of articles in preparation that will put this in better perspective. I am not saying that SES and race do not have some effect – patients with fewer resources often face greater challenges in our health care system. But I am saying that it is not the dominant factor. Look at the variation in readmission rates – the risk for all types of patients with all types of demographic characteristics is not that different. And if we adjust for SES in the models, it does little to the result; the SES variable is significant, but the effect is small.

    5. Even if readmissions are related to socioeconomic status, you’ve written that it would be problematic to risk adjust the HRRP model to reflect that. Why is this so?

    Well, if it were the dominant factor, then we would need to determine how best to proceed, because it would be a principal cause of the patient risk. However, we lack evidence that this is true. But even if it were true, to adjust for that factor would be to hide differences in our population – and I think it is best to confront the differences and then determine the best policy response to reduce or eliminate the disparity. Social determinants of health are real – and important – and deserve our attention, so we do not want to hide them. But as I said, I do not believe that they have a strong influence on an institution’s readmission rates.

  • Karen Joynt on hospital readmissions

    Karen Joynt is a practicing cardiologist in the Veterans Health Administration and an Instructor at Harvard Medical School and the Harvard School of Public Health. Her research focuses on understanding differences in quality, outcomes, and costs between hospitals, and the policies that may impact these metrics. She is an expert on Medicare’s Hospital Readmission Reduction Program (HRRP) and has published several papers relevant to the readmission rate model that underlies it, as well as its limitations. We (Aaron and Austin) asked her some questions about the HRRP via email. Our exchange is below, followed by the full references she cites in her responses.

    Note: Harlan Krumholz offers a different perspective in response to a similar set of questions.

    1. Based on our reading of the literature, it seems like the purpose and motivation of Medicare’s Hospital Readmissions Reduction Program (HRRP) is to use financial penalties and rewards to motivate hospitals to improve discharge planning and transitions of care. Do you think the HRRP is well designed for this role? If not, what are some better alternatives, in your view?

    If done right, the HRRP could really help push hospitals to forge new connections with their communities, create partnerships with primary care practices, and innovate around how we define the continuum of care. I see some major problems with the HRRP, however.

    a) Incenting hospitals to improve readmissions is one thing; comparing hospitals to one another on readmission rates and penalizing those that do worse is quite another.

    It just doesn’t have face validity to argue that a hospital with a patient population that struggles with homelessness, limited literacy, lack of access to primary care, and a high burden of substance abuse and mental health issues should be able to achieve the same readmission rate as a hospital with a wealthy patient population with a great deal of resources. Once a patient leaves the hospital, there are myriad factors that will influence their likelihood of returning to the hospital. Some of those may be medical; some social; some due to poor adherence or poor understanding – but regardless, penalizing a hospital for taking on the care of vulnerable populations sets up potentially harmful incentives, and seems to me to be the wrong approach.  We need to find ways to help hospitals and the communities in which they are located create a more comprehensive safety net for their patients, and it’s not clear that these penalties will do that.

    b) There are a number of confounding factors that make readmission rates hard to interpret.

    • Hospitals with high mortality rates may have low readmission rates because the patients who die can’t be readmitted (though this is likely not a big enough problem to explain much).
    • Hospitals with a tendency to admit less-sick patients may have lower readmission rates than hospitals with a higher threshold for admission.  Note that in the current fee-for-service environment, admitting less-sick patients is a win-win for dealing with penalties (increase inpatient volume to offset dollars lost from penalties AND decrease readmission rates to avoid next year’s penalties).
    • Improving access to care for a population may increase readmission rates (Weinberger, Oddone et al. 1996).
    • Hospitals that implement programs to improve longitudinal community care may then only admit the sickest patients, and thus have higher readmission rates.  Again, note that under fee-for-service these hospitals could lose twice (decrease volume AND worsen next year’s penalties).

    c) Incenting hospitals to improve readmissions – given that they have many competing goals and responsibilities – means that resources are not being spent on other things, like reducing medical errors, or improving inpatient quality.

    There are a few fixes to the HRRP that could improve some of these issues, though some are easier than others given that this program is written into law. We could take socioeconomic status into account. We could compare hospitals to a group of peer hospitals, or to themselves (i.e. assess improvement). We could weight nearer-term readmissions more highly, since there is some evidence that the very near-term readmissions are more likely preventable.

    2. Obviously, some readmissions are a good thing, or a necessary thing. How will the HRRP account for these, or differentiate them from bad readmissions?

    Right now, it won’t. The metric used is all-cause readmissions, meaning that a rehospitalization for any reason at any point within the 30 days following a discharge “counts” as a readmission. We know from prior work that only a fraction of readmissions are preventable (van Walraven, Bennett et al. 2011; van Walraven, Jennings et al. 2011), but we know little about what a “good” readmission might look like – that’s a very interesting thought.

    3. Is there a realistic danger that the HRRP could encourage hospitals to resist readmissions, even if that practice is to the detriment of patients? Might hospitals dump patients to alternate facilities instead of readmitting them? Are there mechanisms in place to monitor or prevent such practice?

    I think this is a realistic danger. People respond to incentives, and if the signaling is strong enough, some may respond to them in ways that aren’t in patients’ best interest. Two major ways that hospitals could “game” the readmissions measure are putting patients on observation status rather than full admission status (Feng, Wright et al. 2012), and declining transfers of particularly ill patients. Both are phenomena that we will hopefully be able to track in Medicare data over the coming years, in order to determine if these are real problems or just theoretical ones. There are no formal mechanisms in place to prevent such practice, to my knowledge, though I hope there are folks at CMS who are tracking these types of outcomes as well.

    4. You’ve argued that hospital readmission rates are sensitive to socioeconomic characteristics, yet the HRRP doesn’t adjust for them. Which characteristics have been examined in the literature? How sensitive are they and why aren’t they included in CMS’s calculation?

    The literature has been fairly consistent that socioeconomic characteristics matter in terms of readmissions. Specific characteristics that have been examined include race/ethnicity (Alexander, Grumbach et al. 1999; Rathore, Foody et al. 2003; Jiang, Andrews et al. 2005; Silverstein, Qin et al. 2008; Jencks, Williams et al. 2009; Joynt, Orav et al. 2011; Rodriguez, Joynt et al. 2011), hospital racial makeup (Joynt, Orav et al. 2011; Rodriguez, Joynt et al. 2011), patient poverty (Weissman, Stern et al. 1994; Kangovi, Grande et al. 2012), poverty of the neighborhood in which the patient lives (Foraker, Rose et al. 2011), poverty of the community in which a hospital is located (Joynt and Jha 2011), Medicaid versus private insurance status (Jiang and Wier 2010; Foraker, Rose et al. 2011; Kangovi, Grande et al. 2012), having limited education (Arbaje, Wolff et al. 2008), and things like living alone and requiring help with basic functional needs (Arbaje, Wolff et al. 2008).

    Adjusting for these factors is feasible, at least to the degree to which information about them is available, but this idea has met with resistance in the policy community and thus hasn’t been done.

    My concern is that hospitals that serve a higher proportion of patients who face these challenges are more likely to be penalized under the program, specifically safety-net hospitals (Joynt and Jha 2013).

    5. What other aspects of the HRRP concern you?

    The models used for risk adjustment are not very good at predicting readmissions.  They likely underestimate risk at hospitals that serve a very medically complex group of patients, such as major teaching hospitals. The models generally indicate whether or not a patient has a comorbidity in a binary fashion, which may not capture the complexity at a major referral center. We did find that teaching hospitals and large hospitals were more likely to be penalized under the HRRP than their non-teaching and smaller counterparts, though we can’t be certain that this is due to risk-adjustment alone (Joynt and Jha 2013).

    Also, the models employed use a Bayesian hierarchical shrinkage approach that makes it very unlikely that small hospitals will ever be identified as outliers (Silber, Rosenbaum et al. 2010), though this is probably a little more of a tech-y answer than you were looking for!
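To see why shrinkage works against flagging small hospitals, here is a toy empirical-Bayes sketch. This is not CMS's actual hierarchical logistic model; the weight formula, the 20% national mean, and the prior strength `k = 100` are all assumptions chosen purely for illustration.

```python
# Toy empirical-Bayes shrinkage: a hospital's observed readmission rate is
# pulled toward the national mean, and the pull is strongest for low-volume
# hospitals. Illustrative only; not the CMS model.

def shrunk_rate(observed_rate, n_cases, national_mean=0.20, k=100):
    """Weight the observed rate by case volume relative to prior strength k."""
    w = n_cases / (n_cases + k)
    return w * observed_rate + (1 - w) * national_mean

small = shrunk_rate(0.40, n_cases=25)    # tiny hospital, extreme observed rate
large = shrunk_rate(0.40, n_cases=2500)  # big hospital, same observed rate

# The small hospital's estimate is dragged most of the way back toward 20%,
# so it is far less likely to be flagged as an outlier.
print(round(small, 3), round(large, 3))
```

With the same terrible observed rate of 40%, the 25-case hospital's estimate lands near 24% while the 2,500-case hospital's stays near 39%, which is the mechanism Silber, Rosenbaum et al. describe.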

    6. No, we like the weeds! Getting back to risk adjustment, proponents of the HRRP argue that socioeconomic risk adjusters would be inappropriate because they could relate to quality. When and why is it appropriate or inappropriate to control for socioeconomics in a hospital or system performance measure?

    My personal opinion is that it is inappropriate to control for socioeconomics when one is measuring processes of care. There is no reason that a hospital should have any lower rate of use of revascularization for an acute MI in poor compared to wealthy patients or in black compared to white patients – and controlling for these factors would give an inappropriate “free pass” to provide low-quality care.

    However, the case of a complicated outcome measure like readmissions is different. There is plenty of evidence that socioeconomics impact readmissions (whereas as far as I know there is no evidence that socioeconomics impact a patient’s benefit from revascularization, or aspirin, or appropriate antibiotics, etc.). Given that we are asking hospitals to do vastly different jobs in preventing readmission, recognizing these differences seems reasonable. A hospital with a high proportion of patients who are homeless, or who cannot afford medications, or who have severe mental illness or substance abuse, will have a harder time preventing readmissions than a hospital with a wealthy, stable population. We shouldn’t penalize hospitals for caring for vulnerable populations – we should “level the playing field” to some degree, while we work to determine better ways to provide care for these populations.


    Alexander, M., K. Grumbach, et al. (1999). “Congestive heart failure hospitalizations and survival in California: patterns according to race/ethnicity.” Am Heart J 137(5): 919-927.

    Arbaje, A. I., J. L. Wolff, et al. (2008). “Postdischarge environmental and socioeconomic factors and the likelihood of early hospital readmission among community-dwelling Medicare beneficiaries.” Gerontologist 48(4): 495-504.

    Feng, Z., B. Wright, et al. (2012). “Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences.” Health Aff (Millwood) 31(6): 1251-1259.

    Foraker, R. E., K. M. Rose, et al. (2011). “Socioeconomic status, Medicaid coverage, clinical comorbidity, and rehospitalization or death after an incident heart failure hospitalization: Atherosclerosis Risk in Communities cohort (1987 to 2004).” Circ Heart Fail 4(3): 308-316.

    Jencks, S. F., M. V. Williams, et al. (2009). “Rehospitalizations among patients in the Medicare fee-for-service program.” N Engl J Med 360(14): 1418-1428.

    Jiang, H. J., R. Andrews, et al. (2005). “Racial/ethnic disparities in potentially preventable readmissions: the case of diabetes.” Am J Public Health 95(9): 1561-1567.

    Jiang, H. J. and L. M. Wier (2010). All-Cause Hospital Readmissions among Non-Elderly Medicaid Patients, 2007, HCUP Statistical Brief #89. Rockville, MD, United States Agency for Healthcare Research and Quality.

    Joynt, K. E. and A. K. Jha (2011). “Who has higher readmission rates for heart failure, and why? Implications for efforts to improve care using financial incentives.” Circ Cardiovasc Qual Outcomes 4(1): 53-59.

    Joynt, K. E. and A. K. Jha (2013). “Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program.” JAMA 309(4): 342-343.

    Joynt, K. E., E. J. Orav, et al. (2011). “Thirty-day readmission rates for Medicare beneficiaries by race and site of care.” JAMA 305(7): 675-681.

    Kangovi, S., D. Grande, et al. (2012). “Perceptions of readmitted patients on the transition from hospital to home.” J Hosp Med 7(9): 709-712.

    Rathore, S. S., J. M. Foody, et al. (2003). “Race, quality of care, and outcomes of elderly patients hospitalized with heart failure.” JAMA 289(19): 2517-2524.

    Rodriguez, F., K. E. Joynt, et al. (2011). “Readmission rates for Hispanic Medicare beneficiaries with heart failure and acute myocardial infarction.” Am Heart J 162(2): 254-261.e3.

    Silber, J. H., P. R. Rosenbaum, et al. (2010). “The Hospital Compare mortality model and the volume-outcome relationship.” Health Serv Res 45(5 Pt 1): 1148-1167.

    Silverstein, M. D., H. Qin, et al. (2008). “Risk factors for 30-day hospital readmission in patients ≥65 years of age.” Proc (Bayl Univ Med Cent) 21(4): 363-372.

    van Walraven, C., C. Bennett, et al. (2011). “Proportion of hospital readmissions deemed avoidable: a systematic review.” CMAJ 183(7): E391-402.

    van Walraven, C., A. Jennings, et al. (2011). “Incidence of potentially avoidable urgent readmissions and their relation to all-cause urgent readmissions.” CMAJ.

    Weinberger, M., E. Z. Oddone, et al. (1996). “Does increased access to primary care reduce hospital readmissions? Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission.” N Engl J Med 334(22): 1441-1447.

    Weissman, J. S., R. S. Stern, et al. (1994). “The impact of patient socioeconomic status and other social factors on readmission: a prospective study in four Massachusetts hospitals.” Inquiry 31(2): 163-172.

    Comments closed
  • Expanding Medicaid saved lives

    This post is jointly authored by Aaron Carroll and Harold Pollack

    No matter how many times we refute the idea that Medicaid is bad for health, people keep on saying it. There’s so much evidence to the contrary. Recently, a number of states have used the “questionable” quality of Medicaid to buttress their arguments against the Medicaid expansion contained in the Affordable Care Act.

    Most of those decisions are fiscal. But we shouldn’t ignore their effect on patients themselves. There’s a paper in yesterday’s New England Journal of Medicine entitled “Mortality and Access to Care among Adults after State Medicaid Expansions.” It’s worth a read:


    Several states have expanded Medicaid eligibility for adults in the past decade, and the Affordable Care Act allows states to expand Medicaid dramatically in 2014. Yet the effect of such changes on adults’ health remains unclear. We examined whether Medicaid expansions were associated with changes in mortality and other health-related measures.


    We compared three states that substantially expanded adult Medicaid eligibility since 2000 (New York, Maine, and Arizona) with neighboring states without expansions. The sample consisted of adults between the ages of 20 and 64 years who were observed 5 years before and after the expansions, from 1997 through 2007. The primary outcome was all-cause county-level mortality among 68,012 year- and county-specific observations in the Compressed Mortality File of the Centers for Disease Control and Prevention. Secondary outcomes were rates of insurance coverage, delayed care because of costs, and self-reported health among 169,124 persons in the Current Population Survey and 192,148 persons in the Behavioral Risk Factor Surveillance System.


    Basically, Sommers, Baicker, and Epstein examined county all-cause mortality rates of working-age adults in three states (Arizona, Maine, and New York) that expanded Medicaid eligibility for childless adults between 2000 and 2005. They compared trends in these states in the five years before and the five years after Medicaid expansion to trends found in nearby comparison states that didn’t expand eligibility. These comparison states therefore served as controls. The authors also examined the proportion of individuals reporting that they are in “excellent” or “good” health, as well as those who reported that they were unable to obtain needed care in the past year because of cost.

    Let’s acknowledge that this difference-in-difference design isn’t airtight. It’s not a randomized trial, and we can’t prove causality. But the study is still pretty compelling. On all fronts, the authors found that Medicaid expansion was associated with reduced mortality rates, improved health, and improved access to needed care. In their preferred regression model, annual mortality rates declined by 19.6 deaths per 100,000, a relative reduction of 6.1% (p=0.001). These results imply that the expansions prevented 2840 deaths per year in states that together added about 500,000 adults to Medicaid. That’s not a small change.
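As a quick sanity check on those figures: only the 19.6 per 100,000, the 6.1% relative reduction, and the 2840 deaths come from the study. The baseline mortality rate and the adult population below are backed out from those numbers, not reported in this post.

```python
# Back out the quantities implied by the study's headline figures.
deaths_per_100k_averted = 19.6   # annual mortality decline per 100,000 adults
relative_reduction = 0.061        # 6.1% relative reduction
total_deaths_averted = 2840       # deaths prevented per year, all expansion states

# Baseline mortality implied by a 6.1% relative drop of 19.6 per 100k:
baseline_per_100k = deaths_per_100k_averted / relative_reduction  # ~321 per 100,000

# Adult population over which 19.6 per 100k sums to 2840 deaths per year:
implied_adults = total_deaths_averted / deaths_per_100k_averted * 100_000

print(round(baseline_per_100k))   # ~321 per 100,000
print(round(implied_adults))      # ~14.5 million working-age adults
```

The implied adult population of roughly 14.5 million across the three expansion states is plausible for New York, Maine, and Arizona combined, which is one way the numbers hang together.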

    What’s especially impressive is the way this paper’s modest but important findings hang together from both a statistical and a clinical perspective. Mortality reductions were greatest in precisely the groups most likely to benefit from more generous Medicaid policies: nonwhites, older adults, and those living in counties with more prevalent poverty. The authors found smaller but significant reductions among whites. They found no effects among persons under the age of 35, whose mortality rate is simply too low for such policies to make much of a difference.

    As the authors say:

    Our estimate of a 6.1% reduction in the relative risk of death among adults is similar to the 8.5% and 5.1% population-level reductions in infant and child mortality, respectively, as estimated in analyses of Medicaid expansions in the 1980s….

    A relative reduction of 6% in population mortality would be achieved if insurance reduced the individual risk of death by 30% and if the 1-year risk of death for new Medicaid enrollees was 1.9%… This degree of risk reduction is consistent with the Institute of Medicine’s estimate that health insurance may reduce adult mortality by 25%, though other researchers have estimated greater or much smaller effects of coverage. A baseline risk of death of 1.9% approximates the risk for a 50-year-old black man with diabetes or for all men between the ages of 35 and 49 years who are in self-reported poor health.

    The bottom line is that, according to these findings, state Medicaid programs need only cover 176 additional adults to avert one additional death every year. This allows for a crude but intriguing cost-effectiveness calculation. Annual Medicaid costs for childless adults are roughly $6,000. The cost per averted death (176 × $6,000) is thus about $1 million. This $1 million figure is easily within the range of acceptable costs based on common, widely supported interventions to save lives and improve health.
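The arithmetic above is easy to check, and it also ties back to the quoted assumptions (a 1.9% baseline one-year risk of death cut by 30%). All inputs come from the post; nothing here is new data.

```python
# Check the back-of-the-envelope cost-effectiveness numbers.
enrollees_per_averted_death = 176
annual_cost_per_enrollee = 6_000   # rough annual Medicaid cost, childless adult
new_enrollees = 500_000            # approximate size of the expansions studied

cost_per_averted_death = enrollees_per_averted_death * annual_cost_per_enrollee
deaths_averted_per_year = new_enrollees / enrollees_per_averted_death

# The 176 figure is consistent with the quoted assumptions: a 1.9% baseline
# one-year risk of death reduced by 30% is an absolute risk reduction of
# 0.57%, i.e. one death averted for every ~175 people covered.
implied_number_needed_to_cover = 1 / (0.019 * 0.30)

print(cost_per_averted_death)                  # 1056000 -> "about $1 million"
print(round(deaths_averted_per_year))          # 2841 -> the paper's 2840
print(round(implied_number_needed_to_cover))   # 175 -> the post's 176
```

The three routes to the same answer, roughly 176 enrollees per death averted at about $1 million each, are why the post calls the estimate crude but intriguing rather than fragile.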

    In 1994, Janet Currie and Jonathan Gruber performed a classic analysis of the health impacts and costs associated with earlier Medicaid expansions for infants and pregnant women. Largely through the financing of NICUs and related care, these expansions reduced infant mortality. Expressing the findings in year-2012 dollars, Currie and Gruber found that early, more targeted Medicaid expansions for relatively high-risk women and infants cost about $1.3 million per averted infant death. Later expansions to relatively lower-risk patients were more costly, with an estimate of about $6.5 million per averted death.

    Now, Sommers, Baicker, and Epstein add to our fund of knowledge by showing that expanded  Medicaid benefits for childless adults can also save lives. Moreover, this Medicaid expansion provides good public value, as it improves many measures of health in addition to preventing death.

    There’s been a wide, often misplaced debate over whether Medicaid helps or hurts its own recipients. We need to stop that. Medicaid helps. As states debate whether and how to expand coverage to millions of childless adults across America, they can focus on how much they’re willing to spend to save lives, but they shouldn’t deny that that’s what’s at stake.

    Comments closed
  • Medicaid spending growth is surprisingly modest

    This post is coauthored by Austin Frakt and Aaron Carroll.

    Christopher Flavelle has put together a fascinating Bloomberg Government Study on the allure and growth of Medicaid managed care and the recent trend in Medicaid spending by states. It’s the first of three pieces in this area and, unfortunately, is behind a paywall. If you can get your hands on it, it’s worth a full read. If you can’t, here are a few details we thought worth highlighting.

    Flavelle writes that advocates of turning Medicaid into a block grant program often claim it would “increase spending predictability.” Given this and other rhetoric from states about their “out of control” Medicaid growth, you’d think that spending has been growing exceptionally rapidly recently.

    According to analysis by Flavelle, that’s not the case.

    Inflation-adjusted Medicaid spending per capita by state general funds increased just 3.8% between 2002 and 2011. This is illustrated by the dotted line in the chart below. Per capita Medicaid spending by each of the five states with the largest Medicaid programs is also shown. Though they gyrate up and down, they all end up in 2011 close to or even below where they started in 2002.

    Some might object to dividing spending by the state population (per capita), because the population grows over time.* Of course, the state’s population reflects its potential tax base too, so by that standard it is fair to divide by it. A state with a growing population should be able to afford commensurate growth in its Medicaid spending, though that can depend on how different sectors of the population grow relative to each other (more wealthy or more poor people).

    In any case, total (not per capita) real Medicaid growth was just 12% from 2002 to 2011. That’s not as high as we expected, though there is variation by state. While spending in Illinois actually decreased by 1% over this period, spending in Texas went up 58%. Moreover, this is growth above inflation, so there is room for improvement. Still, Medicaid has held spending growth below that of other payers. Flavelle quotes Vernon Smith, former Medicaid director for Michigan:
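To put those cumulative figures in per-year terms, here is a small annualization sketch. The nine-year compounding window (2002 endpoints to 2011 endpoints) is our assumption; Flavelle reports only the cumulative growth.

```python
# Annualize the 2002-2011 real growth figures quoted above.
years = 2011 - 2002  # 9 years between the endpoints

def annualized(total_growth, years):
    """Convert cumulative growth (e.g. 0.12 for 12%) to a compound per-year rate."""
    return (1 + total_growth) ** (1 / years) - 1

per_capita = annualized(0.038, years)  # ~0.4% per year, real, per capita
total = annualized(0.12, years)        # ~1.3% per year, real, total
texas = annualized(0.58, years)        # ~5.2% per year, the high outlier
```

Seen this way, real per capita Medicaid spending from state general funds grew well under half a percent a year, which is the sense in which the growth is "surprisingly modest."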

    “When you look at the rate of growth for all the major payers — Medicaid, Medicare, employer-sponsored insurance, National Health Expenditures — what you see is that no other payer has constrained the rate of growth in spending as well as Medicaid has. [] The reason is that no payer has been as motivated to undertake cost containment as state governments.”

    In total, it looks like states have done a pretty good job. Based on spending trends alone, it isn’t clear why officials in some states think the program needs major restructuring. That’s not to say it is perfect or that there aren’t other justifications for reform. Of course there are. In fact, given its low reimbursement rates, some might reasonably argue we should spend more on Medicaid, not less. At any rate, the data don’t support claims that state spending on Medicaid has been growing in an especially concerning way, particularly relative to other payers.

    * Another potential objection is that Medicaid spending by state general funds does not count “other funds and revenue sources used as Medicaid match, such as local funds and provider taxes, fees, donations, assessments.” True! However, Flavelle justifies a focus on general fund spending by quoting Alan Simpson who characterized that type of spending as a “tax gimmick” and that states just use “that additional ‘spending’ to increase their federal match.” If you collect a dollar and then give it back to those you collected it from in order to get another dollar (or more) from the federal government (which, of course, you also spend on the same providers you taxed), have you really spent your own funds?

    Comments closed
  • Reflex: spiked

    We have decided to terminate our morning Reflex posts. The readership demand (as measured by comments) and blogospheric popularity (as measured by links) do not seem to justify the work required to produce them. If news items warrant comment, we’ll provide it in one-off posts.

    Sorry to those who loved Reflex. The economics of the blogosphere have spoken. We must listen, at least incidentally.

    Comments closed
  • Reflex: December 21, 2011

    Kicking the decision about what benefits must be included in individual and small-employer plans to states will continue the nation’s patchwork of uneven coverage, report Gardiner Harris, Reed Abelson, and Robert Pear. “People in Utah and Wyoming, for example, are likely to have more limited access to expensive services now mandated in states like Massachusetts and Maryland — at least until 2016, when a senior administration official said the federal government plans to establish a national standard of essential benefits.” Austin’s comment: I’m getting a lot of questions as to whether this was a good or bad move by the Obama Administration. One thing is clear, it was a politically wise move. Dodging another huge fight over health reform may aid the viability of the new law in the long run.

    Yes, Congress didn’t pass a deal to extend the payroll tax cut, but that also means they didn’t pass a temporary doc fix, writes Julian Pecquet. “Patient advocates immediately started blasting Congress on Tuesday after House Republicans nixed a temporary fix to Medicare payments to physicians. The House voted 229-193 to reject the Senate’s two-month “doc fix” and instead call for a conference meeting with the Senate. Senate Majority Leader Harry Reid (D-Nev.) says the Senate is done for the year. If neither chamber changes its mind, physicians will see a 27.4 percent cut in Medicare payments starting Jan. 1.” Aaron’s Comment: There’s no “if” there. The Senate has gone home for the holidays, and the House has voted. There will be no doc fix before the new year. This doesn’t mean that one won’t be passed retroactively, but it’s got to be frightening for the AMA and others. If Congress is willing to play chicken with all of America, I don’t know why they wouldn’t with physician reimbursement.


    Comments closed
  • Reflex: December 20, 2011

    House will not have direct vote on Senate deal, write Tom Cohen and Alvin Silverleib. The two-month extension of the payroll tax cut, unemployment insurance, and the doc fix that passed the Senate 89-10 on Saturday appears dead in the House, and there will not be a direct vote on the Senate measure; Senate Democrats say they will not negotiate a full-year extension of the payroll tax cut (which everyone wants) until the House passes the two-month version. Don’s comment: This is not as dumb and dysfunctional as the debt ceiling debacle in August, but there is still time.

    Ezekiel Emanuel rejects premium support. “Premium support is classic cost shifting, rather than cost cutting. […] To address the root of the cost problem, we must change how we pay doctors and hospitals. We must move away from fee-for-service payments to bundled payments that include all the costs of caring for a patient, thereby encouraging providers to keep patients healthy and avoid unnecessary services.” Austin’s comment: Premium support is a broader concept than Emanuel suggests. It need not shift costs to Medicare beneficiaries. However, Emanuel’s conclusion is reasonable. The type of cost cutting we need will not be found in premium support alone. My recent series covered all these points and more.

    The administration’s first crack at essential benefits guidance is drawing no backlash, says Jason Millman. “The Obama administration’s first crack at defining minimum health benefits did exactly what consumer groups hoped it wouldn’t do: It gave states a choice of “benchmark” plans rather than spelling out the details. But the administration seems to have pulled it off — because there was no backlash to be found from groups that championed the law.” Aaron’s Comment: (1) It provoked no backlash because it punted the call to the states. (2) It’s sad that “success” is now being defined as not angering anyone. (3) That just doesn’t seem worthy.


    Comments closed
  • Reflex: December 19, 2011

    Julian Pecquet expects continued attacks on and challenges to the ACA. “Its first life-or-death experience lies in the hands of the Supreme Court, which could potentially strike down the Affordable Care Act as early as June. Even if the high court upholds the law, it could remove its individual mandate […] Every Republican presidential candidate has vowed to repeal the law, through executive orders and by signing repeal legislation. Republicans are expected to keep control of the House, and with Democrats defending 23 seats in the Senate, the GOP has a shot at gaining the 60-member majority needed to get anything through.” Austin’s comment: The outcome of the 2012 election is the most important factor in the future of health reform and structural changes to Medicare.

    Crisis line tries to save suicidal veterans, writes Christina Ginn. There is an epidemic of suicide among veterans in the U.S., but there are resources to help. Don’s comment: Even as the Iraq war ended yesterday, this piece points out that our nation will be dealing with the aftermath of this war as well as the one in Afghanistan for many years. 1-800-273-8255 is a crisis line that any vet can call to receive help, or you can send a text to 838255 or go to http://www.veteranscrisisline.net/

    And now there’s concern over deadlines for the federal government’s health-care exchange, writes Julie Appleby. “With many states unwilling or unable to get insurance exchanges operational by the health-care law’s deadline of Jan. 1, 2014, pressure is growing on the federal government to do the job for them. But health-care experts are starting to ask whether the fallback federal exchange called for in the 2010 law will be operational by the deadline in states that will not have their exchanges ready.” Aaron’s Comment: I’ve written many times of the gamble some states are playing by not getting their exchanges ready; if they don’t, the feds will take control. It’s hard to overstate the importance of the federal government not falling behind if it wants the ACA to succeed.

    Comments closed