Lots of gun safety advocates focus on regulating the sale and transfer of firearms, but another area that could yield gains is gun storage. How safely weapons are stored is a big factor, especially when it comes to keeping kids safe, and secure storage and locks can do a lot.
In a surprise, the Supreme Court agreed this morning to hear cases arising out of the risk corridor mess. At issue is $12 billion in federal money, and the case’s outcome will hinge on what Congress meant when it placed limits on the use of appropriated funds in an effort to sabotage the Affordable Care Act.
The Federal Circuit held that Congress, in placing those limits, qualified an earlier promise made in the ACA to make risk corridor payments to insurers that lost big on the exchanges. As I’ve explained many times, I think that decision is wrong. We’ll see if the Supreme Court agrees.
I’m on the road, so a longer recap of the background and the litigation will have to wait. But I’ve been writing about the appropriations battle since 2014, and I thought I’d provide some resources if you’re interested in learning more about the case.
- Here’s my recap of the Federal Circuit decision that the Supreme Court will review. It’s a short and crisp description of the key issues in the case, and it also offers my views about why the Federal Circuit got this one wrong.
- Craig Garthwaite and I put the litigation into its broader context—the full faith and credit of the U.S. government—in this New York Times op-ed.
- I discuss the litigation at some length in this Pennsylvania Law Review piece.
- I’ve got a piece in the New England Journal of Medicine discussing rumors that the Obama administration wanted to settle the cases when they were still in the Court of Federal Claims.
- And here’s my first piece from May 2014 on the whole fiasco—titled “Does the Risk Corridor Program Have a Fatal Technical Flaw?”
We’re constantly on the lookout for ways to save money in the US health care system, and targeting waste is our best bet to do so. A new study in JAMA Pediatrics points out a contender: low-value diagnostic imaging in the emergency department.
This is a guest post by Adam A. Markovitz, BS and Andrew M. Ryan, PhD.
Our paper published on June 18th in Annals of Internal Medicine found that previous estimates of the effects of the Medicare Shared Savings Program (MSSP) have been overstated. After accounting for selective attrition of clinicians and beneficiaries from MSSP ACOs, the MSSP was associated with an increase in spending of $5 per quarter, statistically indistinguishable from zero.
In their response, McWilliams and colleagues raised a number of objections.
They claimed that ACOs have no incentive to avoid patients with higher risk. We disagree, for two reasons. First, if patients develop new health conditions while attributed to an ACO, the ACO is unable to include these conditions in patients’ risk scores. While designed to protect against upcoding, this provision creates an incentive for ACOs to prune patients who are decompensating. Second, as David Dranove and colleagues identified in an essential paper on risk selection, even if risk adjustment is accurate, the outcomes of high-risk patients will have higher variance. It is rational for risk-averse ACOs to avoid these patients.
McWilliams et al. also objected to our design and statistical specifications. One complaint concerns our decision to determine patients’ treatment status on the basis of actual CMS assignment. Instead of using actual CMS assignment, papers authored by McWilliams and colleagues have attempted to replicate CMS’ attribution algorithm to approximate ACO assignment. We believe that this approach has led these studies to miss the selective attrition observed in our paper. It is possible that the high risk patients that we observed exiting ACOs were never assigned to ACOs in this prior research.
While McWilliams et al. claim that our use of the true CMS assignment introduces “time-varying inconsistency in how utilization is used to define comparison groups,” this critique is unfounded. Using this approach, and otherwise replicating the preferred specification of McWilliams et al. (with market-year and ACO fixed effects), we found that the MSSP was associated with a significant reduction in spending. It should therefore be obvious that this analytic approach did not introduce bias toward the null.
We present a robust set of results showing evidence of selective attrition within ACOs. This includes evidence that
- higher spending patients and clinicians are more likely to exit ACOs;
- spending estimates attenuate as we progressively add fixed effects for markets, patients, and clinicians;
- ACOs are associated with our falsification outcome, hip fracture, in all standard adjusted specifications (including McWilliams et al.’s preferred specification);
- and the effects of ACOs decrease to approximately zero in our instrumental variables specification.
We also provide robust evidence for the validity of the instrumental variable, including well-balanced patient characteristics, comparable pre-period spending trends, and no association with hip fracture (our falsification outcome). The adjusted longitudinal model failed each of these tests.
McWilliams and colleagues object to this evidence for a variety of reasons, all of them unpersuasive. For instance, they argue that exit of high-risk beneficiaries is simply due to a glitch in the MSSP attribution methodology whereby sick patients are passively drawn from ACOs when they start to only receive care from non-ACO specialists. However, this would not explain our finding of pruning of physicians with high-cost patient panels.
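To see how selective attrition can contaminate a falsification test, consider a small simulation (hypothetical numbers, not our data): a placebo outcome that depends only on underlying patient risk, and that the program cannot possibly affect, will still show a spurious treatment "effect" in a naive pre/post comparison if high-risk patients disproportionately exit the treated group.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
risk = rng.normal(0, 1, n)                     # latent patient risk
treated = rng.integers(0, 2, n).astype(bool)   # ACO-attributed vs. comparison

# Assumed attrition pattern (illustrative): high-risk treated patients exit
# between the pre- and post-periods; comparison patients all stay.
stays_post = ~(treated & (risk > 1.0))

def placebo_rate(mask):
    """Mean of a placebo outcome that depends only on risk, never on treatment."""
    return (0.02 + 0.01 * risk[mask]).mean()

pre_gap = placebo_rate(treated) - placebo_rate(~treated)
post_gap = placebo_rate(treated & stays_post) - placebo_rate(~treated & stays_post)
did = post_gap - pre_gap
print(round(did, 4))  # negative: attrition alone manufactures an "effect"
```

The treatment never touches the placebo outcome, yet the difference-in-differences estimate is negative, because the surviving treated pool is healthier than it started.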
We also present a simple intention-to-treat analysis that demonstrates the strong influence of attrition bias on standard difference-in-differences estimates (see Supplement, Section H, “Intention-to-treat analyses”). Based on actual CMS assignment, a patient’s treatment status was “turned on” when the patient was first attributed to an MSSP ACO and remained on for the duration of the study period. This model was therefore unaffected by choices related to patient attribution or by changes in the composition of providers within ACOs or physician groups. Estimates from this simple model, using a balanced panel of beneficiaries with beneficiary fixed effects, found small and non-significant effects of the MSSP (+$11 per quarter [95% CI, -$13 to $36]). This crucial validity check demonstrates that the effects of the MSSP disappear when patient composition is held constant and selective attrition is addressed.
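For readers who want the mechanics, here is a minimal sketch of an intention-to-treat, two-way fixed-effects difference-in-differences estimator on simulated data (illustrative numbers only, not our study's data). Treatment is switched on at first attribution and never switched off; on a balanced panel, removing beneficiary and quarter means by double-demeaning yields the standard DiD estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_benes, n_quarters = 500, 8
# Persistent per-beneficiary spending levels (the beneficiary fixed effects)
alpha = rng.normal(2000, 400, n_benes)
# Common quarter-level shocks (the time fixed effects)
gamma = rng.normal(0, 50, n_quarters)
# Half the beneficiaries are attributed to an ACO at quarter 4; under the
# intention-to-treat rule, treatment stays "on" thereafter regardless of exit.
treated = np.arange(n_benes) < n_benes // 2
post = np.arange(n_quarters) >= 4
D = np.outer(treated, post).astype(float)
true_effect = -60.0  # assumed per-quarter effect, for illustration
y = alpha[:, None] + gamma[None, :] + true_effect * D \
    + rng.normal(0, 30, (n_benes, n_quarters))

def twoway_fe_did(y, D):
    """Two-way fixed-effects DiD: demean outcome and treatment by unit and period."""
    y_dd = y - y.mean(1, keepdims=True) - y.mean(0, keepdims=True) + y.mean()
    D_dd = D - D.mean(1, keepdims=True) - D.mean(0, keepdims=True) + D.mean()
    return (D_dd * y_dd).sum() / (D_dd ** 2).sum()

print(round(twoway_fe_did(y, D), 1))  # close to the true -60 effect
```

Because the panel is balanced and treatment status never changes after attribution, the estimate cannot be moved by who later leaves an ACO.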
We agree with McWilliams et al. that the effect of the MSSP is challenging to ascertain. Unlike hospital-based reforms — where hospitals exist before and after the reform is initiated and attribution is straightforward — evaluating the effects of unstable MSSP ACOs is more difficult. The failure of previous work to account for subtle compositional changes within ACOs has led researchers to miss an important source of bias.
If you’re attending the ASHEcon conference I hope you will come see my colleagues and me at the sessions below. These all include members of the Partnered Evidence-based Policy Resource Center (PEPReC) at the VA Boston Healthcare System and/or the Department of Health Law, Policy & Management (HLPM) at the Boston University School of Public Health. Those individuals are in bold, below.*
Monday June 24
9:30 – 10:45
Chair: Partha Deb
Location: Madison A
Presenter: Edward Norton
Co-Authors: Emily Lawton; Jun Li; Lena Chen
Discussant: Elena Prager
Presenter: Augustine Denteh
Co-Author: Sherri Rose
Discussant: Kevin N. Griffith
- Forecasting Health Care Spending: A Comparison of Nonlinear Econometric and Machine Learning Methods
Presenter: Partha Deb
Discussant: Naomi B. Zewde
1:15 – 2:45
Chair: Vicki Fung
- State policies permitting the denial of services to same-sex couples and sexual minority mental distress
Presenter: Julia Raifman, BU
Co-Authors: Ellen Moscoe; S. Bryn Austin; Mark Hatzenbuehler; Sandro Galea
Discussant: Elham Mahmoudi
Presenter: Ana Progovac
Co-Authors: Brian Mullin; Laura Hatfield; Alex McDowell; Mark A. Schuster; Benjamin Le Cook
Discussant: Neil Kamdar
- The Impact of Medicare Mental Health Cost-Sharing Parity on Outpatient Care for Beneficiaries with Serious Mental Illness
Presenter: Vicki Fung
Co-Authors: Mary Price; John Hsu; Benjamin Le Cook
Discussant: Ana M. Progovac
1:15 – 2:45
Chair: Austin Frakt
- Parental Coverage and Insurance Use Behavior of Young Women for Sexual and Reproductive Health Services in Massachusetts
Presenter: Jacqueline Ellison, BU
Co-Authors: Megan Cole, BU; Lewis Kazis; Amresh Hanchate
Discussant: Christine Yee
- Did the ACA Improve Rates of Well Child and Depression Screening Visits for Commercially Insured Adolescents?
Presenter: Carolina-Nicole Herrera, BU
Discussant: Kandice Kapinos
Presenter: Kevin Griffith
Co-Authors: Benjamin Sommers; David Jones
Discussant: Sarah Miller
1:15 – 2:45
Chair: Joshua Rolnick
Location: Madison A
- Are Medicare Advantage plans more or less effective at reducing hospital readmissions for patients with multiple chronic conditions?
Presenter: Jayasree Basu
Co-Author: Paul Jacobs
Discussant: Keaton Miller
Presenter: Meng-Yun Lin
Co-Authors: Amresh Hanchate, Austin Frakt, Kathleen Carey, BU
- Longer-Term Effects of Bundled Payments for Medical Conditions on Spending and Utilization: A Difference-In-Difference Analysis of the Bundled Payments for Care Improvement Initiative
Presenter: Joshua Rolnick
Co-Authors: Joshua Liao; Xinshuo Ma; Eric Shan; Jingsan Zhu; Erkuan Wang; Qian Huang; Amol Navathe
Discussant: Michael Barnett
Tuesday June 25
10:00 – 11:30
Chair: Zirui Song
- Home Health Care Use in Medicare Advantage Compared to Traditional Medicare: The Role of Benefit Design
Presenter: Laura Skopec
Co-Authors: Stephen Zuckerman; Doug Wissoker; Peter Huckfeldt; Joshua Aarons; Robert Berenson; Judy Feder; Judy Dey
Discussant: Austin Frakt
Presenter: Laura Keohane
Co-Authors: Zilu Zhou; David Stevenson
Discussant: Courtney H. Van Houtven
- Medicare Advantage Dual-Eligible Special Needs Plans: An Examination of Beneficiary Characteristics and Enrollment Decisions
Presenter: Brian McGarry
Co-Authors: Timothy Layton; Zirui Song; David Grabowski
Discussant: Daria M. Pelech
10:00 – 11:30
Chair: Steve Pizer
Presenter: Kenneth John McConnell
Co-Author: Stephan Lindner
Discussant: Eric Roberts
- Vector-Based Kernel Weighting: A Simple Estimator for Improving Precision and Bias of Average Treatment Effects in Multiple Treatment Settings
Presenter: Jessica Lum
Co-Authors: Steven Pizer, Austin Frakt, Melissa Garrido
Discussant: Partha Deb
- A Sequence of Two Studies to Learn & Test Heterogeneous Treatment Sub-groups: Effects of Cost Exposure on Use of Outpatient Care
Presenter: Amelia Haviland
Co-Authors: Rahul Ladhania; Neeraj Sood; Ateev Mehrotra
Discussant: Jeffrey S. McCullough
1:30 – 3:00
Chair: Timothy Classen
Presenter: Julia Raifman, BU
Co-Authors: Elysia Larson; Michael Siegel; Michael Ulrich; Colleen Barry; Anita Knopov; Sandro Galea
Discussant: Timothy Classen
- Association Between Changes in Community Mental Health Services Availability and Suicide Mortality in the US
Presenter: Peiyin Hung
Co-Authors: Susan Busch; Shiyi Wang
Discussant: Julia Raifman, BU
Presenter: Timothy Classen
Discussant: Peiyin Hung
1:30 – 3:00
Chair: Lacey Loomer
Location: Madison A
- Identification and Evaluation of Informal Inter-Organizational Ties between Hospitals and Skilled Nursing Facilities
Presenter: Cyrus Kosar
Co-Authors: David Meyers; Vincent Mor; Momotazur Rahman
Discussant: Jia Yu
- The Comparative Advantage of Accountable Care Organizations in Structuring Post-Acute Care for Medicare Beneficiaries
Author(s): Derek Lake; David C. Grabowski; Pedro Gozalo
Discussant: Brian E. McGarry
- Evaluating the Impact of Payer and Provider Integration on Medicare Advantage Enrollee Hospitalization and Enrollment Outcomes
Presenter: David Meyers
Co-Authors: Vincent Mor; Momotazur Rahman
Discussant: Steven Pizer
1:30 – 3:00
Chair: Thomas Buchmueller
Presenter: Catherine Maclean
Co-Authors: Ioana Popovici; Michael T. French
Discussant: Otto Lenhart
Presenter: Aparna Soni
Discussant: Xiaoxue Li
Presenter: Coleman Drake
Co-Authors: Conor Ryan; Bryan Dowd
Discussant: Paul Shafer, BU
5:15 – 7:00 Posters
Location: Exhibit Hall C (lower level)
Presenter: Kevin Griffith
Co-Authors: Benjamin Sommers; David Jones
Presenter: Paul Shafer, BU
Co-Authors: Stacie Dusetzina; Lindsay Sabik; Timothy Platts-Mills; Sally Stearns; Justin Trogdon
Wednesday June 26
8:00 – 9:30
Topics in Health Care Financing and Incentives
Chair: Jean M Fuglesten Biniek
Location: Madison A
Presenter: Yingzhe Yuan
Co-Authors: Megan E. Price; David F. Schmidt, MD; Merry Ward, PhD; Jonathan R. Nebeker, MD; Steven Pizer
Discussant: Jeffrey S. McCullough
- Sources of Geographic Variation in Health Care Spending and Utilization Among Individuals with Employer Sponsored Insurance
Presenter: Jean Fuglesten Biniek
Co-Author: William Johnson
Discussant: Sayeh S. Nikpay
Presenter: Sayeh Nikpay
Co-Authors: Rena Conti; Melinda Buntin
Discussant: David Cutler
8:00 – 9:30
Chair: Steven Pizer
Presenter: Taeko Minegishi
Discussant: John Romley
Presenter: Christine Yee
Discussant: Austin Frakt
Presenter: Aigerim Kabdiyeva
Discussant: Michael R. Richards
8:00 – 9:30
Chair: Zhiyou Yang
- Potential Unintended Consequences of the New Stratified Methodology by Dual Proportion Under the Hospital Readmissions Reduction Program (HRRP)
Presenter: Zhiyou Yang
Co-Authors: Peter Huckfeldt; Neeraj Sood; Jose Escarce; Teryl Nuckols; Ioana Popescu
Discussant: Eric Roberts
- The Impact of Medicaid Expansion on Continuous Enrollment: A Two-State Analysis of All Payer Claims Data
Presenter: Sarah Gordon, BU
Co-Authors: Benjamin Sommers; Ira Wilson, MD; Omar Galarraga; Amal Trivedi
Discussant: Jacob Wallace
Presenter: Andrew Wilcock
Co-Authors: Michael Barnett; J. McWilliams; David Grabowski; Ateev Mehrotra
Discussant: Neeraj Sood
10:00 – 11:30
Chair: Megan B. Cole, BU
- Effects of a Community-Based Care Management Program on Utilization and Spending Among High-Utilizers
Presenter: Xinqi Li
Co-Author: Omar Galarraga
Discussant: Kimberley Geissler
Presenter: Kimberley Geissler
Discussant: Michael Flores
- Integrating Behavioral Health into the Pediatric Medical Home for Low-Income Children: Impact on Utilization and Cost of Care
Presenter: Megan Cole, BU
Co-Authors: Qiuyuan Qin; Megan Bair-Merritt
Discussant: Xinqi Li
* If I’ve overlooked anyone please bring it to my attention and I will update.
The following originally appeared on The Upshot (copyright 2019, The New York Times Company).
The idea that legal cannabis can help address the opioid crisis has generated much hope and enthusiasm.
Based on recent research, some advocates have been promoting this connection, arguing that easier access to marijuana reduces opioid use and, in turn, overdose deaths.
A new study urges caution. Sometimes appearances — or statistics — can be deceiving.
It’s plausible that marijuana can help reduce pain. Systematic reviews show that certain compounds found in marijuana or synthetically produced cannabinoids do so, at least for some conditions. So some people who might otherwise seek out opioid painkillers could use medical marijuana instead.
In 2014, a study published in JAMA gave further hope that liberalizing marijuana laws might alleviate the opioid crisis.
The study examined the years 1999 through 2010, during which 10 states established medical marijuana programs. It compared changes in the rates of opioid painkiller deaths in states that passed medical marijuana laws with those that had not. The results? Researchers found that the laws were associated with a nearly 25 percent decline in the death rate from opioid painkillers.
Since publication of the JAMA study, others have produced similar findings. One posted last fall at the Social Science Research Network found that counties with medical marijuana dispensaries have up to 8 percent fewer opioid-related deaths among non-Hispanic white men, and 10 percent fewer heroin deaths.
None of this proves that marijuana liberalization causes lower opioid-related mortality, something the authors of the 2014 JAMA study pointed out.
Correlation does not mean causation, of course. A particular challenge in interpreting correlations in social science has its own name — the ecological fallacy. It’s the erroneous conclusion that relationships observed at the wider level (like state or region) necessarily hold true at the individual level as well.
“It’s possible that relationships get strengthened, weakened or even reversed when going from the individual to aggregate level,” said Mark Glickman, senior lecturer on statistics at Harvard. This was documented in a classic paper in 1950 and underlies many erroneous conclusions from research.
A new study revisited the JAMA-published analysis with more data. Its conclusions cast doubt on the idea that medical marijuana helps reduce opioid deaths — at least as far as we can tell with state-level data.
Between 2010 — the final year of analysis in the JAMA study — and 2017, 32 more states legalized medical marijuana, and eight legalized recreational use. A new study published in the Proceedings of the National Academy of Sciences (P.N.A.S.) reassessed the relationship between these laws and opioid deaths using the same approach as the JAMA study, but extending the years of analysis through 2017.
Over the years analyzed in the JAMA study, 1999 to 2010, the new P.N.A.S. study produced similar findings: Medical marijuana legalization was associated with reduced opioid painkiller overdose deaths. But in an expanded analysis through 2017, the results reversed — the laws are associated with a 23 percent increase in deaths.
This doesn’t necessarily mean that the laws first saved lives and then, in later years, contributed to deadly overdoses.
We’ve talked about how housing is important for health. We’ve talked about how we can improve access to housing by stimulating production through the LIHTC. We’ve talked about how we can improve access through vouchers and mobility programs. There’s one more thing we’d like to discuss: inclusionary zoning. Zoning rules are important for making neighborhoods and municipalities function smoothly, but they can also be written in ways that keep low-income residents from moving to certain neighborhoods.
David Tuller, a lecturer in UC Berkeley’s School of Public Health and Graduate School of Journalism, wrote about this recently in a policy brief at Health Affairs. It’s also the topic of this week’s HCT.
This is a guest post by J. Michael McWilliams, MD, PhD, Alan M. Zaslavsky, PhD, Bruce E. Landon, MD, MBA, and Michael E. Chernew, PhD.
The extent to which the Medicare Shared Savings Program (MSSP) has generated savings for Medicare has been a topic of debate, and understandably so—the program’s impact is important to know for guiding provider payment policy but is challenging to ascertain.
Prior studies suggest that accountable care organizations (ACOs) in the MSSP have achieved modest, growing savings.(1-4) In a recent study in Annals of Internal Medicine, Markovitz et al. conclude that savings from the MSSP are illusory, an artifact of risk selection behaviors by ACOs such as “pruning” primary care physicians (PCPs) with high-cost patients.(5) Their conclusions appear to contradict previous findings that characteristics of ACO patients changed minimally over time relative to local control groups.
We therefore undertook to review the paper and explain these apparently contradictory results.(1,3) We concluded that these new results do not demonstrate bias due to risk selection in the MSSP but rather are consistent with the literature.
Below we explain how several problems in the study’s methods and interpretation are responsible for the apparent inconsistencies. We provide this post-publication commentary to clarify the evidence for researchers and policymakers and to support development of evidence-based policy.
Approaches to Estimating Savings and Risk Selection in the MSSP
If the objective is to determine Medicare’s net savings from the MSSP, the key is to estimate the amount by which participating ACOs reduced Medicare spending in response to the program using an evaluation approach that removes any bias from risk selection and compares ACO spending with a valid counterfactual (as opposed to the program’s spending targets or “benchmarks” for ACOs). With this unbiased estimate of gross savings in hand, the net savings can then be calculated by subtracting the shared-savings bonuses that Medicare distributes to ACOs. If ACOs engage in favorable risk selection, it is unnecessary to quantify it to calculate net savings. As long as the evaluation methods used to estimate gross savings appropriately remove any contribution from risk selection, the net savings will accurately portray the savings to Medicare (the bonuses include any costs to Medicare from risk selection). Thus, an evaluation can yield a valid estimate of net savings while avoiding the pitfalls of attempting to isolate the amount of risk selection.
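As a purely arithmetic illustration of the accounting above (hypothetical per-beneficiary numbers, not actual MSSP figures):

```python
# Hypothetical per-beneficiary figures, for illustration only
counterfactual_spending = 10_000.0  # unbiased estimate of spending absent the MSSP
observed_spending = 9_890.0         # actual spending under the ACO
bonuses_paid = 60.0                 # shared-savings bonuses Medicare distributes

# Gross savings come from the evaluation; any gains ACOs reap from risk
# selection show up in the bonuses, so net savings need no separate correction.
gross_savings = counterfactual_spending - observed_spending
net_savings = gross_savings - bonuses_paid
print(gross_savings, net_savings)  # 110.0 50.0
```

The point is that net savings require only an unbiased gross-savings estimate and the observed bonus payments; quantifying selection itself is unnecessary.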
Taking this approach, prior studies have estimated the gross savings while minimizing bias from risk selection,(1-3) without directly measuring it. Through the end of 2014 (the study period examined by Markovitz et al.), prior analyses found modest gross savings of about 1.1% when averaged over the performance years among cohorts of ACOs entering the MSSP in 2012-2014.(2) Gross savings grew over time within cohorts and exceeded bonus payments by 2014, with no evidence that residual risk selection contributed to the estimated savings or their growth.
Importantly, these prior evaluations took an intention-to-treat approach that held constant over time the group of providers defined as MSSP participants, regardless of whether ACOs subsequently exited the program or changed their constituent practices or clinicians. In other words, by keeping membership in the ACO groups constant over time, these estimates excluded spurious savings that might appear if ACOs selectively excluded providers with sicker patients over time.
Taking an alternative approach, Markovitz et al. try to quantify risk selection by estimating gross savings under a “base” method that includes selection effects, and then modeling and removing selection effects under various assumptions.
Although appealing in principle and potentially illuminating of undesirable provider behavior, their base case approach introduces additional sources of bias (not just risk selection), so their initial estimates are not comparable to those from the previous studies. Moreover, the comparisons of their base estimates with estimates from subsequent models do not support their conclusions. The authors misinterpret the reductions in savings caused by the analytic modifications intended to address selection as evidence of selection, when in fact the modifications correct for other sources of bias that were addressed by prior studies but included in the authors’ base case.
In addition to this misinterpretation, the approaches to removing risk selection from estimates also are problematic. Before discussing the details of these methodological issues, we first review the incentives for selection in the MSSP, which must be understood to interpret the findings of Markovitz et al. correctly.
Incentives for Risk Selection in the MSSP
The MSSP defines ACOs as collections of practices—taxpayer identification numbers (TINs)—including all clinicians billing under those TINs; ACOs thus can select TINs but cannot select clinicians within TINs for inclusion in contracts. The MSSP accounts for changes in TIN inclusion each year by adjusting an ACO’s benchmark to reflect the baseline spending of the revised set of TINs.
Thus, ACOs do not have clear incentives to exclude TINs with high-cost patients in favor of TINs with low-cost patients. Doing so might improve their performance on utilization-based quality measures such as readmission rates, thereby increasing the percentage of savings they can keep (the quality score affects the shared savings rate), but the savings estimate should not increase. More generally, if there are some advantages to selecting TINs with low-risk patients, the associated reduction in spending should not be interpreted as a cost to Medicare of risk selection because the benchmark adjustments for changes in TIN inclusion should eliminate much or all of the cost to Medicare (and the gain to ACOs). Theory and prior empirical work would actually suggest advantages of including high-spending TINs, as ACOs with high spending should have an easier time generating savings and indeed have reduced spending more than other ACOs, on average.
An analysis attempting to quantify risk selection should therefore focus on changes in patients or clinicians after MSSP entry within sets of TINs—changes that ACOs have clear incentives to pursue (e.g., by encouraging high-cost patients within a TIN to leave [e.g. through referrals] or directing clinicians of high-cost patients to bill under an excluded TIN).(6) Failure to exclude changes in TIN inclusion from estimates of risk selection is analogous to not accounting for the Hierarchical Condition Categories (HCC) score in an analysis of risk selection in Medicare Advantage vs. traditional Medicare.
Problems with Analysis and Interpretation by Markovitz et al.
Markovitz et al. present a base analysis intended to produce the gross savings that would be estimated if one allowed changes in the composition of ACOs to contribute to the savings estimate. Such an analysis should compare spending differences between ACO and non-ACO providers at baseline with spending differences between the two groups after MSSP entry (a difference in differences), while allowing the provider and patient composition to change over time within ACO TINs.
But the statistical model (section D of the Appendix) omits controls for fixed differences between providers that would be observable at baseline (i.e., provider effects). Consequently, the estimate (the coefficient on “MSSPijqt”) is not interpretable as a difference in differences, and the characterization of this model as similar to “previous analyses” is inaccurate. Furthermore, the estimate suggests gross savings that are nearly five times greater than the prior estimate of 1.1% that the authors claim to have replicated—a 5.0% reduction in per-patient spending ([-$118/quarter]/[mean spending of $2341/quarter]) after only about 12 months of participation, on average (Figure 2B).
Subsequent models do include terms for patient or provider fixed effects (Figure 2B), constituting difference-in-differences analyses. Hence, the dramatic attenuation of the estimated spending reduction caused by introducing these terms does not demonstrate risk selection, but rather the correction of the omitted term in the base model. The base model is thus a misleading reference value for comparisons. The fixed effects adjust not only for within-TIN changes in clinicians or patients after MSSP entry (the potential selection effects of interest that Markovitz et al are trying to isolate) but also for fixed (baseline) differences between ACOs and non-ACO providers and within-ACO changes in TIN inclusion that are reflected in benchmarks and account for much of the turnover in participating clinicians.(7) The latter two sources of compositional differences do not reflect risk selection and did not contribute to prior estimates of savings.(1-3)
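The consequence of the omitted provider effects can be demonstrated with a toy simulation (illustrative numbers, not the study's data): when ACO providers differ from non-ACO providers at baseline, a model without provider fixed effects attributes the baseline gap to the program, and the estimated "savings" shrink toward the true effect once the fixed effects are added, without any risk selection being present.

```python
import numpy as np

rng = np.random.default_rng(1)
n_prov, n_t = 400, 6
aco = np.arange(n_prov) < n_prov // 2
# Assumed for illustration: ACO providers have lower baseline spending
base = np.where(aco, 2200.0, 2400.0) + rng.normal(0, 20, n_prov)
post = np.arange(n_t) >= 3
D = np.outer(aco, post).astype(float)
true_effect = -25.0
y = base[:, None] + true_effect * D + rng.normal(0, 10, (n_prov, n_t))

def ols_slope(y, D, demean_units=False):
    """Coefficient on D after removing period means (and unit means if asked)."""
    yy = y - y.mean(0, keepdims=True)
    DD = D - D.mean(0, keepdims=True)
    if demean_units:
        yy = yy - yy.mean(1, keepdims=True)
        DD = DD - DD.mean(1, keepdims=True)
    return (DD * yy).sum() / (DD ** 2).sum()

print(round(ols_slope(y, D), 1))                     # "base" model: far from -25
print(round(ols_slope(y, D, demean_units=True), 1))  # provider effects: near -25
```

In the simulation no one selects anything, yet the model without unit effects reports savings many times the true effect; adding the fixed effects corrects the omission rather than revealing selection.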
In a more appropriate base analysis that better resembles previous evaluations (the 4th model in Figure 2, panel B), Markovitz et al. include ACO fixed effects and hold constant each ACO’s TINs over time. Compared with the results of that analysis (-$66/quarter or -$264/year or -2.8%), the addition of patient or clinician controls to eliminate selection has effects that are inconsistent in direction and more modest in magnitude than when using the previous base case as the comparator (Figure 2B).
This set of findings does not support a conclusion that prior evaluations overstated ACO savings by failing to fully account for risk selection. In fact, the gross savings estimated by models with patient or clinician effects range from approximately 10% greater to over 3 times greater than the average gross savings estimated in a prior evaluation over the same performance years (i.e., 113-300+% × the 1.1% spending reduction noted above).(2) Thus, the interpretation of the results from this series of models is misleading and mischaracterizes their relation to the prior literature.
Even with adjustment for patient or provider effects, the difference-in-differences analyses remain problematic for at least two reasons. First, Markovitz et al. use the actual MSSP assignments (in some cases based on post-acute or specialty care use) only in the post-period for ACOs. They cannot use these for the control group or for the pre-period for either the ACO or comparison group because the assignment data are only available for ACOs in performance years and only for ACOs that continue in the program. This introduces a time-varying inconsistency in how utilization is used to define comparison groups.
Second, Markovitz et al. rely on within-patient or within-clinician changes (i.e., models with patient or clinician fixed effects) to isolate the MSSP effect on spending, net of selection, but doing so can introduce bias.(3) For example, if ACOs hired clinicians to perform annual wellness visits, this could shift attribution of single-visit healthy patients away from their PCPs, causing artifactual within-PCP spending increases and underestimation of savings.
Or, if a strategy for ACO success is to shift high-risk patients to more cost-effective clinicians better equipped or trained to manage their care, one would not want to eliminate that mechanism in an evaluation of savings. More generally, the patient or clinician fixed effects can introduce bias from time-varying factors that would otherwise be minimized in a difference-in-differences comparison of stably different cross-sections of ACO and non-ACO populations.
Markovitz et al report substantial differences in pre-period levels and trends and a differential reduction in hip fractures. But none of these imbalances were observed in previous evaluations that addressed provider-level selection by holding ACO TIN (or clinician) composition constant and assigned patients to ACOs and control providers using a method based only on primary care use and applied consistently across comparison groups and years.(1,3) Markovitz et al. imply that their findings for hip fractures should be interpreted as evidence of bias from risk selection in prior evaluations.
But MSSP evaluation by our group (3) found no differential change in the proportion of patients with a history of hip fracture among ACO patients vs. control patients from before to after MSSP entry (differential change in 2015: 0.0% with a sample baseline mean of 2.9%) and no emergence of a differential change in hip fractures over the performance years that would suggest selection. We did not report this specific result in the published paper because we conducted balance tests for numerous patient characteristics, including 27 conditions in the Chronic Conditions Data Warehouse (of which hip fracture is one) that we summarized with counts. We report this result here to correct the misleading conclusion by Markovitz et al. that their findings would have been found in our study. The finding of a differential reduction in hip fractures suggests bias only in their analyses and provides further evidence that Markovitz et al. did not replicate prior evaluations and thus cannot demonstrate that they overstated savings.
Instrumental variables analysis
Markovitz et al. also include an instrumental variables (IV) analysis, using differential changes in local MSSP participation surrounding a patient’s PCP (“MSSP supply”) to estimate the incentive effect without selection effects. We question the validity and conclusions of this analysis for reasons we can only state briefly here.
Specifically, a valid instrument should affect the outcome only by altering treatment assignment and should therefore not itself be affected by treatment. Yet, unlike a standard ecologic instrument that is unaffected by treatment assignment (e.g., where a patient lives), “MSSP supply” can be altered by a change in a patient’s assigned PCP, which can itself occur as a result of ACO exposure (e.g., from risk selection, the focus of the study). This calls into question whether a key identifying assumption of the IV analysis holds.
In addition, the difference-in-differences model in which the instrument is deployed does not adjust for fixed differences in spending between PCP localities (and thus does not produce difference-in-differences estimates). Moreover, the results of this analysis suggest implausible spending increases of $588–$1,276 per patient-year for ACOs entering in 2013–2014 (Appendix Figure 4). Acceptance of the instrument’s validity requires acceptance that participation in the MSSP caused these large spending increases.
Even if we accept the validity of the IV estimates, they are not comparable to the other difference-in-differences estimates, because IV estimates pertain only to the population (the “switchers,” or compliers) for whom treatment status is determined by the instrument. Therefore, the comparisons cannot be interpreted as quantifying risk selection. By construction, increases in the local supply variable arising from MSSP entry by large hospital-based systems are larger and ascribed to more patients, thereby giving the most weight to ACOs previously found to have no significant effect on spending, on average.(1-3)
Thus, comparing the IV estimates to estimates from the other models is analogous to comparing overall program effects with subgroup effects. The difference may reflect treatment effect heterogeneity as opposed to selection, and the authors have implicitly chosen a subgroup (large health system ACOs) that other work suggests is less responsive to MSSP incentives. Thus, estimates from the IV analysis suggestive of minimal savings would be consistent with the minimal savings documented in the literature for the group of ACOs to which the IV estimates are applicable.
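The heterogeneity point can be made concrete with a toy simulation (all numbers are hypothetical and ours, not the study’s): when the instrument moves participation only for a subgroup with small effects, the IV estimate tracks that subgroup’s effect even with no risk selection at all, while a simple treated-vs-untreated contrast blends both subgroups.

```python
# Toy simulation (hypothetical numbers): IV recovers the effect only for
# "switchers" whose participation the instrument moves, so it can differ from
# an overall comparison purely through treatment-effect heterogeneity.
import random

random.seed(0)
rows = []
for _ in range(100_000):
    hospital_based = random.random() < 0.5   # assumed 50/50 mix of ACO types
    supply = random.random()                 # stand-in for local "MSSP supply"
    # Assumption: the instrument shifts participation only for hospital-based
    # systems, mimicking supply increases driven by MSSP entry of large systems.
    p_join = 0.2 + 0.6 * supply if hospital_based else 0.5
    treated = random.random() < p_join
    effect = -20 if hospital_based else -200  # assumed $/patient-year savings
    spend = 10_000 + (effect if treated else 0) + random.gauss(0, 50)
    rows.append((supply, treated, spend))

def mean(xs):
    return sum(xs) / len(xs)

# Simple treated-vs-untreated contrast: mixes both ACO types' effects.
naive = mean([s for _, t, s in rows if t]) - mean([s for _, t, s in rows if not t])

# Wald IV estimate: spending difference across instrument halves, scaled by the
# participation difference. Only hospital-based "switchers" contribute.
hi = [(t, s) for z, t, s in rows if z > 0.5]
lo = [(t, s) for z, t, s in rows if z <= 0.5]
iv = (mean([s for _, s in hi]) - mean([s for _, s in lo])) / \
     (mean([float(t) for t, _ in hi]) - mean([float(t) for t, _ in lo]))

print(round(naive), round(iv))  # the two estimands differ despite zero selection
```

Here the naive contrast lands near the blended effect while the IV lands near the hospital-based systems’ small effect, so a gap between the two says nothing about selection.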
We also note that the “adjusted longitudinal analysis” is again used as an inappropriate comparator for the IV analysis. It appears the imprecise IV estimates would not differ statistically from the estimates produced by the more appropriate base case with ACO fixed effects (Figure 2B).
Finally, Markovitz et al. interpret the flow of patients and clinicians entering and exiting the MSSP as evidence of “pruning.” These analyses, however, do not support inferences about selection because they lack a counterfactual (the flow that would occur in the absence of MSSP contracts).
Flow analyses can be deceptive because the health characteristics of the “stock” of patients assigned to the ACO change over time, too. An ostensible net change in risk suggested by differences between those entering and exiting may be completely consistent with a population that is stable over time, if patients’ risk status in the stock changes in a way that offsets the flow imbalance.
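A toy bit of accounting (entirely hypothetical numbers) shows how higher exit among high-risk patients can coexist with a perfectly stable risk mix in the assigned population:

```python
# Hypothetical stock-flow accounting: high-risk patients exit at a higher rate,
# yet the mean risk of the assigned population ("stock") stays flat because some
# continuing patients become high-risk as their needs evolve.
stock_high, stock_low = 300, 700        # year-1 assigned population (assumed)
exits_high, exits_low = 90, 70          # 30% vs. 10% exit rates (assumed)
entries_high, entries_low = 50, 110     # entrants skew lower-risk (assumed)
became_high = 40                        # continuing low-risk patients whose
                                        # risk status worsened during the year

new_high = stock_high - exits_high + entries_high + became_high
new_low = stock_low - exits_low + entries_low - became_high

share_before = stock_high / (stock_high + stock_low)
share_after = new_high / (new_high + new_low)
print(share_before, share_after)  # identical high-risk shares before and after
```

The exit rate for high-risk patients is triple that for low-risk patients, yet the stock’s high-risk share is unchanged, which is why flow imbalances alone cannot establish selection.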
For example, Markovitz et al. previously interpreted greater “exit” of high-risk patients from ACOs as evidence of risk selection.(8) In the table below, we demonstrate that this conclusion is erroneous. The pattern of “exit” is merely an artifact of the utilization-based algorithm used to assign patients to ACOs. The higher switch rates among the highest-risk ACO patients (first column, based on CMS assignments of patients to ACOs) are similarly observed if one applies the CMS assignment rules to assign patients to large provider groups not participating in the MSSP (second column). Higher-risk patients receive care from more providers, causing more providers to “compete” for the plurality of a patient’s qualifying services in a given year and thus greater instability in assignment over time as the patient’s needs evolve. In other words, high-risk patients simply are reassigned more often, independent of ACO incentives.(9)
The comparisons of clinician entry and exit rates by Markovitz et al. are additionally misleading because of different denominators. If the probabilities in Figure 4 were calculated using a consistent denominator or instead reported as a replacement rate (high-risk patients served by entering physicians/high-risk patients served by exiting physicians), the higher spending associated with clinician exit and entry would be more similar.
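A hypothetical numeric example of the denominator problem: entry and exit “probabilities” computed over different-sized denominators can look sharply asymmetric even when entering clinicians nearly replace the high-risk patients served by exiting clinicians.

```python
# Hypothetical illustration (made-up counts, not the study's data): the same
# underlying replacement of high-risk patients looks asymmetric when entry and
# exit probabilities are computed over different denominators.
high_risk_exiting = 1_000     # high-risk patients served by exiting clinicians
high_risk_entering = 950      # high-risk patients served by entering clinicians

exiting_denominator = 8_000   # all patients of exiting clinicians (assumed)
entering_denominator = 19_000 # all patients of entering clinicians (assumed)

p_exit = high_risk_exiting / exiting_denominator        # looks large
p_entry = high_risk_entering / entering_denominator     # looks small
replacement_rate = high_risk_entering / high_risk_exiting
print(p_exit, p_entry, replacement_rate)
```

The exit probability is 2.5 times the entry probability, yet 95% of the high-risk patients served by exiting clinicians are matched by those served by entering clinicians, which is why a replacement rate is the more informative summary.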
Ultimately, if ACOs are “pruning” clinicians of high-cost patients, there should be evidence in the stock, but within-TIN baseline risk scores of physician-group ACO patients have increased slightly, not decreased, relative to concurrent local changes.(1,3) The authors make no attempt to reconcile their conclusions with the documented absence of differential changes in ACO patient characteristics relative to controls. They make two contradictory arguments: that the savings estimated by prior studies were explained by selection on unobservable patient characteristics; but also that the risk selection is demonstrable based on observable patient characteristics (e.g., hip fracture, HCC score) that exhibited no pattern of selection in the prior studies.
Monitoring ACOs will be essential, particularly as regional spending rates become increasingly important in determining benchmarks, strengthening the incentives for selection.(10,11) Although there has likely been some gaming, the evidence to date—including the study by Markovitz et al.—provides no clear evidence of a costly problem and suggests that ACOs have achieved very small, but real, savings. Causal inference is hard but necessary to inform policy. When conclusions differ, opportunities arise to understand methodological differences and to clarify their implications for policy.
- McWilliams JM, Hatfield LA, Chernew ME, Landon BE, Schwartz AL. Early Performance of Accountable Care Organizations in Medicare. N Engl J Med. 2016;374(24):2357-66.
- McWilliams JM. Changes in Medicare Shared Savings Program Savings from 2013 to 2014. JAMA. 2016;316(16):1711-13.
- McWilliams JM, Hatfield LA, Landon BE, Hamed P, Chernew ME. Medicare Spending after 3 Years of the Medicare Shared Savings Program. N Engl J Med. 2018;379(12):1139-49.
- Colla CH, Lewis VA, Kao LS, O’Malley AJ, Chang CH, Fisher ES. Association Between Medicare Accountable Care Organization Implementation and Spending Among Clinically Vulnerable Beneficiaries. JAMA Intern Med. 2016;176(8):1167-75.
- Markovitz AA, Hollingsworth JM, Ayanian JZ, Norton EC, Yan PL, Ryan AM. Performance in the Medicare Shared Savings Program After Accounting for Non-Random Exit: An Instrumental Variable Analysis. Ann Intern Med. 2019;171(1).
- Friedberg MW, Chen PG, Simmons M, Sherry T, Mendel P, et al. Effects of Health Care Payment Models on Physician Practice in the United States: Follow-up Study. 2018. Accessed at https://www.rand.org/pubs/research_reports/RR2667.html on March 29, 2019.
- Research Data Assistance Center. Shared Savings Program Accountable Care Organizations Provider-level RIF. Accessed at http://www.resdac.org/cms-data/files/ssp-aco-provider-level-rif on March 29, 2019.
- Markovitz AA, Hollingsworth JM, Ayanian JZ, Norton EC, Moloci NM, Yan PL, Ryan AM. Risk adjustment in Medicare ACO program deters coding increases but may lead ACOs to drop high-risk beneficiaries. Health Aff (Millwood). 2019;38(2):253-261.
- McWilliams JM, Chernew ME, Zaslavsky AM, Landon BE. Post-acute care and ACOs – who will be accountable? Health Serv Res. 2013;48(4):1526-38.
- Department of Health and Human Services. Centers for Medicare and Medicaid Services. 42 CFR Part 425. Medicare Program; Medicare Shared Savings Program; Accountable Care Organizations–Pathways to Success and Extreme and Uncontrollable Circumstances Policies for Performance Year 2017. Final rules. Accessed at https://www.govinfo.gov/content/pkg/FR-2018-12-31/pdf/2018-27981.pdf on March 29, 2019.
- McWilliams JM, Landon BE, Rathi VK, Chernew ME. Getting more savings from ACOs — can the pace be pushed? N Engl J Med. 2019;380:2190-2192.
Then it came back, with frustrating rapidity and persistence. Months went by, and the approaches that had seemed to work the first time weren’t doing the job.
It took me a while to respond to what my body was telling me. It didn’t want to wear shoes! So, I stopped. I spent most of this past week at home, barefoot. Then I added barefoot shoes only when I needed to wear something, like today, traveling for tomorrow’s first Drivers of Health meeting (it will be webcast, by the way). Not wearing supportive shoes/orthotics is the opposite of what is typically suggested for plantar fasciitis.
I also started using Yoga Toes, which feel amazing. My current recovery (and admittedly it’s been only a few days) is correlated in time with both these changes. It may not last, and you better believe you will hear from me if it doesn’t. Right now I’m on cloud 9. It’s like I have new feet, and that’s incredibly exciting.
Here are some other updates and interesting things readers have shared:
- This video is the first thing I’ve seen that matches my experience.
- Here’s a different taping technique from one I had been using.
- Suggested by a reader, here’s an interesting e-book with lots of links to research. (The whole website is interesting.) I’ve read it, including its list of conditions often confused with plantar fasciitis. None match my case.
- Make rock mats! (Then walk on them, of course!) I am totally doing this. (H/t Ana Progovac.)
It bothers me when there’s nothing on the internet that matches my search. As best I can tell, nobody has documented a case of “plantar fasciitis” exactly like mine. So, I will. Maybe it’ll help someone else. (Feel free to contact me.)
My case has always been odd in at least three ways:
- I don’t feel discomfort getting out of bed in the morning. Apart from mild stiffness (which is true of my entire body after sleeping 8 hours, and always has been, and is normal), my feet feel rested. This is, apparently, not how plantar fasciitis is supposed to feel. Literally everything I’ve read says the first morning steps will hurt. In my case, my feet exhibit classic plantar fasciitis pain symptoms only after use (walking, standing). With rest, they get better, often within an hour or so.
- My symptoms are bilaterally symmetric (both feet, same spots hurt the same, at the same time). This is not completely unheard of, but is rare.
- I vastly prefer to be barefoot, even for walking and standing. I cannot overstate this. Yesterday I walked/stood for 20 minutes barefoot in one stretch with no problem. Today, 45. So far so good. The key seems to be to maintain a healthy arch with my own foot muscles (avoid my natural pronation). My feet absolutely do not crave support of any kind to accomplish this. They hate it. I can walk or stand longer, with no discomfort, barefoot than in shoes. Shoes are not a relief. They make things worse. This, again, is unusual for plantar fasciitis. Many, many cases are documented in which people find the right supportive shoes or orthotics and feel immediate relief. I have tried lots of shoes and a variety of orthotics — custom and OTC — including highly recommended types for plantar fasciitis. None beat barefoot.
I’ve told all this to five health care practitioners. Nobody’s suggested it’s anything other than plantar fasciitis. It’s true that when I have symptoms they absolutely match this condition; I just don’t get them in the way almost everyone else gets them.
I’m starting to doubt the diagnosis. But my symptoms don’t exactly match anything else, as far as I know.
One thing this all means is that I should stop trying to treat my condition with more foot support. My feet don’t want it. I’d go barefoot all the time, everywhere if I could. That’s just not practical. On order are Merrell Vapor Glove “barefoot” shoes.