• Response to “Spending Reductions in the Medicare Shared Savings Program: Selection or Savings?”

    This is a guest post by Adam A. Markovitz, BS and Andrew M. Ryan, PhD.

    Our paper published on June 18th in Annals of Internal Medicine found that previous estimates of the effects of the Medicare Shared Savings Program (MSSP) have been overstated. After accounting for selective attrition of clinicians and beneficiaries from MSSP ACOs, the MSSP was associated with an increase in spending of $5 per quarter, statistically indistinguishable from zero.

    In their response, McWilliams and colleagues raised a number of objections.

    They claimed that ACOs have no incentive to avoid patients with higher risk. We disagree. First, if patients develop new health conditions while attributed to an ACO, the ACO is unable to include these conditions in patients’ risk scores. While designed to protect against upcoding, this provision creates an incentive for ACOs to prune patients who are decompensating. Second, as identified by David Dranove and colleagues in an essential paper on risk selection, even if risk adjustment is accurate, the outcomes of high-risk patients will have higher variance. It is rational for risk-averse ACOs to avoid these patients.

    McWilliams et al. also objected to our design and statistical specifications. One complaint concerns our decision to determine patients’ treatment status on the basis of actual CMS assignment. Rather than using actual assignment, papers authored by McWilliams and colleagues have attempted to replicate CMS’ attribution algorithm to approximate ACO assignment. We believe that this approach has led these studies to miss the selective attrition observed in our paper. It is possible that the high-risk patients we observed exiting ACOs were never assigned to ACOs in this prior research.

    While McWilliams et al. claim that our use of the true CMS assignment introduces “time-varying inconsistency in how utilization is used to define comparison groups,” this critique is unfounded. Using this approach, and otherwise replicating the preferred specification of McWilliams et al. (with market-year and ACO fixed effects), we found that the MSSP was associated with a significant reduction in spending. It should therefore be obvious that this analytic approach did not introduce bias toward the null.

    We present a robust set of results showing evidence of selective attrition within ACOs. This includes evidence that

    • higher spending patients and clinicians are more likely to exit ACOs;
    • spending estimates attenuate as we progressively add fixed effects for markets, patients, and clinicians;
    • ACOs are associated with our falsification outcome, hip fracture, in all standard adjusted specifications (including McWilliams et al.’s preferred specification);
    • and the effects of ACOs decrease to approximately zero in our instrumental variables specification.

    We also provide robust evidence for the validity of the instrumental variable, including well-balanced patient characteristics, comparable pre-period spending trends, and no association with hip fracture (our falsification outcome). The adjusted longitudinal model failed each of these tests.

    McWilliams and colleagues object to this evidence for a variety of reasons, all of them unpersuasive. For instance, they argue that exit of high-risk beneficiaries is simply due to a glitch in the MSSP attribution methodology whereby sick patients are passively dropped from ACO assignment when they begin receiving care only from non-ACO specialists. However, this would not explain our finding of pruning of physicians with high-cost patient panels.

    But we also present a simple intention-to-treat analysis that demonstrates the strong influence of attrition bias on standard difference-in-differences estimates (see Supplement, Section H, “Intention-to-treat analyses”). Based on actual CMS assignment, a patient’s treatment status was “turned on” when the patient was first attributed to an MSSP ACO and remained on for the duration of the study period. This model was not affected by choices related to patient attribution or by changes in the composition of providers within ACOs or physician groups. Estimates from this simple model, run on a balanced panel of beneficiaries with beneficiary fixed effects, showed small and non-significant effects of the MSSP (+$11 per quarter [95% CI, -$13 to $36]). This crucial validity check demonstrates that the effects of the MSSP disappear when patient composition is held constant and selective attrition is addressed.
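    In sketch form (suppressing covariates and using informal notation, not the exact variable names in our paper), this intention-to-treat model is

    \[
    Y_{it} = \beta \, \text{EverMSSP}_{it} + \gamma_i + \tau_t + \varepsilon_{it},
    \]

    where Y_{it} is spending for beneficiary i in quarter t, \gamma_i are beneficiary fixed effects, \tau_t are period effects, and EverMSSP_{it} switches from 0 to 1 in the quarter the beneficiary is first attributed to an MSSP ACO and stays at 1 thereafter. The coefficient \beta is the intention-to-treat estimate reported above (+$11 per quarter).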

    We agree with McWilliams et al. that the effect of the MSSP is challenging to ascertain. Unlike hospital-based reforms — where hospitals exist before and after the reform is initiated and attribution is straightforward — evaluating the effects of unstable MSSP ACOs is more difficult. The failure of previous work to account for subtle compositional changes within ACOs has led researchers to miss an important source of bias.

    Comments closed
     
  • Come see us at the American Society of Health Economists conference

    If you’re attending the ASHEcon conference, I hope you will come see my colleagues and me at the sessions below. These all include members of the Partnered Evidence-based Policy Resource Center (PEPReC) at the VA Boston Healthcare System and/or the Department of Health Law, Policy & Management (HLPM) at the Boston University School of Public Health. Those individuals are in bold, below.*

    Monday June 24

    9:30 – 10:45

    Applications of Nonlinear Modeling Techniques: Econometrics and Machine Learning

    Chair: Partha Deb

    Location: Madison A

    (9:30)

    Presenter: Edward Norton

    Co-Authors: Emily Lawton; Jun Li; Lena Chen

    Discussant: Elena Prager

    (10:00)

    Presenter: Augustine Denteh

    Co-Author: Sherri Rose

    Discussant: Kevin N. Griffith

    (10:30)

    Presenter: Partha Deb

    Discussant: Naomi B. Zewde

    1:15 – 2:45

    Access To Mental Health Services Among Minority and Severely Ill Patients

    Chair: Vicki Fung

    Location: Jefferson

    (1:15)

    Presenter: Julia Raifman, BU

    Co-Authors: Ellen Moscoe; S. Bryn Austin; Mark Hatzenbuehler; Sandro Galea

    Discussant: Elham Mahmoudi

    (1:45)

    Presenter: Ana Progovac

    Co-Authors:  Brian Mullin; Laura Hatfield; Alex McDowell; Mark A. Schuster; Benjamin Le Cook

    Discussant: Neil Kamdar

    (2:15)

    Presenter: Vicki Fung

    Co-Authors: Mary Price; John Hsu; Benjamin Le Cook

    Discussant: Ana M. Progovac

    1:15 – 2:45

    Health Insurance, Adolescents and Young Adults

    Chair: Austin Frakt

    Location: Hoover

    (1:15)

    Presenter: Jacqueline Ellison, BU

    Co-Authors: Megan Cole, BU; Lewis Kazis; Amresh Hanchate

    Discussant: Christine Yee

    (1:45)

    Presenter: Carolina-Nicole Herrera, BU

    Discussant: Kandice Kapinos

    (2:15)

    Presenter: Kevin Griffith

    Co-Authors: Benjamin Sommers; David Jones

    Discussant: Sarah Miller

    1:15 – 2:45

    Incentives, Integration and Efficiency

    Chair: Joshua Rolnick

    Location: Madison A

    (1:15)

    Presenter: Jayasree Basu

    Co-Author:  Paul Jacobs

    Discussant: Keaton Miller

    (1:45)

    Presenter: Meng-Yun Lin

    Co-Authors: Amresh Hanchate; Austin Frakt; Kathleen Carey, BU

    (2:15)

    Presenter: Joshua Rolnick

    Co-Authors:  Joshua Liao; Xinshuo Ma; Eric Shan; Jingsan Zhu; Erkuan Wang; Qian Huang; Amol Navathe

    Discussant: Michael Barnett 

    Tuesday June 25

    10:00 – 11:30

    Private Plans in Medicare and Disadvantaged Populations

    Chair: Zirui Song

    Location: Jefferson

    (10:00)

    Presenter: Laura Skopec

    Co-Authors: Stephen Zuckerman; Doug Wissoker; Peter Huckfeldt; Joshua Aarons; Robert Berenson; Judy Feder; Judy Dey

    Discussant: Austin Frakt

    (10:30)

    Presenter: Laura Keohane

    Co-Authors:  Zilu Zhou; David Stevenson

    Discussant: Courtney H. Van Houtven

    (11:00)

    Presenter: Brian McGarry

    Co-Authors:  Timothy Layton; Zirui Song; David Grabowski

    Discussant: Daria M. Pelech

    10:00 – 11:30

    Treatment Effects & Heterogeneity

    Chair: Steve Pizer

    Location: Madison A

    (10:00)

    Presenter: Kenneth John McConnell

    Co-Author: Stephan Lindner

    Discussant: Eric Roberts

    (10:30)

    Presenter: Jessica Lum

    Co-Authors: Steven Pizer; Austin Frakt; Melissa Garrido

    Discussant: Partha Deb

    (11:00)

    Presenter: Amelia Haviland

    Co-Authors: Rahul Ladhania; Neeraj Sood; Ateev Mehrotra

    Discussant: Jeffrey S. McCullough

    1:30 – 3:00

    Effects of Policy Reforms and Access to Care on Rates of Suicide

    Chair: Timothy Classen

    Location: Jefferson

    (1:30)

    Presenter: Julia Raifman, BU

    Co-Authors: Elysia Larson; Michael Siegel; Michael Ulrich; Colleen Barry; Anita Knopov; Sandro Galea

    Discussant: Timothy Classen

    (2:00)

    Presenter: Peiyin Hung

    Co-Authors: Susan Busch; Shiyi Wang

    Discussant: Julia Raifman, BU

    (2:30)

    Presenter: Timothy Classen

    Discussant: Peiyin Hung

    1:30 – 3:00

    The Effects of Care Coordination and Vertical Integration on Patient Outcomes

    Chair: Lacey Loomer

    Location: Madison A

    (1:30)

    Presenter: Cyrus Kosar

    Co-Authors:  David Meyers; Vincent Mor; Momotazur Rahman

    Discussant: Jia Yu

    (2:00)

    Author(s): Derek Lake; David C. Grabowski; Pedro Gozalo

    Discussant: Brian E. McGarry

    (2:30)

    Presenter: David Meyers

    Co-Authors: Vincent Mor; Momotazur Rahman

    Discussant: Steven Pizer

    1:30 – 3:00

    Indirect Effects of Insurance Design

    Chair: Thomas Buchmueller

    Location: Hoover

    (1:30)

    Presenter: Catherine Maclean

    Co-Authors: Ioana Popovici; Michael T. French

    Discussant: Otto Lenhart

    (2:00)

    Presenter: Aparna Soni

    Discussant: Xiaoxue Li

    (2:30)

    Presenter: Coleman Drake

    Co-Authors: Conor Ryan; Bryan Dowd

    Discussant: Paul Shafer, BU

    5:15 – 7:00 Posters

    Location: Exhibit Hall C (lower level)

    Changes in Coverage, Access to Care, and Disparities in 2017

    Presenter: Kevin Griffith

    Co-Authors: Benjamin Sommers; David Jones

    Do High Deductibles Reduce the Use of ‘Free’ Preventive Services Under the Affordable Care Act?

    Presenter: Paul Shafer, BU

    Co-Authors: Stacie Dusetzina; Lindsay Sabik; Timothy Platts-Mills; Sally Stearns; Justin Trogdon

    Wednesday June 26

    8:00 – 9:30

    Topics in Health Care Financing and Incentives

    Chair: Jean M Fuglesten Biniek

    Location: Madison A

    (8:00)

    Presenter: Yingzhe Yuan

    Co-Authors: Megan E. Price; David F. Schmidt, MD; Merry Ward, PhD; Jonathan R. Nebeker, MD; Steven Pizer

    Discussant: Jeffrey S. McCullough

    (8:30)

    Presenter: Jean Fuglesten Biniek

    Co-Author:  William Johnson

    Discussant: Sayeh S. Nikpay

    (9:00)

    Presenter: Sayeh Nikpay

    Co-Authors:  Rena Conti; Melinda Buntin

    Discussant: David Cutler

    8:00 – 9:30

    Physician Productivity and Quality of Care

    Chair: Steven Pizer

    Location: Taylor

    (8:00)

    Presenter: Taeko Minegishi

    Discussant: John Romley

    (8:30)

    Presenter: Christine Yee

    Discussant: Austin Frakt

    (9:00)

    Presenter: Aigerim Kabdiyeva

    Discussant:  Michael R. Richards

    8:00 – 9:30

    Recent Empirical Evidence on the Affordable Care Act (ACA): Coverage and Payment

    Chair: Zhiyou Yang

    Location: McKinley

    (8:00)

    Presenter: Zhiyou Yang

    Co-Authors: Peter Huckfeldt; Neeraj Sood; Jose Escarce; Teryl Nuckols; Ioana Popescu

    Discussant: Eric Roberts

    (8:30)

    Presenter: Sarah Gordon, BU

    Co-Authors: Benjamin Sommers; Ira Wilson, MD; Omar Galarraga; Amal Trivedi

    Discussant: Jacob Wallace

    (9:00)

    Presenter: Andrew Wilcock

    Co-Authors: Michael Barnett; J. McWilliams; David Grabowski; Ateev Mehrotra

    Discussant: Neeraj Sood

    10:00 – 11:30

    Evaluation of Behavioral Health Integration Initiatives and Measures

    Chair: Megan B. Cole, BU

    Location: Jefferson

    (10:00)

    Presenter: Xinqi Li

    Co-Author: Omar Galarraga

    Discussant: Kimberley Geissler

    (10:30)

    Presenter: Kimberley Geissler

    Discussant: Michael Flores

    (11:00)

    Presenter: Megan Cole, BU

    Co-Authors: Qiuyuan Qin; Megan Bair-Merritt

    Discussant: Xinqi Li

     

    * If I’ve overlooked anyone, please bring it to my attention and I will update.

    @afrakt

    Comments closed
     
  • Can Marijuana Help Cure the Opioid Crisis?

    The following originally appeared on The Upshot (copyright 2019, The New York Times Company). 

    The idea that legal cannabis can help address the opioid crisis has generated much hope and enthusiasm.

    Opioid misuse has declined in recent years at the same time that cannabis use has been increasing, with many states liberalizing marijuana laws.

    Based on recent research, some advocates have been promoting this connection, arguing that easier access to marijuana reduces opioid use and, in turn, overdose deaths.

    A new study urges caution. Sometimes appearances — or statistics — can be deceiving.

    It’s plausible that marijuana can help reduce pain. Systematic reviews show that certain compounds found in marijuana or synthetically produced cannabinoids do so, at least for some conditions. So some people who might otherwise seek out opioid painkillers could use medical marijuana instead.

    Regulations in some states, including New York, that streamline access to medical marijuana are based on the idea that it can substitute for opioids in pain treatment.

    In 2014, a study published in JAMA gave further hope that liberalizing marijuana laws might alleviate the opioid crisis.

    The study examined the years 1999 through 2010, during which 10 states established medical marijuana programs. It compared changes in the rates of opioid painkiller deaths in states that passed medical marijuana laws with those that had not. The results? Researchers found that the laws were associated with a nearly 25 percent decline in the death rate from opioid painkillers.

    Since publication of the JAMA study, others have produced similar findings. One posted last fall at the Social Science Research Network found that counties with medical marijuana dispensaries have up to 8 percent fewer opioid-related deaths among non-Hispanic white men, and 10 percent fewer heroin deaths.

    Other studies have documented associations between marijuana laws and reduced opioid prescribing in Medicaid and Medicare.

    None of this proves that marijuana liberalization causes lower opioid-related mortality, something the authors of the 2014 JAMA study pointed out.

    Correlation does not mean causation, of course. A particular challenge in interpreting correlations in social science has its own name — the ecological fallacy. It’s the erroneous conclusion that relationships observed at the wider level (like state or region) necessarily hold true at the individual level as well.

    “It’s possible that relationships get strengthened, weakened or even reversed when going from the individual to aggregate level,” said Mark Glickman, senior lecturer on statistics at Harvard. This was documented in a classic paper in 1950 and underlies many erroneous conclusions from research.

    A new study revisited the JAMA-published analysis with more data. Its conclusions cast doubt on the idea that medical marijuana helps reduce opioid deaths — at least as far as we can tell with state-level data.

    Between 2010 — the final year of analysis in the JAMA study — and 2017, 32 more states legalized medical marijuana, and eight legalized recreational use. A new study published in the Proceedings of the National Academy of Sciences (P.N.A.S.) reassessed the relationship between these laws and opioid deaths using the same approach as the JAMA study, but extending the years of analysis through 2017.

    Over the years analyzed in the JAMA study, 1999 to 2010, the new P.N.A.S. study produced similar findings: Medical marijuana legalization was associated with reduced opioid painkiller overdose deaths. But in an expanded analysis through 2017, the results reversed — the laws are associated with a 23 percent increase in deaths.

    This doesn’t necessarily mean that the laws first saved lives and then, in later years, contributed to deadly overdoses.

    @afrakt

    Comments closed
     
  • Healthcare Triage: Zoning Rules Can Keep People in Bad Neighborhoods

    We’ve talked about how housing is important for health. We’ve talked about how we can improve access to housing through stimulation of production through the LIHTC. We’ve talked about how we can improve access through vouchers and mobility programs. There’s one more thing we’d like to discuss: Inclusionary zoning. Zoning rules are important for making neighborhoods and municipalities function smoothly, but they can also be written in ways that keep low-income residents from moving to certain neighborhoods.

    David Tuller, a lecturer in UC Berkeley’s School of Public Health and Graduate School of Journalism, wrote about this recently in a policy brief at Health Affairs. It’s also the topic of this week’s HCT.

    @aaronecarroll

    Comments closed
     
  • Spending Reductions in the Medicare Shared Savings Program: Selection or Savings?

    This is a guest post by J. Michael McWilliams, MD, PhD, Alan M. Zaslavsky, PhD, Bruce E. Landon, MD, MBA, and Michael E. Chernew, PhD.

    The extent to which the Medicare Shared Savings Program (MSSP) has generated savings for Medicare has been a topic of debate, and understandably so—the program’s impact is important to know for guiding provider payment policy but is challenging to ascertain.

    Prior studies suggest that accountable care organizations (ACOs) in the MSSP have achieved modest, growing savings.(1-4) In a recent study in Annals of Internal Medicine, Markovitz et al. conclude that savings from the MSSP are illusory, an artifact of risk selection behaviors by ACOs such as “pruning” primary care physicians (PCPs) with high-cost patients.(5) Their conclusions appear to contradict previous findings that characteristics of ACO patients changed minimally over time relative to local control groups.

    We therefore undertook to review the paper and explain these apparently contradictory results.(1,3) We concluded that these new results do not demonstrate bias due to risk selection in the MSSP but rather are consistent with the literature.

    Below we explain how several problems in the study’s methods and interpretation are responsible for the apparent inconsistencies. We provide this post-publication commentary to clarify the evidence for researchers and policymakers and to support development of evidence-based policy.

    Approaches to Estimating Savings and Risk Selection in the MSSP

    If the objective is to determine Medicare’s net savings from the MSSP, the key is to estimate the amount by which participating ACOs reduced Medicare spending in response to the program using an evaluation approach that removes any bias from risk selection and compares ACO spending with a valid counterfactual (as opposed to the program’s spending targets or “benchmarks” for ACOs). With this unbiased estimate of gross savings in hand, the net savings can then be calculated by subtracting the shared-savings bonuses that Medicare distributes to ACOs. If ACOs engage in favorable risk selection, it is unnecessary to quantify it to calculate net savings. As long as the evaluation methods used to estimate gross savings appropriately remove any contribution from risk selection, the net savings will accurately portray the savings to Medicare (the bonuses include any costs to Medicare from risk selection). Thus, an evaluation can yield a valid estimate of net savings while avoiding the pitfalls of attempting to isolate the amount of risk selection.
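    Stated as a back-of-the-envelope identity (abstracting from the program’s exact accounting rules), the quantity of interest is

    \[
    \text{Net savings} \;=\; \underbrace{\left(\text{counterfactual spending} - \text{observed ACO spending}\right)}_{\text{gross savings, estimated without selection bias}} \;-\; \text{shared-savings bonuses paid to ACOs}.
    \]

    Any cost to Medicare from favorable selection is embedded in the bonus term, so it is subtracted off without ever being separately estimated.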

    Taking this approach, prior studies have estimated the gross savings while minimizing bias from risk selection,(1-3) without directly measuring it. Through the end of 2014 (the study period examined by Markovitz et al.), prior analyses found modest gross savings of about 1.1% when averaged over the performance years among cohorts of ACOs entering the MSSP in 2012-2014.(2)  Gross savings grew over time within cohorts and exceeded bonus payments by 2014, with no evidence that residual risk selection contributed to the estimated savings or their growth.

    Importantly, these prior evaluations took an intention-to-treat approach that held constant over time the group of providers defined as MSSP participants, regardless of whether ACOs subsequently exited the program or changed their constituent practices or clinicians.  In other words, by keeping membership in the ACO groups constant over time, these estimates excluded spurious savings that might appear if ACOs selectively excluded providers with sicker patients over time.

    Taking an alternative approach, Markovitz et al. try to quantify risk selection by estimating gross savings under a “base” method that includes selection effects, and then modeling and removing selection effects under various assumptions.

    Although appealing in principle and potentially illuminating of undesirable provider behavior, their base case approach introduces additional sources of bias (not just risk selection), so their initial estimates are not comparable to those from the previous studies. Moreover, the comparisons of their base estimates with estimates from subsequent models do not support their conclusions. The authors misinterpret the reductions in savings caused by the analytic modifications intended to address selection as evidence of selection, when in fact the modifications correct for other sources of bias that were addressed by prior studies but included in the authors’ base case.

    In addition to this misinterpretation, the approaches to removing risk selection from estimates also are problematic. Before discussing the details of these methodological issues, we first review the incentives for selection in the MSSP, which must be understood to interpret the findings of Markovitz et al. correctly.

    Incentives for Risk Selection in the MSSP

    The MSSP defines ACOs as collections of practices—taxpayer identification numbers (TINs)—including all clinicians billing under those TINs; ACOs thus can select TINs but cannot select clinicians within TINs for inclusion in contracts. The MSSP accounts for changes in TIN inclusion each year by adjusting an ACO’s benchmark to reflect the baseline spending of the revised set of TINs.

    Thus, ACOs do not have clear incentives to exclude TINs with high-cost patients in favor of TINs with low-cost patients. Doing so might improve their performance on utilization-based quality measures such as readmission rates, thereby increasing the percentage of savings they can keep (the quality score affects the shared savings rate), but the savings estimate should not increase. More generally, if there are some advantages to selecting TINs with low-risk patients, the associated reduction in spending should not be interpreted as a cost to Medicare of risk selection because the benchmark adjustments for changes in TIN inclusion should eliminate much or all of the cost to Medicare (and the gain to ACOs). Theory and prior empirical work would actually suggest advantages of including high-spending TINs, as ACOs with high spending should have an easier time generating savings and indeed have reduced spending more than other ACOs, on average.
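    To see why, consider a purely hypothetical ACO with two TINs: TIN A, whose 1,000 assigned patients average $9,000 per year, and TIN B, whose 1,000 patients average $12,000 per year. With both TINs included, baseline spending, and hence the benchmark, is about $10,500 per patient. If the ACO drops TIN B, its observed spending falls to roughly $9,000, but the benchmark is recalculated from TIN A’s baseline alone, also about $9,000 (ignoring trend and regional updates). Measured savings are essentially unchanged:

    \[
    \text{Both TINs: } \$10{,}500 - \$10{,}500 = \$0 \qquad \text{TIN A only: } \$9{,}000 - \$9{,}000 = \$0 .
    \]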

    An analysis attempting to quantify risk selection should therefore focus on changes in patients or clinicians after MSSP entry within sets of TINs—changes that ACOs have clear incentives to pursue (e.g., by encouraging high-cost patients within a TIN to leave [e.g. through referrals] or directing clinicians of high-cost patients to bill under an excluded TIN).(6) Failure to exclude changes in TIN inclusion from estimates of risk selection is analogous to not accounting for the Hierarchical Condition Categories (HCC) score in an analysis of risk selection in Medicare Advantage vs. traditional Medicare.

    Problems with Analysis and Interpretation by Markovitz et al.

    Difference-in-differences analysis

    Markovitz et al. present a base analysis intended to produce the gross savings that would be estimated if one allowed changes in the composition of ACOs to contribute to the savings estimate. Such an analysis should compare spending differences between ACO and non-ACO providers at baseline with spending differences between the two groups after MSSP entry (a difference in differences), while allowing the provider and patient composition to change over time within ACO TINs.

    But the statistical model (section D of the Appendix) omits controls for fixed differences between providers that would be observable at baseline (i.e., provider effects). Consequently, the estimate (the coefficient on “MSSP_ijqt”) is not interpretable as a difference in differences, and the characterization of this model as similar to “previous analyses” is inaccurate. Furthermore, the estimate suggests gross savings that are nearly five times greater than the prior estimate of 1.1% that the authors claim to have replicated—a 5.0% reduction in per-patient spending ([-$118/quarter]/[mean spending of $2341/quarter]) after only about 12 months of participation, on average (Figure 2B).
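    To make the contrast concrete, a stylized version of the two specifications (our shorthand, omitting covariates) is

    \[
    \text{Base model:}\quad Y_{ijqt} = \beta\,\text{MSSP}_{ijqt} + \delta_{qt} + \varepsilon_{ijqt}
    \]
    \[
    \text{Difference in differences:}\quad Y_{ijqt} = \beta\,\text{MSSP}_{ijqt} + \alpha_j + \delta_{qt} + \varepsilon_{ijqt},
    \]

    where i indexes patients, j providers (or ACOs), q quarters, and t years; \alpha_j are provider (or ACO) fixed effects; and \delta_{qt} are period effects. Without \alpha_j, the coefficient \beta absorbs fixed baseline differences between ACO and non-ACO providers rather than isolating the within-provider change in spending after MSSP entry.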

    Subsequent models do include terms for patient or provider fixed effects (Figure 2B), constituting difference-in-differences analyses. Hence, the dramatic attenuation of the estimated spending reduction caused by introducing these terms does not demonstrate risk selection, but rather the correction of the omitted term in the base model. The base model is thus a misleading reference value for comparisons. The fixed effects adjust not only for within-TIN changes in clinicians or patients after MSSP entry (the potential selection effects of interest that Markovitz et al. are trying to isolate) but also for fixed (baseline) differences between ACOs and non-ACO providers and within-ACO changes in TIN inclusion that are reflected in benchmarks and account for much of the turnover in participating clinicians.(7) The latter two sources of compositional differences do not reflect risk selection and did not contribute to prior estimates of savings.(1-3)

    In a more appropriate base analysis that better resembles previous evaluations (the 4th model in Figure 2, panel B), Markovitz et al. include ACO fixed effects and hold constant each ACO’s TINs over time. Compared with the results of that analysis (-$66/quarter or -$264/year or -2.8%), the addition of patient or clinician controls to eliminate selection has effects that are inconsistent in direction and more modest in magnitude than when using the previous base case as the comparator (Figure 2B).

    This set of findings does not support a conclusion that prior evaluations overstated ACO savings by failing to fully account for risk selection. In fact, the gross savings estimated by models with patient or clinician effects range from approximately 10% greater to over 3 times greater than the average gross savings estimated in a prior evaluation over the same performance years (i.e., 113-300+% × the 1.1% spending reduction noted above).(2) Thus, the interpretation of the results from this series of models is misleading and mischaracterizes their relation to the prior literature.

    Even with adjustment for patient or provider effects, the difference-in-differences analyses remain problematic for at least two reasons. First, Markovitz et al. use the actual MSSP assignments (in some cases based on post-acute or specialty care use) only in the post-period for ACOs. They cannot use these for the control group or for the pre-period for either the ACO or comparison group because the assignment data are only available for ACOs in performance years and only for ACOs that continue in the program. This introduces a time-varying inconsistency in how utilization is used to define comparison groups.

    Second, Markovitz et al. rely on within-patient or within-clinician changes (i.e., models with patient or clinician fixed effects) to isolate the MSSP effect on spending, net of selection, but doing so can introduce bias.(3)  For example, if ACOs hired clinicians to perform annual wellness visits, this could shift attribution of single-visit healthy patients away from their PCPs, causing artifactual within-PCP spending increases and underestimation of savings.

    Or, if a strategy for ACO success is to shift high-risk patients to more cost-effective clinicians better equipped or trained to manage their care, one would not want to eliminate that mechanism in an evaluation of savings. More generally, the patient or clinician fixed effects can introduce bias from time-varying factors that would otherwise be minimized in a difference-in-differences comparison of stably different cross-sections of ACO and non-ACO populations.

    Markovitz et al. report substantial differences in pre-period levels and trends and a differential reduction in hip fractures. But none of these imbalances were observed in previous evaluations that addressed provider-level selection by holding ACO TIN (or clinician) composition constant and assigned patients to ACOs and control providers using a method based only on primary care use and applied consistently across comparison groups and years.(1,3)  Markovitz et al. imply that their findings for hip fractures should be interpreted as evidence of bias from risk selection in prior evaluations.

    But MSSP evaluation by our group (3) found no differential change in the proportion of patients with a history of hip fracture among ACO patients vs. control patients from before to after MSSP entry (differential change in 2015: 0.0% with a sample baseline mean of 2.9%) and no emergence of a differential change in hip fractures over the performance years that would suggest selection. We did not report this specific result in the published paper because we conducted balance tests for numerous patient characteristics, including 27 conditions in the Chronic Conditions Data Warehouse (of which hip fracture is one) that we summarized with counts. We report this result here to correct the misleading conclusion by Markovitz et al. that their findings would have been found in our study. The finding of a differential reduction in hip fractures suggests bias only in their analyses and provides further evidence that Markovitz et al. did not replicate prior evaluations and thus cannot demonstrate that they overstated savings.

    Instrumental variables analysis

    Markovitz et al. also include an instrumental variables (IV) analysis, using differential changes in local MSSP participation surrounding a patient’s PCP (“MSSP supply”) to estimate the incentive effect without selection effects. We question the validity and conclusions of this analysis for reasons we can only state briefly here.
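    In stylized two-stage form (our shorthand, omitting the study’s specific controls and fixed effects), the design is

    \[
    \text{First stage:}\quad \text{MSSP}_{it} = \pi\, Z_{it} + \delta_t + \nu_{it}
    \]
    \[
    \text{Second stage:}\quad Y_{it} = \beta\, \widehat{\text{MSSP}}_{it} + \delta_t + \varepsilon_{it},
    \]

    where Z_{it} is the “MSSP supply” instrument, the local MSSP participation surrounding patient i’s assigned PCP, and \widehat{\text{MSSP}}_{it} is the first-stage prediction of the patient’s own MSSP exposure.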

    Specifically, the instrument should affect the outcome only by altering treatment assignment and should therefore not be affected by treatment. Yet, unlike a standard ecologic instrument that is unaffected by treatment assignment (e.g., where a patient lives), “MSSP supply” can be altered by a change in a patient’s assigned PCP, which can occur as a result of ACO exposure (e.g., from risk selection, the focus of the study). This calls into question the applicability of a key assumption in IV analysis.

    In addition, the difference-in-differences model in which the instrument is deployed does not adjust for fixed differences in spending between PCP localities (and thus does not produce difference-in-differences estimates). Moreover, the results of this analysis suggest implausible spending increases of $588-1276/patient-year for ACOs entering in 2013-2014 (Appendix Figure 4). Acceptance of the instrument’s validity requires acceptance that participation in the MSSP caused these large spending increases.

    Even if we accept the validity of the IV estimates, they are not comparable to the other difference-in-differences estimates because IV estimates pertain only to the population (the “switchers”) for whom treatment is determined by the instrument. Therefore, the comparisons cannot be interpreted as quantifying risk selection. By construction, increases in the local supply variable arising from MSSP entry by large hospital-based systems are larger, and ascribed to more patients, thereby giving the most weight to ACOs previously found to have no significant effect on spending, on average.(1-3)

    Thus, comparing the IV estimates to estimates from the other models is analogous to comparing overall program effects with subgroup effects. The difference may reflect treatment effect heterogeneity as opposed to selection, and the authors have implicitly chosen a subgroup (large health system ACOs) that other work suggests is less responsive to MSSP incentives. Thus, estimates from the IV analysis suggestive of minimal savings would be consistent with the minimal savings documented in the literature for the group of ACOs to which the IV estimates are applicable.

    We also note that the “adjusted longitudinal analysis” is again used as an inappropriate comparator for the IV analysis.  It appears the imprecise IV estimates would not differ statistically from the estimates produced by the more appropriate base case with ACO fixed effects (Figure 2B).

    Flow analyses

    Finally, Markovitz et al. interpret flow of patients and clinicians entering and exiting the MSSP as evidence of “pruning.” These analyses, however, do not support inferences about selection because they lack a counterfactual (flow in the absence of MSSP contracts).

    Flow analyses can be deceptive because the health characteristics of the “stock” of patients assigned to the ACO changes over time, too. An ostensible net change in risk suggested by differences between those entering and exiting may be completely consistent with a population that is stable over time if patients’ risk status in the stock changes in a way that offsets the flow imbalance.
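    A hypothetical example, with numbers invented purely for illustration, makes the point. Suppose an ACO’s assigned population starts at 100 patients with a mean risk score of 1.00. Over the year, 10 patients exit with a mean risk of 1.50 and 10 enter with a mean risk of 1.20, so the flows alone suggest the ACO shed risk. But the 90 stayers begin the year at a mean of about 0.944 (the exiters were sicker than average), and if their risk rises to about 0.978 as they age and accumulate diagnoses, the year-end stock is

    \[
    \frac{90 \times 0.978 + 10 \times 1.20}{100} \approx 1.00,
    \]

    unchanged from baseline despite the apparently unfavorable exit pattern.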

    For example, Markovitz et al. previously interpreted greater “exit” of high-risk patients from ACOs as evidence of risk selection.(8) In the table below, we demonstrate that this conclusion is erroneous. The pattern of “exit” is merely an artifact of the utilization-based algorithm used to assign patients to ACOs. The higher switch rates among the highest-risk ACO patients (first column, based on CMS assignments of patients to ACOs) is similarly observed if one applies the CMS assignment rules to assign patients to large provider groups not participating in the MSSP (second column).  Higher-risk patients receive care from more providers, causing more providers to “compete” for the plurality of a patient’s qualifying services in a given year and thus greater instability in assignment over time as the patient’s needs evolve.  In other words, high-risk patients simply are reassigned more often, independent of ACO incentives.(9)

    The comparisons of clinician entry and exit rates by Markovitz et al. are additionally misleading because of different denominators. If the probabilities in Figure 4 were calculated using a consistent denominator or instead reported as a replacement rate (high-risk patients served by entering physicians/high-risk patients served by exiting physicians), the higher spending associated with clinician exit and entry would be more similar.

    Ultimately, if ACOs are “pruning” clinicians of high-cost patients, there should be evidence in the stock. But within TINs, the baseline risk scores of physician-group ACO patients have increased slightly, not decreased, relative to concurrent local changes.(1,3) The authors make no attempt to reconcile their conclusions with the documented absence of differential changes in ACO patient characteristics relative to controls. They make two contradictory arguments: that the savings estimated by prior studies were explained by selection on unobservable patient characteristics; but also that the risk selection is demonstrable based on observable patient characteristics (e.g., hip fracture, HCC score) that exhibited no pattern of selection in the prior studies.

    Conclusion

    Monitoring ACOs will be essential, particularly as incentives for selection are strengthened as regional spending rates become increasingly important in determining benchmarks.(10,11) Although there has likely been some gaming, the evidence to date—including the study by Markovitz et al.—provides no clear evidence of a costly problem and suggests that ACOs have achieved very small, but real, savings. Causal inference is hard but necessary to inform policy. When conclusions differ, opportunities arise to understand methodological differences and to clarify their implications for policy.

    References

    1. McWilliams JM, Hatfield LA, Chernew ME, Landon BE, Schwartz AL. Early Performance of Accountable Care Organizations in Medicare. N Engl J Med. 2016;374(24):2357-66.
    2. McWilliams JM. Changes in Medicare Shared Savings Program Savings from 2013 to 2014. JAMA. 2016;316(16):1711-13.
    3. McWilliams JM, Hatfield LA, Landon BE, Hamed P, Chernew ME. Medicare Spending after 3 Years of the Medicare Shared Savings Program. N Engl J Med. 2018;379(12):1139-49.
    4. Colla CH, Lewis VA, Kao LS, O’Malley AJ, Chang CH, Fisher ES. Association Between Medicare Accountable Care Organization Implementation and Spending Among Clinically Vulnerable Beneficiaries. JAMA Intern Med. 2016;176(8):1167-75.
    5. Markovitz AA, Hollingsworth JM, Ayanian JZ, Norton EC, Yan PL, Ryan AM. Performance in the Medicare Shared Savings Program After Accounting for Non-Random Exit: An Instrumental Variable Analysis. Ann Intern Med. 2019;171(1).
    6. Friedberg MW, Chen PG, Simmons M, Sherry T, Mendel P, et al. Effects of Health Care Payment Models on Physician Practice in the United States: Follow-up Study. 2018. Accessed at https://www.rand.org/pubs/research_reports/RR2667.html on March 29, 2019.
    7. Research Data Assistance Center. Shared Savings Program Accountable Care Organizations Provider-level RIF. Accessed at http://www.resdac.org/cms-data/files/ssp-aco-provider-level-rif on March 29, 2019.
    8. Markovitz AA, Hollingsworth JM, Ayanian JZ, Norton EC, Moloci NM, Yan PL, Ryan AM. Risk adjustment in Medicare ACO program deters coding increases but may lead ACOs to drop high-risk beneficiaries. Health Aff (Millwood). 2019;38(2):253-261.
    9. McWilliams JM, Chernew ME, Zaslavsky AM, Landon BE. Post-acute care and ACOs – who will be accountable? Health Serv Res. 2013;48(4):1526-38.
    10. Department of Health and Human Services. Centers for Medicare and Medicaid Services. 42 CFR Part 425. Medicare Program; Medicare Shared Savings Program; Accountable Care Organizations–Pathways to Success and Extreme and Uncontrollable Circumstances Policies for Performance Year 2017. Final rules. Accessed at https://www.govinfo.gov/content/pkg/FR-2018-12-31/pdf/2018-27981.pdf on March 29, 2019.
    11. McWilliams JM, Landon BE, Rathi VK, Chernew ME. Getting more savings from ACOs — can the pace be pushed? N Engl J Med. 2019;380:2190-2192.
    Comments closed
     
  • Feet update

    In April, I wrote an Upshot column about treatments for plantar fasciitis. This was a victory lap, of sorts, as I had been free of discomfort for a month, after following the regimen I described.

    Then it came back, and with frustrating rapidity and persistence. Months went by, and the approaches that had seemed to work the first time weren’t doing the job.

    It took me a while to respond to what my body was telling me. It didn’t want to wear shoes! So, I stopped. I spent most of this past week at home, barefoot. Then I added barefoot shoes only when I needed to wear something, like today, traveling for tomorrow’s first Drivers of Health meeting (it will be webcast, by the way). Not wearing supportive shoes/orthotics is the opposite of what is typically suggested for plantar fasciitis.

    I also started using Yoga Toes, which feel amazing. My current recovery (and admittedly it’s been only a few days) is correlated in time with both these changes. It may not last, and you better believe you will hear from me if it doesn’t. Right now I’m on cloud 9. It’s like I have new feet, and that’s incredibly exciting.

    Here are some other updates and interesting things readers have shared:

    • This video is the first thing I’ve seen that matches my experience.
    • Here’s a different taping technique from one I had been using.
    • Suggested by a reader, here’s an interesting e-book with lots of links to research. (The whole website is interesting.) I’ve read it, including its list of conditions often confused with plantar fasciitis. None match my case.
    • Make rock mats! (Then walk on them, of course!) I am totally doing this. (H/t Ana Progovac.)

    @afrakt

    Comments closed
     
  • It’s gotta be the shoes (plantar fasciitis?)

    It bothers me when there’s nothing on the internet that matches my search. As best I can tell, nobody has documented a case of “plantar fasciitis” exactly like mine. So, I will. Maybe it’ll help someone else. (Feel free to contact me.)

    My case has always been odd in at least three ways:

    1. I don’t feel discomfort getting out of bed in the morning. Apart from mild stiffness (which is true of my entire body after sleeping 8 hours, and always has been, and is normal), my feet feel rested. This is, apparently, not how plantar fasciitis is supposed to feel. Literally everything I’ve read says the first morning steps will hurt. In my case, my feet exhibit classic plantar fasciitis pain symptoms only after use (walking, standing). With rest, they get better, often within an hour or so.
    2. My symptoms are bilaterally symmetric (both feet, same spots hurt the same, at the same time). This is not completely unheard of, but is rare.
    3. I vastly prefer to be barefoot, even for walking and standing. I cannot overstate this. Yesterday I walked/stood for 20 minutes barefoot in one stretch with no problem. Today, 45. So far so good. The key seems to be to maintain a healthy arch with my own foot muscles (avoid my natural pronation). My feet absolutely do not crave support of any kind to accomplish this. They hate it. I can walk or stand longer, with no discomfort, barefoot than in shoes. Shoes are not a relief. They make things worse. This, again, is unusual for plantar fasciitis. Many, many cases are documented in which people find the right supportive shoes or orthotics and feel immediate relief. I have tried lots of shoes and a variety of orthotics — custom and OTC — including highly recommended types for plantar fasciitis. None beat barefoot.

    Some of the shoes and orthotics I have tried. Others already in the trash or I was too lazy to make another trip up the stairs to get them.

    I’ve told all this to five health care practitioners. Nobody’s suggested it’s anything other than plantar fasciitis. It’s true that when I have symptoms they absolutely match this condition; I just don’t get them in the way almost everyone else gets them.

    I’m starting to doubt the diagnosis. But, my symptoms don’t exactly match anything, as far as I know.

    One thing this all means is that I should stop trying to treat my condition with more foot support. My feet don’t want it. I’d go barefoot all the time, everywhere if I could. That’s just not practical. On order are Merrell Vapor Glove “barefoot” shoes.

    @afrakt

    Comments closed
     
  • How Safe Is Sunscreen?

    The following originally appeared on The Upshot (copyright 2019, The New York Times Company). 

    Skin cancer is the most common malignancy in the United States, affecting more than three million people each year. Using sunscreen is one mainstay of prevention. But the recent news that sunscreen ingredients can soak into your bloodstream has caused concern.

    Later this year, the Food and Drug Administration will offer some official guidance on the safety of such ingredients. What should people do in the interim as summer approaches?

    The only proven health risk so far is too much sun exposure. Some may think covering up and limiting time in the sun is important only for those with lighter skin, but the recommendations against UV exposure apply to everyone.

    Yes, you should probably keep using sunscreen, although some who may want to play it extra safe could switch to sunscreens that contain zinc oxide and titanium dioxide.

    Sunscreens were first regulated by the F.D.A. in the 1970s, when they were considered over-the-counter medications, before current American guidelines for the evaluation of drugs were put in place. Because of this, sunscreens didn’t undergo testing the way modern pharmaceuticals would.

    In Europe, things are even more lax. Sunscreens are regulated as cosmetics, and because of this, many more sunscreens are approved there than in the United States.

    The F.D.A., however, has wanted to know: To what degree are chemicals applied to the skin absorbed into the body, and what are the possible effects of those chemicals?

    We now have information about the first question. A few weeks ago, a study was published in JAMA that randomly assigned 24 healthy people to one of four sunscreens. Two of them were sprays, the third was a lotion, and the fourth was a cream. Participants were instructed to apply the sunscreens to 75 percent of their bodies four times a day for four days, and 30 blood samples were drawn over a week.

    The F.D.A.’s guidance says that any active ingredient that achieves systemic absorption greater than 0.5 nanograms per milliliter of blood should undergo a toxicology assessment to see if it causes “cancer, birth defects or other adverse effects.”

    The study examined four common sunscreen components: avobenzone, oxybenzone, octocrylene and ecamsule. For all four, systemic concentrations passed that 0.5-nanogram-per-milliliter threshold after the applications on the first day of the study. The levels remained higher than the limit for the entire week for all the products except the cream.

    They also increased from Day 1 to Day 4, meaning that these chemicals accumulated in the body with continued use.

    This is not evidence that sunscreens are harmful. It’s entirely possible that the amounts absorbed are completely safe. In fact, given the widespread use of sunscreen, and the lack of any data showing increases in problems related to them, it probably is safe. Sunscreens are a key component of preventing skin damage that can lead to skin cancer.

    But this doesn’t mean the effects of absorption shouldn’t be checked. The F.D.A. is preparing a final recommendation. For now, the proposed rule, which is still open for public comment, suggests that sunscreens with para-aminobenzoic acid (an association with allergies) and trolamine salicylate (an association with bleeding) should not be given the designation “generally regarded as safe and effective.”

    The rule also proposes that sunscreens that rely on zinc oxide and/or titanium dioxide should be “generally regarded as safe and effective.” These inorganic compounds are not absorbed into the body, and sit on the skin reflecting or absorbing the sun’s harmful rays.

    Because they aren’t absorbed, they’re also noticeable on the skin. Most people prefer sunscreens that are absorbed. Lots of parents in particular prefer sprays because they’re easier and faster to apply to children, who weren’t even part of this study.

    In recent years, vacation destinations like Hawaii, Palau and Key West have started to ban sunscreens with many organic ingredients because they may be damaging coral reefs. Those ingredients include oxybenzone, octinoxate and parabens.

    These products can accumulate in living organisms over time, in both vacationing humans and sea creatures. Significant doses collect when tens of thousands of people wear sunscreen while swimming in the ocean. These quantities only increase when we wash them off in showers and baths into water that eventually finds its way into the ocean.

    The International Coral Reef Initiative says that more research is necessary, but that while we wait for such work to happen, we should be careful. A review in the Journal of the American Academy of Dermatology agrees, but points out that most studies have been limited to the lab. Many have argued that we should shift to safer “reef-friendly” products.

    It’s not clear, though, that sunscreens containing inorganic ingredients are good for the environment either. A study last year pointed to the fact that zinc oxide and titanium dioxide could also have bleaching effects on corals.

    When it comes to personal health, a basic plan to cover up seems sensible. I wear a UV protective swim shirt and hat in the sun. My children tell me I don’t look as cool as the other dads, but I need to use a lot less sunscreen than they do. That not only makes my life easier, but it might help the environment, too.

    @aaronecarroll

    Comments closed
     
  • Healthcare Triage: Housing Vouchers and Neighborhood Mobility

    We’re talking about housing for four weeks, thanks to the support of the RWJF! The Low Income Housing Tax Credit (last week’s episode topic) stimulates production in order to increase the supply of affordable housing available to poorer people in the United States. But there’s another way to tackle our housing problem, and that’s by targeting demand: giving people vouchers to help them pay for housing and assisting them to move to higher-opportunity neighborhoods. While helping people with their rent can be helpful, the real benefits start to accrue when people move to neighborhoods with more opportunity. Vouchers alone don’t ensure that outcome.

    Rebecca Gale wrote about this in a recent Health Affairs Policy Brief. It’s also the topic of this week’s HCT.

    @aaronecarroll

    Comments closed
     
  • Drivers of Health: The Blog

    I mentioned a new project — Drivers of Health — a few days ago. There’s a blog associated with it. Already there are several posts and we’ll be putting up another one or two (at least) every week.

    Here’s what’s already there:

    What drives health?

    What drives health? This is the big and challenging question my team and I are facing on a new, one-year project funded by the Robert Wood Johnson Foundation. This website is devoted to this question, and we invite you to engage with us as we explore it.

    Health system cost-effectiveness

    How much value do we obtain per dollar spent on the health system? How has that changed over time? How does it compare across countries? These are tough but important questions.

    Social determinants over time

    The risks to health faced by Americans long ago are different from those we face today. Some of the things that once killed many people (like poor sanitation) now kill many fewer. On the other hand, we now face new risks (like death from auto accidents) that didn’t exist a century ago.

    Social determinant pathways are complex

    The causal pathways from social determinants of health to health outcomes can be numerous and complex. Though some factors (like smoking) are directly related to health, others (like education or income) relate to health in a variety of indirect ways.

    The value of health spending

    The U.S. is the biggest spender on health care in the world, yet national health outcomes do not reflect this massive investment. This fact forces us to question the value of health care spending: are our health care dollars worth it?

    @afrakt

    Comments closed