This is a guest post by Adam A. Markovitz, BS, and Andrew M. Ryan, PhD.
Our paper, published on June 18 in Annals of Internal Medicine, found that previous estimates of the effects of the Medicare Shared Savings Program (MSSP) have been overstated. After accounting for selective attrition of clinicians and beneficiaries from MSSP ACOs, the MSSP was associated with an increase in spending of $5 per quarter, statistically indistinguishable from zero.
McWilliams and colleagues claimed that ACOs have no incentive to avoid higher-risk patients. We disagree. First, if patients develop new health conditions while attributed to an ACO, the ACO is unable to include these conditions in patients’ risk scores. While designed to protect against upcoding, this provision creates an incentive for ACOs to prune patients who are decompensating. Second, as David Dranove and colleagues identified in an essential paper on risk selection, even if risk adjustment is accurate, the outcomes of high-risk patients will have higher variance. It is rational for risk-averse ACOs to avoid these patients.
McWilliams et al. also objected to our design and statistical specifications. One complaint concerns our decision to determine patients’ treatment status on the basis of actual CMS assignment. Instead of using actual CMS assignment, papers authored by McWilliams and colleagues have attempted to replicate CMS’ attribution algorithm to approximate ACO assignment. We believe that this approach has led these studies to miss the selective attrition observed in our paper. It is possible that the high-risk patients we observed exiting ACOs were never assigned to ACOs in this prior research.
While McWilliams et al. claim that our use of the true CMS assignment introduces “time-varying inconsistency in how utilization is used to define comparison groups,” this critique is unfounded. Using this approach, and otherwise replicating the preferred specification of McWilliams et al. (with market-year and ACO fixed effects), we found that the MSSP was associated with a significant reduction in spending. This analytic approach therefore did not introduce bias toward the null.
We present a robust set of results showing evidence of selective attrition within ACOs. This includes evidence that
higher spending patients and clinicians are more likely to exit ACOs;
spending estimates attenuate as we progressively add fixed effects for markets, patients, and clinicians;
ACOs are associated with our falsification outcome, hip fracture, in all standard adjusted specifications (including McWilliams et al.’s preferred specification);
and the effects of ACOs decrease to approximately zero in our instrumental variables specification.
We also provide robust evidence for the validity of the instrumental variable, including well-balanced patient characteristics, comparable pre-period spending trends, and no association with hip fracture (our falsification outcome). The adjusted longitudinal model failed each of these tests.
McWilliams and colleagues object to this evidence for a variety of reasons, all of them unpersuasive. For instance, they argue that exit of high-risk beneficiaries is simply due to a glitch in the MSSP attribution methodology whereby sick patients are passively drawn from ACOs when they start to only receive care from non-ACO specialists. However, this would not explain our finding of pruning of physicians with high-cost patient panels.
We also present a simple intention-to-treat analysis that demonstrates the strong influence of attrition bias on standard difference-in-differences estimates (see Supplement, Section H, “Intention-to-treat analyses”). Using actual CMS assignment, a patient’s treatment status was “turned on” when the patient was first attributed to an MSSP ACO and remained on for the duration of the study period. This model was not affected by choices related to patient attribution or by changes in the composition of providers within ACOs or physician groups. Estimates from this simple model, fit to a balanced panel of beneficiaries with beneficiary fixed effects, found small and non-significant effects of the MSSP (+$11 per quarter [95% CI, -$13 to $36]). This crucial validity check demonstrates that the effects of the MSSP disappear when patient composition is held constant and selective attrition is addressed.
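The “turn on and stay on” attribution coding described above can be sketched as follows (a minimal illustration with hypothetical data, not the authors’ actual code):

```python
def itt_treated(quarters, first_aco_quarter):
    """Intention-to-treat coding: treatment 'turns on' at a patient's first
    MSSP ACO attribution and stays on for the rest of the study period,
    regardless of any later exit from the ACO."""
    if first_aco_quarter is None:  # never attributed to an ACO
        return [0] * len(quarters)
    return [1 if q >= first_aco_quarter else 0 for q in quarters]

study_quarters = [1, 2, 3, 4]
print(itt_treated(study_quarters, 2))     # [0, 1, 1, 1] -- first attributed in quarter 2
print(itt_treated(study_quarters, None))  # [0, 0, 0, 0] -- never attributed
```

Because the indicator never switches off, later exit of a patient or clinician from an ACO cannot remove that patient from the treatment group, which is what insulates the estimate from attrition.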
We agree with McWilliams et al. that the effect of the MSSP is challenging to ascertain. Unlike hospital-based reforms — where hospitals exist before and after the reform is initiated and attribution is straightforward — evaluating the effects of unstable MSSP ACOs is more difficult. The failure of previous work to account for subtle compositional changes within ACOs has led researchers to miss an important source of bias.
If you’re attending the ASHEcon conference I hope you will come see my colleagues and me at the sessions below. These all include members of the Partnered Evidence-based Policy Resource Center (PEPReC) at the VA Boston Healthcare System and/or the Department of Health Law, Policy & Management (HLPM) at the Boston University School of Public Health. Those individuals are in bold, below.*
The following originally appeared on The Upshot (copyright 2019, The New York Times Company).
The idea that legal cannabis can help address the opioid crisis has generated much hope and enthusiasm.
Opioid misuse has declined in recent years at the same time that cannabis use has been increasing, with many states liberalizing marijuana laws.
Based on recent research, some advocates have been promoting this connection, arguing that easier access to marijuana reduces opioid use and, in turn, overdose deaths.
A new study urges caution. Sometimes appearances — or statistics — can be deceiving.
Why people were so hopeful
It’s plausible that marijuana can help reduce pain. Systematic reviews show that certain compounds found in marijuana or synthetically produced cannabinoids do so, at least for some conditions. So some people who might otherwise seek out opioid painkillers could use medical marijuana instead.
Regulations in some states, including New York, that streamline access to medical marijuana are based on the idea that it can substitute for opioids in pain treatment.
In 2014, a study published in JAMA gave further hope that liberalizing marijuana laws might alleviate the opioid crisis.
The study examined the years 1999 through 2010, during which 10 states established medical marijuana programs. It compared changes in the rates of opioid painkiller deaths in states that passed medical marijuana laws with those that had not. The results? Researchers found that the laws were associated with a nearly 25 percent decline in the death rate from opioid painkillers.
Other studies have documented marijuana laws associated with reduced opioid prescribing in Medicaid and Medicare.
Why you should be skeptical
None of this proves that marijuana liberalization causes lower opioid-related mortality, something the authors of the 2014 JAMA study pointed out.
Correlation does not mean causation, of course. A particular challenge in interpreting correlations in social science has its own name: the ecological fallacy. It’s the erroneous conclusion that relationships observed at the aggregate level (like state or region) necessarily hold true at the individual level as well.
A new study revisited the JAMA-published analysis with more data. Its conclusions cast doubt on the idea that medical marijuana helps reduce opioid deaths — at least as far as we can tell with state-level data.
Between 2010, the final year of analysis in the JAMA study, and 2017, 32 more states legalized medical marijuana, and eight legalized recreational use. The new study, published in the Proceedings of the National Academy of Sciences (P.N.A.S.), reassessed the relationship between these laws and opioid deaths using the same approach as the JAMA study, but extending the years of analysis through 2017.
Over the years analyzed in the JAMA study, 1999 to 2010, the new P.N.A.S. study produced similar findings: Medical marijuana legalization was associated with reduced opioid painkiller overdose deaths. But in an expanded analysis through 2017, the results reversed — the laws are associated with a 23 percent increase in deaths.
This doesn’t necessarily mean that the laws first saved lives and then, in later years, contributed to deadly overdoses.
We’ve talked about how housing is important for health. We’ve talked about how we can improve access to housing by stimulating production through the LIHTC. We’ve talked about how we can improve access through vouchers and mobility programs. There’s one more thing we’d like to discuss: inclusionary zoning. Zoning rules are important for making neighborhoods and municipalities function smoothly, but they can also be written in ways that keep low-income residents from moving to certain neighborhoods.
David Tuller, a lecturer in UC Berkeley’s School of Public Health and Graduate School of Journalism, wrote about this recently in a policy brief at Health Affairs. It’s also the topic of this week’s HCT.
This is a guest post by J. Michael McWilliams, MD, PhD, Alan M. Zaslavsky, PhD, Bruce E. Landon, MD, MBA, and Michael E. Chernew, PhD.
The extent to which the Medicare Shared Savings Program (MSSP) has generated savings for Medicare has been a topic of debate, and understandably so—the program’s impact is important to know for guiding provider payment policy but is challenging to ascertain.
Prior studies suggest that accountable care organizations (ACOs) in the MSSP have achieved modest, growing savings.(1-4) In a recent study in Annals of Internal Medicine, Markovitz et al. conclude that savings from the MSSP are illusory, an artifact of risk selection behaviors by ACOs such as “pruning” primary care physicians (PCPs) with high-cost patients.(5) Their conclusions appear to contradict previous findings that characteristics of ACO patients changed minimally over time relative to local control groups.
We therefore undertook to review the paper and explain these apparently contradictory results.(1,3) We concluded that these new results do not demonstrate bias due to risk selection in the MSSP but rather are consistent with the literature.
Below we explain how several problems in the study’s methods and interpretation are responsible for the apparent inconsistencies. We provide this post-publication commentary to clarify the evidence for researchers and policymakers and to support development of evidence-based policy.
Approaches to Estimating Savings and Risk Selection in the MSSP
If the objective is to determine Medicare’s net savings from the MSSP, the key is to estimate the amount by which participating ACOs reduced Medicare spending in response to the program using an evaluation approach that removes any bias from risk selection and compares ACO spending with a valid counterfactual (as opposed to the program’s spending targets or “benchmarks” for ACOs). With this unbiased estimate of gross savings in hand, the net savings can then be calculated by subtracting the shared-savings bonuses that Medicare distributes to ACOs. If ACOs engage in favorable risk selection, it is unnecessary to quantify it to calculate net savings. As long as the evaluation methods used to estimate gross savings appropriately remove any contribution from risk selection, the net savings will accurately portray the savings to Medicare (the bonuses include any costs to Medicare from risk selection). Thus, an evaluation can yield a valid estimate of net savings while avoiding the pitfalls of attempting to isolate the amount of risk selection.
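The accounting logic above can be sketched numerically. In this minimal illustration, all dollar figures are invented for the example; none come from an actual study:

```python
# Hypothetical net-savings accounting, per the logic described above.
gross_savings = 100.0  # unbiased estimate of spending reduction vs. counterfactual ($M)
bonuses_paid = 40.0    # shared-savings bonuses Medicare distributed to ACOs ($M)

# If the gross-savings estimate already excludes any contribution from
# risk selection, then subtracting bonuses (which embed any cost of
# selection to Medicare) yields net savings to Medicare without ever
# measuring risk selection directly.
net_savings = gross_savings - bonuses_paid
print(net_savings)  # 60.0
```

The point of the sketch is that risk selection never needs to be quantified on its own: it enters only through the bonuses, which are observed.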
Taking this approach, prior studies have estimated the gross savings while minimizing bias from risk selection,(1-3) without directly measuring it. Through the end of 2014 (the study period examined by Markovitz et al.), prior analyses found modest gross savings of about 1.1% when averaged over the performance years among cohorts of ACOs entering the MSSP in 2012-2014.(2) Gross savings grew over time within cohorts and exceeded bonus payments by 2014, with no evidence that residual risk selection contributed to the estimated savings or their growth.
Importantly, these prior evaluations took an intention-to-treat approach that held constant over time the group of providers defined as MSSP participants, regardless of whether ACOs subsequently exited the program or changed their constituent practices or clinicians. In other words, by keeping membership in the ACO groups constant over time, these estimates excluded spurious savings that might appear if ACOs selectively excluded providers with sicker patients over time.
Taking an alternative approach, Markovitz et al. try to quantify risk selection by estimating gross savings under a “base” method that includes selection effects, and then modeling and removing selection effects under various assumptions.
Although appealing in principle and potentially illuminating of undesirable provider behavior, their base case approach introduces additional sources of bias (not just risk selection), so their initial estimates are not comparable to those from the previous studies. Moreover, the comparisons of their base estimates with estimates from subsequent models do not support their conclusions. The authors misinterpret the reductions in savings caused by the analytic modifications intended to address selection as evidence of selection, when in fact the modifications correct for other sources of bias that were addressed by prior studies but included in the authors’ base case.
In addition to this misinterpretation, the approaches to removing risk selection from estimates also are problematic. Before discussing the details of these methodological issues, we first review the incentives for selection in the MSSP, which must be understood to interpret the findings of Markovitz et al. correctly.
Incentives for Risk Selection in the MSSP
The MSSP defines ACOs as collections of practices—taxpayer identification numbers (TINs)—including all clinicians billing under those TINs; ACOs thus can select TINs but cannot select clinicians within TINs for inclusion in contracts. The MSSP accounts for changes in TIN inclusion each year by adjusting an ACO’s benchmark to reflect the baseline spending of the revised set of TINs.
Thus, ACOs do not have clear incentives to exclude TINs with high-cost patients in favor of TINs with low-cost patients. Doing so might improve their performance on utilization-based quality measures such as readmission rates, thereby increasing the percentage of savings they can keep (the quality score affects the shared savings rate), but the savings estimate should not increase. More generally, if there are some advantages to selecting TINs with low-risk patients, the associated reduction in spending should not be interpreted as a cost to Medicare of risk selection because the benchmark adjustments for changes in TIN inclusion should eliminate much or all of the cost to Medicare (and the gain to ACOs). Theory and prior empirical work would actually suggest advantages of including high-spending TINs, as ACOs with high spending should have an easier time generating savings and indeed have reduced spending more than other ACOs, on average.
An analysis attempting to quantify risk selection should therefore focus on changes in patients or clinicians after MSSP entry within sets of TINs—changes that ACOs have clear incentives to pursue (e.g., by encouraging high-cost patients within a TIN to leave [e.g. through referrals] or directing clinicians of high-cost patients to bill under an excluded TIN).(6) Failure to exclude changes in TIN inclusion from estimates of risk selection is analogous to not accounting for the Hierarchical Condition Categories (HCC) score in an analysis of risk selection in Medicare Advantage vs. traditional Medicare.
Problems with Analysis and Interpretation by Markovitz et al.
Markovitz et al. present a base analysis intended to produce the gross savings that would be estimated if one allowed changes in the composition of ACOs to contribute to the savings estimate. Such an analysis should compare spending differences between ACO and non-ACO providers at baseline with spending differences between the two groups after MSSP entry (a difference in differences), while allowing the provider and patient composition to change over time within ACO TINs.
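As a toy numerical sketch (all group means invented for illustration), a difference-in-differences contrast nets out fixed baseline gaps between groups, whereas a comparison that ignores baseline differences absorbs those gaps into the apparent “effect”:

```python
# Invented group means to illustrate the difference-in-differences logic.
aco_pre, aco_post = 2300.0, 2250.0    # ACO providers' mean spending ($/quarter)
ctrl_pre, ctrl_post = 2400.0, 2400.0  # control providers' mean spending ($/quarter)

# Difference in differences: the ACO group's change minus the control group's change.
did = (aco_post - aco_pre) - (ctrl_post - ctrl_pre)
print(did)  # -50.0

# A comparison that omits baseline (pre-period) differences also absorbs
# the fixed $100 gap between the groups, overstating the apparent reduction.
naive = aco_post - ctrl_post
print(naive)  # -150.0
```

In these invented numbers, the true program effect is the $50 relative decline; the remaining $100 is a pre-existing level difference that provider (or group) fixed effects would remove.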
But the statistical model (section D of the Appendix) omits controls for fixed differences between providers that would be observable at baseline (i.e., provider effects). Consequently, the estimate (the coefficient on MSSP_ijqt) is not interpretable as a difference in differences, and the characterization of this model as similar to “previous analyses” is inaccurate. Furthermore, the estimate suggests gross savings that are nearly five times greater than the prior estimate of 1.1% that the authors claim to have replicated—a 5.0% reduction in per-patient spending ([-$118/quarter]/[mean spending of $2341/quarter]) after only about 12 months of participation, on average (Figure 2B).
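The back-of-the-envelope percentage quoted above works out as follows (both figures are taken from the text):

```python
# Reproducing the percentage quoted above from the paper's figures.
estimate = -118.0       # estimated spending change ($/quarter)
mean_spending = 2341.0  # mean per-patient spending ($/quarter)

pct_change = 100 * estimate / mean_spending
print(round(pct_change, 1))  # -5.0
```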
Subsequent models do include terms for patient or provider fixed effects (Figure 2B), constituting difference-in-differences analyses. Hence, the dramatic attenuation of the estimated spending reduction caused by introducing these terms does not demonstrate risk selection, but rather reflects the correction of the term omitted from the base model. The base model is thus a misleading reference value for comparisons. The fixed effects adjust not only for within-TIN changes in clinicians or patients after MSSP entry (the potential selection effects of interest that Markovitz et al. are trying to isolate) but also for fixed (baseline) differences between ACOs and non-ACO providers and within-ACO changes in TIN inclusion that are reflected in benchmarks and account for much of the turnover in participating clinicians.(7) The latter two sources of compositional differences do not reflect risk selection and did not contribute to prior estimates of savings.(1-3)
In a more appropriate base analysis that better resembles previous evaluations (the 4th model in Figure 2, panel B), Markovitz et al. include ACO fixed effects and hold constant each ACO’s TINs over time. Compared with the results of that analysis (-$66/quarter or -$264/year or -2.8%), the addition of patient or clinician controls to eliminate selection has effects that are inconsistent in direction and more modest in magnitude than when using the previous base case as the comparator (Figure 2B).
This set of findings does not support a conclusion that prior evaluations overstated ACO savings by failing to fully account for risk selection. In fact, the gross savings estimated by models with patient or clinician effects range from approximately 10% greater to over 3 times greater than the average gross savings estimated in a prior evaluation over the same performance years (i.e., 113-300+% × the 1.1% spending reduction noted above).(2) Thus, the interpretation of the results from this series of models is misleading and mischaracterizes their relation to the prior literature.
Even with adjustment for patient or provider effects, the difference-in-differences analyses remain problematic for at least two reasons. First, Markovitz et al. use the actual MSSP assignments (in some cases based on post-acute or specialty care use) only in the post-period for ACOs. They cannot use these for the control group or for the pre-period for either the ACO or comparison group because the assignment data are only available for ACOs in performance years and only for ACOs that continue in the program. This introduces a time-varying inconsistency in how utilization is used to define comparison groups.
Second, Markovitz et al. rely on within-patient or within-clinician changes (i.e., models with patient or clinician fixed effects) to isolate the MSSP effect on spending, net of selection, but doing so can introduce bias.(3) For example, if ACOs hired clinicians to perform annual wellness visits, this could shift attribution of single-visit healthy patients away from their PCPs, causing artifactual within-PCP spending increases and underestimation of savings.
Or, if a strategy for ACO success is to shift high-risk patients to more cost-effective clinicians better equipped or trained to manage their care, one would not want to eliminate that mechanism in an evaluation of savings. More generally, the patient or clinician fixed effects can introduce bias from time-varying factors that would otherwise be minimized in a difference-in-differences comparison of stably different cross-sections of ACO and non-ACO populations.
Markovitz et al. report substantial differences in pre-period levels and trends and a differential reduction in hip fractures. But none of these imbalances were observed in previous evaluations that addressed provider-level selection by holding ACO TIN (or clinician) composition constant and by assigning patients to ACOs and control providers using a method based only on primary care use, applied consistently across comparison groups and years.(1,3) Markovitz et al. imply that their findings for hip fractures should be interpreted as evidence of bias from risk selection in prior evaluations.
But the MSSP evaluation by our group (3) found no differential change in the proportion of patients with a history of hip fracture among ACO patients vs. control patients from before to after MSSP entry (differential change in 2015: 0.0% with a sample baseline mean of 2.9%) and no emergence of a differential change in hip fractures over the performance years that would suggest selection. We did not report this specific result in the published paper because we conducted balance tests for numerous patient characteristics, including 27 conditions in the Chronic Conditions Data Warehouse (of which hip fracture is one) that we summarized with counts. We report this result here to correct the misleading conclusion by Markovitz et al. that their findings would have been found in our study. The finding of a differential reduction in hip fractures suggests bias only in their analyses and provides further evidence that Markovitz et al. did not replicate prior evaluations and thus cannot demonstrate that they overstated savings.
Instrumental variables analysis
Markovitz et al. also include an instrumental variables (IV) analysis, using differential changes in local MSSP participation surrounding a patient’s PCP (“MSSP supply”) to estimate the incentive effect without selection effects. We question the validity and conclusions of this analysis for reasons we can only state briefly here.
Specifically, the instrument should affect the outcome only by altering treatment assignment and should therefore not be affected by treatment. Yet, unlike a standard ecologic instrument that is unaffected by treatment assignment (e.g., where a patient lives), “MSSP supply” can be altered by a change in a patient’s assigned PCP, which can occur as a result of ACO exposure (e.g., from risk selection, the focus of the study). This calls into question the applicability of a key assumption in IV analysis.
In addition, the difference-in-differences model in which the instrument is deployed does not adjust for fixed differences in spending between PCP localities (and thus does not produce difference-in-differences estimates). Moreover, the results of this analysis suggest implausible spending increases of $588-1276/patient-year for ACOs entering in 2013-2014 (Appendix Figure 4). Acceptance of the instrument’s validity requires acceptance that participation in the MSSP caused these large spending increases.
Even if we accept the validity of the IV estimates, they are not comparable to the other difference-in-differences estimates because IV estimates pertain only to the population (the “switchers”) for whom treatment is determined by the instrument. Therefore, the comparisons cannot be interpreted as quantifying risk selection. By construction, increases in the local supply variable arising from MSSP entry by large hospital-based systems are larger, and ascribed to more patients, thereby giving the most weight to ACOs previously found to have no significant effect on spending, on average.(1-3)
Thus, comparing the IV estimates to estimates from the other models is analogous to comparing overall program effects with subgroup effects. The difference may reflect treatment effect heterogeneity as opposed to selection, and the authors have implicitly chosen a subgroup (large health system ACOs) that other work suggests is less responsive to MSSP incentives. Thus, estimates from the IV analysis suggestive of minimal savings would be consistent with the minimal savings documented in the literature for the group of ACOs to which the IV estimates are applicable.
We also note that the “adjusted longitudinal analysis” is again used as an inappropriate comparator for the IV analysis. It appears the imprecise IV estimates would not differ statistically from the estimates produced by the more appropriate base case with ACO fixed effects (Figure 2B).
Finally, Markovitz et al. interpret flow of patients and clinicians entering and exiting the MSSP as evidence of “pruning.” These analyses, however, do not support inferences about selection because they lack a counterfactual (flow in the absence of MSSP contracts).
Flow analyses can be deceptive because the health characteristics of the “stock” of patients assigned to the ACO change over time, too. An ostensible net change in risk suggested by differences between those entering and exiting may be completely consistent with a population that is stable over time if patients’ risk status in the stock changes in a way that offsets the flow imbalance.
For example, Markovitz et al. previously interpreted greater “exit” of high-risk patients from ACOs as evidence of risk selection.(8) In the table below, we demonstrate that this conclusion is erroneous. The pattern of “exit” is merely an artifact of the utilization-based algorithm used to assign patients to ACOs. The higher switch rates among the highest-risk ACO patients (first column, based on CMS assignments of patients to ACOs) is similarly observed if one applies the CMS assignment rules to assign patients to large provider groups not participating in the MSSP (second column). Higher-risk patients receive care from more providers, causing more providers to “compete” for the plurality of a patient’s qualifying services in a given year and thus greater instability in assignment over time as the patient’s needs evolve. In other words, high-risk patients simply are reassigned more often, independent of ACO incentives.(9)
The comparisons of clinician entry and exit rates by Markovitz et al. are additionally misleading because of different denominators. If the probabilities in Figure 4 were calculated using a consistent denominator or instead reported as a replacement rate (high-risk patients served by entering physicians/high-risk patients served by exiting physicians), the higher spending associated with clinician exit and entry would be more similar.
Ultimately, if ACOs are “pruning” clinicians of high-cost patients, there should be evidence in the stock, but baseline risk scores of physician-group ACO patients have increased slightly within TINs, not decreased, relative to concurrent local changes.(1,3) The authors make no attempt to reconcile their conclusions with the documented absence of differential changes in ACO patient characteristics relative to controls. They make two contradictory arguments: that the savings estimated by prior studies were explained by selection on unobservable patient characteristics; but also that the risk selection is demonstrable based on observable patient characteristics (e.g., hip fracture, HCC score) that exhibited no pattern of selection in the prior studies.
Monitoring ACOs will be essential, particularly as incentives for selection are strengthened as regional spending rates become increasingly important in determining benchmarks.(10,11) Although there has likely been some gaming, the evidence to date—including the study by Markovitz et al.—provides no clear evidence of a costly problem and suggests that ACOs have achieved very small, but real, savings. Causal inference is hard but necessary to inform policy. When conclusions differ, opportunities arise to understand methodological differences and to clarify their implications for policy.
McWilliams JM, Hatfield LA, Chernew ME, Landon BE, Schwartz AL. Early Performance of Accountable Care Organizations in Medicare. N Engl J Med. 2016;374(24):2357-66.
McWilliams JM. Changes in Medicare Shared Savings Program Savings from 2013 to 2014. JAMA. 2016;316(16):1711-13.
McWilliams JM, Hatfield LA, Landon BE, Hamed P, Chernew ME. Medicare Spending after 3 Years of the Medicare Shared Savings Program. N Engl J Med. 2018;379(12):1139-49.
Colla CH, Lewis VA, Kao LS, O’Malley AJ, Chang CH, Fisher ES. Association Between Medicare Accountable Care Organization Implementation and Spending Among Clinically Vulnerable Beneficiaries. JAMA Intern Med. 2016;176(8):1167-75.
Markovitz AA, Hollingsworth JM, Ayanian JZ, Norton EC, Yan PL, Ryan AM. Performance in the Medicare Shared Savings Program After Accounting for Non-Random Exit: An Instrumental Variable Analysis. Ann Intern Med. 2019;171(1).
Friedberg MW, Chen PG, Simmons M, Sherry T, Mendel P, et al. Effects of Health Care Payment Models on Physician Practice in the United States: Follow-Up Study. RAND Corporation; 2018. Accessed at https://www.rand.org/pubs/research_reports/RR2667.html on March 29, 2019.
Markovitz AA, Hollingsworth JM, Ayanian JZ, Norton EC, Moloci NM, Yan PL, Ryan AM. Risk adjustment in Medicare ACO program deters coding increases but may lead ACOs to drop high-risk beneficiaries. Health Aff (Millwood). 2019;38(2):253-261.
McWilliams JM, Chernew ME, Zaslavsky AM, Landon BE. Post-acute care and ACOs – who will be accountable? Health Serv Res. 2013;48(4):1526-38.
Department of Health and Human Services. Centers for Medicare and Medicaid Services. 42 CFR Part 425. Medicare Program; Medicare Shared Savings Program; Accountable Care Organizations–Pathways to Success and Extreme and Uncontrollable Circumstances Policies for Performance Year 2017. Final rules. Accessed at https://www.govinfo.gov/content/pkg/FR-2018-12-31/pdf/2018-27981.pdf on March 29, 2019.
McWilliams JM, Landon BE, Rathi VK, Chernew ME. Getting more savings from ACOs — can the pace be pushed? N Engl J Med. 2019;380:2190-2192.
In April, I wrote an Upshot column about treatments for plantar fasciitis. This was a victory lap, of sorts, as I had been free of discomfort for a month, after following the regimen I described.
Then it came back, and with frustrating rapidity and persistence. Months went by, and the approaches that had seemed to work the first time weren’t doing the job.
It took me a while to respond to what my body was telling me. It didn’t want to wear shoes! So, I stopped. I spent most of this past week at home, barefoot. Then I added barefoot shoes only when I needed to wear something, like today, traveling for tomorrow’s first Drivers of Health meeting (it will be webcast, by the way). Not wearing supportive shoes/orthotics is the opposite of what is typically suggested for plantar fasciitis.
I also started using Yoga Toes, which feel amazing. My current recovery (and admittedly it’s been only a few days) is correlated in time with both these changes. It may not last, and you better believe you will hear from me if it doesn’t. Right now I’m on cloud 9. It’s like I have new feet, and that’s incredibly exciting.
Here are some other updates and interesting things readers have shared:
This video is the first thing I’ve seen that matches my experience.
Suggested by a reader, here’s an interesting e-book with lots of links to research. (The whole website is interesting.) I’ve read it, including its list of conditions often confused with plantar fasciitis. None match my case.
It bothers me when there’s nothing on the internet that matches my search. As best I can tell, nobody has documented a case of “plantar fasciitis” exactly like mine. So, I will. Maybe it’ll help someone else. (Feel free to contact me.)
My case has always been odd in at least three ways:
I don’t feel discomfort getting out of bed in the morning. Apart from mild stiffness (which is true of my entire body after sleeping 8 hours, and always has been, and is normal), my feet feel rested. This is, apparently, not how plantar fasciitis is supposed to feel. Literally everything I’ve read says the first morning steps will hurt. In my case, my feet exhibit classic plantar fasciitis pain symptoms only after use (walking, standing). With rest, they get better, often within an hour or so.
My symptoms are bilaterally symmetric (both feet, same spots hurt the same, at the same time). This is not completely unheard of, but is rare.
I vastly prefer to be barefoot, even for walking and standing. I cannot overstate this. Yesterday I walked/stood for 20 minutes barefoot in one stretch with no problem. Today, 45. So far so good. The key seems to be to maintain a healthy arch with my own foot muscles (avoid my natural pronation). My feet absolutely do not crave support of any kind to accomplish this. They hate it. I can walk or stand longer, with no discomfort, barefoot than in shoes. Shoes are not a relief. They make things worse. This, again, is unusual for plantar fasciitis. Many, many cases are documented in which people find the right supportive shoes or orthotics and feel immediate relief. I have tried lots of shoes and a variety of orthotics — custom and OTC — including highly recommended types for plantar fasciitis. None beat barefoot.
Some of the shoes and orthotics I have tried. Others are already in the trash, or I was too lazy to make another trip up the stairs to get them.
I’ve told all this to five health care practitioners. Nobody’s suggested it’s anything other than plantar fasciitis. It’s true that when I have symptoms they absolutely match this condition; I just don’t get them in the way almost everyone else gets them.
I’m starting to doubt the diagnosis. But, my symptoms don’t exactly match anything, as far as I know.
One thing this all means is that I should stop trying to treat my condition with more foot support. My feet don’t want it. I’d go barefoot all the time, everywhere if I could. That’s just not practical. On order are Merrell Vapor Glove “barefoot” shoes.
The following originally appeared on The Upshot (copyright 2019, The New York Times Company).
Skin cancer is the most common malignancy in the United States, affecting more than three million people each year. Using sunscreen is one mainstay of prevention. But the recent news that sunscreen ingredients can soak into your bloodstream has caused concern.
Later this year, the Food and Drug Administration will offer some official guidance on the safety of such ingredients. What should people do in the interim as summer approaches?
The only proven health risk so far is too much sun exposure. Some may think covering up and limiting time in the sun is important only for those with lighter skin, but the recommendations against UV exposure apply to everyone.
Yes, you should probably keep using sunscreen, although those who want to play it extra safe could switch to sunscreens that contain zinc oxide or titanium dioxide.
Sunscreens were first regulated by the F.D.A. in the 1970s as over-the-counter medications, before current American guidelines for the evaluation of drugs were put in place. Because of this, sunscreens didn’t undergo testing the way modern pharmaceuticals would.
The F.D.A., however, has wanted to know: To what degree are chemicals applied to the skin absorbed into the body, and what are the possible effects of those chemicals?
We now have information about the first question. A few weeks ago, a study was published in JAMA that randomly assigned 24 healthy people to one of four sunscreens. Two of them were sprays, the third was a lotion, and the fourth was a cream. Participants were instructed to apply the sunscreens to 75 percent of their bodies four times a day for four days, and 30 blood samples were drawn over a week.
The study examined four common sunscreen components: avobenzone, oxybenzone, octocrylene and ecamsule. For all four, systemic concentrations passed the F.D.A.’s threshold of 0.5 nanograms per milliliter after the applications on the first day of the study. The levels remained above that limit for the entire week for all the products except the cream.
They also increased from Day 1 to Day 4, meaning that the chemicals accumulated in the body with continued use.
This is not evidence that sunscreens are harmful. It’s entirely possible that the amounts absorbed are completely safe. In fact, given the widespread use of sunscreen, and the lack of any data showing increases in problems related to them, it probably is safe. Sunscreens are a key component of preventing skin damage that can lead to skin cancer.
The F.D.A.’s proposed rule also says that sunscreens that rely on zinc oxide and/or titanium dioxide should be “generally regarded as safe and effective.” These inorganic compounds are not absorbed into the body; they sit on the skin, reflecting or absorbing the sun’s harmful rays.
Because they aren’t absorbed, they’re also noticeable on the skin. Most people prefer sunscreens that are absorbed. Parents in particular often prefer sprays because they’re easier and faster to apply to children (who weren’t even part of this study).
Sunscreen chemicals can also accumulate in living organisms over time, in both vacationing humans and sea creatures. Significant doses collect when tens of thousands of people wear sunscreen while swimming in the ocean. These quantities only increase when we wash sunscreen off in showers and baths, into water that eventually finds its way into the ocean.
The International Coral Reef Initiative says that more research is necessary, but that while we wait for such work to happen, we should be careful. A review in the Journal of the American Academy of Dermatology agrees, but points out that most studies have been limited to the lab. Many have argued that we should shift to safer “reef-friendly” products.
It’s not clear, though, that sunscreens containing inorganic ingredients are good for the environment either. A study last year found that zinc oxide and titanium dioxide could also have bleaching effects on corals.
When it comes to personal health, a basic plan to cover up seems sensible. I wear a UV protective swim shirt and hat in the sun. My children tell me I don’t look as cool as the other dads, but I need to use a lot less sunscreen than they do. That not only makes my life easier, but it might help the environment, too.
We’re talking about housing for four weeks, thanks to the support of the RWJF! The Low Income Housing Tax Credit (last week’s episode topic) stimulates production to increase the supply of affordable housing available to poorer people in the United States. But there’s another way to tackle our housing problem: targeting demand, by giving people vouchers to help them pay for housing and assisting them in moving to higher-opportunity neighborhoods. While helping people with their rent can be helpful, the real benefits start to accrue when people move to neighborhoods with more opportunity. Vouchers alone don’t ensure that outcome.
Rebecca Gale wrote about this in a recent Health Affairs Policy Brief. It’s also the topic of this week’s HCT.
What drives health? This is the big and challenging question my team and I are facing on a new one-year project funded by the Robert Wood Johnson Foundation. This website is devoted to this question, and we invite you to engage with us as we explore it.
The risks to health faced by Americans long ago are different from those we face today. Some of the things that once killed many people (like poor sanitation) now kill far fewer. On the other hand, we now face risks (like death from auto accidents) that barely existed a century ago.
The causal pathways from social determinants of health to health outcomes can be numerous and complex. Though some factors (like smoking) are directly related to health, others (like education or income) relate to health in a variety of indirect ways.
The U.S. is the biggest spender on health care in the world, yet national health outcomes do not reflect this massive investment. This fact forces us to question the value of health care spending: are our health care dollars worth it?