• Some cost shifting claims

    I never know when I’m going to need to document that claims of hospital cost shifting are still pervasive. So that my future self can easily find some, here are a few quotes from a report by HCTrends, which I’ve posted here.

    • “In southeastern Wisconsin, cost shifting is responsible for 35 percent of the overall commercial rates paid.”
    • “Cost shifting is a hidden tax on employers that affects their ability to compete economically.”
    • “A 2014 Milliman analysis conducted for the Greater Milwaukee Business Group found that cost shifting accounted for 35 percent of the commercial rate paid for hospital services in 2012. Milliman estimated that Medicare and Medicaid underfunding accounted for almost two-thirds of the cost shift, adding about $782 million to commercial rates in 2012. Bad debt and charity care accounted for the remaining third.”
    • “Medicare, however, will pay less than half that amount due to specific budget cuts mandated by the Affordable Care Act and the sequester, and an assumed productivity adjustment implemented as part of the ACA (see Chart 2). Since its inception in FY2012, the productivity adjustment has reduced the market basket update by between 0.5 and 1.0 percentage points each year.”
    • “Revenue reductions or payment rates that fail to keep pace with inflation force health care providers to find more efficient ways to deliver care while simultaneously improving the quality of care delivered. If those initiatives do not completely offset their government revenue shortfall, providers make up the difference by increasing the rates charged by the business community – a process known as ‘cost shifting.’ The degree to which a hospital can leverage the business community to subsidize government health programs depends on the market dynamics between health care providers and insurers.”
    • “Cost-shifting is real and represents a hidden tax on employers that can threaten their competitiveness.”
    • “Cost-shifting is not a 1:1 proposition: Every $1 in government funding is not offset by a $1 increase in private payer funding. Some of it is absorbed by providers through cost-savings and other efficiency initiatives. But after years of flat or declining government revenues, hospitals have little choice but to offset these revenue losses by increasing commercial rates.”
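
    As a quick check on the Milliman arithmetic in the third quote above: the quoted figures imply a total cost shift of roughly $1.2 billion. A back-of-the-envelope sketch (the $782 million is from the quote; the totals are implied by it):

        # Implied totals from the quoted 2012 Milliman figures
        public_underfunding = 782e6                  # Medicare/Medicaid share ("almost two-thirds")
        total_shift = public_underfunding / (2 / 3)  # ~$1.17 billion total cost shift
        bad_debt_charity = total_shift - public_underfunding  # ~$391 million ("remaining third")
        print(f"total: ${total_shift / 1e9:.2f}B; bad debt/charity: ${bad_debt_charity / 1e6:.0f}M")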

    Here are all of TIE’s cost shifting posts.

    @afrakt

  • Another attempt to defund AHRQ

    Earlier this morning, the House Appropriations Subcommittee on Labor, Health and Human Services, Education, and Related Agencies posted draft legislation that calls for the termination of AHRQ.

    SEC. 226. (a) Termination.—Effective October 1, 2015, the Agency for Healthcare Research and Quality is terminated.

    The last time AHRQ was headed for defunding, we posted on some of the important work the agency does. You’ll find that here.

    @afrakt 

  • Some things we know about incomplete study reporting and publication bias

    Registering a study means specifying, in a public database like ClinicalTrials.gov, the study’s sample criteria, the outcomes that will be examined, and the analyses that will be done. It’s a way to guard against data mining and bias, though an imperfect one. Trial registration got a boost in 2004, when the International Committee of Medical Journal Editors (ICMJE) mandated registration for clinical trials published in member journals, listed here.

    Publishing a study means having it appear in a peer-reviewed journal. Few people will ever look at a trial registry. Many more, including journalists, will read or hear about published studies. So, what gets published is important.

    Not everything gets published. Many studies have examined trial registration incompleteness and selective publishing of registered data. [Links galore: 1, 2, 3, 4, 5, 6, 7, 8]. Perhaps as many as half of trials for FDA-approved drugs remain unpublished even five years after approval. This is concerning, but what does it really mean? Does it imply bias? If so, does that bias differ by funding source (e.g., industry vs. non-industry)?

    Trial registry data can be changed. That weakens the de-biasing, pre-commitment role registration is supposed to play. But sometimes changes are reasonable. After all, if you haven’t done any analysis yet and you think of a better way to do it, it’d be dumb to just blindly keep going with your registered study. You should do it the right way, and you should change your registered approach. However, changing registry data after the study is done, e.g., to match what you did, is a lot more sketchy (or could be). All changes in ClinicalTrials.gov are stored, so one can try to infer whether it’s being gamed.

    A study examined changes in ClinicalTrials.gov registered data for 152 RCTs published in ICMJE journals between 13 September 2005 and 24 April 2008. It doesn’t make the registry look very good. The vast majority (123) of examined trials had changes in their registries.* The most commonly changed fields were primary outcome, secondary outcome, and sample size. The final registration entry for 40% and 34% of RCTs had missing secondary and primary outcome fields, respectively, though more than half of the missing data could be found in other fields. Already that’s a concern because it makes the registry hard to use if data are missing or in the wrong place. (I want to emphasize here that I’m not blaming investigators for this. Maybe they deserve the blame. But maybe the registry is also hard to use. I’ve never used it, so I cannot say.)

    The study found that registry and published data differed for most RCTs, including on key secondary outcomes (64% of RCTs), target sample size (78%), interventions (74%), exclusion criteria (51%), and primary outcome (39%). Eight RCTs had registry changes to primary or secondary outcomes after publication, six of which were industry sponsored. That’s concerning. But six or eight is a small number relative to all trials examined, so let’s not freak out.

    Another study looking at all ~90,000 ClinicalTrials.gov-registered interventional trials as of 25 October 2012 assessed the extent to which registry entries had primary outcome changes and when changes were made, stratified by study sponsor.* It found that almost one-third of registered trials had primary outcome changes, changes were more likely for industry-sponsored studies, and industry sponsorship was associated with changes made after study completion date. I think we should be at least a bit concerned about that. (Again, maybe there are perfectly reasonable explanations, but it warrants some concern.)
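
    To make that inference concrete: ClinicalTrials.gov archives every version of a record, so one can compare the date of each primary-outcome change to the study’s completion date. Here’s a minimal sketch of that check in Python (the record structure is invented for illustration; it is not the registry’s actual format):

        from datetime import date

        # Hypothetical version history: (version_date, primary_outcome) pairs
        history = [
            (date(2010, 1, 15), "HbA1c at 12 months"),
            (date(2012, 6, 1), "HbA1c at 6 months"),
        ]
        completion_date = date(2011, 12, 31)

        def outcome_changed_after_completion(history, completion_date):
            """Flag records whose primary outcome changed after study completion."""
            for (_, old_outcome), (when, new_outcome) in zip(history, history[1:]):
                if old_outcome != new_outcome and when > completion_date:
                    return True
            return False

        print(outcome_changed_after_completion(history, completion_date))  # True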

    What gets registered? When we’re talking about trials aimed at FDA approval, there are different types.* There are pre-clinical trials in which drugs are tested, but not in humans. Then there are several phases (I, II, III) of clinical trials that ramp up the number of humans in whom the drug is used and shift the relative emphasis from safety toward efficacy. (As you might imagine, safety is emphasized first.) Post-market trials (phase IV) look at longer-term effects from real-world use. Because trials cost money, it’s likely that drugs that make it to later trials tend to be more promising (i.e., are more likely to show positive effects).

    From a set of registered trials, only a subset of which are published in the literature, how does one assess publication bias? The easy way is to look at the subset of matched published and registered trials to see what registered findings reach the journals. Do they skew positive? The hard way seems impossible: What about studies that are registered but never published? Do those harbor disproportionately negative findings? We can’t really know, but there’s a clever way to infer an answer.

    If I’m not mistaken, pre-clinical trials are also called the NDA phase, for new drug application, which examine new molecular entities (NMEs). In the NDA phase, drug manufacturers are required to submit all studies to the FDA. I infer, from what I read, that this is not true of other phases. Therefore, the NDA (or pre-clinical?) phase offers a nice test. Which subset of results sent to the FDA get published? We might infer that the estimate applies to other trial phases, those for which we can’t see a full set of results.

    A study of all efficacy trials (N=164) for approved NDAs (N=33) for new molecular entities from 2001 to 2002 found that 78% were published. Those with outcomes favoring the tested drug were more likely to be published. Forty-seven percent of outcomes in the NDAs that did not favor the drug were not included in publications. Nine percent of conclusions changed (all in a direction more favorable to the drug) from the FDA review of the NDA to the paper. Score this as publication bias. And don’t blame journal editors or reviewers: the authors wrote that investigators told them studies weren’t published because they weren’t submitted to journals.
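
    In code, the comparison behind that finding is just a two-by-two tabulation (published vs. not, by outcome favorability) over the FDA-submitted trials. A sketch with invented counts, not the study’s data:

        # Hypothetical counts of FDA-submitted efficacy trials
        published = {"favorable": 45, "unfavorable": 10}
        unpublished = {"favorable": 5, "unfavorable": 15}

        for outcome in ("favorable", "unfavorable"):
            total = published[outcome] + unpublished[outcome]
            print(f"{outcome}: {published[outcome] / total:.0%} published")
        # A large gap between the two publication rates is the signature of
        # publication bias.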

    But is this an industry-driven bias? A Cochrane Collaboration review examined and meta-analyzed 48 published studies from 1948 (!!!) through August 2011 on the subject of whether industry-sponsored drug and device studies have more favorable outcomes, relative to non-industry ones. Industry-sponsored studies were more likely to report favorable results and fewer harms.

    This sounds like industry-sponsorship might produce a bias, but it could be that industry just tends to look at more favorable drugs, and does more late-phase trials.

    Another study looked at this. It examined 546 ClinicalTrials.gov-registered trials of anticholesteremics, antidepressants, antipsychotics, proton-pump inhibitors, and vasodilators conducted between 2000 and 2006 to assess the association of funding source with favorability of published outcomes. Industry-funded trials were less likely to be published (32% for industry vs. 56% for non-industry). Among the 362 published trials (66%), industry-sponsored ones were more likely to report positive outcomes (85% for industry-, 50% for government-, and 72% for nonprofit/non-federally-funded trials). Industry-funded trials were more likely to be phase 3 or 4, so maybe that explains the higher favorability of findings.

    Nope. Industry-funded outcomes for phase 1 and 2 trials were more favorable as well (see chart below).

    [Chart: favorable outcomes by funding source and trial phase]
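
    Stratification is the standard way to test a confounding story like that one: compare industry and non-industry positivity rates within each phase. A sketch, with invented counts:

        # Hypothetical positive-outcome counts: (sponsor, phase) -> (positive, total)
        results = {
            ("industry", "phase 1-2"): (34, 40),
            ("nonindustry", "phase 1-2"): (22, 40),
            ("industry", "phase 3-4"): (80, 90),
            ("nonindustry", "phase 3-4"): (50, 90),
        }
        for (sponsor, phase), (positive, n) in sorted(results.items()):
            print(f"{sponsor:12s} {phase}: {positive / n:.0%} positive")
        # If industry rates exceed nonindustry rates within each phase,
        # the phase mix can't explain the overall difference.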

    Another study, however, found no association of funding source with positive outcomes. It looked at 103 published RCTs on rheumatoid arthritis drugs from 2002-2003 and 2006-2007.

    A study looked at the extent to which ClinicalTrials.gov-registered studies were published.* Its sample of 677 studies drew from trials registered as of 2000 and completed by 2006. Just over half the trials were industry sponsored, with 18% government- and 29% nongovernment/nonindustry-sponsored. Industry-sponsored trials were less likely to be published than nonindustry/nongovernment ones (40% vs. 56%), but there was no statistically significant difference compared to government-sponsored trials. In fact, NIH-sponsored trials were only published 42% of the time.

    I think that’s worth emphasizing: We should be suspicious of all publication oddities and omissions, not just those associated with industry. A lot of NIH-sponsored findings—most of them—never see publication either.*

    * As Aaron has reminded me, not all ClinicalTrials.gov-registered studies are for drugs or devices. Many non-industry studies, for example, concern aspects of health care delivery that pose far less risk to patients. It may not make sense to analyze these alongside those for drugs and devices, which place patients at higher risk. It also may matter less if such studies change their registries, publish all their findings, or are even registered at all. In other words, given constraints on investigator resources, we might reasonably hold drug and device trials to higher standards than others.

    @afrakt

  • AcademyHealth: How comparative effectiveness research is viewed and used by policymakers

    My colleagues and I like to tell ourselves, if not others, that our research makes an impact by informing policymakers. Are we right? Read my latest AcademyHealth post for an answer.

    @afrakt

  • How does the economy affect demand for VA care?

    Edwin Wong and colleagues are on it. The first paragraph below pertains to outpatient care, the others to inpatient care:

    Poor economic conditions have been associated with an uptake in VA outpatient health services, particularly among veterans who are exempt from copayments and thus have a financial incentive to seek VA care (Wong & Liu, 2013; Wong et al., 2014). […]

    Among elderly patients using the VA system, local unemployment was not associated with the probability of being hospitalized in VA, FFS Medicare, or either system. However, sensitivity analyses suggested differences in the association between local unemployment and hospitalization probability according to whether patients were exempted from VA copayment requirements. For veterans subject to copayments, higher local unemployment was moderately associated with a greater probability of seeking inpatient care from VA. This positive association was accompanied by a negative, but nonsignificant unemployment rate marginal effect obtained from the Medicare inpatient model. The marginal effect for total hospitalizations was close to zero and not statistically significant. Taken together, these results are suggestive of modest substitution between Medicare and VA inpatient use attributable to higher local unemployment. VA may provide a more financially favorable option for some veterans who were subject to a $220 VA inpatient copayment compared with the $1,132 Medicare deductible in 2011. […]

    Collectively, these results suggest the substitution effect exhibited by veterans subject to copayments was not present among low-income copayment-exempt VA enrollees, and that these veterans may have delayed or forgone inpatient care because of economic reasons.

    @afrakt

  • Even without lemons, you can make lemonade

    Via Adrianna:

    [Image: no juice]

    @afrakt

  • Drug adherence and medical savings

    From Bruce Stuart et al.:

    Similar to previous studies,[*] we found that high adherence to [angiotensin-converting enzyme inhibitors/angiotensin receptor blockers, oral antidiabetic drugs], and statins by Medicare beneficiaries with diabetes was associated with lower medical costs and higher drug costs. In all cases, the savings on the medical side more than compensated for greater drug spending, although not all of the comparisons were statistically significant. The lack of consistently significant findings could well be due to a combination of small sample size and relatively short observation periods.
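
    The small-sample point can be made concrete with a quick power calculation. A sketch (the standardized effect size here is an assumption for illustration, not a figure from the study):

        from statsmodels.stats.power import TTestIndPower

        # Patients per arm needed to detect a small difference in costs
        # (standardized effect size 0.1) at 80% power, alpha = 0.05
        n = TTestIndPower().solve_power(effect_size=0.1, alpha=0.05, power=0.8)
        print(round(n))  # ~1571 per group: detecting modest savings takes large samples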

    See also this post.

    * Those studies are:

    • Sokol MC, McGuigan KA, Verbrugge RR, et al. Impact of medication adherence on hospitalization risk and healthcare cost. Med Care. 2005;43:521–530.
    • Salas M, Hughes D, Zuluaga A, et al. Costs of medication nonadherence in patients with diabetes mellitus: a systematic review and critical analysis of the literature. Value Health. 2009;19:915–922.
    • Lee WC, Balu S, Cobden D, et al. Prevalence and economic consequences of medication adherence in diabetes: a systematic literature review. Manag Care Interface. 2006;19:31–41.
    • Encinosa WE, Bernard D, Dor AQ. Does prescription drug adherence reduce hospitalization and costs? The case of diabetes. Adv Health Econ Health Serv Res. 2010;22:151–173.
    • Pelletier EM, Pawaskar M, Smith PJ, et al. Economic outcomes of exenatide vs. liraglutide in type 2 diabetes patients in the United States: results from a retrospective claims database analysis. J Med Econ. 2012;15:1039–1050.
    • Gibson TB, Song X, Alemayehu B, et al. Cost sharing, adherence, and health outcomes in patients with diabetes. Am J Manag Care. 2010;16:589–600.
    • Pawaskar MD, Camacho FT, Anderson RT, et al. Health care costs and medication adherence associated with initiation of insulin pen therapy in Medicaid-enrolled patients with type 2 diabetes: a retrospective database analysis. Clin Ther. 2007;29(pt 1):1294–1305.
    • Stuart B, Loh FL, Roberto P, et al. Increasing Medicare Part D enrollment in medication therapy management could improve health and lower costs. Health Aff (Millwood). 2013;32:1212–1220.
    • Stuart B, Davidoff A, Lopert R, et al. Does medication adherence lower Medicare spending among beneficiaries with diabetes? Health Serv Res. 2011;46:1180–1199.
    • Stuart B, Simoni-Wastila L, Zhao L, et al. Increased persistency in medication use by US Medicare beneficiaries with diabetes is associated with lower hospitalization rates and cost savings. Diabetes Care. 2009;32:647–649.
    • Zhao Y, Zabriski S, Bertram C. Association between statin adherence level, health care costs, and utilization. J Manag Care Pharm. 2014;20:703–713.

    @afrakt

  • Have insomnia? Consider the evidence before popping pills

    The following originally appeared on The Upshot (copyright 2015, The New York Times Company).

    One weekend afternoon a couple of years ago, while turning a page of the book I was reading to my daughters, I fell asleep. That’s when I knew it was time to do something about my insomnia.

    Data, not pills, was my path to relief.

    Insomnia is common. About 30 percent of adults report some symptoms of it, though less than half that figure have all symptoms. Not all insomniacs are severely debilitated zombies. Consistent sleeplessness that causes some daytime problems is all it takes to be considered an insomniac. Most function quite well, and the vast majority go untreated.

    I was one of the high-functioning insomniacs. In fact, part of my problem was that I relished the extra time awake to work. My résumé is full of accomplishments I owe, in part, to my insomnia. But it took a toll on my mood, as well as my ability to make it through a children’s book.

    Insomnia is worth curing. Though causality is hard to assess, chronic insomnia is associated with greater risk of anxiety, depression, hypertension, diabetes, accidents and pain. Not surprisingly, and my own experience notwithstanding, it is also associated with lower productivity at work. Patients who are successfully treated experience improved mood, and they feel healthier, function better and have fewer symptoms of depression.

    Which remedy would be best for me? Lunesta, Ambien, Restoril and other drugs are promised by a barrage of ads to deliver sleep to minds that resist it. Before I reached for the pills, I looked at the data.

    Specifically, for evidence-based guidance, I turned to comparative effectiveness research. That’s the study of the effects of one therapy against another therapy. This kind of head-to-head evaluation offers ideal data to help patients and clinicians make informed treatment decisions. As obvious as that seems, it’s not the norm. Most clinical drug trials, for instance, compare a drug with a placebo, because that’s all that’s required for F.D.A. approval. In recognition of this, in recent years more federal funding has become available for comparative effectiveness research.

    When it comes to insomnia, comparative effectiveness studies reveal that sleep medications aren’t the best bet for a cure, despite what the commercials say. Several clinical trials have found that they’re outperformed by cognitive behavioral therapy. C.B.T. for insomnia (or C.B.T.-I.) goes beyond the “sleep hygiene” most people know, though many don’t employ — like avoiding alcohol or caffeine near bedtime and reserving one’s bed for sleep (not reading or watching TV, for example). C.B.T. adds — through therapy visits or via self-guided treatments — sticking to a consistent wake time (even on weekends), relaxation techniques and learning to rid oneself of negative attitudes and thoughts about sleep.

    One randomized trial compared C.B.T. with the active ingredient in Restoril in patients 55 years and older, evaluating differences for up to two years. It found that C.B.T. led to larger and more durable improvements in sleep. Long-term, C.B.T. alone even outperformed the combination of C.B.T. and Restoril.

    Another trial focused on 25- to 64-year-olds found that C.B.T. outperformed Ambien alone. Adding Ambien to a C.B.T. regimen did not lead to further improvements. Yet another trial found that patients experienced greater relief from insomnia with C.B.T. than with the sleep drug zopiclone. Patients report that they prefer C.B.T. for insomnia over drug therapy.

    A systematic review of C.B.T. for insomnia, published in the Annals of Internal Medicine on Monday, quantifies how much relief it can provide. Combining data from 20 clinical trials, which included over 1,000 patients with chronic insomnia, the authors calculated sleep improvements after C.B.T. treatment, relative to no treatment. On average, treated patients fell asleep almost 20 minutes faster and were awake in the night almost half an hour less. The total amount of time that they were sleeping when in bed increased by nearly 10 percent. These results are similar to or better than improvements from many sleep drugs, and lasted longer.

    My experience is consistent with these averages. The C.B.T. treatment I received, through an online program recommended by my doctor, also included keeping careful track of how much sleep I got each night. This proved very helpful. It demonstrated progress — the nights in which I got only four or five hours of sleep became less common, and, on average, my nights of sleep lengthened by 30 minutes. My sleep log also helped me be more objective. Many nights I might have considered “bad” — and fretted over — were ones in which I got only one hour less sleep than my target of seven hours. Recognizing that’s not really so bad helped me relax, and relaxing helped me get more and better sleep.
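
    The log itself requires nothing fancy; a spreadsheet, or a few lines of code, will do. A minimal sketch (the numbers are illustrative, not my actual log):

        # Minimal sleep log: hours slept per night
        log = [6.5, 7.0, 5.0, 7.5, 6.0, 7.0, 6.5]
        target = 7.0

        average = sum(log) / len(log)
        truly_bad = sum(1 for hours in log if hours < target - 1)  # >1 hour short
        print(f"average: {average:.1f} h; seriously short nights: {truly_bad}/{len(log)}")
        # Seeing that most "bad" nights miss the target by only an hour is
        # itself part of the therapy.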

    Improvements like mine and those reported in the study bring sleep statistics for those with insomnia quite close to those without it. This further emphasizes the point that many insomniacs aren’t that different from normal sleepers. Many sleep fine most nights, but also have more frequent nights of insufficient sleep than normal sleepers would experience. A big part of the difference may be how insomniacs perceive their sleep performance and the negative messages they give themselves about their poor sleep and how it will affect their daily lives.

    C.B.T. practitioners learn that if you label a night of sleep “bad” and expect a bad day to follow a bad night of sleep, you’re more likely to get it, as well as more likely to be anxious the next time you attempt to sleep. In this way, unless exacerbated by physical causes — like sleep apnea or restless legs syndrome — insomnia is a condition of the mind that then infects the body. Like a patch on faulty software, C.B.T. reorients one’s thinking and behavior so that sleep is first thought to be, and then soon after actually is, a more positive experience. Drugs, on the other hand, just treat insomniacs’ symptoms without addressing the underlying cause, which is why the relief they provide may be less durable.

    For me, and many patients, C.B.T. works. And as studies show, it works better than drugs. That moment with my children, a couple of years ago, was the last time I fell asleep reading to them.

    @afrakt

  • Health care gobbling up resources for other government services

    Here’s a chart for California that complements one I’ve posted for Massachusetts. It tells the same story: health care is drawing an increasing proportion of resources, leaving less for other government functions.

    [Chart: CA budget]

    The chart is from a California Common Sense report.

    The growing proportion of the California state budget devoted to health care is even higher than the “Health Care Services” bars of this chart suggest. A great deal of “Retirement Benefits” growth is due to health care too.

    Annual state contributions to retirement benefits – pensions and retiree health care – have increased $1.5 billion, or 24.8% […]. In particular, annual retiree health care payments have increased $682 million, and thus account for nearly half of the retirement cost growth. Furthermore, among annual retirement costs to the state, health care for retired employees and their beneficiaries grew the most – 61.2%. By comparison, annual pension contributions increased $790 million, or 16.4%.
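
    A quick consistency check on the quoted figures: the two components should sum to the $1.5 billion total, with retiree health care close to half.

        # Annual increases quoted from the California Common Sense report
        retiree_health = 682e6             # retiree health care payments
        pensions = 790e6                   # pension contributions
        total = retiree_health + pensions  # ~$1.47 billion, i.e., "$1.5 billion"
        print(f"total: ${total / 1e9:.2f}B; retiree health share: {retiree_health / total:.0%}")
        # Retiree health is ~46% of the growth: "nearly half"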

    We should not overlook the likelihood that many of the services losing budget in relative terms also contribute to health, and may do so more efficiently than some direct health care spending: education, social services, and transportation.

    @afrakt

  • Do NNTs work? A deeper dive into the literature

    Hilda Bastian and I previously discussed the extent to which people can comprehend the statistic number needed to treat (NNT). Hilda, who knows this literature far better than I do, helpfully offered a follow-up post that included a lot more studies. At my request, research assistant Jennifer Gilbert looked through all those studies and summarized them in this PDF.
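
    For readers new to the statistic: the NNT is the reciprocal of the absolute risk reduction (ARR). A minimal worked example (the event rates are invented for illustration):

        # NNT = 1 / absolute risk reduction (ARR)
        control_rate = 0.12  # 12% of untreated patients have the event
        treated_rate = 0.08  # 8% of treated patients do
        arr = control_rate - treated_rate  # 0.04
        nnt = 1 / arr
        print(round(nnt))  # 25: treat 25 patients to prevent one event, on average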

    The chart below summarizes findings based on studies suggested in a post by Hilda Bastian and other studies discussed in prior posts by Austin and Hilda. Depending on how one counts three studies (Christensen 2003, Gyrd-Hansen 2011, and Cuite 2008—see the footnotes of the chart), four to seven studies qualify; of those, two found NNTs no more difficult to comprehend. [See the right-most column of the chart.]

    [Chart: NNT comprehension findings by study]

    This chart is included in Jennifer’s review, where you’ll find additional detail and hyperlinks to all studies.

    Note that in my prior posts on this topic, based on systematic reviews by Akl and Zipkin, I had overlooked the paper by Cuite. That’s because it wasn’t in Akl, and Zipkin didn’t cite it among the NNT comprehension studies, as documented in Jennifer’s review.

    From all this, here are my conclusions:

    • Many people, myself included, are fond of NNTs. We find them useful for making a point about the surprisingly low rate at which some therapies are helpful.
    • That is different from claiming they’re easily understood for the purposes of medical decision making. That’s an empirical question for which we should let research be our guide.
    • The research base, to the extent I know it (mostly from Hilda’s posts), suggests NNTs pose comprehension challenges. But, I must acknowledge, it doesn’t uniformly point in that direction.
    • I would welcome a systematic review that includes all these studies and any others that make the cut. In that review, the quality of the evidence, not just number of studies, should be weighed. Also, any meaningful heterogeneity should be examined. Such a review is a substantial undertaking, and not one I have the resources to do.

    (Aside: To those of you who may be disappointed in this post because it didn’t mention that treatments can cause harm too, look here.)

    @afrakt
