• Healthcare Triage News is here!!!

    Hard as it is to imagine, Healthcare Triage is almost one year old. We decided to celebrate by giving you… more Healthcare Triage!

    Every Monday will be a traditional episode, chock full of the data and details you’ve come to expect and love (this Monday’s episode is about gluten). On Friday, though, we will be releasing episodes of Healthcare Triage News, where we’ll cover topical stories of the week in a shorter, more focused way. This week, it’s all about Ebola and salt:

    Share the news! Watch the episodes! Thanks for your continued support!

    @aaronecarroll

     
  • Everyone wants test results. No one knows what they mean.

    From the annals of studies that make me sigh, “Numeracy and Literacy Independently Predict Patients’ Ability to Identify Out-of-Range Test Results”:

    Background: Increasing numbers of patients have direct access to laboratory test results outside of clinical consultations. This offers increased opportunities for both self-management of chronic conditions and advance preparation for clinic visits if patients are able to identify test results that are outside the reference ranges.

    Objective: Our objective was to assess whether adults can identify laboratory blood test values outside reference ranges when presented in a format similar to some current patient portals implemented within electronic health record (EHR) systems.

    Methods: In an Internet-administered survey, adults aged 40-70 years, approximately half with diabetes, were asked to imagine that they had type 2 diabetes. They were shown laboratory test results displayed in a standard tabular format. We randomized hemoglobin A1c values to be slightly (7.1%) or moderately (8.4%) outside the reference range and randomized other test results to be within or outside their reference ranges (ie, multiple deviations). We assessed (1) whether respondents identified the hemoglobin A1c level as outside the reference range, (2) how respondents rated glycemic control, and (3) whether they would call their doctor. We also measured numeracy and health literacy.

    These researchers got together more than 1800 adults, about half of whom had diabetes. They showed them lab results in which the A1c value was randomized to be either slightly or moderately high, mixed in with other randomized lab values. Then they asked whether the A1c level was outside the normal range, what that said about glucose control, and whether they would call their doctor out of concern.
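
    For concreteness, here’s a minimal simulation of the vignette design described above. Everything in it (the response probabilities, the variable names, the cutoff used) is a made-up illustration, not the study’s data or code.

    ```python
    # Illustrative simulation of the randomized-vignette design (NOT the study's data).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 1800  # roughly the study's sample size

    df = pd.DataFrame({
        # Displayed A1c randomized to be slightly (7.1%) or moderately (8.4%) high
        "a1c_shown": rng.choice([7.1, 8.4], size=n),
        # About half of respondents have diabetes
        "has_diabetes": rng.binomial(1, 0.5, size=n),
        # Crude stand-in for numeracy/literacy (1 = higher)
        "high_numeracy": rng.binomial(1, 0.5, size=n),
    })

    # Hypothetical response model: higher numeracy and a more extreme displayed
    # value make correct identification more likely.
    p_correct = 0.35 + 0.20 * df["high_numeracy"] + 0.10 * (df["a1c_shown"] > 8.0)
    df["identified_high"] = rng.binomial(1, p_correct)

    # Share who flag the A1c as out of range, by diabetes status
    print(df.groupby("has_diabetes")["identified_high"].mean())
    ```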

    Just over half of participants correctly realized that the A1c was out of the normal range. Even those who HAD DIABETES only realized this 56% of the time. A picture is worth a thousand words:

    Predicted probabilities that participants with and without diabetes would correctly identify hemoglobin A1c test results as outside the standard range by lower versus higher literacy and numeracy levels.

    Keep on telling me how giving patients more data is the silver bullet for fixing health care.

    @aaronecarroll

    UPDATE: Dan Diamond reports on how Amazon is getting into the “give patients more data” game. Nick Bagley accurately notes my response:

    [reaction GIF]

     
  • When are wellness programs illegal?

    In what appears to be the first major legal volley against wellness programs, the Equal Employment Opportunity Commission (EEOC) filed suit on Wednesday against a Wisconsin company for allegedly firing a worker because she refused to undergo a health assessment.

    According to the EEOC, Orion Energy Systems instituted a wellness program in 2009 and asked its employees to undertake a “health risk assessment.” That’s standard practice. Employers say (not implausibly) that the assessments provide an opportunity for employees to take stock of their health needs. But some workers fear (also not implausibly) that their employers are intruding on their privacy.

    Employers don’t usually insist that their workers sign up for a wellness program. Instead, workers are given an incentive to participate—the average incentive is about $50 a month. Not Orion, though. For workers who declined to take the assessment, Orion said that it would no longer cover any of their insurance premiums. Not a dime.

    Nonetheless, one Orion employee—Wendy Schobert—still opted out. To keep her insurance, she was going to have to pay more than $400 a month, as well as an extra $50 monthly penalty. But about a month later, Schobert was fired, allegedly in retaliation for her refusal to take the assessment.

    The EEOC filed suit against Orion on her behalf. The core of the EEOC’s complaint is the claim that requiring Schobert to participate in the wellness program, and then firing her for refusing, violated the Americans with Disabilities Act. The ADA prohibits businesses from discriminating against the disabled, and that includes subjecting workers to “medical examinations and inquiries.”

    But not all medical examinations or inquiries are prohibited. Health assessments are thought to pass muster under the ADA so long as they’re voluntary. The trouble with Orion’s plan was that it wasn’t really voluntary. There was too much money at stake to leave Schobert with a meaningful choice.

    When do financial incentives become so large that wellness programs are no longer voluntary? It can’t be that incentives are always okay but penalties are verboten. Any penalty (“I’ll dock your paycheck $100 a month if you don’t sign up”) can be recast as an incentive (“You’ll get an extra $100 a month if you do sign up”). The question has to be whether the payment in question leaves workers with a real choice. If not, the wellness program is effectively mandatory, which would violate the ADA.

    Where do you draw the line? The Affordable Care Act allows employers to vary premiums by as much as 30% in connection with a wellness program. Is a plan that varies premiums by 30% truly voluntary? If Orion had adopted a 30% plan, Schobert’s refusal to participate would have cost her about $1,500 over the year—much more if she had a family plan. Might such a plan violate the ADA, even if it was authorized by the ACA? The EEOC is supposed to issue guidance on this question, but it hasn’t yet.

    Because Orion’s wellness program was so draconian, the EEOC’s lawsuit probably isn’t a harbinger of the end of wellness programs. The case nonetheless underscores just how tricky it is to get wellness programs right. As Austin and Aaron have said time and again, wellness programs, at least as they’re currently structured, don’t seem to save money or improve health.

    One reason might be that current financial incentives are too modest to encourage meaningful behavioral change. If that’s the concern, however, businesses may be in a bind. Ratcheting up the incentives might be the only way to make their wellness programs work. But ratcheting up the incentives might also violate the ADA. In other words, it’s possible that wellness programs can either be effective or legal—but not both.

    @nicholas_bagley

     
  • Insurance and HPV vaccination

    From the American Journal of Public Health, “Insurance Continuity and Human Papillomavirus Vaccine Uptake in Oregon and California Federally Qualified Health Centers”:

    Objectives. We examined the association between insurance continuity and human papillomavirus (HPV) vaccine uptake in a network of federally qualified health clinics (FQHCs).

    Methods. We analyzed retrospective electronic health record data for females, aged 9–26 years in 2008 through 2010. Based on electronic health record insurance coverage information, patients were categorized by percent of time insured during the study period (0%, 1%–32%, 33%–65%, 66%–99%, or 100%). We used bilevel multivariable Poisson regression to compare vaccine-initiation prevalence between insurance groups, stratified by race/ethnicity and age. We also examined vaccine series completion among initiators who had at least 12 months to complete all 3 doses.

    This study looked at insurance continuity and its relationship to HPV vaccination in a network of federally qualified health clinics. Basically, they wanted to see if time spent uninsured was related to a child missing the HPV vaccine.

    But here’s the thing. If your child is Medicaid-eligible, uninsured, or underinsured (meaning that you have insurance, but it doesn’t cover vaccines), you can still get vaccines free of charge through the Vaccines for Children program. So, theoretically, insurance shouldn’t matter.

    Of course, it turned out that insurance did matter. Kids 13 and older were significantly less likely to get the HPV vaccine if they were insured for less than two-thirds of the time.
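
    As a rough sketch of what the grouping and regression in the Methods excerpt might look like in code, here is a simplified, single-level version with hypothetical data. The published analysis was a bilevel multivariable model (accounting for clinic-level clustering and covariates) that this toy example omits.

    ```python
    # Simplified sketch: coverage-continuity groups and a Poisson model for
    # HPV vaccine initiation (hypothetical data, NOT the published bilevel model).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 5000

    # Hypothetical percent of the study period each patient was insured
    pct_insured = rng.choice([0, 15, 50, 80, 100], size=n, p=[0.1, 0.2, 0.3, 0.3, 0.1])
    df = pd.DataFrame({
        "pct_insured": pct_insured,
        # Hypothetical relationship: more continuous coverage -> more initiation
        "initiated": rng.binomial(1, 0.2 + 0.003 * pct_insured),
    })

    # Categorize percent of time insured as in the study: 0%, 1-32%, 33-65%, 66-99%, 100%
    df["coverage_group"] = pd.cut(
        df["pct_insured"],
        bins=[-0.1, 0, 32, 65, 99, 100],
        labels=["0%", "1-32%", "33-65%", "66-99%", "100%"],
    )

    # Poisson regression of a binary outcome approximates prevalence ratios
    model = smf.glm("initiated ~ C(coverage_group)", data=df,
                    family=sm.families.Poisson()).fit()
    print(np.exp(model.params))  # prevalence ratios relative to the 0% group
    ```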

    I hear all the time from people that “insurance doesn’t matter”. In this case, even I agree – it shouldn’t. But it does. The way we’ve set up this crazy health care system, it just does.

    @aaronecarroll

     
  • Please let us know if you see popup ads on TIE


    I saw a popup ad on TIE today (see below). This is unacceptable. We do not allow advertising here. But it’s happened before that some third party we rely on (e.g., for traffic monitoring) pushes popups our way. This is bad behavior and we won’t tolerate it. We’ll dump that third party once we figure out who it is. My only other worry is that it’s my computer that’s the problem, not the site. (I will run a malware scan.)

    Have you seen an ad here? If so, let me know in the comments, by email, or on Twitter. Please grab a screenshot and share it as well, if you can.

    [screenshot of the popup ad]

    @afrakt

     
  • Methods: RCT’s simplicity advantage

    In “Assessing the case for social experiments,” James Heckman and Jeffrey Smith warn against mistaking the apparent simplicity of randomized controlled trials for actual simplicity. RCTs are not so simple when the assumptions on which they rely are violated. (How often those assumptions are violated, and the extent to which that threatens the validity of findings, is not so clear, but it’s plausible that they are violated, to some extent, in a nontrivial proportion of cases.)

    In an experiment, the counterfactual is represented by the outcomes of a control group generated through the random denial of services to persons who would ordinarily be participants. [... T]wo assumptions must hold. The first assumption requires that randomization not alter the process of selection into the program, so that those who participate during an experiment do not differ from those who would have participated in the absence of an experiment. Put simply, there must be no “randomization bias.” Under the alternative assumption that the impact of the program is the same for everyone (the conventional common-effect model), the assumption of no randomization bias becomes unnecessary, because the mean impact of treatment on participants is then the same for persons participating in the presence and in the absence of an experiment.

    The second assumption is that members of the experimental control group cannot obtain close substitutes for the treatment elsewhere. That is, there is no “substitution bias.” [...]

    It has been argued that experimental evidence on program effectiveness is easier for politicians and policymakers to understand. This argument mistakes apparent for real simplicity. In the presence of randomization bias or substitution bias, the meaning of an experimental impact estimate would be just as difficult to interpret honestly in front of a congressional committee as any nonexperimental study. The hard fact is that some evaluation problems have intrinsic levels of difficulty that render them incapable of expression in sound bites. Delegated expertise must therefore play a role in the formation of public policy in these areas, just as it already does in many other fields. It would be foolish to argue for readily understood but incompetent studies, whether they are experimental or not.

    Moreover, if the preferences and mental capacities of politicians are to guide the selection of an evaluation methodology, then analysts should probably rely on easily understood and still widely used before-after comparisons of the outcomes of program participants. Such comparisons are simpler to explain than experiments, because they require no discussions of selection bias and the rationale for a control group. Furthermore, before-after comparisons are cheaper than experiments. They also have the advantage, or disadvantage, depending on one’s political perspective, that they are more likely to yield positive impact estimates (at least in the case of employment and training programs) due to the well-known preprogram dip in mean earnings for participants in these programs.
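
    To make the substitution-bias point in the excerpt concrete, here is a toy simulation. The effect size and substitution rate are purely hypothetical; the point is only that when control-group members can obtain a close substitute elsewhere, the experimental contrast understates the program’s effect relative to receiving nothing.

    ```python
    # Toy illustration of substitution bias (hypothetical numbers).
    # The true effect of the program vs. nothing is 10, but some controls
    # obtain a similar service elsewhere, shrinking the measured contrast.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    true_effect = 10.0
    substitution_rate = 0.4  # share of controls who find a close substitute

    baseline = rng.normal(50, 10, size=n)
    treated = baseline + true_effect
    gets_substitute = rng.random(n) < substitution_rate
    controls = baseline + np.where(gets_substitute, true_effect, 0.0)

    experimental_estimate = treated.mean() - controls.mean()
    print(f"true effect vs. no service:              {true_effect:.1f}")
    print(f"experimental estimate with substitution: {experimental_estimate:.1f}")
    # Roughly 10 * (1 - 0.4) = 6: the experiment recovers the effect of the
    # program relative to whatever controls actually do, not relative to nothing.
    ```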

    In fact, I frequently see policy arguments made with before-after type evidence. A familiar theme these days is that anything that’s happened in health care since March 2010 is due to Obamacare. Nothing could be more preposterous,* yet this is all a politician needs for a talking point.

    * Well that’s not true. It’d be more preposterous to say that anything that’s happened in health care since March 2010 is due to the 2020 presidential election. That would not fly as a talking point. Not yet, anyway.
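
    The quoted point about before-after comparisons and the “preprogram dip” is also easy to illustrate with made-up numbers: if people enroll right after a temporary earnings shock, the rebound looks like a program effect even when the true effect is zero.

    ```python
    # Toy illustration of the preprogram dip biasing a before-after comparison
    # (hypothetical numbers; the true program effect here is zero).
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000

    long_run_earnings = rng.normal(30_000, 5_000, size=n)
    dip = rng.normal(-4_000, 1_000, size=n)  # temporary shock that prompts enrollment

    before = long_run_earnings + dip  # earnings measured just before enrolling
    after = long_run_earnings         # earnings rebound to normal; no true effect

    before_after_estimate = after.mean() - before.mean()
    print(f"before-after 'impact' estimate: {before_after_estimate:,.0f}")
    # Prints roughly +4,000 even though the program did nothing.
    ```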

    @afrakt

     
  • What we fight about when we fight about Halbig.

    Austin and I had an exchange a couple of weeks ago where he expressed frustration at all the bickering about Halbig. Some people think the D.C. Circuit got it right, others think it got it wrong. But what does it matter? Halbig isn’t going to be resolved over Twitter. It’ll be resolved in the courts. Isn’t the case right if the courts say it’s right and wrong if they say it’s wrong?

    I posed the problem to my colleague and friend Scott Hershovitz, a legal philosopher. As a first cut, he cautioned me against confusing

    the relationship between finality and infallibility. The first doesn’t imply the second. An umpire may have the final call as to whether a pitch is a ball or a strike—there’s no one to appeal to—but he can get it wrong and call a strike a ball. Or a ball a strike. The game goes on as if what he said was right. But that doesn’t mean that it was right. It just means that he’s got the authority to decide how the pitch will be treated for the rest of the game.

    What’s true for baseball is true for law. I can measure the correctness of a court’s decision with reference to a set of rules, both formal and informal, about how cases are supposed to be decided. When we fight about Halbig, we’re fighting in part about what the application of those rules tells us about the meaning of the ACA.

    But only in part. The baseball analogy breaks down when you’re talking about the Supreme Court, which not only applies the law, but also says what the law is. I’m convinced, for example, that the D.C. Circuit panel botched Halbig. But the Supreme Court might someday disagree with me. If it does, I’m sure it will have made an enormous mistake.

    Yet how can that be? Chief Justice Marshall wrote in Marbury v. Madison that “[i]t is emphatically the province and duty of the judicial department to say what the law is.” If the law is what the Supreme Court says it is, then how could the Court even get something wrong? Am I just confused?

    Back in the 1980s, Richard Lempert wrote a paper (gated) grappling with just this problem. Set aside those rare cases where the Supreme Court commits a logical fallacy or issues a decision so goofy that it can’t be reconciled with any plausible view about the law. Most of the time, Lempert thinks, when we say the Supreme Court got a case wrong, we’re making an argument about the rules the Court should have used to decide the case. Those rules can properly be criticized if they clash with widely shared ethical commitments, whether those commitments are to popular sovereignty, to legal tradition, to basic fairness, or whatever.

    As Lempert explains, “the mission of legal criticism … is to integrate … legal norms (which themselves have ethical content) with external ethical ones. Such [criticism] seeks to define what law is by identifying what law aspires to be.” On Lempert’s view, saying that the Supreme Court erred is really tantamount to saying that it picked legal rules that dishonor certain moral, ethical, or political values without sufficient justification—that it could’ve better served those values with a different set of rules.

    So when I argue that the D.C. Circuit got Halbig wrong, I’m not just saying that it did a bad job applying the accepted rules of statutory construction (although I think it did). I’m also saying that the mode of interpretation it embraced disserves the whole “government by the people” thing: it’s too woodenly literal, insensitive to statutory context, and indifferent to what Congress actually meant to accomplish. The challengers appeal to the same value, arguing that strict adherence to statutory text will best capture what Congress wanted because courts are too easily misled by other evidence of legislative purpose.

    The fight in Halbig is thus a battle about how best to honor the political commitment to representative democracy. That’s a fight worth having in public—and even on Twitter—because the courts have been entrusted to select rules that, among other things, advance the cause of democracy. If the courts pick legal rules that produce outcomes that clash with that commitment—if, in other words, they get it wrong—it’s up to us to say so. Only by holding the courts to account can we hope to get the judges we deserve.

    @nicholas_bagley

     
  • The overscreening never seems to end

    From JAMA Internal Medicine, “Cancer Screening Rates in Individuals With Different Life Expectancies”:

    Importance: Routine cancer screening has unproven net benefit for patients with limited life expectancy.

    Objective: To examine the patterns of prostate, breast, cervical, and colorectal cancer screening in the United States in individuals with different life expectancies.

    Design, Setting, and Participants: Data from the population-based National Health Interview Survey (NHIS) from 2000 through 2010 were used and included 27 404 participants aged 65 years or older. Using a validated mortality index specific for NHIS, participants were grouped into those with low (<25%), intermediate (25%-49%), high (50%-74%), and very high (≥75%) risks of 9-year mortality.

    Main Outcomes and Measures: Rates of prostate, breast, cervical, and colorectal cancer screening.

    So let’s start with this idea: if you have a limited life expectancy, or a short time to live, then screening for some diseases really doesn’t make much sense. If cancers have a very high 10-year survival rate, then it doesn’t really pay to do much screening if you have less than 10 years to live.

    This study looked at people in the National Health Interview Survey, and grouped them into risks of dying in the next 9 years. They did this using a validated mortality index designed specifically for this survey.
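
    Structurally, that grouping step looks something like the sketch below. The data and risk scores here are hypothetical; the actual study used a validated NHIS-specific mortality index and survey weights, neither of which is reproduced.

    ```python
    # Illustrative grouping into 9-year mortality-risk bands (hypothetical data).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    n = 27_404  # the NHIS analytic sample size

    df = pd.DataFrame({
        # Stand-in for the predicted 9-year mortality probability from the index
        "mortality_risk": rng.uniform(0, 1, size=n),
        # Hypothetical indicator for recent cancer screening
        "screened": rng.binomial(1, 0.5, size=n),
    })

    df["risk_group"] = pd.cut(
        df["mortality_risk"],
        bins=[0, 0.25, 0.50, 0.75, 1.0],
        labels=["low (<25%)", "intermediate (25-49%)", "high (50-74%)", "very high (>=75%)"],
        right=False,  # left-inclusive, so a 25% risk falls in the intermediate band
    )

    print(df.groupby("risk_group")["screened"].mean())
    ```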

    Patients with a very high 9-year mortality risk were screened for cancer quite often. Women who had had a hysterectomy were screened with Pap smears between 34% and 56% of the time within the last 3 years. Men were screened for prostate cancer at a rate of 55%. Even people who had a very high 5-year mortality risk were screened at high rates, as seen in this figure:

    [Figure from the study: cancer screening rates by mortality-risk group]

    Bottom line is that we’re screening a huge number of people who are incredibly unlikely to receive a benefit. Why? It costs a ton of money, and it can lead to harm.

    @aaronecarroll

     

     
  • The quality of Medicare Advantage

    The following originally appeared on The Upshot (copyright 2014, The New York Times Company).

    Medicare Advantage plans — private plans that serve as alternatives to the traditional, public program for those who qualify for it — underperform traditional Medicare in one respect: They cost 6 percent more.

    But they outperform traditional Medicare in another way: They offer higher quality. That’s according to research summarized recently by the Harvard health economists Joseph Newhouse and Thomas McGuire, and it raises a difficult question: Is the extra quality worth the extra cost?

    It used to be easier to assess the value of Medicare Advantage. In the early 2000s, Medicare Advantage plans also cost taxpayers more than traditional Medicare. It also seemed that they provided poorer quality, making the case against Medicare Advantage easy. It was a bad deal.

    At that time, Medicare beneficiaries could switch between a Medicare Advantage plan and traditional Medicare each month. (Now, beneficiaries are generally locked into choices for all or most of a year.) In that setting, the Medicare Payment Advisory Commission (MedPAC) found that relatively healthier beneficiaries were switching into Medicare Advantage and relatively sicker ones were switching out.

    This suggested that Medicare Advantage didn’t provide the type of coverage or the access to services that unhealthier beneficiaries wanted or needed. Since the point of insurance is to pay for needed care when one is sick, it was tempting to condemn the program as having poor quality and failing to fulfill a basic requirement of coverage.

    But things have changed. Mr. Newhouse and Mr. McGuire show, for example, that by 2006-2007, health differences between beneficiaries in Medicare Advantage and those in traditional Medicare had narrowed. About the same proportion of beneficiaries in Medicare Advantage as in traditional Medicare rated their health as fair or poor. This suggests that sicker beneficiaries were not switching out of Medicare Advantage and healthier ones were not switching in to the extent they had been in earlier years.

    Also, in contrast to studies in the 1990s, more recent work finds that Medicare Advantage is superior to traditional Medicare on a variety of quality measures. For example, according to a paper in Health Affairs by John Ayanian and colleagues, women enrolled in a Medicare Advantage H.M.O. are more likely to receive mammography screenings; those with diabetes are more likely to receive blood sugar testing and retinal exams; and those with diabetes or cardiovascular disease are more likely to receive cholesterol testing.

    That Health Affairs paper also found that H.M.O. enrollees are more likely to receive flu and pneumonia vaccinations and about as likely to rate their personal doctor and specialists highly.

    There are reasons Medicare Advantage plans might promote higher-quality care. So long as beneficiaries don’t switch among plans too rapidly (and the evidence is that once they select a plan, they tend to stick with it), plans have a financial incentive to keep their enrollees healthy, incurring less downstream cost. It’s possible, therefore, that they may offer incentives to providers to perform preventive services.

    Moreover, in contrast to traditional Medicare, which must reimburse any provider willing to see beneficiaries enrolled in the program, Medicare Advantage plans establish networks of providers. This permits them, if they choose, to disproportionately exclude lower-quality doctors, ones who do not provide preventive services frequently enough, for example.

    Contemplating these more recent findings on quality alongside the higher taxpayer cost of Medicare Advantage plans invites some cognitive dissonance. On the one hand, we shouldn’t pay more than we need to in order to provide the Medicare benefit; we should demand that taxpayer-financed benefits be provided as efficiently as possible. Medicare Advantage doesn’t look so good from this perspective.

    On the other hand, we want Medicare beneficiaries — which we all hope to be someday, if we’re not already — to receive the highest quality of care. Here, as far as we know from research to date, Medicare Advantage shines, at least relative to traditional Medicare.

    Is Medicare Advantage worth its extra cost? A decade ago when quality appeared poor, the answer was easy: No. Today one must think harder and weigh costs against program benefits, including its higher quality. The research base is still too thin to provide an objective answer. Mr. Newhouse and Mr. McGuire hedge but lean favorably toward Medicare Advantage, saying cuts in its “plan payments may be shortsighted.”

    ***

    Here’s a bonus chart, not provided in the original post, of some of the quality measure results described in the post.

    [Chart: Medicare Advantage vs. traditional Medicare performance on selected quality measures]

    @afrakt

     
  • The history of the politics and abuse of methodology


    About my post on RCT’s gold-standard reputation, below is the text of an email from a reader who wishes to remain anonymous. I’m posting it not because of the compliments (I do like them, though) but because I am grateful for the final two paragraphs on the history of the politics and abuse of methodology, about which I know very little.

    The comments are open for one week. Chime in if you know any relevant history. Bring the dirt! References welcome.

    I think when thinking about the role RCTs have in medicine you’re dead-on when saying the conceptual simplicity is really, really important. The people who read economics journals almost all have major quant training. Doctors are supposed to understand medical journals and most of them have very little.

    I’d bring up 2 related points.

    The first is FDA. B/c medications must be approved with 2 pivotal trials, we’re all used to seeing RCTs regularly and seeing them as literally the government’s official imprimatur of success.

    The second is marketing. In the ’90s pharma figured out how to use RCTs to their advantage. Design massive trials in highly-selected populations, don’t look hard for side effects, and don’t publish the negative trials. If p<0.05, market to everyone. Gold standard blockbuster! For example, there’s reason to worry if SSRIs help much of anyone. http://www.nejm.org/doi/full/10.1056/NEJMsa065779

    The third, supporting your contention, is history. In the ’90s there were really 3 schools fighting over how clinical data should be used in clinical practice. At Yale, Alvan Feinstein wanted a very detail-oriented, methods-based clinical epidemiology. David Eddy instead envisioned systems of care involving RCTs, decision analyses, and decision support. Finally at McMaster, Sackett, Guyatt, etc, developed a very simple view of the evidence hierarchy. McMaster won. Their success in marketing a simple approach toward clinicians with books, curricula, and doctor-focused series in JAMA was central to that.
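
    The reader’s second point, about selective publication, is simple to demonstrate with a quick simulation (hypothetical numbers): if only trials that reach p < 0.05 are published, the published literature overstates the drug’s benefit.

    ```python
    # Toy demonstration: publishing only p < 0.05 trials inflates the apparent effect.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    true_effect = 0.1   # small true benefit, in standardized units
    n_per_arm = 100
    n_trials = 2000

    all_effects, published_effects = [], []
    for _ in range(n_trials):
        drug = rng.normal(true_effect, 1.0, size=n_per_arm)
        placebo = rng.normal(0.0, 1.0, size=n_per_arm)
        estimate = drug.mean() - placebo.mean()
        _, p = stats.ttest_ind(drug, placebo)
        all_effects.append(estimate)
        if p < 0.05 and estimate > 0:  # only "positive" trials see the light of day
            published_effects.append(estimate)

    print(f"true effect:                        {true_effect:.2f}")
    print(f"mean effect, all trials:            {np.mean(all_effects):.2f}")
    print(f"mean effect, published trials only: {np.mean(published_effects):.2f}")
    ```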

    @afrakt
