• Why Clinical Trialists Fail to Publish Completed RCTs

    Sometimes clinical trialists collect data but fail to publish the results. I know, because I am one of these bad trialists.

    It happens remarkably often. Hiroki Saito and Christopher Gill randomly selected 400 RCTs registered at the ClinicalTrials.gov website. They found that

    Among the 400 clinical trials, 118 (29.5%) failed to achieve PDOR [public disclosure of results] within four years of completion. The median day from study completion to PDOR among 282 studies (70.5%) that achieved PDOR was 602 days (mean 647 days, SD 454 days).

    Christopher Jones reports similar results here. The non-publication of a significant proportion of trials skews the evidence base, particularly if failed trials are less likely to be published.

    Failure to publish is also a significant ethical problem. Jones notes that

    The non-publication of trial data also violates an ethical obligation that investigators have towards study participants. When trial data remain unpublished, the societal benefit that may have motivated someone to enroll in a study remains unrealized.

    If it’s a publish or perish world, why do so many fail to publish? Roberta Scherer and her colleagues in the Journal of Clinical Epidemiology (see also here) did a systematic review of the reasons trialists give for not publishing their results. Her findings were even more discouraging than Saito and Gill’s.

    Results
    The mean full publication rate was 55.9% (95% CI, 54.8% to 56.9%) for 24 of 27 eligible reports providing this information, and 73.0% (95% CI, 71.2% to 74.7%) for 7 reports of abstracts describing clinical trials. 24 studies itemized 1,831 reasons for non-publication, and 6 itemized 428 reasons considered the most important reason. ‘Lack of time’ was the most frequently reported reason (weighted average = 30.2% (95% CI, 27.9% to 32.4%)) and the most important reason (weighted average = 38.4% (95% CI, 33.7% to 43.2%)).

    [Chart: reasons researchers give for not publishing]

    I love this conclusion:

    Conclusions
    Across medical specialties, the main reasons for not subsequently publishing an abstract in full lies with factors related to the abstract author rather than with journals. (emphasis added)

    It’s the authors, not the journals. “Lack of time” is a completely unsatisfying explanation: no one has more or less than 24 hours in a day. The trialists who did not publish had other priorities for their time.

    But if large groups of trialists have the wrong priorities, we should ask whether the trialists are in situations that reinforce the wrong priorities. Here’s how it looks to me.

    Academic medical centers (AMCs) care only about the revenue you generate, either from patient care or from winning grant competitions. This has two adverse effects. On the one hand, many AMCs steal time from clinicians by requiring them to see patients on time that is actually paid for on grants. On the other hand, AMCs do not care much about publishing. They care some because publishing more will help you win grants. But that’s the only reason they care. If you can get another grant without publishing anything from the last one, they are happy. It’s not really a publish or perish world. You can publish a lot and still perish.

    Grants are underbudgeted, and within grant budgets, resources for data analysis and writing are notoriously scarce. Grant writers propose to spend the last six months of their funding on analysis and writing. This is a joke, but grant review panels routinely look the other way, because everyone is telling the same joke. People also make optimistic assumptions about how quickly they can recruit patients and how much they will have to spend per patient. (In my unpublished RCT, I was off by a factor of five. It’s a long story…) Finally, the NIH routinely makes across-the-board cuts in grant budgets, because Congress has been cutting NIH’s budget. The net effect is that many trialists expend their funds on data collection and have inadequate funds left to support data analysis or writing.

    It’s really hard to make yourself write when the trial fails. And most trials do fail, either because they do not recruit enough patients or because the intervention has no effect. My unpublished trial involved a web-based technology for improving psychiatric patient follow-up. The tech worked fabulously. The doctors who tried it were enthusiastic about it. The problem was, I could never convince more than a handful of them to try it. Trying and failing to sell it to them was among the most dispiriting experiences of my life. I’m not afraid that the data can’t be published. It just hurts to write about them. I have been working on a paper for some time, but it’s been very difficult to see the manuscript through to completion.

    This post is a commitment mechanism. Having gone public with my problem, I will finish this. You are my witnesses.

    My point, though, is that the incentives surrounding clinical trialists do not support publication. We need stronger incentives and they may need to be punitive. NIH Director Francis Collins has some ideas here.

    @Bill_Gardner

  • AcademyHealth: Ethical quandaries in placebo prescribing

    If driven by belief in a positive outcome, the placebo effect would seem to require deceit. After all, if one knows one is taking a sugar pill, why should one think anything good would happen? And yet, at least two studies suggest non-deceiving placebos are possible. I discuss them in my latest AcademyHealth post.

    @afrakt

     

     

     

  • Upshot: Behind New Dietary Guidelines, Better Science

    The following originally appeared on The Upshot (copyright 2014, The New York Times Company).

    For decades, many dietary recommendations have revolved around consuming a low percentage of your daily calories from fat. It has been widely thought that doing so would reduce your chance of having coronary heart disease. Most of the evidence for that recommendation has come from epidemiologic studies, which can be flawed.

    Use of these types of studies happens far more often than we would like, leading to dietary guidelines that may not be based on the best available evidence. But last week, the government started to address that problem, proposing new guidelines that in some cases are more in line with evidence from randomized controlled trials, a more rigorous form of scientific research.

    Sometimes we have to settle for epidemiologic or other less reliable studies because we can’t do a randomized controlled trial to prove causality. We’ll never have one for smoking and cancer, for instance, because the evidence from cohort and case-control studies, which are observational and not interventional, is so compelling that telling a random population to smoke “to see if it’s harmful” would be unethical. But there’s no reason we couldn’t randomly assign people to diets.

    It turns out that we have. In fact, randomized controlled trials existed when the previous low-fat guidelines were published. It appears they were ignored.

    Just recently, a study was published in the journal Open Heart in which researchers performed a systematic review and meta-analysis of the randomized controlled trials that were available when those guidelines were announced. They wanted to explore what evidence those creating the guidelines might have been able to consider at the time.

    Before 1983, six randomized controlled trials involving 2,467 men were conducted. None were explicit studies of the recommended diet (and none involved women), but all explored the relationship between dietary fat, cholesterol, and mortality. Five of them were secondary prevention trials — meaning that they involved only men with known problems already. Only one included healthy participants, who would be at lower risk, and therefore would be likely to have less benefit from dietary changes.

    That’s a lot of participants. Moreover, many of them were at high risk. And in none of the trials was there a significant difference between intervention and control groups in the rate of death from coronary heart disease. There were also no differences in mortality from all causes, which is the metric that matters.

    The study did show that cholesterol levels went down more in the groups that ate low-fat diets. Some have used this as justification for a low-fat diet. But the difference between them was small. Mean cholesterol went down 13 percent in the intervention groups, but it went down 7 percent in the control groups. And these groups didn’t have different clinical outcomes, and that’s what we really care about.

    Small changes in cholesterol levels from dietary changes also aren’t surprising to those who follow the research. About 70 percent of people are thought to be “hyporesponders” to dietary cholesterol. This means that after consuming three eggs a day for 30 days, they would see no increase in their plasma cholesterol ratios. Their cholesterol levels have almost no relationship to what they eat.

    Don’t take my word for it. Again, there have been randomized controlled trials in this area. In 2013, researchers published a systematic review of all studies from 2003 or after. Twelve met the researchers’ criteria for inclusion in the analysis, and seven of them controlled for background diet. Most of the studies that controlled for background diet found that altering cholesterol consumption had no effect on the concentration of blood LDL (or “bad”) cholesterol. A few studies could detect differences only in small subgroups of people with certain genes or a predisposition to problems.

    In other words, most studies found no response in anyone. In the rest, only a minority of patients responded to changes in dietary cholesterol.

    Did recommendations change when these studies were published? No, but they got closer to changing on Thursday, when a government committee urged repeal of the guideline that Americans limit their cholesterol intake to 300 milligrams a day, saying, “Cholesterol is not a nutrient of concern for overconsumption.” I’m sure this will come as a surprise to a vast majority of Americans, who for decades have been watching their cholesterol intake religiously. (The change won’t be official until it is approved by the Department of Health and Human Services and the Department of Agriculture, but they usually closely follow the committee’s recommendations.)

    I wrote here at The Upshot not long ago about how a growing body of epidemiologic data was pointing out that low-salt diets might actually be unhealthy. But randomized controlled trials exist there, too. A 2008 study randomly assigned patients with congestive heart failure to either normal or low-sodium diets. Those on the low-sodium diet had significantly more hospital admissions. The “number needed to treat” for a normal-sodium diet over a low-sodium diet to prevent a hospital admission in this population was six — meaning that for every six people who are moved from a low-sodium diet to a normal diet, one hospital admission would be prevented. That’s a very strong finding.
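    The NNT arithmetic is simple enough to sketch. The event rates below are hypothetical, chosen only so the absolute risk reduction works out to roughly one in six; the post reports only the NNT itself:

```python
import math

def number_needed_to_treat(control_event_rate, treatment_event_rate):
    # Absolute risk reduction (ARR): how many fewer events occur
    # per patient treated, with rates expressed as proportions in [0, 1].
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no benefit over control")
    # Convention: round up, since you can't treat a fraction of a patient.
    return math.ceil(1 / arr)

# Hypothetical admission rates chosen so that ARR is about 1/6,
# matching the NNT of 6 reported for the sodium trial.
print(number_needed_to_treat(0.50, 1/3))  # -> 6
```

    An NNT of six is unusually good; many accepted preventive therapies have NNTs in the dozens or hundreds.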

    Let’s not cherry-pick, though. A systematic review of randomized controlled trials of salt intake was published last year. Eight trials involving more than 7,200 participants looked at whether advising patients to cut down on salt, or reducing sodium intake, affected outcomes. None of the trials, including ones involving people with both normal and high blood pressure, showed a reduction in all-cause mortality.

    Only one trial even showed an effect on death from cardiovascular causes, like heart attack or stroke. It was conducted on residents of an assisted-living facility who had high blood pressure — hardly representative of the population as a whole, which is what dietary guidelines are supposed to cover.

    I’m pretty immersed in the medical literature, and all of this is still shocking to me. It’s hard to overestimate the effect of the dietary guidelines. Hundreds of millions of people changed their diets based on these recommendations. They consumed less fat, they avoided cholesterol and they reduced their intake of salt.

    Since pretty much all calories come from fat, protein or carbohydrates, reducing your consumption of one means that you have to increase your consumption of another. (We are not talking here about recommendations for the total amount of calories you should eat. These recommendations assume you’re eating the proper amount of calories, and seek to govern the proportion of nutrients within them.)

    So, as the guidelines have recommended cutting down on meat, especially red meat, this meant that many people began to increase their consumption of carbohydrates.

    Decades later, it’s not hard to find evidence that this might have been a bad move. Many now believe that excessive carbohydrate consumption may be contributing to the obesity and diabetes epidemics. A Cochrane Review of all randomized controlled trials of reduced or modified dietary fat interventions found that replacing fat with carbohydrates does not protect even against cardiovascular problems, let alone death.

    Interestingly, the new dietary recommendations may acknowledge this as well, dropping the recommendation to limit overall fat consumption in favor of a more refined recommendation to limit only saturated fat. Even that recommendation is hotly contested by some, though. The committee is also bending a bit on salt, putting less emphasis on the 1,500-milligram daily limit on sodium for special populations, in light of the mounting evidence that too little sodium may be as bad as too much, if not worse.

    It is frustrating enough when we over-read the results of epidemiologic studies and make the mistake of believing that correlation is the same as causation. It’s maddening, however, when we ignore the results of randomized controlled trials, which can prove causation, to continue down the wrong path. In reviewing the literature, it’s hard to come away with a sense that anyone knows for sure what diet should be recommended to all Americans.

    I understand people’s frustration at the continuing shifts in nutrition recommendations. For decades, they’ve been told what to eat because “science says so.” Unfortunately, that doesn’t appear to be true. That’s disappointing not only because it reduces people’s faith in science as a whole, but also because it may have cost some people better health, or potentially even their lives.

    @aaronecarroll

  • Innovations in health insurance design

    The following is a guest post by Michael Chernew and Aaron Schwartz. Michael is a health economist and Professor of Health Care Policy at Harvard Medical School. He has written extensively on issues of benefit design, particularly Value Based Insurance Design. Aaron is an MD/PhD candidate at Harvard University. His research focuses on quantifying waste in the health care system and evaluating strategies to eliminate it.

    Recently, there has been much discussion of innovations in benefit design, including on this blog, where there was a recent post about a split benefit design. Given the range of proposed options it is useful to revisit the connection between benefit design and theory.

    The goal of optimal insurance design is to maximize societal welfare, which consists of two elements. First, an optimal plan steers beneficiaries toward high value services, minimizing moral hazard. Second, an optimal plan provides protection against risk, ensuring that beneficiaries can expect to experience relatively similar welfare across a range of possible life outcomes (i.e. in sickness and in health).

    The motivation for cost sharing in standard economic models is to balance these sometimes competing objectives. Early models of optimal coinsurance were based on a single coinsurance rate. More recent innovations have more nuance. The unifying theme is that optimal cost-sharing should be targeted to situations where patients can respond by making different health care choices. For instance, a patient suffering a heart attack will almost surely exceed most deductibles. So, the cost sharing associated with a high deductible plan will have very little impact; there is no incentive for the patient to follow a more fiscally conservative treatment path or choose a less expensive facility.

    One strand of new designs (e.g., reference pricing and tiered networks) focuses on choice of provider. These designs recognize the widespread variation in prices. They allow beneficiaries who seek care from low-cost providers to share the savings. Reference pricing focuses on specific services. Typically a fixed price is paid by the insurer and the beneficiary must pay the difference if they get care from a higher-priced provider. Tiered network plans typically identify preferred providers (physicians and hospitals) based on cost, and sometimes quality, and place them in a preferred “tier”.

    Both reference pricing and tiered network designs will be more effective with better search tools, but they still must contend with complexities of the delivery system. For example, tiered network products sometimes place hospitals and the physicians with admitting privileges at those hospitals in different tiers. Reference pricing, which is more targeted than tiered networks, may be practical for only a relatively small share of spending. Tiered networks may affect more spending, but may disadvantage patients who have conditions best treated at the high-tier facilities. In both cases the effectiveness of these products depends on variation in provider prices and the existence of sufficient choice. If there is only one provider these benefit designs will be ineffective.

    Another strand of design, value based insurance designs (VBID), focuses on which services are used. The idea is that cost sharing should be low for high value services and higher for low value services. These designs recognize the underutilization of high value services (which may be exacerbated by across the board coinsurance increases) and the overuse of low value services (which have received increasing attention through campaigns such as Choosing Wisely and evidence on widespread geographic variation in use). Commonly, VBID designs are applied to low unit cost preventive services, but the theory is much broader. In these cases, traditional cost sharing acts like a tax, with few beneficial incentive effects. VBID allows patients who choose low cost treatment options to share in the savings.

    Split benefit design applies similar principles to patients with high cost illnesses. These patients often face little cost-sharing because they have exceeded their annual out-of-pocket limits. Unlike the previous examples, split benefit design involves a cash rebate to patients who choose less-expensive treatment options. This rebate is forfeited if the patient instead chooses the more expensive treatment option.

    An intriguing aspect of split benefit design is that, relative to fully covering expensive treatments, this design does not increase the financial burden of sick patients receiving expensive care, and yet it still encourages the choice of less expensive treatment alternatives. However, this feature comes at a cost of reduced income smoothing (risk protection); indeed, premiums could increase substantially under certain circumstances. Consider the extreme case in which the rebate equals the price difference between the low-price and high-price care options. This split benefit design would ensure that the low-price option effectively costs the insurance company the same amount as the high price option, and premiums would be as high as if all patients chose a fully-covered high-price option.

    Chernew, Encinosa and Hirth (CEH) worked out the math in a related scenario. The insight from the CEH model is that the optimal benefit design charges patients who choose the high cost treatment a fee and pays the patients who choose the low cost option a rebate. The sum of the fee and the rebate is less than the full incremental cost (which dilutes incentives to choose the low-cost option but helps insure against the “risk” that a patient prefers the high cost treatment). This model, described in detail in the paper, does a better job at smoothing utility across different states of the world than reference pricing (in which beneficiaries may pay the incremental fee for high-cost care) or split benefit design (in which beneficiaries are paid a rebate if they choose the low cost option). Specifically, CEH is based on a utility maximizing model that recognizes the need to transfer income from the healthy to sick state of the world, and in the context of that model derives the optimal way to do that.

    A key distinction among split benefit, reference pricing, and CEH is distributional. Reference pricing has low premiums and charges people who fall ill and opt for the high cost option more. Split benefit has high premiums and refunds a portion to those who get sick and choose the low cost option. The CEH model falls in between. If all beneficiaries can equally expect to become sick, then CEH maximizes patient welfare. But if risk is heterogeneous, distributional issues become important. For example, reference pricing favors relatively healthy people because premiums are low and out of pocket costs if one becomes ill could be high. Split benefit favors less healthy people because premiums are high and individuals who use care may receive a rebate. Economic efficiency criteria say nothing about these distributional issues, which depend on concepts of fairness.

    As benefit designs evolve, these and other innovations are likely to get more attention. Implementation issues will be important. Currently, many insurers do not offer these more innovative designs. Over time, if insurers, employers, benefit consultants, and, most importantly, patients become more comfortable with these designs, they will become much more common and offer mechanisms to improve the efficiency of the health care system.

  • Students: Join me for a Twitter chat about plan-provider integration

    Let me just quote from the email:

    The Translation and Communications Interest Group is sponsoring a Twitter chat with Dr. Austin Frakt on Wednesday, February 25 from 2 – 3 p.m. ET.  Dr. Frakt is the lead author of the 2013 HSR Article of the Year, Plan–Provider Integration, Premiums, and Quality in the Medicare Advantage Market.  This study is the basis for the 2015 Student Competition:  Presenting Research in Compelling Ways, which will be held during a special session at the Annual Research Meeting in Minneapolis on June 15, 2015.   Dr. Frakt will answer questions from students related to his work and the competition, using the #ARMStudent hashtag.

    More about the competition here. The article is ungated (through March) here. I blogged about it here and here.

    @afrakt

  • By shielding infants from stuff, we may be making allergies worse

    In 2000, the AAP published a guideline with recommendations intended to decrease the risk of a child developing an allergic disease. They recommended that “mothers should eliminate peanuts and tree nuts (eg, almonds, walnuts, etc) and consider eliminating eggs, cow’s milk, fish, and perhaps other foods from their diets while nursing. Solid foods should not be introduced into the diet of high-risk infants until 6 months of age, with dairy products delayed until 1 year, eggs until 2 years, and peanuts, nuts, and fish until 3 years of age.”

    In 2006, my colleagues (including Beth Tarini) and I published a systematic review of the early introduction of solid foods and the later development of allergic disease. We found, somewhat to many people’s surprise, that while there was some evidence linking early solid feeding to eczema, there was no strong evidence supporting a link between early solid food exposure and the development of asthma, allergic rhinitis, allergies to animals, or persistent food allergies.

    In other words, there was no good evidence to keep infants away from foods in the belief that we could spare them food allergies later. Other studies showed a similar lack of evidence for the other parts of the recommendation. In 2008, the AAP altered its recommendations to say there wasn’t good evidence to support food avoidance to prevent allergies.

    A study in the NEJM today goes a step further. It says that keeping peanuts away from infants may be making things worse:

    Background: The prevalence of peanut allergy among children in Western countries has doubled in the past 10 years, and peanut allergy is becoming apparent in Africa and Asia. We evaluated strategies of peanut consumption and avoidance to determine which strategy is most effective in preventing the development of peanut allergy in infants at high risk for allergy.

    Methods: We randomly assigned 640 infants with severe eczema, egg allergy, or both to consume or avoid peanuts until 60 months of age. Participants, who were at least 5 months but younger than 11 months of age at randomization, were assigned to separate study cohorts on the basis of preexisting sensitivity to peanut extract, which was determined with the use of a skin-prick test – one consisting of participants with no measurable wheal after testing and the other consisting of those with a wheal measuring 1 to 4 mm in diameter. The primary outcome, which was assessed independently in each cohort, was the proportion of participants with peanut allergy at 60 months of age.

    They took 640 high risk infants and randomized them to get peanuts or not for the first 5 years of life. They separated kids in both intervention groups by any pre-existing sensitivity to peanuts. Then they checked them at 5 years of age to see if they had a peanut allergy.

    I have friends who will already have lost their minds hearing about this. I mean, letting kids get exposed to peanuts? Especially kids with a sensitivity to peanuts already? Insane, right? Until you see results like this:

    [Figure: prevalence of peanut allergy at 60 months, by study group]

    Looking at all kids, about 3.2% of those exposed to peanuts developed a peanut allergy, as opposed to 17.2% of those not exposed to peanuts. If you only look at the kids without prior peanut sensitivity, about 1.9% of those exposed to peanuts developed a peanut allergy, as opposed to 13.7% of those not exposed.

    But in the cohort of kids with a known peanut sensitivity already, exposure to peanuts until age 5 years led to a prevalence of peanut allergies of 10.6%, versus 35.3% in those not exposed.

    In other words, exposing kids to peanuts, even those with a sensitivity, led to fewer allergies. Conversely, not exposing them led to more allergies. I mean, kids with a previous sensitivity to peanuts who were exposed to them had a lower prevalence of peanut allergies at 5 years of age than kids who didn’t have a previous sensitivity to peanuts, but were never exposed to them. The accompanying editorial pulls no punches:
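    For readers who want the arithmetic, here is a short sketch of how absolute and relative risk reductions fall out of the prevalences quoted above:

```python
def risk_reduction(exposed_rate, avoided_rate):
    # Rates are the reported prevalences of peanut allergy at 60 months
    # in the exposure and avoidance groups, as proportions in [0, 1].
    arr = avoided_rate - exposed_rate   # absolute reduction (percentage points)
    rrr = arr / avoided_rate            # fraction of the baseline risk eliminated
    return arr, rrr

# Figures quoted in the post for the whole cohort:
# 3.2% allergy with peanut exposure vs. 17.2% with avoidance.
arr, rrr = risk_reduction(0.032, 0.172)
print(f"absolute reduction: {arr:.1%}, relative reduction: {rrr:.0%}")
# -> absolute reduction: 14.0%, relative reduction: 81%
```

    The same calculation on the sensitized cohort (10.6% vs. 35.3%) yields a relative reduction of roughly 70%, which is why the editorial calls the results compelling.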

    [W]e believe that because the results of this trial are so compelling, and the problem of the increasing prevalence of peanut allergy so alarming, new guidelines should be forthcoming very soon. In the meantime, we suggest that any infant between 4 months and 8 months of age believed to be at risk for peanut allergy should undergo skin-prick testing for peanut. If the test results are negative, the child should be started on a diet that includes 2 g of peanut protein three times a week for at least 3 years, and if the results are positive but show mild insensitivity (i.e., the wheal measured 2 mm or less), the child should undergo a food challenge in which peanut is administered and the child’s response observed by a physician who has experience performing a food challenge.

    We’re seeing an alarming increase in peanut allergies worldwide. Our response appears to be making things worse. Time to change our behavior here.

    @aaronecarroll

  • Healthcare Triage: Assessing Utilities – How Much Risk Are You Willing to Take?

    When we are judging the cost-effectiveness of a treatment or intervention, we’re really asking how much bang for the buck we’re getting for our healthcare spending. That can be relatively easy when we’re talking about life and death. But how do we measure improvements in quality? The most widely used method is through the use of utility values, and we’ll show you how we calculate those in this week’s Healthcare Triage:

    A lot of this can be found in a paper we published in the Journal of Pediatrics, but it may be gated. Here’s the key results for those of you who are interested. These are utility values parents gave us for their children for a variety of health states:

    Each row lists the disease state, then the mean and median utility from the Standard Gamble, then the mean and median from the Time Tradeoff:
    Disease state / SG Mean / SG Median / TTO Mean / TTO Median
    Perfect health 1 - 1 -
    Otitis media with pain 0.96 1 0.97 1
    Mild ADHD 0.94 1 0.93 1
    10-day hospitalization 0.94 1 0.95 1
    Moderate gastroenteritis 0.93 1 0.94 1
    Moderate allergic reaction 0.93 1 0.93 1
    Severe ADHD 0.92 0.99 0.9 0.99
    Mild hearing loss 0.92 0.99 0.93 0.99
    Moderate hearing loss 0.91 0.99 0.92 0.99
    Severe allergic reaction 0.91 0.99 0.91 0.99
    Mild intermittent asthma 0.91 0.99 0.91 0.98
    Mild persistent asthma 0.9 0.98 0.91 0.99
    Severe gastroenteritis 0.9 1 0.92 1
    Mild bilateral vision loss 0.89 0.97 0.91 0.99
    Moderate persistent asthma 0.88 0.97 0.91 0.97
    Monocular blindness 0.88 0.96 0.89 0.96
    10-day intensive care unit hospitalization 0.87 0.98 0.91 1
    Mild cerebral palsy 0.87 0.96 0.88 0.96
    Severe hearing loss 0.86 0.94 0.86 0.94
    Mild seizure disorder 0.85 0.96 0.86 0.96
    Moderate bilateral vision loss 0.85 0.94 0.86 0.94
    Mild mental retardation 0.84 0.91 0.83 0.93
    Moderate seizure disorder 0.84 0.92 0.83 0.9
    Severe persistent asthma 0.83 0.93 0.85 0.93
    Severe bilateral vision loss 0.81 0.89 0.81 0.89
    Moderate mental retardation 0.79 0.86 0.79 0.87
    Moderate cerebral palsy 0.76 0.8 0.76 0.86
    Severe seizure disorder 0.7 0.75 0.71 0.8
    Severe cerebral palsy 0.6 0.5 0.55 0.5
    Severe mental retardation 0.59 0.5 0.51 0.5
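    As a rough sketch of how the two elicitation methods in the table turn indifference points into utilities: under the standard gamble, the utility of a health state is the probability of perfect health at which a respondent is indifferent between a risky cure (perfect health or death) and the certainty of the state; under the time tradeoff, it is the ratio of the shorter healthy lifespan the respondent would accept to the longer span lived with the condition. The indifference points below are hypothetical, chosen to be consistent with a utility of 0.90:

```python
def standard_gamble_utility(indifference_probability):
    # The respondent chooses between living in the health state for certain
    # and a gamble yielding perfect health with probability p and death
    # with probability 1 - p. The p at which they are indifferent IS the
    # utility of the state.
    return indifference_probability

def time_tradeoff_utility(healthy_years, years_in_state):
    # The respondent is indifferent between `years_in_state` years with
    # the condition and `healthy_years` years in perfect health; utility
    # is the ratio of the two lifespans.
    return healthy_years / years_in_state

# Indifferent at a 10% risk of death -> utility 0.90.
print(standard_gamble_utility(0.90))   # -> 0.9
# Indifferent between 9 healthy years and 10 years with the condition.
print(time_tradeoff_utility(9, 10))    # -> 0.9
```

    The more risk of death (or years of life) a respondent will accept to escape a state, the lower that state's utility, which is why severe cerebral palsy and severe mental retardation sit at the bottom of the table.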

    @aaronecarroll

  • Fifty Shades of Wrong

    Just in time for oral argument, Tim Jost and James Engstrand have a new article out on King v. Burwell. In it, they march through the statute identifying anomalies—at least fifty of them—that accepting the plaintiffs’ interpretation would create. The parties cover many of these incongruities in their briefs, but by no means all of them. As the authors explain:

    Some of these anomalies are arguably minor if considered singly. Others, however, are quite difficult to explain away. Indeed, some are more properly characterized as “absurdities.” … Judge Thomas B. Griffith, for example, in his majority panel decision in Halbig v. Burwell (since vacated), was forced by his grim determination to find that only state-operated exchanges could grant premium tax credits, to conclude that [federally facilitated exchanges] could enroll individuals who were not “qualified,” leaving the term “qualified individuals” meaningless.

    [Plaintiffs] also make this argument in their brief to the Supreme Court, although they concede that the Department of Health and Human Services (HHS), pursuant to its “broad power [under 42 U.S.C. § 18041(c)] to ‘take such actions as are necessary to implement’ the ‘other requirements’” regarding the operation of Exchanges, could redefine “qualified individuals.” This, of course, begs the question of why HHS could not use the same “broad powers” to apply to [federally facilitated exchanges] the requirement that Exchanges make premium tax credits available.

    In any event, cumulatively, the incongruities that [plaintiffs’] reading of 36B creates make it difficult to see how the Supreme Court could rule for [plaintiffs] without ignoring the “fundamental canon of statutory construction that the words of a statute must be read in their context and with a view to their place in the overall statutory scheme,” which the justices have repeatedly acknowledged in their decisions.

    In my view, Jost and Engstrand are on exactly the right track: they’re building a statutory case, premised on the text of the ACA as a whole, in favor of the government’s interpretation. (I’ve made the same effort in some posts of my own.) As it stands, the meticulousness of their examination is unmatched. Let’s hope the Supreme Court takes notice.

    @nicholas_bagley

  • The science fair

    Via Dan Diamond:

    [Image: science fair]

    @afrakt

  • Healthcare Triage News: Patients Bossing Doctors Around? It’s a Myth.

    One of the favorite complaints of doctors, when we confront them about overusing technology and overtreating patients, is that patients demand it. Do they? More and more evidence says that’s a myth. This is Healthcare Triage News.

     

    For those of you who want to read more:

    @aaronecarroll
