• On CMMI and our tendency to “over-mythologize” RCTs

    The following is jointly authored by Adrianna and Austin.

    Randomized controlled trials are the gold standard in empirical research, but that doesn’t mean they’re the only standard worth paying attention to. If we only find value in RCTs, researchers are wasting an awful lot of time and headspace on alternative methods. So, that recent NYTimes hit piece on the Center for Medicare and Medicaid Innovation strikes us as troubling.

    Aaron covered some important technical points yesterday. RCTs can have fantastic internal validity—when they’re conducted well, we can say with relative certainty how treatment did or did not affect the study population—but our capacity to generalize those results is often limited. Dan Diamond has a piece worth reading, too:

    CMMI’s approach isn’t totally above reproach; the data that the center is seeing from its pilots could be confused by secular trends, like changes in population, practices, and so on. That’s why, Harvard’s Jha acknowledged, it’s important to design studies with a contemporary control group and statistical testing.

    But under CMMI’s ambitious charter, researchers are attempting to track a range of payment and delivery reforms. And it’s hard to think of how the center could use an RCT for some of its projects.

    For example, I asked a half-dozen different researchers to construct a hypothetical RCT to test how accountable care organizations would work. All were stumped.

    These aren’t clinical trials where you can pass out pills and placebos and carefully record individual health outcomes; CMMI is all about changing institutional practices. And just because health policy is closer in proximity to medicine (and its many RCTs) doesn’t actually make health policy more amenable to this kind of study than any other policy domain.

    Take ACOs as an example. What would we supposedly randomize: patients, physicians, or entire hospital systems? Can you imagine the backlash if Medicare tried to foist the program on randomized-but-uninterested providers? Patients would be tricky, too. The way ACOs work now, a Medicare beneficiary is passively “assigned” to an ACO if their physician belongs to that ACO. But that beneficiary isn’t required to limit their care to the ACO—an acknowledged wrinkle—and they may not actually realize that they’re taking part in a new delivery paradigm. (That, itself, is a source of natural (and imperfect) randomness that could be exploited.) According to one Health Affairs brief, critics “believe that patients should have a choice about participating in an arrangement that could reward providers for reducing services.” That sort of rhetoric hardly bodes well for implementing randomized trials in the health services delivery setting.

    Moreover, CMMI wasn’t designed to focus on cumbersome, time-consuming, and relatively static experiments. This is a good thing.

    These demonstrations aim to do one of two things to health services delivery: improve quality while maintaining or decreasing costs, or reduce costs while maintaining or improving quality. The emphasis is on “rapid-cycle” evaluation—collecting and analyzing data in near-real time, providing feedback on the programs. Far from wasting resources, CMMI is actually bound by law to modify or terminate demonstrations that have insufficient evidence of success.

    Well-conducted policy trials are important and we can learn a lot from them. That said, they don’t come easy or cheap, so they’re not very common. Nor are they immune to threats to internal validity from contamination/crossover/attrition, problems that can be addressed by—wait for it—observational study techniques.

    The Oregon Medicaid experiment was a terrific empirical exercise. It’s also paradigmatic of limitations that policy RCTs face—an entire methods course could probably be taught on it. In order to correct potential biases inherent in the design, the authors employed instrumental variables, an observational technique. A constrained sample size meant power problems; a focus on Portland limits the results’ external validity. Scholars have debated, and will continue to debate, the study’s findings and their generalizability.

    Empirical science, and every technique thereof, is imperfect and incremental. But it’s the best we have. Insisting on only one research modality—the RCT—and overlooking the potential gains and relevance of other approaches is costly, both in dollars and applicable knowledge.

  • Helping research inform legislation

    The following post is coauthored by Sarah Jane Reed, Sarah K. Emond, and Austin Frakt. Sarah Jane Reed serves as Program Director for the Institute for Clinical and Economic Review (ICER), where she oversees operations and strategic planning for the New England Comparative Effectiveness Public Advisory Council (CEPAC). She holds a Master of Science in International Health Policy from the London School of Economics. Sarah K. Emond is responsible for the strategic direction of ICER as its Chief Operating Officer, including the implementation of ICER’s research through its flagship initiatives, CEPAC and the California Technology Assessment Forum (CTAF). She has a Master of Public Policy from the Heller School at Brandeis University.

    Disclaimers: The views expressed here are the authors’ own and do not necessarily represent the views or opinions of ICER or CEPAC. Austin serves as a member of CEPAC.

    Increasingly, lawmakers are influencing medical policy through patient notification laws and insurance coverage mandates. Such laws are intended to benefit patients, but their inflexibility can cause them to be out of step with sound interpretations of clinical research.

    Consider breast cancer screening. Thirteen states have recently passed breast density notification legislation requiring radiologists to inform women when their mammogram results reveal they have dense breast tissue, which may mask abnormalities. (Approximately 50% of women have dense breasts.) Dozens more states have similar legislation pending. Some states have gone further, requiring insurance coverage of supplemental ultrasound screening for women with dense breasts.

    The issue has also caught the attention of Congress, where similar breast density notification legislation has been introduced.

    Notably, neither the state laws nor the legislation introduced in Congress stratifies requirements by patient risk. Yet sensitivity to risk may, in fact, be what’s best for patients.

    We often think more health information is better. However, notifying women at low risk of breast cancer of their density status may raise more questions than it helps answer. To make informed decisions about future screening options for women with dense breasts, patients and providers need to weigh the benefits and risks of additional screening. Does supplemental screening catch more cancers? Does it help save lives?

    The New England Comparative Effectiveness Public Advisory Council (CEPAC) recently addressed these questions. CEPAC is an independently recruited Council of 18 practicing physicians, methodologists and public representatives from all six New England states who meet in public to discuss and vote on evidence reviews covering test and treatment options in high-impact clinical areas.

    Through its process, CEPAC discusses how evidence can be interpreted on a regional basis, taking into consideration factors such as prevalence, workforce issues, and utilization patterns that are unique to New England but affect how evidence can best be applied in policy and practice. The body also accepts and considers public comments, thereby incorporating a diverse range of stakeholder views and concerns.

    (CEPAC, and its sister organization, the California Technology Assessment Forum, are the flagship implementation initiatives of the independent non-profit, the Institute for Clinical and Economic Review.)

    At its last meeting in December, CEPAC deliberated on the latest evidence on supplemental breast cancer screening for women with dense breasts. In weighing the benefits and risks of supplemental screening, CEPAC examined the evidence on additional cancers detected, reduced mortality rates and the risks of further testing, including the possibility of false alarms.

    A majority of CEPAC voted that, for women at low risk for breast cancer, the evidence does not demonstrate a benefit of supplemental screening. During the deliberation, Council members highlighted the dearth of evidence on long-term outcomes, such as mortality, for these women. However, for women at moderate or high risk for cancer, CEPAC voted that the benefits of supplemental screening outweigh the risks, with the strongest evidence supporting additional screening in women at higher risk for breast cancer. You can read the full report here.

    A discussion at the December meeting of how the evidence should influence policy and practice focused on changes needed in guidelines and in clinical practice workflow. A common refrain during this discussion was, “Is the policy ahead of the science?” In other words, in light of CEPAC’s votes, are laws that mandate dense breast notification to low-risk women doing more harm than good? This touches on the divisive issue of just how much of medical care should be shaped by legislation. Though CEPAC cannot resolve that question, it is clearly relevant in the case of dense breast tissue notification, as well as others.

    As states in New England and nationally contemplate legislation mandating that women be notified if they have dense breasts, policymakers should pay more attention to expert, fair, transparent, and publicly deliberative assessments of the current state of the relevant evidence. There is a real danger of laws getting ahead of science. And, all good intentions aside, that is not to the benefit of patients.

  • Reference pricing as a solution to “doc shock”

    The following post is co-authored by Austin Frakt and Nicholas Bagley.

    Spurred by intense competition on the new health-insurance exchanges, insurers have been casting about for new and better ways to offer their products at the lowest possible price. One way they’re cutting costs is by narrowing the networks of physicians and hospitals that their enrollees can visit for care. Consumers, finding that their preferred plan doesn’t offer coverage for their favorite doctors and hospitals, are experiencing “doc shock,” or soon may.

    It doesn’t need to be this way. Though “preferred” or “network” contracting – the establishment of networks of health care providers willing to accept lower payments – is a standard cost-reduction technique, it’s not the only approach. It’s certainly not the best for consumers.

    Reference pricing is an appealing alternative. With reference pricing, insurers set the price they’re willing to pay for a given service or procedure, typically pegging it to a price at which it can be obtained at good quality – the reference price. A policyholder can then obtain that service or procedure at zero out-of-pocket cost at any provider willing to match that price. For providers that charge more, the policyholder—not the insurer—pays the difference.
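    To make the mechanics concrete, here’s a minimal sketch in Python. The reference price and hospital charges below are hypothetical, chosen only to illustrate how the cost sharing falls out; they aren’t drawn from any actual plan.

    ```python
    # Hypothetical illustration of reference-pricing cost sharing:
    # the insurer pays up to the reference price; the patient owes any excess.

    def patient_share(provider_price: float, reference_price: float) -> float:
        """Out-of-pocket cost to the policyholder under reference pricing."""
        return max(0.0, provider_price - reference_price)

    REFERENCE_PRICE = 30_000  # made-up reference price for a procedure

    for hospital, price in [("Hospital A", 28_000), ("Hospital B", 30_000), ("Hospital C", 45_000)]:
        owed = patient_share(price, REFERENCE_PRICE)
        print(f"{hospital} charges ${price:,}; patient owes ${owed:,.0f}")
    ```

    Any provider at or below the reference price costs the patient nothing out of pocket, which is where the pressure on high-priced providers comes from.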

    Although reference pricing for medical services isn’t common, there are encouraging signs of good performance where it has been implemented. James Robinson and Timothy Brown studied CalPERS, California’s insurance program for public employees, when it set reference prices for knee- and hip-replacement surgery. They found the reference-pricing initiative had profound effects on the market. CalPERS patients shifted their site of knee- and hip-replacement surgeries to lower-priced hospitals. High-cost providers came under a ton of pressure to lower their prices.

    And that’s exactly what they did. As Robinson and Brown documented, higher-priced hospitals reduced their prices toward the reference price. Meanwhile, no CalPERS policyholders lost coverage for care from their preferred providers. They were free to obtain knee and hip replacement surgeries at any facility they pleased. So much for doc shock.

    As it stands, there’s no legal impediment to reference pricing on the exchanges. The Affordable Care Act only requires exchange plans to cover essential health benefits. It doesn’t dictate how much plans have to pay for those benefits. It only dictates what proportion of health care costs plans have to cover overall — their “actuarial value.”

    That legal flexibility, however, means reference pricing could shift the risk of high costs to policyholders. What if the reference price were set so low that no providers would accept it as full payment? This is a serious concern, but one that should be mitigated by a provision of the ACA requiring plans to guarantee the adequacy of their provider networks. Specifically, plans must “assure that all [covered] services will be accessible without unreasonable delay.” Assuming that the rule is properly enforced—and it should be—insurers can’t set reference prices so low that benefits are effectively unavailable.

    If reference pricing offers a much-needed alternative to tightly restricted networks, why do so few exchange plans do it? There are at least two reasons. First, reference pricing is hard. When an insurer offers a fixed price for hip-replacement surgery, what precisely does that cover? Does it include the costs of treating an infection acquired in the aftermath of surgery? Or can the hospital bill separately for that treatment? Resolving those sorts of line-drawing problems would require considerable innovation from insurers.

    Second, providers may successfully resist reference pricing. If enough popular hospitals or physician groups refuse to accept reference prices as full payment for their services, people may be unwilling to purchase plans that reference price. Plans that reference price could lose customers—not attract them.

    These challenges notwithstanding, widespread agitation over constricted networks suggests that insurers should give reference pricing another look. Restricted networks are so unpopular that it’s possible—maybe even likely—that consumers would flock to plans that offer them more choices of hospitals and physicians. In the newly competitive market on the health-care exchanges, they should certainly have that option.

  • A break from comments

    This is a joint post by Austin, Aaron, and Adrianna (the TIE admins and comment moderators).

    All TIE admins are in agreement that we need a break from comment moderation. It’s a lot of work and the benefits relative to costs have dwindled. We’d rather use our time in other ways. So, at least until the end of January, comments will be disabled on all TIE posts by default. We may open up comments now and then to solicit input on specific issues. We might invite comments on an occasional open thread, but we haven’t decided.

    This is an experiment, and we’ll revisit this decision at the end of January.

    This brief post doesn’t convey how much time and effort we’ve devoted over the past year or so to trying to find ways to make comment moderation less taxing on us. Our latest approach didn’t increase the burden,* but also wasn’t of substantial help in reducing it. The idea of shutting comments down altogether goes back at least a year; we would have done it long ago, but we recognize the value of comments to some readers, so we wanted to try other things first.

    Those other things are not working well enough. And so, after lengthy deliberation, we’ll try going (mostly) comment free.

    You are, of course, welcome to email us. Or, if you prefer a more public forum, you can tweet at us. And, you always have the option to start your own blog, start a comment thread on Reddit or similar sites, etc. We value feedback. We just need a break from the moderation duties. (And, no, we can’t run an unmoderated site. You would not believe the spam, even with a good spam filter running.)

    * As of this writing, Austin has received a grand total of zero inquiries about unpublished comments.

  • Raising the Medicare eligibility age is now a REALLY bad idea

    This post is co-authored by Aaron Carroll and Austin Frakt.

    We’ve written so many times on how raising the Medicare eligibility age to 67 is a bad idea that we hesitate to do so again. (See the FAQ.) But a recent revision by the CBO of federal savings it would generate compels us to do this one more time.

    Implementing this option would reduce federal budget deficits by $19 billion between 2016 and 2023, according to new estimates by CBO and the staff of the Joint Committee on Taxation (see Table 1). That figure represents the net effect of a $23 billion decrease in outlays and a $4 billion decrease in revenues over that period. The decrease in outlays includes a reduction in federal spending for Medicare as well as a slight reduction in outlays for Social Security retirement benefits. However, those savings would be substantially offset by increases in federal spending for Medicaid and for subsidies to purchase health insurance through the new insurance exchanges and by the decrease in revenues.

    Do you get that? Phasing this in starting in 2016 could save $19 billion over the next 8 years. That’s less than $3 billion a year. That’s… insane.

    Why isn’t it more? Well, once again, the more people you kick off Medicare, the more you get on Medicaid. That increases federal expenditures. More people will need exchange insurance, too, which means more people needing subsidies. That will also increase federal expenditures. These expenditures reduce the federal savings from the $63.5 billion it would have cost to cover 65- and 66-year-olds in Medicare to a net outlay reduction of only $23 billion.
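    Here’s that arithmetic laid out in a quick sketch. The dollar figures come from the CBO estimate quoted and discussed above; the “implied offsets” line is simply the subtraction those figures imply (lumping together the Medicaid, subsidy, and small Social Security pieces).

    ```python
    # Back-of-the-envelope recap of the CBO estimate discussed above ($ billions, 2016-2023).
    gross_medicare_savings = 63.5  # what Medicare would have spent on 65- and 66-year-olds
    net_outlay_reduction = 23.0    # outlay reduction left after added Medicaid and subsidy spending
    revenue_decrease = 4.0         # accompanying drop in federal revenues

    implied_offsets = gross_medicare_savings - net_outlay_reduction  # roughly $40.5B
    deficit_reduction = net_outlay_reduction - revenue_decrease      # the CBO's $19B
    per_year = deficit_reduction / 8                                 # under $3B per year

    print(f"Implied offsetting spending: ${implied_offsets:.1f}B")
    print(f"Net deficit reduction:       ${deficit_reduction:.1f}B")
    print(f"Average per year:            ${per_year:.1f}B")
    ```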

    And we’re not even counting the increase to state expenditures for the added Medicaid, the increased cost to employers who have to provide insurance, the increased cost to all Americans in higher premiums for adding those elderly people to the private risk pools, or the increased out-of-pocket expenses to those seniors. (We covered these costs in prior posts.) If this was a bad deal before, it’s worse now.

    The last time the CBO estimated the savings from increasing the Medicare eligibility age, they pegged the savings to the federal government at $113 billion over 10 years. The new report puts the savings much, much lower. Why? It turns out the CBO made a bit of a mistake last time*:

    CBO’s analysis highlighted two points. First, at ages 65 and 66, beneficiaries who enrolled in Medicare when they turned 65 tend to be in much better health—and thus are substantially less expensive, on average—than beneficiaries who were already enrolled upon turning 65 (because of disability or end-stage renal disease). Second, the many 65- and 66-year-old beneficiaries who are workers (or workers’ elderly spouses) with employment-based health insurance are less costly to Medicare, on average, than other beneficiaries at those ages.

    Two things here. The first is more important. Some people who are 65 or 66 and on Medicare have been on Medicare for some time. That’s because they have renal failure or some other major disability that qualifies them for Medicare before age does. It should go without saying that these people are way more expensive than your otherwise average 65- or 66-year-old Medicare beneficiary. These people are also completely unaffected by raising the eligibility age. They’re not eligible due to age, but due to disability or renal failure.

    It appears to us that in their original analysis, the CBO looked at the average spending of all 65- and 66-year-olds, including all of those extra unhealthy people. But they aren’t relevant here. We want to know how much will be saved by the new policy. So when you look only at people who become eligible for Medicare at 65 because of age (and at those same people once they turn 66), the savings from raising the eligibility age are much, much smaller.

    Additionally, some 65- and 66-year-olds still get insurance from their jobs and use Medicare only as a secondary source of coverage. This is much, much cheaper for the program, too. Savings from removing them will therefore be smaller.

    Put these two things together, and the new estimate for federal savings is much lower than it was before. But all the non-federal costs (not in the CBO report but covered by us before — see links above) remain, as does the concern about the viability of the exchanges and the fact that Medicaid hasn’t expanded in all states. So if raising the Medicare eligibility age before was a bad idea (and it was), it’s an even worse idea now.

    *There are some who will use this opportunity to attack the CBO and their analyses. We will not be among those people. The CBO does amazing work, consistently and often thanklessly. The fact that they found this mistake and corrected it – publicly – is to be respected, if not lauded.

    @aaronecarroll and @afrakt

  • The individual mandate penalty and Medicaid

    This post is jointly authored by Austin Frakt and Adrianna McIntyre. 

    In a post last week reminding readers how the individual mandate penalty works, Ezra Klein wrote:

    That $95 floor [in 2014] is there to encourage people to sign up for Medicaid (in states where Medicaid isn’t being expanded, people making that little money will be exempted from the mandate on affordability grounds).

    Perhaps this is the designed purpose of the mandate penalty for Medicaid-eligible individuals, but explaining why relies on a different logic than the one for encouraging people to enroll in an exchange plan.*

    With respect to the exchange-eligible population, the purpose of the mandate penalty is twofold. First, it serves to manage risk selection, i.e., balancing premiums with expected health care costs. It does so by encouraging relatively healthier people to enroll. Relatively sicker individuals who will use more health services don’t need such an incentive. If the premium and cost sharing are lower than the cost of their care, they have ample motivation to purchase coverage. Encouraging those for whom this would not be the case to also purchase coverage will keep premiums, and therefore subsidy cost, lower than they would otherwise be. The mandate penalty is supposed to provide that encouragement.

    Second, the penalty will generate revenue from non-enrollees. This revenue will offset at least some of the cost of the uncompensated care they may use.

    Neither of these rationales for the mandate applies to the Medicaid population.** Medicaid enrollees don’t pay premiums; with the exception of some beneficiaries above 150% FPL, they aren’t permitted to. Therefore, risk selection into the program is irrelevant. There are no premiums to balance against costs. With respect to program financing, every enrollee can only add cost. So, using a penalty to encourage Medicaid enrollment costs taxpayers more, never less, and has no impact on the costs for other enrollees.

    Also, if a Medicaid-eligible but uninsured individual uses hospital services, s/he will be enrolled in Medicaid at that time. The ACA includes “presumptive eligibility” regulations that allow hospitals to enroll patients at point of service, given some basic information about household size and income. There is no limited open enrollment period for Medicaid. Therefore, the penalty does not recover a cost that the system must otherwise incur. (Hospitals cannot turn away patients requiring urgent care, but physicians can refuse them for office visits.)

    So, for the Medicaid-eligible population, the penalty is just a penalty. It doesn’t serve to balance risk in an important way. It doesn’t recover costs, even though it would generate revenue; that’s just extra revenue. Of course, the penalty will serve the role of encouraging additional enrollment. And that might be a benefit insofar as it causally increases the use of valuable preventive care or chronic condition management.

    But recognize that for what it is: pure paternalism. Are we penalizing Medicaid-eligible individuals just because we think they’d be better off with coverage?

    * What follows doesn’t apply to Arkansas, where the Medicaid expansion is operating under a waiver that caps contributions on an aggregate per capita basis; enrolling disproportionately sicker individuals could drive the expansion costs above that ceiling. More on that later.

    ** Some people who are eligible for Medicaid in expansion states will have incomes below the threshold for filing income taxes ($10,000 for someone filing individually in 2013) and will not have to pay the penalty because they have a “hardship exemption”. This also applies to all individuals below the poverty line in states not expanding Medicaid.

  • The Republican Study Committee has a “replace” plan.

    The following is jointly authored by Aaron and Austin.

    For years now, we have heard that those opposed to Obamacare had a plan to “repeal and replace” it. They’ve certainly been working on the “repeal” part. In fact, some House Republicans are willing to shut down the government, if not risk U.S. default on its debt obligations, if Obamacare is not repealed. So, we know what they’re against, but what are they for? We’ve not heard a word about “replace”.

    That’s not terribly surprising. Reforming the health care system to cover more people and to reduce the rate of growth of health spending is hard. The Affordable Care Act (ACA), love it or hate it, was designed to do these things. It reforms the individual insurance market through exchanges and provides subsidies to those with low to moderate incomes to help them purchase insurance. It also includes some reforms aimed at lowering the rate of growth of health care spending, though it is not yet clear to what extent it will do so. The law was designed to balance its costs (coverage expansion) with reduced health care spending (Medicare payment cuts) and new revenue (taxes).

    The law’s opponents have claimed it costs too much, will result in rationing, and will limit freedom. Today, a group of House conservatives presented their version of a replacement plan, endorsed by the Republican Study Committee. In short, it throws poor Americans under the bus.

    The centerpiece of the plan is a universal, standard tax deduction for health insurance premiums, up to $7,500 for an individual and $20,000 for a family. This would level a playing field that is uneven today.

    Today, only insurance purchased through work is tax deductible. People who don’t get insurance through their jobs don’t get a deduction.

    There are two problems with the House plan, though. The first is that it will obviously cost a lot of money. How much is not clear, but it won’t be insignificant. How will that be paid for? The second is that a tax deduction is much more valuable to someone who makes a lot of money than to someone who makes little. But people with large incomes aren’t the ones who need help affording coverage. It’s those at the lower end of the socioeconomic spectrum who need the most assistance, and because of their low marginal tax rates, a tax deduction offers them very little help.
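    A quick illustration of that second point: roughly speaking, a deduction is worth the amount deducted times the filer’s marginal tax rate. The $20,000 cap is the plan’s; the premium and tax brackets below are hypothetical.

    ```python
    # Illustrative only: the value of a tax deduction scales with the marginal tax rate,
    # so the same deduction delivers far less help to a low-income family.
    # The premium and tax rates here are hypothetical.

    FAMILY_DEDUCTION_CAP = 20_000  # the plan's proposed cap for family coverage

    def deduction_value(premium: float, marginal_rate: float, cap: float = FAMILY_DEDUCTION_CAP) -> float:
        """Approximate tax savings from deducting a health insurance premium."""
        return min(premium, cap) * marginal_rate

    premium = 15_000  # hypothetical family premium
    for label, rate in [("family in a 10% bracket", 0.10), ("family in a 35% bracket", 0.35)]:
        print(f"{label}: deduction worth about ${deduction_value(premium, rate):,.0f}")
    ```

    The lower-income family gets about $1,500 of help; the higher-income family gets about $5,250 for the same premium.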

    Sick Americans would receive very little help under the plan too. One of the ways the ACA helps the sick is by eliminating the ability of insurers to refuse to cover them (guaranteed issue) or to charge them more for being ill (community rating). The House plan weakens the guaranteed issue protection by extending it only to those who have continuous coverage. If you dropped prior coverage (or were forced out of it), you may not be able to get back into the market.

    For sick Americans, it replaces the ACA’s protections with a high-risk pool in which premiums are capped at 200% of what healthy people pay in the rest of the market. To help offset the cost, the proposal sets aside $25 billion over 10 years. Still, sick people will pay very high premiums. If they become poor due to loss of work from their illness, they will still have to pay those high premiums or go uninsured.

    The plan includes a number of provisions that would encourage and expand the use of health savings accounts (HSAs). These are personal accounts that can be used to buy health care services or pay cost-sharing tax free. Again, the favorable tax treatment is of very little value to low income Americans. Moreover, low income Americans don’t have a lot of money to put aside for their future health care use. HSAs might be a helpful step. But they alone won’t help everyone.

    The rest of the proposal is a grab bag of old ideas that cannot work well as sketched out, won’t do very much, or are wasteful giveaways. For example, allowing insurers to sell policies across state lines would invite a “race to the bottom.” In time, all insurance would originate from states with the least regulation. The policies will be cheaper. But they’ll also be skimpier. They’ll be great if you’re young, healthy, or wealthy enough to afford to fill in the coverage gaps. They’ll be terrible if you are older, have a chronic condition, or, again, if you’re low income.

    The plan would reform medical malpractice by capping damages. However, studies show that malpractice suits don’t contribute as much to health spending as people think.

    Finally, the plan includes giveaways to the wellness industry, raising the amount by which plans can increase premiums for those who don’t meet health standards. This is unlikely to do more than make sick people pay more for coverage. It would also permit gym memberships and nutritional supplements to be purchased tax free, up to a cap. These are giveaways to the fitness and supplement industries, as well as to people at the higher end of the socioeconomic spectrum, who are more likely to go to gyms and use supplements anyway.

    There are some other provisions, including removing government support for comparative effectiveness research, but these are the high notes.

    The Affordable Care Act is intended to help people who don’t have insurance, especially those who are less than healthy, get it. The House proposal is intended to make insurance cheaper and easier to get if you are healthy.

    We understand that putting together a health plan is challenging. Nothing good comes without limitations and costs. That’s true of the House plan as well as the ACA. But if you’re committed to coming up with a way to expand coverage while preserving the private insurance market, at least the ACA follows an established model. It happens to be how Massachusetts did it. It’s how Switzerland did it. And it’s how the conservative Heritage Foundation suggested doing it in 1989.

    The House is claiming it has a new way. But to us it only looks like a way back to the same problems that plague the system today.

    @aaronecarroll and @afrakt

  • Singapore’s health system: commentary from the literature

    This post is coauthored by Austin Frakt and Aaron Carroll.

    We have already written many posts about Singapore’s health system (there’s a tag for that), which is built around medical savings accounts (its Medisave program), though it encompasses much more. Unsurprisingly, we’re far from the first to comment on the system. But, contrary to what some have suggested, we’re not just interested in scoring political points. We want to know what data and evidence have to say about Singapore. This post summarizes a few points from some of the relevant literature from peer-reviewed journals.

    Scholarly literature on Singapore’s health system goes back at least as far as the 1995 Health Affairs paper by William Hsiao. (His paper is ungated. Ungated versions of those linked below may also exist. Use Google Scholar.) He described the country’s health spending trajectory just before and after Medisave was introduced. Medisave, you might remember, is the major source of cost-sharing for the people of Singapore:

    The per capita cost of health care in Singapore, in fact, rose faster after the introduction of the Medisave program in 1984 (Exhibit 2 [below]). Health expenditures per capita rose at an average rate of 13 percent per year – 2 percent faster than the average before the introduction of Medisave. Part of this accelerated rate of increase was attributable to the upgrading of public hospital facilities but mostly caused by other factors. […]

    In spite of the high average rate of growth in GDP of 10 percent, Singapore’s health expenditures grew faster, rising from 2.5 percent to 3.2 percent of GDP between 1980 and 1993.

    [Chart: Singapore national health expenditures (Exhibit 2 from Hsiao)]

    In other words, health care spending increased after the introduction of increased cost-sharing, which is not what most proponents of such changes would expect. These points are repeated in Michael Barr’s “critical inquiry” into Singapore’s medical savings account, published in the Journal of Health Politics, Policy and Law (JHPPL) in 2001. But this was not a randomized controlled trial, and causality is, of course, not proven.

    In an accompanying commentary, Mark Pauly responded with two valid points, among others. First, it’s been well established that the more something costs an individual, the less of it they buy. It’s even been established for health care. Cost sharing definitely matters. Second, casual, pre-post examination of time series is uninformative about the effects of an intervention. How would Singapore’s health spending have changed in the absence of the Medisave intervention? We don’t know.

    Compounding the difficulty in judging Medisave from a time series is that it was not the only intervention. It seems to be uncontroversial in Singapore that substantial government involvement (some may call it “intrusion”) in the health care market is necessary for good performance. Hsiao quoted a 1993 Singapore Ministerial Committee on Health Policy white paper, the first few pages of which can be viewed here:

    Market forces alone will not suffice to hold down medical costs to the minimum. The health care system is an example of market failure. The government has to intervene directly to structure and regulate the health system.

    Barr quoted a different passage of the same document, which justifies rationing, including by government intervention:

    We cannot avoid rationing medical care, implicitly or explicitly. Funding for health care will always be finite. There will always be competing demands for resources, whether the resources come from the State or the individual citizens. Using the latest in medical technology is expensive. Trade-offs among different areas of medical treatments, equipment, training and research are unavoidable.

    Intervene, the government did. In Health Affairs, Thomas Massaro and Yu-Ning Wong described some of the interventions, including control of physician and hospital supply, generous subsidization of hospital care, and hospital revenue caps. They wrote,

    Financing mechanisms alone do not define a health care system. Singapore has a clearly delineated policy that works in its setting. The state actively participates in every aspect of the delivery system, from physician supply to price setting and the establishment of service criteria. This willingness to intervene aggressively in the market (at levels probably unacceptable to most Americans) may be as important as the individual savings mechanism to its success.

    In a JHPPL commentary that accompanied Barr’s paper, Hsiao described other means of cost control.

    MediShield [Singapore’s opt-out, catastrophic health plan] adopted the risk selection practices of private insurance schemes by excluding as enrollees persons aged seventy and older and by not covering some expensive services, such as treatments for congenital abnormalities, mental illness, and HIV/AIDS. [Some of these policies may have changed since the paper’s publication in 2001.]

    Again, however, point granted to Pauly that some restrictions imposed by Singapore’s government are not altogether different from those imposed by commercial plans in the U.S., for better or worse. We take this to mean that there really aren’t all that many ways to control costs in all areas. Ultimately, you have to say “no” in some fashion.

    In another JHPPL commentary, Chris Ham makes what we think is the most important point:

    The broader lesson from Singapore is that health care reform continues to swing back and forth between a belief in market forces and the use of government regulation. In reality, health policy is replete with examples of market failures and government failures as policy makers experiment with different instruments. The variety of health care systems developed around the world indicates that the choice is neither pure markets nor government control but the balance to be struck between the two. And to return to our starting point, where the balance is struck will be shaped by social values and the political choices that follow from them.

    There’s one more challenge in assessing Singapore’s health system, raised by Barr.

    The government is highly secretive about the detailed operation of its system and has not made either the data source or method of its calculations available to anyone outside those in the Civil Service and the government who need to know—not to the public; not to academic researchers.

    This probably explains why, though there is a literature on the Singapore health system, it’s a modestly sized one. It should also give anyone serious pause before claiming that Singapore’s system, and its results, can be generalized without reservation.

    By the way, JHPPL also published a letter to the editor by Meng-Kin Lim (we gather this is his homepage) and responses from Michael Barr and William Hsiao. Finally, here’s a paper that compares Shanghai’s experience with medical savings accounts to Singapore’s.

    The bottom line is that Singapore isn’t simply “cost-sharing”, “free market”, “competition”, and a “lack of government involvement”. If you endorse Singapore’s health care system, you’re buying into many things, and some truths, that libertarians and conservatives claim to dislike. We acknowledge that more cost sharing can reduce spending. But if that’s the only thing you endorse, then you’re not talking about Singapore.

  • NEJM letters on the Oregon Medicaid study

    The following is jointly authored by Austin, Aaron, and Sam Richardson. Our letter to The New England Journal of Medicine (NEJM) was rejected on the grounds that our point of view would be adequately represented among the letters accepted for publication. Those letters are now published.

    The letter that expresses ideas most similar to ours is by Ross Boylan:

    The abstract in the article by Baicker et al. states that “Medicaid coverage generated no significant improvements in measured physical health.” This is a misleading summary of the data reported in their article. The best estimates are that the Medicaid group had better outcomes than the control group according to most measures (see Table 2 of the article). The problem is that these findings are not statistically significant.

    So, the effects might have been zero. That is not the same as saying that they were zero, or even that they were small. Buried toward the end of the article is the statement, “The 95% confidence intervals for many of the estimates of effects . . . include changes that would be considered clinically significant.”

    Nevertheless, almost all the article, the related editorial, and related news reports, opinion pieces, and online discussions proceeded as if the effects had been found to be zero.

    If one objects, on the basis of a lack of statistical certainty, to the simple summary that the Medicaid group had better outcomes, then one should describe the substantive meaning of the confidence interval. An honest summary is that it is quite likely there were positive effects, though it is possible that they were zero or negative.

    Still, there is not one letter that dives deeply into the issues of power, as we have. (See also this, that, and this.)

    Katherine Baicker and Amy Finkelstein, two of the original paper’s authors and leads on the wider study, wrote a response to the letters, which you can read in full at NEJM. One excerpt:

    In some cases, we can reject effect sizes seen in previous studies. For example, we can reject decreases in diastolic blood pressure of more than 2.7 mm Hg (or 3.2 mm Hg in patients with a preexisting diagnosis of hypertension) with 95% confidence. Quasi-experimental studies of the 1-year effect of Medicaid showed decreases in diastolic blood pressure of 6 to 9 mm Hg.

    Of course it is true that the study results reject, with 95% confidence, the decreases in diastolic blood pressure mentioned in this quote. However, as Aaron wrote here and here, the prior work cited by the authors that suggests a 6-9 mm Hg drop in diastolic blood pressure was on a population of patients with hypertension. As he explained, and as one of us did again here, only a small fraction of the Oregon Health Study sample had high blood pressure:

    A key point is that blood pressure reduction should only be expected in a population with initially elevated blood pressure, which was the focus of the prior literature referenced above. In contrast, the headline OHIE result is for all study subjects, only a small percentage of whom had elevated blood pressure at baseline. Unfortunately, there is no reported OHIE subanalysis focused exclusively on subjects with hypertension at time of randomization. Depending on which metrics from the published results you examine, between 3% and 16% of the sample had elevated blood pressure at baseline. Taking the high end, 16% x 5 mm Hg = 0.8 mm Hg is in the ballpark of a reasonable expectation of the reduction in diastolic blood pressure the OHIE could have found (it was also the study’s point estimate) were it adequately powered to do so. Was it?

    No, which you can read about in full here. (And, no, power would still not be adequate even at twice this reasonable expectation.)
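    To spell that arithmetic out, here is the dilution calculation made explicit. The figures all come from the quotes above; nothing here is new data.

    ```python
    # The expected full-sample blood pressure effect is diluted by the small share
    # of subjects with elevated blood pressure at baseline.
    share_elevated_at_baseline = 0.16   # high end of the 3%-16% range cited above
    drop_among_elevated_mm_hg = 5.0     # per-person reduction used in the quote above

    expected_full_sample_drop = share_elevated_at_baseline * drop_among_elevated_mm_hg  # 0.8 mm Hg
    smallest_rejectable_drop = 2.7      # mm Hg; the study could only rule out drops larger than this

    print(f"Expected full-sample drop: {expected_full_sample_drop:.1f} mm Hg")
    print(f"Drops the study could rule out: more than {smallest_rejectable_drop} mm Hg")
    ```

    A realistic effect of about 0.8 mm Hg sits well inside the range the study could not distinguish from zero.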

    We have high regard for the study and its authors. The limitations of power are a function of the sample, which was well beyond their control. Nevertheless, we believe those limitations need to be kept in mind for a complete understanding of the study’s findings.

  • More Medicaid study power calculations (our rejected NEJM letter)

    Sam Richardson, Aaron, and Austin submitted a more efficiently worded version of the following as a letter to The New England Journal of Medicine (NEJM). They rejected it on the grounds that our point of view would be adequately represented among the letters accepted for publication. Those letters are not yet published.

    The Oregon Health Insurance Experiment (OHIE), a randomized controlled trial (RCT) of Medicaid, failed to show statistically significant improvements in physical health; some have argued that this rules out the possibility of large effects. However, the results are not as precisely estimated as expected from an RCT of its size (12,229 individuals) because of large crossover between treatment and control groups.

    The Experiment’s low precision is apparent in the wide confidence intervals reported.  For example, the 95% confidence interval around the estimated effect of Medicaid on the probability of elevated blood pressure spans a reduction of 44% to an increase of 28%.

    We simulated the Experiment’s power to detect physical health effects of various sizes, along with the sample size that would have been required to detect each effect with 80% power. As shown in the table below (click to enlarge), the study is very underpowered to detect clinically meaningful effects of Medicaid on the reported physical health outcomes. For example, it had only 39.3% power to detect a 30% reduction in the share of subjects with elevated blood pressure, and it would have required 36,100 participants to detect that effect at 80% power. Moreover, an effect that large is substantially more than could be expected from the provision of health insurance.

    [Table: OHIE power simulations (power and required sample sizes by outcome and effect size)]

    To estimate power levels shown in the table, we ran 10,000 simulations of a dataset with 5406 treatments and 4786 controls (the study’s reported effective sample sizes given survey weighting). We took random draws for Medicaid enrollment based on the probabilities reported in the study. We took random draws for each outcome: probabilities for the non-Medicaid population are given by the control group means from the study, adjusted for the 18.5% crossover of controls into Medicaid; the probability of the outcome for those on Medicaid is X% lower than the probability for those not on Medicaid, where X% is the postulated effect size.

    For each simulated dataset, we regressed the outcome on the indicator for treatment (winning the lottery), and the power is the percentage of the 10,000 iterations for which we rejected at p = 0.05 the hypothesis that winning the lottery had no effect on the outcome. To estimate the total sample size required for 80% power, we conducted a grid search for the lowest sample size that provided 80% probability of rejecting the null hypothesis, running 1000 simulations for each sample size. Our required sample sizes account for sampling weights, and are therefore comparable to the 12,229 total subjects from the study. We do not account for clustering at the household level or controls for household size (and demographic controls from the blood pressure analysis).

    Simulations were validated by comparing a subset of results to results computed analytically based on the 24.1 percentage point increase in Medicaid enrollment among treatments. Our simulation Stata code is available for download here. The analytic method is described here.
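    For readers who don’t use Stata, here is a simplified re-sketch of the simulation logic in Python. It is illustrative only: it drops the survey weights, household clustering, and demographic controls described above, substitutes a two-sample t-test for the regression, and the control-group prevalence plugged in at the bottom is a placeholder rather than a number from the study.

    ```python
    # Simplified power simulation for an intent-to-treat comparison with
    # treatment-control crossover, following the procedure described above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    N_TREAT, N_CONTROL = 5406, 4786   # effective sample sizes noted above
    P_MEDICAID_CONTROL = 0.185        # crossover: share of controls who got Medicaid anyway
    P_MEDICAID_TREAT = 0.185 + 0.241  # lottery winners enroll at a rate 24.1 points higher

    def simulated_power(effect_size, control_group_mean, n_sims=2000, alpha=0.05):
        """Share of simulated trials in which winning the lottery has a significant
        effect (at `alpha`) on a binary outcome, given a relative effect of Medicaid
        (`effect_size`) and the observed control-group outcome rate."""
        # Back out the outcome rate for people NOT on Medicaid: the observed
        # control-group mean mixes crossovers (on Medicaid) with everyone else.
        p_off = control_group_mean / (1 - P_MEDICAID_CONTROL * effect_size)
        p_on = p_off * (1 - effect_size)

        rejections = 0
        for _ in range(n_sims):
            won_lottery = np.r_[np.ones(N_TREAT), np.zeros(N_CONTROL)].astype(bool)
            enroll_prob = np.where(won_lottery, P_MEDICAID_TREAT, P_MEDICAID_CONTROL)
            on_medicaid = rng.random(won_lottery.size) < enroll_prob
            outcome = (rng.random(won_lottery.size) < np.where(on_medicaid, p_on, p_off)).astype(float)
            # Intent-to-treat test: compare outcome rates by lottery assignment.
            _, p_value = stats.ttest_ind(outcome[won_lottery], outcome[~won_lottery])
            rejections += p_value < alpha
        return rejections / n_sims

    # Example: power to detect a 30% relative reduction in an outcome whose
    # control-group prevalence is 16% (an illustrative placeholder value).
    print(simulated_power(effect_size=0.30, control_group_mean=0.16))
    ```

    Plugging in the study’s actual control-group means, adding back the weights and clustering, and running 10,000 iterations per scenario is the kind of calculation behind the table above.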

    The Experiment was carefully conducted and provides a wealth of new information about the effects of Medicaid on household finances, mental health, and healthcare utilization. However, it was underpowered to provide much insight into the physical health effects of Medicaid.

    ***

    Not included in our letter were the charts at the end of this post that relate effect size to power for all the measures in the study’s Table 2. To help you translate the proportional effect sizes into absolute values, first, here’s Table 2:

    [Table 2 from the OHIE]

    You can multiply the relative effect sizes in the charts below by the control group mean to convert them to an approximation of the absolute, postulated effect size with Medicaid coverage. The horizontal line at 80% is the conventional cutoff for adequate power.

    The relative effect sizes in the middle chart below may seem small. But, remember, this is for the entire sample of subjects, most of whom are not candidates for improvement in these measures. They don’t have a blood pressure, cholesterol, or glycated hemoglobin problem. When you adjust effect sizes for the proportion of subjects with such issues and compare those to the literature, you find that the study was underpowered. We’ve already blogged about this here and here. For Framingham risk scores, the literature is uninformative, and we cannot conclude whether the study was adequately powered for those.

    Hopefully you can match up the lines in these charts with Table 2 from the study, above. If you have any questions, raise them in the comments.

    [Chart: power vs. relative effect size for the discrete (binary) outcomes in Table 2]

    [Chart: power vs. relative effect size for the continuous outcomes in Table 2]

    [Chart: power vs. relative effect size for the Framingham risk scores]
