• One journal’s web 2.0 strategy

    Edward Alan Miller in an editorial in the forthcoming issue of the Journal of Health Politics, Policy and Law:

    Our long-range goal is to aggregate a variety of Web 2.0 technologies—blogs, microblogs, social networking sites, file-sharing sites, mobile applications—into an integrated platform that facilitates an ongoing interactive dialogue among the journal’s editors, board members, authors, and readership.

    The potential role of social media in heightening journal impact is reflected in several recent studies. One example is a report that social media may represent a largely untapped postpublication review resource for assessing paper impact, since articles that appear in Wikipedia have significantly higher citation counts than those that do not (Evans and Krauthammer 2011). Another example is a report that tweets within the first three days of article publication can predict citations, which normally take years to accumulate, with social media activity either increasing citations or reflecting the underlying qualities of the articles that also predict citations (Eysenbach 2011). In short, there appears to be a mutual interaction between social media and scholarly impact. On the one hand, social media “buzz” can lead to citations—that is, researchers being influenced by growing interest on social media. On the other hand, use of social media by researchers can lead to “buzz”—that is, researchers creating interest, say, through Twitter, Facebook, LinkedIn, and other websites.

    Our near-term goals are manifold. First, we would like to increase awareness of the journal and its content—that is, engage in more effective outreach. Second, we would like to engage readers, editors, political scientists, health policy researchers, and others in an ongoing conversation, either continuing discussions begun in the journal, say, in the journal’s Point/Counterpoint section, launching new discussions stimulated by what was published, or enabling real-time immersion into current issues and debates in health policy and politics. Third, we would like to broaden the scope of social networking opportunities available to members of the JHPPL community.

    Though some of the details of this strategy aren’t clear to me, the general thrust appears sensible. The way I’d organize thinking in this area is as follows:

    1. The quality and importance of the journal articles themselves come first. Without high-quality, scholarly work, one has little to blog and tweet about. Put another way, if you want people to get excited about your content—excited enough to blog and tweet about it—you’d better bring the good content first! Yet another way to put it is, don’t fall into the clickbait trap. That won’t cut it for a research publication, for which credibility and a reputation for something other than the “sensational” are paramount.
    2. It would seem foolish not to leverage the existing community favorably predisposed to the journal and active in social media. (Bill wrote about this here.) Clearly this requires purposeful outreach. That could be more than just emailing existing distribution lists to say, “Hey, we have a Twitter account!” It might include, for example, sharing embargoed copies of key, forthcoming papers with social media-active scholars so that they can be prepared to help establish the sought-after “buzz.”
    3. To the extent possible, every product of the journal should be optimized for social media dissemination. This includes, for example, ungating key papers (perhaps for a limited time) and including share buttons wherever possible.
    4. Do not overlook re-disseminating older work that becomes relevant as the policy debate shifts. A big mistake that most organizations make is to only (or mostly) promote what is new, not necessarily what is relevant. Just because it came out last year doesn’t make it obsolete, particularly when it’s still one of the latest and greatest papers on whatever policy issue is being discussed right now!
    5. Finally, if possible, be a source of good content that is published elsewhere. This adds credibility and is a way to demonstrate that one is about the ideas not just the brand. (But, yeah, it builds one’s own brand too.) Heck, if JHPPL or some other journal had a blog, why not blog on great work that appears elsewhere? Become a go-to curator, not just a journal article publisher. The wider audience you develop doing so will still be there when you blog on your own journal’s work. That’s good!


    Comments closed
  • Bigger health companies: Good for Medicare, maybe not for others

    The following appeared on The Upshot while I was on vacation (copyright 2014, The New York Times Company).

    Although Obamacare’s health insurance expansion has directly provided coverage to only about 4 percent of Americans, changes embedded in the Affordable Care Act could affect many more people, and not always in good ways.

    One such change is a provision that allows organizations that join forces to manage care for a large population to receive bonuses from Medicare for controlling costs and hitting quality targets (or face penalties if they do not). Medicare’s Accountable Care Organization model, as it’s called, favors larger health provider organizations that can manage the costs and quality of all types of care Medicare pays for, from primary care to high-intensity hospitalization and everything in between.

    If that model works, it’ll be welcome news for Medicare and its beneficiaries. But health economists, myself included, have long worried about what larger provider organizations mean for private health insurance plans, the ones that serve most Americans under 65, through employer-based coverage or policies purchased on the Obamacare exchanges.

    Larger organizations have greater market power to demand higher prices from those plans for doctor visits and hospital stays. And higher prices paid by plans translate into higher premiums for consumers. (This doesn’t apply to Medicare because its prices are set by the government, and no provider organization has so much market clout that it can force Medicare to raise prices.)

    The competitive advantages of greater size and scope are not lost on health care organizations: Bigger is better for the bottom line. In the past, hospitals and physician groups have merged with one another and with insurers to form larger organizations that command greater market clout and drive up private prices and premiums. A wave of hospital mergers in the 1990s was followed by accelerated costs of care in the 2000s. Researchers have generally found that hospital consolidation has increased price without commensurate increases in quality.

    [Chart]

    A more recent trend has been the direct employment of physicians by hospitals. When hospitals hire physicians or assimilate physician practice groups, they seek to capture more physician referrals and gain greater leverage over insurers in negotiating prices for access to both hospitals and doctors.

    Recent work by scholars from the University of Pennsylvania highlights the trend in hospital employment of physicians. As the chart shows, the number of doctors employed by hospitals increased to over 120,000 from 80,000 between 2003 and 2011. About 13 percent of all doctors are now employed directly by hospitals. Other work by Stanford researchers shows that the integration of hospitals with physicians in this way has increased the prices paid to hospitals by private plans. Though these studies predate the law encouraging larger organizations, it’s a reasonable bet that the consolidation trend has continued.

    So while some provisions of the health reform law — like penalties on hospitals that have a high proportion of Medicare patients who must be readmitted within 30 days of a hospital stay — may already be improving care and health system efficiency, others, like this one, bear watching. What is good for Medicare and its patients may not always be good for the rest of Americans.


    Comments closed
  • I’m almost back (notes from my vacation)

    Our four days of hiking in the White Mountains were spectacular, though the trip kicked my butt. Actually, it was my knees that felt kicked. For the first time in my life, they revolted, refusing to take the downhills pain free. It started on day one, and by the end of the third day, I could only manage about one mile per hour on rocky downhills. Each step felt like a hammer blow to the knee.

    Still, I made it through with the help of some route changes, bandages, poles, technique adjustment, grimacing, and cursing. It wasn’t the challenge I anticipated, but I was tested. Nevertheless, I ended the hike happy and pain free, having missed a few summits, but none on the days with decent views.

    The smartest move of the week was my wife’s brilliant idea to bring our bikes to Montreal, which we visited after hiking. We rode many miles daily (no knee pain), seeing far more of the city than we would have otherwise. The network of dedicated (and often median-separated) bike lanes is vast.

    An additional benefit of lots of biking is that we burned more calories and, so, could eat a lot more. Everyone told us Montreal has a lot of great food. They were right. Our best meals were at Laurie Raphael (h/t Marie Ventrone) and Robin des Bois (which my wife found in a guide book), though the Chinese tea house in Old Montreal was also delightful. St. Viateur Bagels (h/t Tyler Cowen) was also good. The bagels were a bit more like dense, soft pretzels than New York-style bagels are.

    Old Montreal and other parts close to center city were fun enough, but our favorite destinations were further out: the botanical gardens, St. Helen’s Island, Parc Jean-Drapeau on Notre Dame Island, Mt. Royal Park, and Jean-Talon Market.

    Rufus Wainwright was the highlight of the acts we saw at the Jazz Festival.

    I’ve got some catching up to do, so blogging, tweeting, and email will be slow for another day or so.


    Comments closed
  • Stairway to heaven

    More than a rock ballad.



    Comments closed
  • Hard disk, circa 1956

    Via David Grann, the specs are 5 megabytes and over one ton.

    [Photo of the hard disk]


    Comments closed
  • Instead of blogging

    Instead of blogging, and other, regular work, I’m spending today at the Comparative Effectiveness Public Advisory Council (CEPAC) meeting in Burlington, VT. Our topic: treatment for opioid dependence. You can download meeting materials here and read some background here.

    Then, this weekend through next I’ll be largely off-internet, hiking in the White Mountains of New Hampshire and being a tourist in Montreal. Apart from a pre-scheduled thing or two—one of which will be at The Upshot—you won’t hear from me, and I’ll be largely unreachable.

    (No, this isn’t my annual week off the internet. That’s in July. This is an extra week off the internet, something I still recommend everyone do, particularly those of you who are afraid to do so!)


    Comments closed
  • Literature update: Reference pricing and the effect of cuts to Medicare hospital prices

    Here are some notes from a couple of recent papers in areas I’ve blogged about in the past. They’re worth knowing about.

    1) Reference Pricing: “Paying on the Margin for Medical Care: Evidence from Breast Cancer Treatments,” by Liran Einav, Amy Finkelstein, and Heidi Williams (NBER)

    Medical expenditures in the US are high and increasing. [...] A natural economic solution which has not received much attention is a “top-up” design in which health insurance contracts would cover the cost of a baseline treatment, and patients could choose to pay the incremental cost of more expensive treatments out of pocket.

    This is also called “reference pricing.”

    [T]o our knowledge, [top-up design] has not received much attention in discussions of insurance coverage for different treatments, with the exception of a recent paper by Baicker, Shephard and Skinner (2012) who use a calibrated simulation model to explore this idea.

    See also Robinson and MacPherson (2012), Robinson and Brown (2013), and Pearson and Bach (Health Affairs, 2010).

    [While] evidence from randomized clinical trials has suggested no average difference in survival between mastectomy and lumpectomy with radiation (Fisher et al., 1985), mastectomy tends to be less expensive (Polsky et al., 2003).

    The approximately $10,000 difference in price between lumpectomy and mastectomy is primarily the cost of post-lumpectomy radiation.

    Using data on over 300,000 breast cancer patients in California diagnosed between 1997 and 2009, combined with data on the location of radiation treatment facilities, the authors estimate the welfare (consumer surplus) loss* of full coverage for lumpectomy and of no coverage for lumpectomy, each relative to using mastectomy as a reference price for lumpectomy. To estimate the demand curve for lumpectomy, the authors convert travel time to a radiation treatment facility into a price, monetizing it with the average hourly wage from the Bureau of Labor Statistics.

    A standard course of post-lumpectomy radiation therapy requires 25 round-trips to a radiation facility, spread over 5 weeks. Our key economic assumptions are that travel time can be monetized and that preferences for reduction in travel time are analogous to preferences for any other equivalent price difference. These assumptions allow us to use the variation in distance to the radiation facility as if it were variation in the relative price of lumpectomy, thus identifying the demand curve. [...] [We also assume] that there are not omitted patient characteristics correlated with both distance and demand for lumpectomy.
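    The monetization described above is simple arithmetic. Here is a minimal sketch; the only number taken from the paper is the 25 round-trips, while the wage and travel times are hypothetical, for illustration only:

```python
# Travel-time monetization in the spirit of Einav, Finkelstein, and Williams:
# a standard course of post-lumpectomy radiation requires 25 round-trips, so
# the implied relative price of lumpectomy rises with distance to a facility.
# The wage and travel times below are illustrative, not from the paper.

N_TRIPS = 25  # round-trips per standard radiation course (from the paper)

def implied_travel_cost(round_trip_hours, hourly_wage, n_trips=N_TRIPS):
    """Monetized travel cost: trips x hours per trip x value of an hour."""
    return n_trips * round_trip_hours * hourly_wage

# A patient one hour (round trip) from a facility, valuing time at $20/hour:
print(implied_travel_cost(1.0, 20.0))  # 500.0
# A patient four hours away faces a fourfold-higher implied relative price:
print(implied_travel_cost(4.0, 20.0))  # 2000.0
```

    Note how small these monetized costs are relative to the roughly $10,000 incremental price of lumpectomy; that gap is why estimating the full demand curve requires extrapolating far beyond the observed variation.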


    We estimate, for example, that the efficient “top-up” policy – in which patients pay $10,000 on the margin for a lumpectomy – increases the lumpectomy rate by 15-25 percentage points relative to the UK-style “no top-up” regime, and decreases the lumpectomy rate by 35-40 percentage points relative to the US-style “full coverage” regime. Our estimates suggest total welfare gains from the “top-up” policy of between $700 and $1,800 per patient relative to a “no top-up” UK-style policy and between $700 and $2,500 per patient relative to a “full coverage” US-style policy.

    Those are the “ex-post” results, after onset of breast cancer. Considering ex-ante welfare, before onset of breast cancer, things change:

    The results indicate how the (total) efficiency ranking of the top-up policy relative to the US-style full coverage policy depends on risk aversion. For the lowest value of risk aversion we consider, social welfare is higher under the top-up policy, but for higher values of risk aversion it is higher under the US-style full coverage policy. The full-coverage policy always delivers higher total welfare than the UK-style “no top up” policy for our calibrated values. This illustrative analysis suggests that focusing solely on ex-post efficiency analysis could miss an important part of the picture, and that the ex-ante risk exposure generated by top-up policies could be much more costly than the allocative efficiencies these policies may provide.

    This makes slightly more formal the general knock on reference pricing—that it exposes consumers to greater risk. The paper’s charts are excellent. Here’s just one for the ex post consumer surplus analysis.


    “L” in the axis labels is for “lumpectomy.” Area DEC is the consumer surplus loss of full lumpectomy coverage, relative to reference pricing. Area AEB is the consumer surplus loss of no lumpectomy coverage, relative to reference pricing.

    But, see those seven dots in the lower right? Those are the data points from which the entire demand curve is estimated. As the authors are fully up-front about, this is an extreme, out-of-sample extrapolation: variation in the travel-time-cost of radiation therapy doesn’t come anywhere near the full range of price over which the demand curve extends. In light of this, what I like about the paper is that it makes explicit some welfare issues pertaining to reference pricing. In terms of leveraging data to actually estimate the size of consumer surplus gain/loss, there are significant limitations.

    2) Medicare Hospital Price Cuts: “Cutting Medicare Hospital Prices Leads to a Spillover Reduction in Hospital Discharges for the Nonelderly,” by Chapin White (Health Services Research)

    A demand inducement spillover occurs when one payer reduces the prices it pays and providers respond by increasing the volume of services provided to other payers’ patients. [...] A capacity spillover occurs when payments for one group of patients become more or less generous, and, as a result, providers adjust their capacity and change the volume of services provided to all patients. [...] Providers appear to adopt a general treatment style that they apply to their patient populations, rather than tailoring treatments based on each patient’s coverage [a treatment pattern spillover].

    Using data for 129 markets in ten states over the years 1995–2009, White studied the effect of changes in Medicare hospital prices on

    the number of hospital discharges and days provided to the nonelderly by hospitals located in each market, and the mean nonelderly length of stay. We also measure the share of discharges for the elderly and the share of days provided to the elderly—these shares capture any possible shifts in hospital output away from the elderly.


    [R]egression results show that decreases in Medicare prices are associated with decreases in inpatient hospital utilization among the nonelderly. A 10-percent Medicare price cut is associated with around a 5-percent decrease in discharges among the nonelderly and an even larger decrease in hospital bed-days. Changes in the Medicare price are not associated in any statistically robust way with changes in the nonelderly length of stay, nonelderly case mix, or with changes in the share of utilization provided to the elderly. These findings suggest that hospitals have only limited ability or willingness to shift their inpatient services away from the elderly in response to Medicare price cuts.

    To give a sense of the magnitudes involved, we extrapolated our results to simulate the nationwide utilization effects of a 10-percent decrease in the Medicare price in 2012. That price reduction roughly matches the accumulated 10-year effect of the ACA on Medicare hospital prices. The reduction in the Medicare price leads to more than 1 million fewer discharges, and more than 9 million fewer hospital days, with the utilization reductions roughly evenly split between the elderly and nonelderly.

    Unless hospital prices for the nonelderly go up considerably in response to Medicare cuts (and prior work shows they don’t) or utilization is shifted to other settings, this work suggests that Medicare price reductions might reduce health care spending beyond the Medicare program.
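    As a rough check on magnitudes, White's headline estimate implies a spillover response of about 0.5 percent of nonelderly volume per percent of Medicare price. A minimal sketch, where the 0.5 ratio is derived from the quoted 10-percent/5-percent figures and nothing else is from the paper:

```python
# White's estimate: a 10% Medicare hospital price cut is associated with
# roughly a 5% drop in nonelderly discharges, i.e., a spillover response
# of about 0.5 (percent volume change per percent price change).

SPILLOVER = 0.5  # implied by the quoted 10% -> 5% association

def nonelderly_discharge_change_pct(medicare_price_change_pct):
    """Approximate spillover change in nonelderly discharges, in percent."""
    return SPILLOVER * medicare_price_change_pct

print(nonelderly_discharge_change_pct(-10.0))  # -5.0
```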

    * If “consumer surplus” is a foreign concept, I recommend this short, accessible book.


    Comments closed
  • Blogging: Is it good or bad for journal article readership?

    While at the AcademyHealth Annual Research Meeting last week I had several conversations with editors and board members of various journals, among other attendees, about how blog summaries of academic literature change readership of journal articles. Do blog posts broaden access to people who would not otherwise ever know anything about health policy-relevant research? Or do they allow people who might otherwise read academic papers to skirt by without doing so? (And, if so, is that really a bad thing?)

    My guess: both! Let’s face it, almost nobody who isn’t a researcher or a policy wonk is going to read an article in an academic journal. To the extent that a blog summary reaches beyond the rarefied research and policy communities, it extends the reach of the literature and the ideas it conveys. If one is interested in building a case for the broad relevance of health services and health economics research (among other subject areas), this is unambiguously good.

    It is no doubt true, however, that many in the field use a blog summary as a substitute for other ways of engaging the literature, including reading the papers it references. Yet, I submit that few read an entire paper anyway. What people often do is read the abstract, then maybe the introduction and concluding discussion. Perhaps they add to that a light skim of other sections. Very often a blog post includes more about a paper than is in an abstract, and hits many of the points made in a concluding discussion, as well as some that aren’t made. So, though a blog post may be a substitute, it may not be substituting for any less engagement with a paper’s original and related ideas.

    Finally, I am also confident that for some a blog post is a complement to reading the whole thing. Scanning tables of contents for possible papers of interest is, perhaps, the floor of habitual engagement with the literature. Academics and researchers should probably do at least that. Yet, I know from conversations that even this gets overlooked by many. The torrent of literature is so voluminous these days that even keeping up with tables of contents is not so easy for the busiest researchers and academics. At least a post on one’s favorite blog might bring to one’s attention a paper that one really does want to read.

    What I think may worry some journal editors, board members, and other scholars is that blog posts might be “dumbing down” research to reach broader audiences. (I imagine Twitter further heightens this unease, even if it does expand the potential audience.) That’s certainly a worthwhile concern. It is possible to lose valuable nuance when attempting to simplify and interpret. But broadening access need not mean distorting the message. It all depends on how it’s done. I would hope TIE could be (and is) viewed as part of a “solution” to increasing understanding of the value and content of research. I would most certainly be upset if it was (or is) viewed as “distorting” or “dumbing down” research, or somehow as “the enemy” or a “bad influence.” If anyone in the field feels that way, I encourage him or her to bring that to my attention.

    I emailed about this with Nicholas Bagley, who responded

    When it comes to my work, I’m delighted when someone blogs about it. I figure only a tiny sliver of the population has the time to read the whole thing. The chance to expose more people to my ideas is exciting, even if they get just a simpler version of those ideas. And I’m skeptical that those who are really interested in the topic will decline to do so because they’ve read a summary; probably they wouldn’t have read it anyhow.

    Also, if I had to read everything you and Aaron blogged about, I wouldn’t do much else. Reading summaries gives me a breadth of knowledge that orients me when I engage more deeply in a particular set of problems.

    Responding to an early draft, Bill Gardner wrote me,

    I think that blogs can give you free space to think across disciplines and publish things that do not have a home in specialist journals. They also allow you to publish a more science-based commentary on current events than even an op-ed page will allow.

    (See also Bill’s recent post on research translators.) Comments are open on this post for one week so you can weigh in too. Having said that, I’ll be away and off-line for much of the next week, so please excuse the very long delay in posting your comment.


    Comments closed
  • What we know about hospital networks of exchange plans

    McKinsey did some impressive work collecting hospital network data for 2014 exchange (“marketplace”) plans.

    We have [...] enhanced our hospital network database to include all products in all tiers in all 501 rating areas in the U.S. [...] Our database includes all 282 payors filing on the 2014 exchanges and all 4,773 acute care hospitals in the U.S. The payors offered a total of 20,818 on-exchange products across the five metal tiers; these products included 2,366 unique individual exchange networks.

    I believe one has to hit and scrape marketplace and/or plan websites repeatedly to collect such data, so it’s either a large, manual job or one requiring some clever programming. I have little doubt that many enterprising graduate students are building or have built a similar database, but McKinsey’s report is the first product based on such a thing that I’ve seen. It’s an interesting read.

    McKinsey categorized networks as broad (“more than 70 percent of all hospitals in the rating area participating”), narrow (“31 to 70 percent of all hospitals in the rating area participating”), and ultra-narrow (“30 percent or less of all hospitals in the rating area participating”). Here are four of the many findings:

    1. “Broad networks are available to close to 90 percent of the addressable population” and cost 13%-17% more in premium relative to narrow networks.
    2. “There is no meaningful performance difference between broad and narrowed exchange networks based on Centers for Medicare and Medicaid Services (CMS) hospital metrics such as the composite value-based purchase score as well as its three sub-components (outcome, patient experience, and clinical process scores).”
    3. “26 percent of those who indicated they had enrolled in an ACA plan were unaware of the network type they had selected.”
    4. “Among the new entrants, Medicaid payors and provider-based plans offer the highest percentage of ultra-narrow networks (57 percent and 31 percent, respectively).”
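    The breadth categories McKinsey uses are simple share thresholds. A minimal sketch of the classification rule:

```python
def network_breadth(share_participating):
    """Classify an exchange network by the share of rating-area hospitals
    that participate, per McKinsey's definitions."""
    if share_participating > 0.70:    # "more than 70 percent"
        return "broad"
    elif share_participating > 0.30:  # "31 to 70 percent"
        return "narrow"
    else:                             # "30 percent or less"
        return "ultra-narrow"

print(network_breadth(0.85))  # broad
print(network_breadth(0.50))  # narrow
print(network_breadth(0.25))  # ultra-narrow
```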

    Finding 1 certainly gives the impression that broad networks are widely available, albeit for a price. However, it’s also true that, relative to incumbents’ 2013 individual market offerings, the proportion of plans with broad networks has fallen, as shown below.


    To the extent one believes CMS hospital metrics to be good measures of quality, finding 2 should be of some comfort to those who only have access to or can afford more narrow network plans.

    Finding 3 surprises me. I would have expected a greater percentage of enrollees to have no idea what kind of network their plan offers. Combining findings 2 and 3, maybe we need not be so concerned about potential opacity of network extent and quality. (Important caveat 1: the analysis in the report applies only to hospitals, not to physician networks. Important caveat 2: given the survey methodology, the results may not generalize.)

    Finding 4 is what I would expect of Medicaid- and provider-based plans. Medicaid’s relatively low payment rates aren’t going to appeal to a wide network of hospitals. And one purpose of provider-based plans is to cater to the providers that offer them, even to the point of foreclosing other plans’ access to those providers.

    I would have liked to see a clearer picture of how network extent varies by metal tier. The silver tier is of particular interest since premiums of plans in that tier drive premium tax credits and it is only for plans in that tier that cost-sharing subsidies are available. It also, as it turns out, is the most popular plan type.

    The report has many other findings and charts. It’s worth a look.


    Comments closed
  • Relative value health insurance

    In our post on The Upshot last week, Amitabh Chandra and I discussed an idea proposed by Professor Russell Korobkin, relative value health insurance (RVHI, though we didn’t use that term in the piece). In a market for RVHI, plans would be transparently ranked according to the value (degree of cost effectiveness) of services they covered. We wrote

    [A] bronze plan could cover hospitalizations and visits to doctors for emergencies and accidents; genetic diseases; and prescription drugs that keep people out of hospitals. A silver plan could cover what bronze plans do but also include treatments a large majority of physicians find useful. A gold plan could be more inclusive still, adding coverage, for instance, for every cancer therapy shown to improve patient outcomes (no matter the cost) as long as it was delivered at a leading cancer center. Finally, a platinum plan could cover experimental and unproven cancer therapies, including, for example, that proton beam.

    Though Korobkin’s paper is, perhaps, the most thorough consideration of this idea, it’s not the first to propose it. Mark Pauly raised it in his Wussinomics paper (covered on TIE here).

    In particular, one could imagine (as I have suggested before) that insurers offer different plans choosing different cost effectiveness thresholds for new technology, and then consumers could pick the plan with the premium and technology level and growth rates that matched their preferences (Pauly 2005). Not gold, silver, and bronze, but slow-mo versus everything latest. This is Enthoven’s ideal model of managed competition, but it has never really happened. To be sure, there are bargain basement HMOs that will give you modestly lower premiums than the slap-on-the-wrist PPO but, apart from varying the size of networks, plans have never systematically varied other dimensions of care, like the amount and form of new technology, and competed vigorously on that basis. Instead they waste their time trying to get people to exercise and eat less.

    Pauly (2005) is titled “Competition and New Technology” and says a bit more.

    Plans can thus adopt different policies toward new technologies, and consumers who have a choice among plans can select them based on differences in coverage (broadly defined to include not only reimbursement but also rules, limits, and incentives) of new technology, the implied differences in the growth of premiums, and the value that the consumer places on one relative to the other. As long as consumers face premium differentials that reflect cost, they can in principle choose the optimal plan to limit (or not) the use of new technology. Some plans might permit all new technologies to be used without limit; others might limit them. [...]

    For example, [] a consistent strategy would be to set a benchmark value of dollars per QALY and then adopt all new technologies with costs below that level and none above. Setting a lower threshold would yield a lower rate of growth in spending; plans could therefore vary based on what level they chose for their threshold. [...] An alternative would be a bottom-up strategy in which the plan set a target level for spending growth and then used cost-effectiveness analysis to choose the set of new technologies whose cost fit within the limit and which maximized the number of new QALYs delivered. [...]

    Having to face trade-offs between better things is preferable to no trade-offs at all. But dealing in a forthright way with the future path of this effort is surely important, and rejuvenated markets with relevant health plan choices could help a lot.
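    Pauly's two selection rules (a dollars-per-QALY threshold, and a spending target filled in order of cost effectiveness) can be sketched as follows. The technologies, costs, and QALY figures are entirely hypothetical:

```python
# Hypothetical new technologies: (name, cost per enrollee, QALYs gained).
TECHS = [
    ("drug A", 40_000, 1.0),      # $40,000 per QALY
    ("device B", 150_000, 1.0),   # $150,000 per QALY
    ("scan C", 30_000, 0.5),      # $60,000 per QALY
    ("therapy D", 500_000, 0.8),  # $625,000 per QALY
]

def adopt_by_threshold(techs, dollars_per_qaly):
    """Top-down rule: cover every technology under the $/QALY benchmark."""
    return [name for name, cost, qalys in techs
            if cost / qalys <= dollars_per_qaly]

def adopt_by_budget(techs, budget):
    """Bottom-up rule: fix a spending target, then add technologies in
    order of cost effectiveness until the budget is exhausted (greedy)."""
    chosen, spent = [], 0
    for name, cost, qalys in sorted(techs, key=lambda t: t[1] / t[2]):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(adopt_by_threshold(TECHS, 100_000))  # ['drug A', 'scan C']
print(adopt_by_budget(TECHS, 200_000))     # ['drug A', 'scan C']
```

    A plan choosing a lower threshold, or a tighter spending target, covers less new technology and grows more slowly in premium, which is exactly the dimension along which Pauly would have plans compete.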

    Both of Pauly’s papers are worth reading in full.



    Comments closed