• On Google’s new Inbox

    Google’s new alternative to Gmail is here, and it’s called Inbox, available by invitation only at the moment. You can see Google’s pitch here and read how-to guides here and here. My tweets on Inbox are here. Joshua Gans has written a review. Mine is below, informed by three days of Inbox use.

    What I Like

    For me, Inbox’s killer app is the ability to schedule emails and reminders to return to one’s inbox at a time and/or place of one’s choosing. This is called snoozing. (To date I have only used the time-based reminders, not place-based.) Some smarty-pants at Google has recognized that I, and no doubt gazillions of others, use our inboxes as task lists anyway, leaving emails hanging around to remind us to do something, emailing ourselves to remind us to do something, starring things as reminders to do something, and so forth.

    Rather than forcing users to kludge all this onto an email program, Inbox has integrated it more intelligently. This makes sense, and it works. It’s for this feature alone that I have switched to using Inbox for a guesstimated 95% of my email use. (More on the other 5% below.)
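    Since snoozing is the feature carrying my whole case for Inbox, it may help to picture what it does conceptually: a snoozed email leaves the inbox and is re-queued to reappear at a chosen time. Here is a minimal sketch of that idea (my illustration of the concept only, not Google’s implementation):

    ```python
    import heapq
    from datetime import datetime

    class SnoozeQueue:
        """Toy model of time-based snoozing: emails leave the inbox
        and return when their wake time arrives."""

        def __init__(self):
            self._heap = []  # (wake_time, email) pairs, earliest first

        def snooze(self, email, until):
            heapq.heappush(self._heap, (until, email))

        def due(self, now):
            """Return every snoozed email whose wake time has passed."""
            woken = []
            while self._heap and self._heap[0][0] <= now:
                woken.append(heapq.heappop(self._heap)[1])
            return woken

    inbox = SnoozeQueue()
    inbox.snooze("Re: quarterly report", until=datetime(2014, 10, 27, 9, 0))
    print(inbox.due(now=datetime(2014, 10, 27, 9, 5)))  # ['Re: quarterly report']
    ```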

    I also like that the Done list (the analog to Gmail’s All Mail list) is presented in the order in which the user marked items as done. The most recent done stuff is at the top. In Gmail, the All Mail list is in order of when the last email was received. This is less useful to me, as I’m frequently scrolling my archive for that item I know I put away recently despite having received it weeks ago. The receipt date is less salient and relevant than the archived (“done’d”) date.

    The feature Google promotes most heavily is bundling, the combining of similar email conversations into groups. This is a bit like Gmail’s tabs, except that bundles aren’t stored and presented as tabs but as blocks of related email. And it’s a bit like tags, except that with tags, email conversations are grouped only within their tag view, not wherever they appear. I never liked Gmail’s tabs (I disabled them on day 1), and I’m not a big tagger either. So, though potentially useful in rare instances, I won’t be bundling much. Google hypes bundling, but it’s not where the action is (for me). Still, no harm. I give bundling a mild “like.”

    Google could have done all the above right in Gmail and I’d have been happy.

    What Is Not Ready or Not Useful

    Google acknowledges that Inbox isn’t done, and it’s clear a lot of needed features are missing (one example: you can’t edit and manage signatures, which don’t appear on any emails, to my annoyance*). Indeed, how-to guides admit you’ll need to go back to Gmail for some things: selecting multiple emails for bulk deletes and moves, doing anything in settings, emptying Trash, and on and on. Still, over three days, I’m not going back to Gmail much (5% of the time, maybe), yet I cannot completely abandon it. In time, I hope I can.

    Inbox allows you to “pin” stuff to your inbox, which keeps it there. I don’t get why this is useful. I can already keep stuff in my inbox by not removing it from my inbox. And, the whole point of Inbox, as far as I can see, is to help keep your inbox from being cluttered. This is why scheduling things to pop up later — and disappear in the meantime — is useful. Why pin when you can snooze and schedule?

    Google seems to think pinning is key: it includes one-click pin buttons on emails and a switch at the top of one’s inbox to toggle to a pinned-only view. Why it’s so central, I still don’t get. There are almost no other buttons, an attempt at a very clean presentation.

    Meh. Obsessive cleanliness is an overrated design principle. Sure, one doesn’t want too much clutter. But a handful of buttons at the top, for archiving, marking as spam, moving to trash, and the like, is a reasonable balance, as achieved in Gmail. In a nearly button-free Inbox, one has to click twice to do some of those things, or use keyboard shortcuts. (There are gestures for the touch-screen implementations on Android and iOS, which are kind of nice.) I also noticed that I now have to click twice to download an attachment. I don’t like this. One click should do it, per Gmail.

    Emails are presented in Inbox with some of their internal contents more visible without opening them, like dates of events, contact info, and thumbnails of attachments. I do not like this because it makes the email look big, taking up too much real estate on my page. Opening the email is not hard. I would toggle this off if I could. Or, I encourage Google to find a way to make it more space-efficient.

    Stars are gone. I don’t mind too much. It took me a bit of time to realize that “snooze until someday (unspecified)” is the equivalent to my use of Gmail stars. Inbox offers a ready way to see all one’s snoozed items (whether scheduled for a specific time or not). And that’s how I used Gmail’s Starred list. Still, the advantage of stars is that, with the couple of dozen or so that Gmail offers, users can flexibly mark emails as they see fit. I don’t see why stars need to go away, except to satisfy some need for cleanliness or to force people into snoozing things. Maybe not everyone wants to snooze!

    To Sum Up

    I like Inbox. I am using it almost exclusively. Scheduling/snoozing is the killer app. It helps me manage my life and my inbox. This is good.

    However, Google needs to bring in or bring back more of Gmail’s functionality to make Inbox a full-service email app. I believe this is their intention, but I don’t fully trust them. (Sorry, they lost my trust long ago.) Let’s wait and see.

    If you get an Inbox invitation, take it! I wish I could give you one, but Google has not granted me any (yet?). Perhaps my invites have been snoozed.

    * Really, this is very, very bad and should be added immediately.

    @afrakt

  • The latest research on ACOs

    Today in NEJM you’ll find two studies and an editorial pertaining to ACO performance. Below is a brief summary and commentary.

    In Changes in Health Care Spending and Quality 4 Years into Global Payment, Zirui Song et al. examined cost and quality of care for patients served by providers participating in Blue Cross Blue Shield of Massachusetts’ Alternative Quality Contract (AQC). They compared them to the experience of comparable patients enrolled in certain employer-sponsored plans in other Northeastern states over 2009-2012. (If you’re not familiar with what the AQC is and does, read this.)

    In the 2009 AQC cohort, medical spending on claims grew an average of $62.21 per enrollee per quarter less than it did in the control cohort over the 4-year period (P<0.001). This amount is equivalent to a 6.8% savings when calculated as a proportion of the average post-AQC spending level in the 2009 AQC cohort. Analogously, the 2010, 2011, and 2012 cohorts had average savings of 8.8% (P<0.001), 9.1% (P<0.001), and 5.8% (P = 0.04), respectively, by the end of 2012. Claims savings were concentrated in the outpatient-facility setting and in procedures, imaging, and tests, explained by both reduced prices and reduced utilization. Claims savings were exceeded by incentive payments to providers during the period from 2009 through 2011 but exceeded incentive payments in 2012, generating net savings. Improvements in quality among AQC cohorts generally exceeded those seen elsewhere in New England and nationally.
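    A quick back-of-the-envelope check on how those two headline numbers fit together (my arithmetic, not the paper’s): if $62.21 per enrollee per quarter is 6.8% of average post-AQC spending, the implied spending level is roughly $915 per enrollee per quarter.

    ```python
    savings_per_quarter = 62.21  # dollars per enrollee per quarter (2009 cohort)
    savings_share = 0.068        # the reported 6.8% of average post-AQC spending

    implied_spending = savings_per_quarter / savings_share
    print(f"${implied_spending:,.0f} per enrollee per quarter")   # ~$915
    print(f"${implied_spending * 4:,.0f} per enrollee per year")  # ~$3,660
    ```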

    Here are two charts that illustrate some of the findings:

    [Charts: AQC cost and AQC quality]

    In Changes in Patients’ Experiences in Medicare Accountable Care Organizations, J. Michael McWilliams et al. considered patients’ experiences with Medicare ACO contracts after one year, relative to before ACOs formed, comparing the change to that of matched Medicare patients not served by ACOs.

    Overall ratings of care and physicians and ratings of interactions with primary physicians did not change differentially in the ACO group, as compared with the control group, from the preintervention period to the postintervention period. In contrast, reports of timely access to care differentially improved in the ACO group. […]

    Overall ratings of care reported by patients in the ACO group with seven or more [chronic] conditions and HCC scores of 1.10 or higher improved significantly as compared with similarly complex patients in the control group (differential change, 0.11; 95% confidence interval [CI], 0.02 to 0.21; P = 0.02; differential change with adjustment for preceding trends, 0.20; 95% CI, 0.06 to 0.35; P = 0.005).
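    The “differential change” reported there is a difference-in-differences: the ACO group’s pre-to-post change net of the control group’s change. The published 0.11 comes from regressions with matching and trend adjustments, but the core logic reduces to this sketch (ratings invented to reproduce the headline estimate):

    ```python
    def differential_change(aco_pre, aco_post, ctrl_pre, ctrl_post):
        """Difference-in-differences: the ACO group's change minus
        the control group's change."""
        return (aco_post - aco_pre) - (ctrl_post - ctrl_pre)

    # Hypothetical mean overall care ratings for complex patients
    dc = differential_change(aco_pre=8.20, aco_post=8.35,
                             ctrl_pre=8.25, ctrl_post=8.29)
    print(f"differential change: {dc:.2f}")  # 0.11
    ```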

    In the editorial Accountable Care Organizations — The Risk of Failure and the Risks of Success, Lawrence P. Casalino wrote that

    ACOs represent the best attempt to date to move away from business as usual and toward health care that will improve patients’ health and will not bankrupt the country. If ACOs fail, it may be a long time before a similarly bold concept emerges. […]

    [Yet, t]he performance of ACOs to date has been promising but not overwhelming. Although some ACOs have gained a substantial return on their investment in improving the health of their patients, many have not. […]

    The ACO movement is unlikely to succeed unless health insurance plans dramatically increase their number of ACO contracts and unless CMS modifies specifications for its ACO programs — a course that the agency is considering.

    I think Casalino strikes the right tone. There are some encouraging findings about ACOs in the literature, both in the new work by McWilliams, Song, and colleagues, and in prior work. But it’s early yet, and it’s unclear whether the most promising findings from the AQC can be generalized.

    Across public and private programs, 18 million Americans receive care from an ACO. Massachusetts is particularly dense in ACOs, as Song et al. write: 85% of physicians in the state have entered the AQC, 72% of Tufts Health Plan commercial managed care enrollees are under global budgets, and five organizations have joined the Medicare Shared Savings Program. This makes Massachusetts a convenient laboratory for ACO-like models, but it also makes Massachusetts unusual and threatens the generalizability of findings from the state. Perhaps other features of Massachusetts are responsible both for the propensity to participate in ACOs and for their outcomes.

    I would give ACOs another five or so years before drawing any strong conclusions about what they can do. Even a few years of generally positive results is insufficient to declare victory. It’s reasonable to be optimistic, but cautiously so. A lot could still go wrong.

    @afrakt

  • Choosing a health plan is hard, even for a health economist (me)

    The following originally appeared on The Upshot (copyright 2014, The New York Times Company).

    A confession: I am a health economist, and I cannot rationally select a health plan.

    I buy health insurance through the Federal Employees Health Benefits Program, or F.E.H.B.P., which is very similar to the Affordable Care Act’s exchanges. Like the exchanges, the federal employee program runs an online marketplace with a choice of plans, which vary by region.

    Most workers don’t have a lot of choice among plans offered by their employer. But the federal employee program offers me about 20 plans to choose from, and a similar number to almost all other federal employees. This puts me in a position akin to a consumer selecting among many plans in an Affordable Care Act exchange or a Medicare beneficiary selecting among many Medicare Advantage plans.

    I have a lot of sympathy for consumers in these markets. Comparing health plans is hard, even for a health economist like me. (And it’s arguably harder on the Affordable Care Act exchanges, where consumers may also need to report income and apply for subsidies. Federal employees just need to choose a plan.) Each year when I shop for coverage through my employer, I feel like I’m buying myself at least as much grief as I am insurance.

    In one sense, buying health insurance is not different from buying any other product, like a laptop computer or a refrigerator. There are two things to consider: how much you pay (the price) and what you get (the quality). Quality can mean a lot of things for a health plan, and your criteria may differ from mine. For me, the most important aspect is which doctors and hospitals are in its network and, hence, most generously covered. (Some plans cover out-of-network providers less generously; some not at all.)

    A health plan’s price is more amenable to quantitative analysis, but still hard to assess.

    Each laptop has a sticker price, as does each refrigerator. Health insurance has not one but many price-like characteristics. The premium is the most salient price, perhaps. But there are lots of others like co-payments (fixed dollar amounts you pay each time you visit a doctor, get a lab test or pick up a prescription), co-insurance (a percentage of the cost you pay for each visit, test or prescription), and deductibles (how much you pay before your plan pays a single dollar). Complicating matters, deductibles do not apply to every service, and co-payments and co-insurance can vary by service — a different amount for a hospital stay vs. a primary care visit vs. a visit to a specialist, for a brand-name drug vs. a generic, and so forth.

    Given all this, computing something like a sticker price for a plan is daunting. The actual amount an insurance plan will cost me next year is its premium plus a complex interaction of its various other prices with the specific types of health care services my family will use. Fortunately, for federal employee plans, there is an online resource that helps simplify this calculation. Using The Guide to Health Plans for Federal Employees & Annuitants, federal employees can compare the total cost of premiums and cost sharing of plans for low, average and high levels of health care spending. (Low is about $3,000 in annual health care spending, and high is about $30,000.) This guide, which I have used, also includes plan quality ratings. A similar guide exists for the Illinois Affordable Care Act exchange, but for no others.
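    To make concrete why a plan has no single sticker price, here is a deliberately simplified version of the annual-cost calculation the Guide automates (a sketch only: real plans vary cost sharing by service type, exempt some services from the deductible, and cap out-of-pocket spending differently; both plans are hypothetical):

    ```python
    def annual_cost(premium, deductible, coinsurance, oop_max, spending):
        """Enrollee's rough yearly total: premium, plus spending up to the
        deductible, plus coinsurance above it, capped at the out-of-pocket max."""
        oop = min(spending, deductible)
        oop += max(0, spending - deductible) * coinsurance
        return premium + min(oop, oop_max)

    # Two hypothetical plans at the Guide's "low" (~$3,000) and
    # "high" (~$30,000) annual spending scenarios
    for spending in (3_000, 30_000):
        a = annual_cost(premium=4_800, deductible=500, coinsurance=0.10,
                        oop_max=5_000, spending=spending)
        b = annual_cost(premium=3_200, deductible=2_000, coinsurance=0.20,
                        oop_max=6_500, spending=spending)
        print(f"${spending:,} in care: plan A ${a:,.0f}, plan B ${b:,.0f}")
    ```

    With these made-up parameters the ranking flips with utilization (plan B wins at low use, plan A at high use), which is exactly why comparing total cost across a range of spending levels is so useful.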

    One problem is that I don’t know how much or what kind of health care services my family will use next year. But based on past experience, I can make a reasonable guess as to whether it will be low, average or high. Seeing how my total cost for each plan varies across that range helps me understand the consequences of increases or decreases in use.

    But I would find it helpful to supplement that approach with a more precise calculation based on the level of health care my family tends to use. For instance, how much would each plan have cost me last year? The answer is a much closer analog to the type of sticker price one sees for refrigerators and laptops. I’m aware of no online insurance market that provides this type of information.

    Already, the lack of price transparency is enough to make a health economist despair. But it gets worse.

    Some aspects of plan quality are available to most consumers, like consumers’ ratings of customer service and how well doctors in each plan communicate with their patients. But a crucial feature of health plans is not as easily or widely accessible: the extent to which each covers services provided by one’s favorite doctors and hospitals. Except on a handful of Affordable Care Act exchanges and for federal-employee participants in the Washington area, such network information is typically available only on plans’ websites, making gathering and comparing plan networks prohibitively difficult. Moreover, which doctors are in a plan’s network can change over time.

    If I could not precisely price a laptop or assess the size of a refrigerator (or if that size could change after I bought it), I’d have a great deal of difficulty selecting the right one for me and my family. So when I shop for a health insurance plan each year, I have very little confidence that the one I select is the one I would choose were I to have more information available to me.

    And, as a health economist, I have very little confidence that a market with this degree of opacity of prices and quality can serve consumers well. Indeed, research has shown that Medicare beneficiaries have great difficulty in selecting the lowest-cost prescription drug plan. (A Medicare drug-plan pricing and coverage tool is available on the Medicare website, but it’s a fair bet that most beneficiaries do not use it.)

    Insurance markets do not need to be this opaque. We have the technology to track health care use electronically and to create online tools that could tell every health insurance consumer exactly how much each plan would have cost them based on prior-year utilization (as well as for a range of other utilization levels) and to what extent services provided by the doctors they saw and hospitals they visited would be covered next year. This is an effort that some policy experts have called for.

    Though the Affordable Care Act’s exchanges are new, some markets in which consumers can select among many plans are not. The F.E.H.B.P. has existed with plans in competition since 1960. The competitive Medicare Advantage market grew out of predecessor programs that stretch back nearly 30 years.

    Despite this long history, we do not yet offer consumers the tools they would need to become anything like rational market participants. This could change. Companies such as Picwell and Consumers’ Checkbook are working on developing and expanding consumers’ access to such tools. This is welcome news, and it’s about time.

    If even a health economist will confess to needing better access to price and quality information when choosing a health plan, it’s a sure bet that many other Americans need it as well.

    @afrakt

  • Five more big data quotes: The ambitions and challenges

    For this roundup of quotes, I received input from Darius Tahir and David Shaywitz. Prior TIE posts on big data are here. As always, the quotes below do not reflect the views of the authors, but those of the people or community they’re covering. Click through for details.

    1. David Shaywitz, in “Why Causation Is (Often) Not Causation – The Retro Humility Of Empiricism,” articulates the “strong version” of the “big data thesis”:

    A strong version of the canonical big data thesis is that when you have enough information, you can make unbiased predictions that don’t require an underlying understanding of the process or context – the data are sufficient to speak for themselves. This is the so-called “end of theory.”

    2. Darius Tahir reports on the content of a Rock Health slide deck:

    Healthcare accelerator Rock Health is predicting big advances for startups and healthcare providers using personalized, predictive analytic tools. The firm has observed $1.9 billion in venture dollars pouring into the subsector since 2011, with major venture capital firms keeping active.

    The use of predictive analytics, essentially looking at historic data to predict future developments to directly intervene in patient care, will only increase as data multiplies, the report argues.

    In 2012, the healthcare system had stored roughly 500 petabytes of patient data, the equivalent of 10 billion four-drawer file cabinets full of information.

    By 2020, the healthcare system is projected to store 50 times as much information, 25,000 petabytes, meaning machine intelligence will be essential to complement human intelligence to make sense of it all.
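    As a sanity check on the deck’s scale claims (my arithmetic, using its own figures): 500 petabytes spread across 10 billion file cabinets works out to about 50 megabytes per cabinet, and the 50x projection multiplies out as advertised.

    ```python
    pb_2012 = 500       # petabytes of patient data stored in 2012
    cabinets = 10e9     # "10 billion four-drawer file cabinets"

    print(pb_2012 * 50)                            # 25,000 PB projected for 2020
    mb_per_cabinet = pb_2012 * 1e15 / cabinets / 1e6
    print(f"{mb_per_cabinet:.0f} MB per cabinet")  # ~50 MB of text each
    ```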

    See, in particular, pages 18 and 19 of the slide deck from Rock Health. I found it interesting that “prediction” and “predictive” appear throughout the deck with no direct language of causality. This is appropriate. I suspect an organization that didn’t understand this limitation would slip into causal language now and then. In other words, Rock Health, and likely others, know exactly what they’re selling. (I am not disparaging prediction here. It is useful. I am merely distinguishing it from causal inference.)

    3. Tim Harford has written one of the best pieces on the limitations of big data I’ve read to date. Big data is often also “found data,” and hence typically suffers from selection bias. It also invites testing a multiplicity of hypotheses; query the data enough and something (meaningless) will eventually appear statistically significant. (More by David Shaywitz on this point here.) I recommend reading his piece in full; it includes many examples from Google, Twitter, Target, the city of Boston, and the history of polling. Here’s an excerpt, cobbled from snippets throughout:

    Cheerleaders for big data have made four exciting claims, each one reflected in the success of Google Flu Trends [which Harford summarizes, as well as its later comeuppance]: that data analysis produces uncannily accurate results; that every single data point can be captured, making old statistical sampling techniques obsolete; that it is passé to fret about what causes what, because statistical correlation tells us what we need to know; and that scientific or statistical models aren’t needed because, to quote “The End of Theory”, a provocative essay published in Wired in 2008, “with enough data, the numbers speak for themselves”. [I quoted from and linked to that Wired article here.]

    Unfortunately, these four articles of faith are at best optimistic oversimplifications. At worst, according to David Spiegelhalter, Winton Professor of the Public Understanding of Risk at Cambridge university, they can be “complete bollocks. Absolute nonsense.” […]

    A recent report from the McKinsey Global Institute reckoned that the US healthcare system could save $300bn a year – $1,000 per American – through better integration and analysis of the data produced by everything from clinical trials to health insurance transactions to smart running shoes. […]

    “There are a lot of small data problems that occur in big data,” says Spiegelhalter. “They don’t disappear because you’ve got lots of the stuff. They get worse.” […]

    The Literary Digest, in its quest for a bigger data set, fumbled the question of a biased sample. It mailed out forms to people on a list it had compiled from automobile registrations and telephone directories – a sample that, at least in 1936, was disproportionately prosperous. To compound the problem, Landon supporters turned out to be more likely to mail back their answers. The combination of those two biases was enough to doom The Literary Digest’s poll. For each person George Gallup’s pollsters interviewed, The Literary Digest received 800 responses. All that gave them for their pains was a very precise estimate of the wrong answer.

    The big data craze threatens to be The Literary Digest all over again. Because found data sets are so messy, it can be hard to figure out what biases lurk inside them – and because they are so large, some analysts seem to have decided the sampling problem isn’t worth worrying about. It is. […]

    [B]ig data do not solve the problem that has obsessed statisticians and scientists for centuries: the problem of insight, of inferring what is going on, and figuring out how we might intervene to change a system for the better. […]

    Statisticians are scrambling to develop new methods to seize the opportunity of big data. Such new methods are essential but they will work by building on the old statistical lessons, not by ignoring them.
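    Harford’s Literary Digest point, that a huge biased sample yields “a very precise estimate of the wrong answer” while a modest random sample does fine, is easy to reproduce in simulation. A toy illustration (mine, not his; all numbers invented):

    ```python
    import random

    random.seed(1)
    # A population in which 55% truly support candidate A, but A's supporters
    # are only half as likely to respond to the "found data" poll
    population = [1] * 550_000 + [0] * 450_000

    found = [v for v in population
             if random.random() < (0.10 if v == 1 else 0.20)]
    simple = random.sample(population, 1_000)

    print(f"found data (n={len(found):,}): "
          f"{100 * sum(found) / len(found):.1f}% for A")    # ~37.9%, wrong
    print(f"random sample (n={len(simple):,}): "
          f"{100 * sum(simple) / len(simple):.1f}% for A")  # near the true 55%
    ```

    The enormous convenience sample is precisely, confidently wrong; the small random sample is approximately right.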

    4. David Shaywitz in “Turning Information Into Impact: Digital Health’s Long Road Ahead”:

    A leading scientist once claimed that, with the relevant data and a large enough computer, he could “compute the organism” – meaning completely describe its anatomy, physiology, and behavior. Another legendary researcher asserted that, following capture of the relevant data, “we will know what it is to be human.” The breathless excitement of Sydney Brenner and Walter Gilbert —voiced more than a decade ago and captured by the skeptical Harvard geneticist Richard Lewontin – was sparked by the sequencing of the human genome. Its echoes can be heard in the bold promises made for digital health today. […]

    [T]echnologists, investors, providers, and policy makers all exalt the potential of digital health. Like genomics, the big idea – or leap of faith — is that through the more complete collection and analysis of data, we’ll be able to essentially “compute” healthcare – to the point, some envision, where computers will become the care providers, and doctors will at best be customer service personnel, like the attendants at PepBoys, interfacing with libraries of software driven algorithms.

    5. David Shaywitz in “A Database of All Medical Knowledge: Why Not?” writes about the challenges of finding and assembling big data. Here’s the set-up:

    For scientists and engineers today, perhaps the greatest challenge is the structure and assembly of a unified health database, a “big data” project that would collect in one searchable repository all of the parameters that measure or could conceivably reflect human well-being. This database would be “coherent,” meaning that the association between individuals and their data is preserved and maintained. A recent Institute of Medicine (IOM) report described the goal as a “Knowledge Network of Disease,” a “unifying framework within which basic biology, clinical research, and patient care could co-evolve.”

    The information contained in this database — expected to get denser and richer over time — would encompass every conceivable domain, covering patients (DNA, microbiome, demographics, clinical history, treatments including therapies prescribed and estimated adherence, lab tests including molecular pathology and biomarkers, info from mobile devices, even app use), providers (prescribing patterns, treatment recommendations, referral patterns, influence maps, resource utilization), medical product companies (clinical trial data), payors (claims data), diagnostics companies, electronic medical record companies, academic researchers, citizen scientists, quantified selfers, patient communities – and this just starts to scratch the surface.

    @afrakt

  • Management

    In his recent book, David Cutler wrote about the importance of good management to the efficiency and productivity of organizations, health care organizations among them. Here’s my summary:

    An instrument developed by McKinsey asks organizations about performance monitoring, target setting, and incentives/people management. In high performing organizations, information relevant to performance is fed back to workers who are empowered to stop processes to fix problems; goals are established to focus attention on areas for improvement; people are hired and promoted on the basis of performance.

    Bloom et al. used the McKinsey survey instrument to examine over 10,000 organizations internationally, mostly manufacturing firms, but also several hundred hospitals and some schools, including those in the U.S., U.K., Japan, Germany, and some developing countries.

    In manufacturing, firms that score better are also more profitable and successful. Higher scoring schools do better on standardized tests. Higher scoring hospitals have better survival rates for heart attacks. U.S. hospitals have lower management scores than U.S. manufacturing firms, but higher scores than hospitals in other countries.

    I’ve now read the Bloom study (ungated here and published by the Academy of Management here). Below is the ten-point summary by the authors. For the health care-specific point I’ve included the chart. For the rest, you can go to the source at one of the prior links.

    Before I get to the authors’ summary, though, a word about my interest in this area: Many of my favorite health policy experts have emphasized in conversation the importance of management. To a large degree, it seems, what separates a broadly high-quality and efficient health care organization from one that is neither is management and organizational culture, which are intertwined.

    This shouldn’t be surprising. If you think about organizations you interact with in other sectors (restaurants, retail stores, and the like), the ones that (a) stick around and prosper and (b) that you enjoy patronizing are, disproportionately and in general, the well-managed and efficient ones. The others, well, suck, and tend to get out-competed. That is, you, in particular, and competition, in general, select for good management.

    Relative to some other sectors, we don’t have as much competition in health care, for a variety of reasons, some inherent (like a high degree of product differentiation and information asymmetry) and some amenable to policy (a high degree of third-party payment, constraints on market entry). With no strong mechanism to promote the well-managed and to weed out the badly managed organizations, what type of organization settles in a given area might be somewhat random. In some places, we find high quality and greater efficiency (e.g., Kaiser, Intermountain, Geisinger). In others, we don’t.

    This raises a set of crucial questions, among them: (1) Assuming weaker market forces in health care than in other sectors, how do we promote good management? (2) To what extent can we promote stronger market forces in health care without sacrificing the important reasons they are weak in the first place? Notice I’ve phrased these to be relevant no matter your policy preferences. The first asks you to assume weaker market forces, without claiming that assumption cannot be loosened. The second lifts that assumption but asks you to recognize that there may be some “important reasons” market forces are weak (you decide which, and note there is heterogeneity among people about what they are) that we may wish to retain.

    When I look at what’s happening in health care today, I see an attempt to establish or laud conditions under which better managed organizations will thrive, with all their attendant benefits for patients, efficiency, and so forth. That’s the point of price transparency, pay-for-performance, bundled payments, ACOs, etc. That’s the hope for retail clinics, greater patient cost-sharing, narrow networks, reference pricing, and so on. But, notice, none of them directly promote or measure good management. In well-functioning markets, we need not worry about that. In health care, we might, particularly if we want to understand more precisely to what extent the myriad approaches to greater efficiency and quality noted above work, if any do. Do we know how to measure good management or promote it? I am not qualified to say, but I suspect the answer is somewhere between “no” and “not very well.” Am I right?

    Bloom et al. offer a means of measuring good management, though it may be incomplete and have important limitations. Here’s what they found:

    1. US manufacturing firms score higher [in management performance] than any other country. Companies based in Canada, Germany, Japan and Sweden are also well managed. Firms in developing countries, such as Brazil, China and India are typically less well managed (Figure 1). [Click through for figures not included below.]

    2. In manufacturing, there is a wide spread of management practices within every country. This spread is particularly notable in developing countries, such as Brazil and India, which have a large tail of very badly managed firms (Figure 2).

    3. Looking at other sectors, US firms in retail and hospitals also appear to be the best managed internationally, but US schools score poorly (Figure 3).

    [Chart: management practice scores for hospitals, by country]

    4. There is also a wide spread of management practices in non-manufacturing sectors (Figure 4).

    5. Publicly (Government) owned organizations have worse management practices across all sectors we studied. They are particularly weak at incentives: promotion is more likely to be based on tenure (rather than performance), and persistent low-performers are much less likely to be retrained or moved (Figures 5 and 6).

    6. Amongst private sector firms, those owned and run by their founder or their family descendants, especially firstborn sons, tend to be badly managed. Firms with professional (external, non-family) CEOs tend to be well managed (Figure 7).

    7. Multinationals appear able to adopt good management practices in almost every country in which they operate (Figure 8).

    8. There is strong evidence that tough product market competition is associated with better management practices, within both the private and public sectors (Figure 9).

    9. Light labor market regulation is correlated with the systematic use of monetary and non-monetary incentives (related to hiring, firing, pay and promotions), but not with monitoring or targets management (Figure 10).

    10. The level of education of both managers and non-managers is strongly linked to better management practices (Figure 11).

    @afrakt

  • Death by stubbornness

    Via Strange Signs:

    [Image: sign about stubbornness]

    @afrakt

  • Should the Arkansas Medicaid program cover a $239,000/year treatment?

    Chris Conover says it should have the freedom not to do so:

    Last week, an advisory board recommended that Arkansas’s Medicaid program cover Kalydeco, a cystic fibrosis drug [which would cost the program] $239,000 per patient year. [… B]ecause “Arkansas appears to be the only state preventing patients who meet the eligibility criteria established by the U.S. Food and Drug Administration” the state is being sued on grounds that its policy violates a federal statute requiring state Medicaid programs to pay for all medically necessary treatments. […]

    [P]art of the reason the Arkansas lawsuit is getting leverage is because of evidence that cost appeared to be a factor underlying the decision to deny coverage for Kalydeco. […]

    The WHO considers a medical intervention to be “not cost-effective” if it costs more than three times a nation’s per capita GDP per year of life saved. With U.S. GDP per capita currently at $51,749, it is pretty obvious that $239,000 lies pretty far outside the bounds of what WHO would deem cost-effective.  […]

    [T]his 7-page National Health Law Program summary of medical necessity under Medicaid highlights the complexity of the problem. The upshot is that “medical necessity” is never defined explicitly either in the Medicaid statute or regulations. It has been fleshed out in case law and administrative rulings.  The Stanford definition of medical necessity which has been adopted by a number of state Medicaid programs [has] a very restrictive definition: “An intervention is considered cost effective if the benefits and harms relative to costs represents an economically efficient use of resources for patients with this condition.”

    Such a definition does not permit administrators to do what the Oregon Medicaid program did many years ago: rank order all treatments by their cost-effectiveness and eliminate from coverage all treatments above a certain cost per added year of life threshold. [Here’s one,* of many, papers on Oregon’s experience with cost-effectiveness ranking.] So how did Oregon get away with adopting cost-effectiveness rankings? By getting a waiver. […]
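    The WHO arithmetic Chris invokes is easy to verify with the figure he cites (under the simplifying assumption that a year of treatment buys roughly a year of life):

    ```python
    gdp_per_capita = 51_749          # U.S. GDP per capita, as cited
    who_threshold = 3 * gdp_per_capita
    kalydeco = 239_000               # cost per patient-year

    print(f"WHO 'not cost-effective' line: ${who_threshold:,}")      # $155,247
    print(f"Kalydeco is {kalydeco / who_threshold:.1f}x that line")  # ~1.5x
    ```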

    Chris goes on to argue that Arkansas, and all states, should be able to apply cost-effectiveness criteria without waivers. More at the link.

    * Apart from the link in brackets, all others are in Chris’s original. They are not mine.

    @afrakt

  • A pessimistic prediction about wellness programs

    This analysis suggests a future for ‘‘wellness’’ initiatives: a great deal of negative wellness [basically, underwriting, cost shifting, or back-door risk-rating] in the form of crude insurance rate or hiring discrimination, diverse and visible but largely cosmetic wellness programs used as cheap recruitment and retention, or corporate public relations. There will be some scope for positive wellness [investment in health promotion] in the same high-skill or low-turnover firms that have incentive to train employees. Just as American employers routinely demand skills from employees and then complain to governments when the requisite skilled employees do not appear in the open market, we should expect that American employers will be much more likely to demand health from prospective employees than they are likely to actually invest in it. Negative wellness policies and continued underprovision of population health is consequently the likely future.

    That’s from Scott Greer and Robert Fannion. The basic insight, which the paper spells out multiple times, is that there’s little incentive for employers to make wellness investments in workers who aren’t likely to stay long enough to generate a positive return. However, shifting the cost of health care onto workers who, all else being equal, cost employers more in health care costs (e.g., smokers) is in employers’ financial interests.

    @afrakt

  • Ten impressions of big data: Claims, aspirations, hardly any causal inference

    “Big data” is all the rage. I am curious what people think big data can do, and what some claim it will do, for health and health care. I’m curious how people think causal connections will arise from (or using) big data. In large part this consideration seems overlooked, as if credible causal inferences will just emerge from the data, announcing themselves, dripping wet with self-evident validity. I am concerned.

    I’ve been collecting excerpts of articles on big data, many sent to me by Darius Tahir, whom I thank. What I’ve compiled to date is below and in no particular order. For each piece, the author (with link to original) is indicated, followed by a quote. In many cases, what’s quoted is not an expression of the author’s views, but a characterization of the views of individuals about whom the author is reporting. I encourage you to click through for details before jumping to conclusions about who holds what view.

    Also, do not interpret these as suggesting I do not see promise in big data. I do! I just think how we use data matters just as much as, if not more than, how much data we have. We should marry “big data” with “smart analysis,” not just “big claims.”

    1. Bill Gardner has not overlooked causal inference:

    Here’s where the ‘big data’ movement comes in. We can assemble data sets with large numbers of patients from electronic health records (EHRs). Moreover, EHRs contain myriad demographic and clinical facts about these patients. It is proposed that with these large and rich data sets, we can match drug X and drug Y patients on clinically relevant variables sufficiently closely that the causal estimate of the difference between the effects of drug X and drug Y in the matched observational cohort would be similar to the estimate we would get if we had run an RCT.
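    Here is a minimal sketch of the matching idea Gardner describes, with synthetic data and a single confounder (real applications match on many covariates or a propensity score, and their validity hinges on having measured the right variables):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000

    # Synthetic EHR: severity drives both drug choice and the outcome
    severity = rng.uniform(0, 1, n)
    took_x = rng.random(n) < 0.2 + 0.6 * severity  # sicker patients get drug X
    outcome = 2.0 * severity - 0.5 * took_x + rng.normal(0, 0.3, n)

    # The naive comparison is confounded by severity
    naive = outcome[took_x].mean() - outcome[~took_x].mean()

    # Match each drug-X patient to a control with (nearly) the same severity
    controls = np.where(~took_x)[0]
    controls = controls[np.argsort(severity[controls])]
    pos = np.clip(np.searchsorted(severity[controls], severity[took_x]),
                  0, len(controls) - 1)
    matched = (outcome[took_x] - outcome[controls[pos]]).mean()

    print(f"true effect -0.50 | naive {naive:+.2f} | matched {matched:+.2f}")
    ```

    Matching recovers something close to the true effect here only because the one confounder is fully observed; that is precisely the assumption at issue when people propose EHR cohorts as RCT substitutes.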

    2. David Shaywitz echoes Bill and also notes the views of others that begin to shade toward the magical or mystical (“something will emerge”):*

    Clinical utility, as Haddow and Palomaki write, “defines the risks and benefits associated with a test’s introduction into practice.” In other words, what’s the impact of using a particular assessment – how does it benefit patients, how might it adversely impact them? This may be easiest to think about in the context of consumer genetic tests suggesting you may be at slightly elevated risk for condition A, or slightly reduced risk for condition B: is this information (even if accurate) of any real value? […]

    The other extreme, which Stanford geneticist Atul Butte is perhaps best known for advocating, is what might be called the data volume perspective; collect as much data as you possibly can, the reasoning goes, and even if any individual aspect of it is sketchy or unreliable, these issues can be overcome with volume. If you examine enough parameters, interesting relationships are likely to emerge, and the goal is to not let the perfect be the enemy of the good enough. Create a database with all the information you can find, the logic goes, and something will emerge.

    3. Darius Tahir reminds us that we’re most readily going to find correlations (implication: not causation) in a hypothesis-free space:

    Supplementing medical data with consumer data might lead to better predictions, he, and the alliance, reasoned.

    In the pilot program, the network will send its health data to a modeler, which will pair that information with consumer data, such as credit card and Google usage. The modeler doesn’t necessarily have a hypothesis going in, Cantor said.

    “They’re identifying correlations between the consumer data and healthcare outcomes,” he said.

    4. Amy Standen really frightens me with the scientific-method-is-dead idea:

    “The idea here is, the scientific method itself is growing obsolete,” […]

    [S]o much information will be available at our fingertips in the future that there will be almost no need for experiments. The answers are already out there. […]

    Now, Butte says, “you can connect pre-term births from the medical records and birth census data to weather patterns, pollution monitors and EPA data to see is there a correlation there or not.” […]

    Analyzing data is complicated and requires specific expertise. What if the search engine has bugs, or the records are transcribed incorrectly? There’s just too much room for error, she says.

    “It’s going to take a system to interpret the data,” she says. “And that’s what we don’t have yet. We don’t have that system. We will, I mean for sure, the data is there, right? Now we have to develop the system to use it in a thoughtful, safe way.”

    5. Chris Anderson says that numbers can speak for themselves:

    Today companies like Google, which have grown up in an era of massively abundant data, don’t have to settle for wrong models. Indeed, they don’t have to settle for models at all. […]

    With enough data, the numbers speak for themselves. […]

    “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot. […]

    Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.

    6. Bernie Monegain writes about Partners HealthCare chief information officer James Noga’s dream of moving beyond prediction (for which correlations that aren’t causation can be useful) to designing interventions (for which causality is crucial):

    He likes to employ a travel analogy. Drivers once got maps to travel from one point to another — they basically figured it out themselves — then they went to predictive analytics to find the best route to get from point A to point B.

    “Then as you get into prescriptive analytics, it actually tells you on the way real time, an accident has happened and reroutes you,” said Noga.

    “With big data you’re really talking about data that’s fast moving and perpetually occurring, actually able to intercede rather than merely advise in terms of the care of patients,” he said. “On the discovery side with genetics and genomics using external data sources, I think the possibilities of what I would call evidence-based medicine, and being able to drive that to drive better protocols on the clinical side is endless in terms of the possibilities.”

    7. Veronique Greenwood offers concrete examples and a warning:

    Back in her office, [Jennifer Frankovich] found that the scientific literature had no studies on patients like this to guide her. So she did something unusual: She searched a database of all the lupus patients the hospital had seen over the previous five years, singling out those whose symptoms matched her patient’s, and ran an analysis to see whether they had developed blood clots. “I did some very simple statistics and brought the data to everybody that I had met with that morning,” she says. The change in attitude was striking. “It was very clear, based on the database, that she could be at an increased risk for a clot.” […]

    For his doctoral thesis, [Nicholas Tatonetti] mined the F.D.A.’s records of adverse drug reactions to identify pairs of medications that seemed to cause problems when taken together. He found an interaction between two very commonly prescribed drugs: The antidepressant paroxetine (marketed as Paxil) and the cholesterol-lowering medication pravastatin were connected to higher blood-sugar levels. Taken individually, the drugs didn’t affect glucose levels. But taken together, the side-effect was impossible to ignore. “Nobody had ever thought to look for it,” Tatonetti says, “and so nobody had ever found it.” […]

    There are numerous correlations like this, and the reasons for them are still foggy — a problem Tatonetti and a graduate assistant, Mary Boland, hope to solve by parsing the data on a vast array of outside factors. Tatonetti describes it as a quest to figure out “how these diseases could be dependent on birth month in a way that’s not just astrology.” Other researchers think data-mining might also be particularly beneficial for cancer patients, because so few types of cancer are represented in clinical trials. […]

    In the lab, ensuring that the data-mining conclusions hold water can also be tricky. By definition, a medical-records database contains information only on sick people who sought help, so it is inherently incomplete. Also, they lack the controls of a clinical study and are full of other confounding factors that might trip up unwary researchers. Daniel Rubin, a professor of bioinformatics at Stanford, also warns that there have been no studies of data-driven medicine to determine whether it leads to positive outcomes more often than not. Because historical evidence is of “inferior quality,” he says, it has the potential to lead care astray.

    Yet despite the pitfalls, developing a “learning health system” — one that can incorporate lessons from its own activities in real time — remains tantalizing to researchers.
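    Part of what makes Frankovich’s story striking is how simple the analysis is: define a cohort like your patient, then compute an event rate. In code it is only a few lines (a hypothetical sketch; the registry and its columns are invented, and the confounding caveats Rubin raises apply in full):

    ```python
    import pandas as pd

    # Invented stand-in for five years of the hospital's lupus records
    patients = pd.DataFrame({
        "lupus":          [True, True, True,  True, False],
        "matches_case":   [True, True, False, True, False],  # similar symptoms
        "developed_clot": [True, False, False, True, False],
    })

    cohort = patients[patients.lupus & patients.matches_case]
    print(f"{len(cohort)} similar patients, "
          f"clot rate {cohort.developed_clot.mean():.0%}")  # 3 patients, 67%
    ```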

    8. Vinod Khosla expresses some ambitions:

    Technology will reinvent healthcare. Healthcare will become more scientific, holistic and consistent; delivering better-quality care with inexpensive data-gathering techniques and devices; continual monitoring and ubiquitous information leading to personalized, precise and consistent insights. New medical discoveries will be commonplace, and the practices we follow will be validated by more rigorous scientific methods. Although medical textbooks won’t be “wrong,” the current knowledge in them will be replaced by more precise and advanced methods, techniques and understandings.

    Hundreds of thousands or even millions of data points will go into diagnosing a condition and, equally important, the continual monitoring of a therapy or prescription. […]

    Over time, we will see a 5×5 improvement across healthcare: 5x reduction in doctors’ work (shifted to data-driven systems), 5x increase in research (due to the transformation to the “science of medicine”), 5x lower error rate (particularly in diagnostics), 5x faster diagnosis (through software apps) and 5x cost reduction.

    9. Larry Page thinks government regulation is slowing the promise of big data:

    I am really excited about the possibility of data also, to improve health. But that’s– I think what Sergey’s saying, it’s so heavily regulated. It’s a difficult area. I can give you an example. Imagine you had the ability to search people’s medical records in the U.S. Any medical researcher can do it. Maybe they have the names removed. Maybe when the medical researcher searches your data, you get to see which researcher searched it and why. I imagine that would save 10,000 lives in the first year. Just that. That’s almost impossible to do because of HIPAA. I do worry that we regulate ourselves out of some really great possibilities that are certainly on the data-mining end.

    10. Lindsey Cook writes about some of the barriers to big data (legal issues, physicians’ concerns, patients’ misunderstandings, technological barriers, misplaced research funding), though not about causal inference. Her piece includes a primer on what “big data” means (“an incredibly large amount of information”).

    Big data is already producing research that has helped patients. For example, a data network for children with Crohn’s disease and ulcerative colitis called ImproveCareNow helped increase remission rates for sick children, according to Dr. Christopher Forrest and his colleagues, who are creating a national network of big data for children in the U.S.

    * By Twitter, David points to his other work in this area, which I have not read at the time of this writing: here, here, and here.

    @afrakt

  • What’s in a name: Medicaid “beneficiaries” edition

    Maybe I should have known this. Maybe I did know it and forgot. Maybe there’s a good reason for that.

    Surely the ACA’s implementers knew what they were doing when they began a campaign to convert all relevant Code of Federal Regulations language from Medicaid enrollees to Medicaid beneficiaries. Medicaid enrollees have always just been that—unlike Medicare beneficiaries—a naming convention emphasizing the provisional, conditional nature of the Medicaid entitlement. And the announcement accompanying the change acknowledged as much.

    The Code of Federal Regulations was revised on 15 and 16 July 2012 to change the word “recipient” to “beneficiary.” The following is excerpted from 77 FR 29002-01, which appeared on May 16, 2012 in the Federal Register:

    Removal of the Term “Recipient” for Medicaid: We have removed the term “recipient” from current CMS regulations and made a nomenclature change to replace “recipient” with “beneficiary” throughout the CFR. In response to comments from the public to discontinue our use of the unflattering term “recipient” under Medicaid, we have been using the term “beneficiary” to mean all individuals who are eligible for Medicare or Medicaid services.

    Just what is unflattering about the term “recipient” may be understood only in context; similarly, what is empowering about “beneficiary” may also only be understood in context. Medicare and Medicaid beneficiaries now stand on equal dignatorial ground.

    That’s from Ann Marie Marciarille’s “The Medicaid Gamble.”

    By and large, I tend to call people on Medicaid “enrollees,” and probably still will. There are two problems with “beneficiary.” First, it’s considered jargon and, apparently, is confusing or foreign to readers not steeped in health policy. Still, I do use the term, typically for people on Medicare, for which no active effort is required to receive the benefit.* Second, despite what they’re called, that’s just not the case for people on Medicaid: they really do have to enroll to benefit. So, I take issue with “beneficiary,” even if it’s the regulation.

    * UPDATE: This is only true for age-based Medicare eligibility.

    @afrakt
