• On healthcare.gov error rates

    I don’t know a lot about healthcare.gov’s inner workings, so maybe the following is incorrect. With one caveat, which I mention below, it passed a vetting on Twitter (FWIW).

    • “The internal 80 percent target is the basis of a promise that has become an administration mantra in recent weeks: HealthCare.gov will ‘work smoothly for the vast majority of users’ by the end of November.” (Source.)
    • “This past Friday, the average [per page] error rate was approximately .75 or three quarters of one percent.” (Source.)
    • A CMS demonstration video showed that a healthcare.gov application can require over 100 pages. (Source.)

    Let’s do the math. Given the above, the probability of successfully getting through one page is 1-0.0075 = 0.9925. Therefore, the probability of successfully getting through n pages is 0.9925^n. If n is 100, the probability of completing an application successfully is 0.47 or, as luck would have it, 47%. That’s below 80%, the administration’s target. (If an application only took 30 pages, the probability of success would be just under 80%.)
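
    The arithmetic can be checked with a few lines of Python. This is only the compounding calculation: the 0.75% per-page error rate and the 100- and 30-page lengths come from the sources above, while the assumption that every page fails independently at the same rate (and the helper name) is mine.

        # Chance of completing an n-page application if each page
        # independently fails with probability p (an assumption).
        def completion_probability(n, p=0.0075):
            return (1 - p) ** n

        print(round(completion_probability(100), 3))  # 0.471, i.e., ~47%
        print(round(completion_probability(30), 3))   # 0.798, just under 80%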

    This has the appearance of a contradiction.

    The big assumption here is that errors are evenly distributed over individuals. If, for example, only 20% of the individuals are generating all the errors, there’s no contradiction.

    Let’s suppose the errors are evenly distributed. It would then seem that reaching an 80% success rate would require individuals to make multiple application attempts. How many attempts would be necessary? If I’m doing the math right, the answer is the smallest integer m such that the chance of failing all m attempts drops below 20%:

    (1 - 0.9925^n)^m < 0.20

    With n = 100, I get m = 3. The simpler way to think about this is that with one try, 53% of people fail (47% succeed, per above). If those people retry, 53% of 53% fail, or 28%. It takes a third try to get below 20% failure, or 80% success, the administration’s goal. Now, the answer is fewer than 3 if one doesn’t have to go all the way back to the beginning of the process with each retry (e.g., if the system saves your place upon error). But, even in that case, it takes more than one try, on average, to successfully apply.
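
    Here is the same retry calculation as a short script. Like the paragraph above, it assumes attempts are independent and that a failed attempt means starting over from page one; the function name is just for illustration.

        # Smallest number of attempts m such that the chance of failing
        # all of them, (1 - 0.9925**n)**m, drops below 20%.
        def attempts_needed(n, p=0.0075, target_failure=0.20):
            fail_once = 1 - (1 - p) ** n  # ~0.53 for n = 100
            m = 1
            while fail_once ** m >= target_failure:
                m += 1
            return m

        print(attempts_needed(100))  # 3: ~53% fail once, ~28% twice, ~15% three times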

    This is why you can get stories of people having to try repeatedly to apply alongside administration statistics that might suggest a more user-friendly experience. Again, there may be details that make the above moot. I just don’t know them. Delighted for corrections in the comments, on Twitter, or by email.

    @afrakt

    • Possibly the error rate was calculated per page type, rather than per page access. That is, perhaps there are a lot of screens. Screen A is the login screen, screen B is the welcome screen for Alabama, and so forth. And possibly the calculation is averaged over screens: there’s a 30% error rate going to some screen that nobody goes to, and a 0.0001% rate on the login screen, and a 1% error rate on a certain Medicaid screen, for example. Then they average all the error rates for all the pages. That would explain why they are reporting a 0.75% error rate, and yet people are actually able to purchase insurance.

      0.75% is appallingly high. And the initial rate, they now say, was an astonishingly bad 6%. How they could release something so terrible is a mystery.
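
      To make the averaging point above concrete, here is a toy Python calculation using roughly the error rates mentioned in the comment; the traffic shares are invented purely for illustration.

          # Hypothetical (page type, error rate, share of page views).
          pages = [
              ("login screen",    0.000001, 0.600),  # 0.0001% errors, most traffic
              ("Medicaid screen", 0.010000, 0.399),  # 1% errors
              ("obscure screen",  0.300000, 0.001),  # 30% errors, almost no traffic
          ]

          unweighted = sum(err for _, err, _ in pages) / len(pages)
          weighted = sum(err * share for _, err, share in pages)

          print(f"average over page types:     {unweighted:.2%}")  # 10.33%
          print(f"traffic-weighted error rate: {weighted:.2%}")    # 0.43%

      Depending on which pages users actually hit, the two numbers can differ by an order of magnitude, so how the reported 0.75% was computed matters a great deal.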

    • I didn’t actually do the math, but that’s the general line of thinking I went down when they started talking about error rates. I haven’t been on the site, but it could be even worse than you suggest, assuming there are multiple opportunities for errors on each page (i.e., multiple inputs).

    • “How they could release something so terrible is a mystery.”

      I seem to remember an airport IT system a few years ago (Denver, was it?) that was so bad it had to be scrapped.

      The assumption here that Amazon (and other such sites) is getting it right is, I suspect, going to be shown to be very wrong at some point. (E.g., Adobe losing some large number of passwords.) Since I really need Amazon for my books and music, I really hate that Amazon is expanding so radically. Other people are similarly dependent on Apple and/or Google. Letting one of them, or any other cloud provider, function as your only backup is a bad idea.

      Software is, in mathematical principle, beyond human capabilities. As a program gets larger, the potential complexity grows faster than any computable function. (This is a consequence of Alan Turing’s “halting problem”.) Computer science is so depressing I left the field for the easier life of a Japanese-to-English translator. At least Japanese is something humans can do.

    • “Can require” versus “does require” is the interesting semantic difference.

      A straightforward application, where the family is all natural-born citizens, has only W-2 income, and has no blended relations (no adoptions, no step kids, no half-siblings, etc.), probably hits far fewer pages than one from a family with mixed legal immigration and citizenship statuses, mixed parentage, and complex income situations. That might be the “can require” 100-screen scenario.

      I’m betting the average number of screens/pages hit on an application is significantly lower than 100.