• Teaching Watson

    I don’t read Mashable, but someone tweeted this, so I checked it out. It has no sourcing or supporting links, which is disappointing. But the story is plausible and interesting.

    Both the Memorial Sloan-Kettering Cancer Center and WellPoint have each gotten themselves a Watson, and have spent the last year training it to apply its learning algorithms and vast computing power to helping patients. Similar to Siri, Watson was designed to give useful answers to natural-language questions. Rather than spitting back a series of links like a traditional search engine, Watson tries to find the single, correct answer to whatever it’s asked.

    For Memorial Sloan-Kettering, that means giving Watson more than 600,000 pieces of medical evidence, two million pages of text from medical journals, and 1.5 million patient records to sift through before it was ready to answer real questions about cancer treatment. WellPoint got Watson up to speed with a similar data dump, including 14,700 hours of hands-on training from nurses.

    @afrakt

    • In terms of potential for improving the health care system, Watson excites me more than any other development in health care and health policy.

      Maybe you could interview or invite a guest post from one of the folks spearheading these projects? The only one I know of is Martin Kohn, IBM’s chief medical scientist. He’s on Twitter here: https://twitter.com/MSKohn

    • While reading a previous post on defensive medicine, I was wondering how much of it was due to “medical zebras” — testing for extremely rare conditions. If that is an issue, will Watson add even more zebras to the menagerie to be tested?

      I also wonder whether the medical field is sufficiently limited to avoid Watson’s Jeopardy answer of “Toronto” when the humans correctly answered “Chicago”. http://en.wikipedia.org/wiki/Watson_%28computer%29#First_match The downside risk of a wrong answer is much more serious in a medical setting than on a game show; I also wonder whether the human operators of Dr. Watson will be in a position to contradict its “Toronto” answers should they arise.

    • Don Miller’s point is exactly right. Here’s the long story.

      AI had two camps back in the day (my all-but-thesis therein was 1984): the scruffies and the neats. The scruffies were interested in what it means to be a human being (i.e. what it would mean for a machine to “understand”; what intelligence is), and the neats were interested in making the machines do interesting things. The scruffies lost. (There was a similar story in the Linguistics Wars; the guys who wanted to put meaning into the grammar lost; Chomsky won.)

      Which is to say, Watson makes no attempt to (or pretense of) “understanding” in the slightest. It’s just matching words. You can think of it as an automated reference search: a glorified Google. What comes back is stuff the doc probably ought to be considering, but to be any use as such, there are going to be lots of false positives (references the doc doesn’t need to look at) in order to assure there are few false negatives (i.e. references the doc should look at but that get cut).

      Truth in advertising: I still very much think that neat AI (i.e. AI without a serious attempt at thinking about what it means to “understand”) is nothing more than glorified parlor tricks. But it’s not a nice thing to say…
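[Ed.: The false-positive/false-negative trade-off in the comment above can be made concrete with a toy sketch. Everything here — the scores, the labels, the threshold values — is hypothetical, not anything from Watson itself; the point is only that a word-matching retriever which must miss few relevant references has to lower its cutoff, letting in more irrelevant ones.]

```python
# Toy model of a reference retriever: each candidate reference gets a
# relevance score, and we show the doctor everything above a threshold.
# (score, is_actually_relevant) pairs — all values are made up.
candidates = [
    (0.95, True), (0.80, True), (0.72, False), (0.60, True),
    (0.55, False), (0.40, False), (0.35, True), (0.10, False),
]

def retrieve(threshold):
    """Return (false positives, false negatives) at a given cutoff."""
    fp = sum(1 for s, rel in candidates if s >= threshold and not rel)
    fn = sum(1 for s, rel in candidates if s < threshold and rel)
    return fp, fn

# A strict cutoff keeps the list short but misses relevant references:
print(retrieve(0.70))  # -> (1, 2): 1 irrelevant shown, 2 relevant missed

# A lenient cutoff misses nothing relevant, but the doctor must wade
# through more irrelevant matches:
print(retrieve(0.30))  # -> (3, 0)
```

In a medical setting the cost of a false negative (a treatment option never surfaced) dwarfs the cost of a false positive (an extra citation to skim), which is why such a system would plausibly be tuned toward the second, noisier regime.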

    • From here:
      “WellPoint, based in Indianapolis, plans to make money by helping doctors make more accurate treatment decisions, reducing the time it has to spend pre-authorizing procedures”

      which suggests that Watson might function as a pre-approval service for WellPoint doctors. A doctor would feed patient info to Watson and get back a recommended diagnosis and treatment, and that treatment would be pre-approved. Presumably, the doctor could ignore it and go through the normal approval route.