After man vs. machine on ‘Jeopardy,’ what’s next for IBM’s Watson

Relax, humans. Watson may have beaten the most successful champions of “Jeopardy,” but that doesn’t mean machines are ready to take over the world.

Even Dan Gruhl, a researcher at IBM’s Almaden Research Center in San Jose who worked on Watson — a four-year project that involved about 20 to 25 people across IBM’s eight research labs — says so. He points to the 55 sparring sessions Watson took part in from November to January before this week’s big showdown with Ken Jennings and Brad Rutter, the game show’s most successful champions. Watson — which IBM calls a DeepQA machine made up of custom algorithms, terabytes of storage and “many, many CPUs” — won 71 percent of those mock games. Asked by GMSV in a phone interview Wednesday whether Watson’s progress had been incremental, Gruhl said, “Oh, heck yeah.”

When Gruhl talks about what’s next for Watson, he’s careful to stress that IBM sees Watson as a tool to help humans, not crush them. In particular, the folks at Almaden are working on how the technology behind Watson can be used in health care. For example, Gruhl said, Watson might scan through hundreds of pages of a person’s medical history, pull out what’s important and relay that to a doctor.

IBM this morning announced a partnership with Nuance Communications, a speech recognition technology company, to commercialize Watson for use in health care. Columbia University Medical Center and the University of Maryland School of Medicine will also take part in the research and development effort.

What if Watson makes mistakes at the hospital? After all, Watson doesn’t know all the answers; its first “Final Jeopardy” response, Toronto, came in a category called U.S. Cities. Gruhl points out that a doctor would be using Watson’s help, not relying solely on it. “In Jeopardy, if you get a question wrong, you might lose the game. In health care, that would be a problem,” Gruhl said.

“I see Watson’s capabilities not as a replacement for physicians but as an adjunct and tool to organize and highlight and prioritize information,” said Dr. Eliot Siegel, director of the Maryland Imaging Research Technologies Laboratory at the University of Maryland School of Medicine, in an e-mail.

Siegel said Watson can build on previous attempts at using artificial intelligence to help with medical diagnoses and treatments. “The first major breakthrough from the initial systems of the ’70s and ’80s is the ability to have the software ‘ingest’ vast amounts of data from textbooks, journal articles and other online resources. … The Watson team (has) technology that has the potential to result in a Renaissance in the application of artificial intelligence in medical data mining, data analysis, and decision support.”

Will doctors welcome Watson, maybe a little more than Jennings did during last night’s “Jeopardy” finale? (At one point, Jennings joked that he could “either unplug Watson or bet it all.” But in this chat on the Washington Post website, the wry and witty Jennings did express his appreciation for being part of the whole affair, which some perceived as “gimmicky”: “This is the coolest thing I will ever do in my life by a factor of a million. The future is here.”)

“Watson will have to fit in to the physician’s workflow without interfering with the establishment of a meaningful doctor-patient relationship,” said Dr. Herbert Chase, professor of Clinical Medicine at Columbia University, in an e-mail. “Given the data-driven nature of medicine, gadgets have the potential to occupy a provider’s attention during a visit.”

But it will probably be years until something like Watson is available on an iPad for doctors, IBM’s Gruhl acknowledged. For now, the technology — which was not connected to the Internet but was fed information from it — requires stacks and stacks of servers, like the ones shown on “Jeopardy” when Watson was introduced.

When it does become more widespread and practical for doctors, Chase thinks both “physicians BG” and “physicians AG” (Before Google and After Google) will embrace the technology behind Watson. “There is simply not enough time in the day to answer all the questions that need to be answered to provide the best care possible,” he said, and Watson can help with that. “The Google generation will not only not have to be convinced, they are going to be disappointed if there are not sufficient decision support tools like Watson that can promote efficient and effective medical care,” Chase said.

Note: For readers who just can’t get enough of IBM’s explanations of Watson, which took up a lot of airtime on “Jeopardy,” especially during the first episode of the three-day series, here’s a video that further explains the project and shows Watson’s progress over time. At the 17:25 mark is an explanation of how Watson went from taking two hours to answer a Jeopardy clue to three seconds.

 
 

  • Bill

    I’m stunningly less than impressed.
    A) Jeopardy may be the place where Watson shines brightest, as it’s about fact memorization and recall, not creative thinking.
    B) According to Google Maps, Ohio, Kansas, and South Dakota all have cities called “Toronto,” so Watson may not be far off.
    C) Combing through hundreds of pages of medical records isn’t that complicated. People were using technology to do that with thousands of pages of legal documents more than 10 years ago.
    Yes, this is a great publicity stunt.

    B

  • Larry Breyer

    I would really like to hear from Oracle, Google, and others capable of building similar machines. A game like Jeopardy between 3 competing machines sounds more interesting to me than Watson vs. Watson vs. Watson.

  • sd

    @Larry, +1. A new kind of “space race”?

    @Bill, yes, there was a huge amount of publicity in Watson’s competing on Jeopardy. However, to say Jeopardy does not reward creative thinking displays a misunderstanding of how the game is played. In addition to having to recall potentially correct answers from among millions of facts, a contestant (human or cybernetic) has to understand the nuance of the question’s wording and even the name of the category. Not only must Watson differentiate among “to,” “too” and “two”; it also must figure out that a correct question in a category called “Strange PAIRings” (emphasis mine) likely eliminates the “non-two” choices.

    Personally, I was far more interested that 1) IBM’s programmers apparently spent zero time teaching Watson how to bet; and 2) some of the second- and third-choice answers truly seemed to be out of left field. I’d love to figure out the process by which Watson arrived at some of THOSE choices.

  • Bryan Henderson

    I read that the engineers put significant effort into teaching Watson how to bet, and that the formula involves many factors. I didn’t notice obvious failings in Watson’s bets.

  • DD

    Yes, the betting was programmed similarly to some of the other learning. They just gave it previous Jeopardy matches, and it analyzed the general effect of bets given each player’s relative score, how much money is left on the board and how many clues remain. One of the programmers said this makes it very conservative when it’s ahead but very aggressive when it’s behind. It does know the importance of being up 2:1 at the end of Double Jeopardy, though, and will bet aggressively to achieve that.
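
    In rough Python, the kind of end-game wagering heuristic described above might look something like this. The function name, thresholds and dollar amounts are hypothetical illustrations, not IBM’s actual model:

        def daily_double_wager(my_score, best_opponent, board_remaining):
            """Hypothetical wager picker: conservative when ahead, aggressive when
            behind, and pushing toward a 2:1 lead going into Final Jeopardy."""
            target = 2 * best_opponent            # a 2:1 lead locks out the opponent in Final
            if my_score >= target:
                return min(1000, my_score)        # already locked out: protect the lead
            if my_score > best_opponent:
                needed = target - my_score
                return min(needed, my_score)      # ahead: bet just enough to reach 2:1
            if board_remaining < best_opponent - my_score:
                return my_score                   # behind with little board left: go all in
            return max(1000, my_score // 2)       # behind but catchable: bet big, not everything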

    They also spoke about the Toronto thing. There were a couple of things going on. First, Watson isn’t really looking up the answer to the question directly. It’s analyzing keywords in a particular way to generate preliminary answers, and then those preliminary answers are examined to see which is the best fit. They also mentioned that, in general, the title of a Jeopardy category isn’t a big help when doing searches (especially if it’s something like “potpourri”). So the best-fit scoring doesn’t give a lot of weight to the category. In this case, the humans could easily discount non-U.S. cities, but Watson really didn’t care so much. It wasn’t shown, but “Chicago” was reported to be its second choice.
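
    A toy Python sketch of that generate-then-score idea is below. The candidates, evidence scores and weights are invented purely for illustration; the only point is that the category contributes a small weight to the final score:

        def pick_answer(candidates, category_weight=0.1):
            # Score each candidate on several made-up evidence features in [0, 1].
            def score(name):
                ev = candidates[name]
                return (0.6 * ev["passage_support"]              # support in retrieved text
                        + 0.3 * ev["answer_type"]                # is it the right kind of thing?
                        + category_weight * ev["category_fit"])  # weak signal from the category title
            return max(candidates, key=score)

        # Invented numbers echoing the Toronto-in-"U.S. Cities" example:
        candidates = {
            "Toronto": {"passage_support": 0.9, "answer_type": 0.9, "category_fit": 0.1},
            "Chicago": {"passage_support": 0.7, "answer_type": 0.9, "category_fit": 1.0},
        }
        print(pick_answer(candidates))  # prints "Toronto"; the category barely moves the score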

  • sd

    The betting thing … I’ll accept that Watson was smart enough to consider its score when placing a bet. But it ignored custom by placing odd-valued bets. A typical Jeopardy contestant will bet, say, $1,000 or $3,300 — not $639. I’ve seen similar bets by human contestants before, and it doesn’t break a rule of the game. But it does raise (for me, anyway) the question of how you get a computer to understand that kind of custom. SMOP, maybe.

 
 