16 February 2011

IBM’s Watson Supercomputer Plays Against Humans

Watson, the IBM computer created to be a champ-challenging contestant on Jeopardy!, chewed up and spat out humanity's finest knights of trivia Wednesday night in a $77,147 drubbing of past record winners Ken Jennings ($24,000 total) and Brad Rutter ($21,600).

"I sort of felt like I wanted to win here as badly as I ever have before. This is like the dignity of the species," Jennings told CBS Evening News after the first night's show.

So has the fatal day of reckoning arrived, foretold by Frankenstein, 2001: A Space Odyssey and, of course, The Terminator series of man-vs.-machine movies? Even trivia, one of humanity's favorite pursuits, has yielded to the mighty computer.

What is "not so much"?, Alex.

"It is clear (Watson) was designed to play Jeopardy! very well," says computer intelligence expert John Laird of the University of Michigan in Ann Arbor, both in its vast erudition — answering Saturday Night Live's "Church Lady" in one question — and its "impressive" accuracy. But, he adds, "I think IBM could have a challenge to move this to other fields."

IBM is trying. Last week, the Armonk, N.Y.-based computer-services titan announced partnerships with eight universities, including MIT and the University of Texas, to explore new uses of Watson's "DeepQA" technology. The process lets Watson sift through vast stores of text and weigh its confidence in candidate answers to complex questions. IBM's David Ferrucci says the company sees future uses for Watson in automating question-answering for health care and legal aid.
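
How such a pipeline might work can be pictured in rough terms. The toy Python snippet below is an illustration only, not IBM's code; every name in it is hypothetical. It shows the core idea of generating candidate answers from text and attaching a confidence score to each:

    # Toy sketch of the candidate-and-confidence idea behind "DeepQA".
    # Everything here is a hypothetical illustration, not IBM's pipeline.
    def answer(clue, corpus):
        """Generate candidate answers from passages, rank by confidence."""
        clue_words = set(clue.lower().split())
        candidates = {}
        for passage in corpus:
            # Crude evidence score: word overlap between clue and passage.
            overlap = len(clue_words & set(passage.lower().split()))
            # Crude candidate generation: capitalized terms are candidates.
            for word in passage.split():
                if word.istitle():
                    candidates[word] = candidates.get(word, 0) + overlap
        if not candidates:
            return None, 0.0
        total = sum(candidates.values()) or 1
        best = max(candidates, key=candidates.get)
        return best, candidates[best] / total  # (answer, confidence 0-1)

IBM's published descriptions of DeepQA have it generating hundreds of candidates and scoring them with many independent evidence algorithms before merging everything into one confidence estimate; this toy keeps only the shape of that computation, many guesses with a number attached to each.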

But even amid Watson's triumph, some answers it blew point to problems, say experts such as machine-learning pioneer Douglas Lenat of Cycorp in Austin. "Like a human idiot savant, it would get wrong a large fraction of things that almost all sane adult humans would get right," Lenat says. "For instance, if the category were 'Uphill vs. Downhill,' and the clue was 'The direction that beer flows.' " (Watson doesn't understand gravity as a concept, and few documents it searches are likely to discuss the propensity of beer to flow downhill.)

Then there was the "Final Jeopardy" clue Tuesday in the "U.S. cities" category, which asked for the city with one airport named for a World War II hero and another named for a World War II battle. Watson answered "Toronto???" instead of Chicago (O'Hare and Midway).

Ferrucci defends Watson, saying it gave that answer with only 14% confidence because the rules of Final Jeopardy forced it to respond. "Just because a Jeopardy! category says 'U.S. cities' does NOT mean the answer is (actually the name of) a U.S. city," Ferrucci says by e-mail.
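
That forced-answer rule is simple to picture. A minimal sketch, assuming a single buzz threshold (the threshold value and names below are invented for illustration, not IBM's actual settings):

    # Hypothetical sketch of the decision Ferrucci describes: in regular
    # play Watson buzzes only above a confidence threshold, but in Final
    # Jeopardy an answer must be given however weak it is.
    BUZZ_THRESHOLD = 0.5  # invented value, not IBM's actual setting

    def respond(best_answer, confidence, final_jeopardy=False):
        if final_jeopardy:
            return best_answer   # forced to answer, e.g. "Toronto" at 14%
        if confidence >= BUZZ_THRESHOLD:
            return best_answer   # confident enough to buzz in
        return None              # stay silent and risk nothing

In regular rounds a low-confidence guess can simply be withheld; in Final Jeopardy it cannot, which is how a 14%-confidence "Toronto" ends up on national television.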

Building off decades of artificial-intelligence research, Watson pursues one well-trod path, broadly analyzing vast libraries of text for answers, rather than relying on the deep structural knowledge of a very narrow area, such as lunar geology, seen in so-called "expert" systems, says Ellen Voorhees of the National Institute of Standards and Technology in Gaithersburg, Md. "It is a significant advance," she says, but it lacks the combination of broad search and deep knowledge an ideal system would have.

"Compared to a human brain, Watson doesn't even come close in computational power," says information scholar Martin Hilbert of the University of Southern California. "If only we put as much effort in educating human brains as we spent on computers." Watson required the work of more than 20 researchers over four years to develop.

Anxiety about Watson says more about people than computers, says Andrew Meltzoff of the University of Washington in Seattle, an expert on robot-human interaction. "To us, potentially losing to a computer raises issues of 'Who am I? What does it mean to be human?' " he says. "The Jeopardy! event is hype for humans. The computer doesn't care. The debate is a clue to what makes us human."
