I watched PBS’s NOVA show “Smartest Machine on Earth” last night, which was about both the general topic of artificial intelligence (AI) and the specific occasion of IBM’s Watson appearing on Jeopardy! next week.
Two things I couldn’t discern:
(i) whether Watson will get to know the other players’ answers during the game, along with the announcer’s feedback as to whether each answer is correct or not. I had thought not, but the show last night said that access to those answers had made a big difference in Watson’s ability to compete at the champion level. That makes sense, since hearing other players’ correct answers informs a player about what sort of answer is wanted in a particular Jeopardy! category. That part of the show now makes me think that Watson will be informed of the other players’ answers during the actual game, in spite of the fact that it hasn’t any hearing apparatus, and
(ii) how clear the promoters are going to be about the buzzer lockout feature, which penalizes premature button presses and works to Watson’s advantage.
The section on the history of AI was interesting: in essence, the story of Watson is the story of how it came about in spite of AI’s many missteps. As I noted in an earlier essay on my “Intelligence and Automation” page, it seems that to acquire natural-language capabilities, machines have to learn, and they have to learn from unstructured formats.
I’d think, then, that learning requires the machine to be subjected to whatever the analog of being immersed in an environment is for a human. That suggests that an AI project requires designing both the algorithms the machine is to use and an analog of an environment appropriate for machine learning. What’s so interesting is that it is now accepted that AI does have to pay attention to designing machines that can learn; the show made a big deal about “Machine Learning,” featuring Tom Mitchell of CMU quite prominently. Mitchell wrote the now-classic book “Machine Learning” and chairs the Machine Learning Department in CMU’s School of Computer Science. The approach of machines that learn seems to be the opposite of some earlier views in the philosophy of AI, which saw things in terms of a contrast between humans, who had to learn things over time, and machines, which just had to follow rules that humans devised for them. This approach gives me hope that AI may one day soon shed its reputation for false promises. The approach is not at all new, though: a machine that would learn is exactly what Alan Turing envisioned in his papers on computing machinery and intelligence.
I would have liked to see a bit more about Alan Turing and Charles Babbage in the show’s section on the history of computing and AI. I expect to learn more about Charles Babbage, who preceded Turing, soon — I’ll post about that after I get a chance to read Laura Snyder’s book “The Philosophical Breakfast Club,” which is about four thinkers of that era who were all friends and all undergraduates together, one of whom was Babbage. (The book comes out this month.)
I’ve read a bit of Alan Turing, and written a bit about him. I feel that Turing was insightful about many cultural aspects of language learning that the field of AI is finally being forced to acknowledge. What sounded fanciful to the AI researcher of the 1970s looking to codify language learning and common-sense reasoning as a set of rules no longer sounds so unrelated to serious AI research. Turing, I think, was way ahead of those who followed him in the AI field on many points that seemed off-topic to them. Another issue he mentioned regarding the project of building a computing machine with language capability, which he envisioned as a machine that could learn, was this: it couldn’t go to school with human children, he said, because the other children would make fun of it.
Maybe Turing was wrong about who would make fun of it, though: insensitive children or insensitive adults. NOVA’s show last evening related that Watson faced such a problem almost as soon as it left its home environment at IBM. David Ferrucci tells what happened when he brought his children to some practice runs of Jeopardy!: it was they who pointed out that the announcer, a stand-in playing Alex Trebek’s role during the practice runs, was making fun of Watson. Keenly aware of the situations a schoolchild who is different faces, they asked, “Why is he being so mean to Watson?”
Not so fanciful as it may seem at first: recall that once the machine’s socially isolating lack of any hearing apparatus was rectified by texting it the answers the other players gave, along with the announcer’s verdict on whether each answer was right or wrong, there weren’t nearly so many occasions to laugh at Watson’s answers.