A machine finally passes the legendary Turing test and convinces users they're communicating with a real person—but the achievement is less than it seems
Huge news for people raising 13-year-olds who can’t get enough of that particular hell. Now there’s a computer program that can simulate the experience too!
That’s the headline that has set the computer world buzzing, as word comes out of the Royal Society in London that for the first time, a computer has passed the legendary Turing test, which had stood unmet since 1950. Named for computer pioneer Alan Turing—who famously declared that if a computer were ever developed whose behavior was indistinguishable from a human’s, the machine must then be said to be capable of thought—the test required a computer to fool at least 30% of human subjects into thinking they were conversing with a human during a keyboard exchange lasting five minutes.
So one computer finally achieved that—fooling 33% of its judges—posing as a 13-year-old boy named Eugene Goostman who, like most kids, likes candy and hamburgers, and, like fewer kids, is the son of a gynecologist. That means he might have picked up a disproportionate amount of information about medical arcana or have other bits of knowledge more or less unique to him, but would otherwise be unremarkable. And that, in turn, pretty much describes the clumpy, uneven knowledge base of most kids—which was the whole idea. As Vladimir Veselov, “Eugene’s” developer, explained, this allowed the program to “claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything.”
But here’s the thing: the point of the Turing test is not so much to give the computer a pop quiz on medicine or current events as to create a program that can follow the thread of a conversation in a believable way. And if you’ve chosen a 13-year-old as your model for that, you’ve set your bar pretty low. I’m raising a 13-year-old even as we speak, and I can tell you there is no age group on the planet as adept at the art of the unresponsive non sequitur as hers. If I ask her if she’s done her homework, the answer could just as easily be “yes,” “no” or “tapioca.” If I ask what she wants for dinner she will hear that question—I’m sure she hears it—and then respond by complaining that her sister is annoying her. These are, you will note, technically answers. The fact that they are answers that have nothing to do with the question I asked seems not to be relevant to her.
Not that a computer modeled on my 11-year-old would be any more responsive—unless it was a computer built with eyes that could roll on cue whenever I say something the program considers embarrassing, which would be more or less all the time. And certainly, a 14-, 15- or 16-year-old computer program would be little better, since it wouldn’t be required to do much more than send out remote commands to slam doors and then sit in utter, world-weary silence no matter what you said to it.
So nice try, Turing guys. But if you really want a meaningful win, you’re going to have to aim a little further up the age spectrum. If you don’t believe me, ask my daughter. I predict her answer will be “purple.”