
HOW HARD IS CHESS?

David Gelernter

We already knew that computers are first-rate at solving equations, entertaining children, burying friends and enemies under E-mail and doing many other useful chores. They have also been brushing up on their chess. By the end of the second game between Deep Blue and Garry Kasparov last week, it was clear that IBM’s extraordinary computer was playing better chess than any machine ever had before. After Saturday’s game ended in a draw, the match was still tied, with one win apiece and three draws, but technology watchers were pretty well agreed: if the machine doesn’t triumph this time, it is likely to triumph before long.

And why bother about the actual date on which the computer finally vanquishes the human world champion? After all, it can already beat you. That in itself is suggestive and important, because no human being can play chess without thinking. And no human could beat the chess champion of the world, even in a single game, without bringing significant intelligence to bear. Shouldn’t we conclude that Deep Blue must be a thinking computer, and a smart one at that, maybe brilliant? Maybe a genius? Aren’t we forced to conclude that Deep Blue must have a mind? That henceforth Homo sapiens will be defined as “one type of thinking thing”?

No. Deep Blue is just a machine. It doesn’t have a mind any more than a flowerpot has a mind. Deep Blue is a beautiful and amazing technological achievement. It is an intellectual milestone, and its chief meaning is this: that human beings are champion machine builders. All sorts of activities that we thought could be done only by minds can in fact be done by machines too, if the machine builders are smart enough. Deep Blue underscores the same lesson about human thought we learned a couple of generations ago from mechanical calculators. You can’t do arithmetic without using your mind, but when a calculator does arithmetic, we don’t conclude that it has a mind. We conclude that arithmetic can be done without a mind.

Winning at chess, of course, is much harder than adding numbers. But when you think about it carefully, the idea that Deep Blue has a mind is absurd. How can an object that wants nothing, fears nothing, enjoys nothing, needs nothing and cares about nothing have a mind? It can win at chess, but not because it wants to. It isn’t happy when it wins or sad when it loses. What are its après-match plans if it beats Kasparov? Is it hoping to take Deep Pink out for a night on the town? It doesn’t care about chess or anything else. It plays the game for the same reason a calculator adds or a toaster toasts: because it is a machine designed for that purpose.

Computers as we know them will never have minds. No matter what amazing feats they perform, inside they will always be the same absolute zero. The philosopher Paul Ziff laid this out clearly almost four decades ago. How can we be sure, he asked, that a computer-driven robot will never have feelings, never have a mind? “Because we can program a robot to behave any way we want it to behave. Because a robot couldn’t mean what it said any more than a phonograph record could mean what it said.” Computers do what we make them do, period. However sophisticated the computer’s performance, it will always be a performance.

Not so fast, someone might say. The human brain is a machine too. How can we dismiss Deep Blue as just a machine when we don’t dismiss the human brain as just a machine?

Because if your brain is just a machine, it’s a machine that can do one trick that computers have no hope of doing. A trick that is intrinsic to the machinery, that can’t be duplicated onto some other machine, stored on a disc, reworked by smart programmers or appropriated by Microsoft. Because of the stuff it is made of, or the way its parts are arranged, the brain is a machine that is capable of creating an “I.” Brains can summon mental worlds into being, and computers can’t.

But might not scientists be able one day to build a machine in the laboratory with the same remarkable capacity? I doubt it. But if they do, that machine will be, chances are, an exact replica of the brain itself.

That said, don’t sell computers short. What’s important about Deep Blue’s success is what it tells us about the nature of computer science. We like to think of it as a fast-moving field. In fact, it is plodding but not easily discouraged. In the 1950s, many scientists decided that chess playing was an area in which computers could make rapid headway. Some predicted the imminent coming of a world-champion computer. But the problem turned out to be much harder than they imagined, as did many other problems in artificial intelligence. Outsiders tended to write the whole effort off; computer scientists, they figured, talked a good game but couldn’t deliver. The researchers themselves dug in their heels, set to work and produced Deep Blue. Progress has been made on other long-standing problems also: getting computers to translate English into Russian, for example, or to identify objects by sight.

Simulating thought in general, as opposed to solving a particular, sharply defined problem, has proved considerably harder. One of the biggest obstacles has been technologists’ naiveté about the character of human thought, their tendency to confuse thinking with analytical problem solving. They forget that when you look out the window and let your mind wander, or fall asleep and dream, you are also thinking. They tend to overlook something that such mind-obsessed poets as Wordsworth and Coleridge understood two centuries ago: that thought is largely a process of stringing memories together, and that memories are often linked by emotion. No computer can achieve artificial thought without achieving artificial emotion too. But even in that arcane field, some progress has been made.

The key technique behind Deep Blue is “parallel computing.” To solve a hard problem fast, use lots of computers simultaneously. Deep Blue is a computer ensemble: 32 general-purpose computers, each one attached to eight special-purpose processors. Parallel computing used to be (believe it or not) controversial. Some computer scientists were worried that programmers wouldn’t be able to manage lots of computers simultaneously. In retrospect, it was a piece of cake.
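Gelernter doesn’t describe the search algorithm itself, but the basic idea of splitting one search across many processors can be sketched in a few lines. The Python program below is only an illustration under invented assumptions: it plays a trivial counter-taking game rather than chess, and it uses the crudest parallel scheme, giving each candidate first move to its own worker process; Deep Blue’s actual search, on custom hardware, was enormously more elaborate.

```python
# A toy illustration of "root splitting," the simplest form of parallel game
# search: hand each candidate first move to a separate worker process and
# search the subtrees simultaneously. The game, depth, and worker count are
# invented for this sketch; Deep Blue's real search ran on 32 general-purpose
# computers plus special-purpose chess processors.
from concurrent.futures import ProcessPoolExecutor

# Toy game instead of chess: a position is a pile of n counters, a move
# removes 1, 2, or 3 of them, and whoever takes the last counter wins.
def moves(n):
    return [m for m in (1, 2, 3) if m <= n]

def evaluate(n, depth):
    """Negamax value of the position, from the side to move's point of view."""
    if n == 0:
        return -1        # the previous player took the last counter: we lost
    if depth == 0:
        return 0         # out of search depth: fall back to a neutral guess
    return max(-evaluate(n - m, depth - 1) for m in moves(n))

def search_root_move(args):
    """Work done by one process: score a single first move for the root player."""
    n, m, depth = args
    return m, -evaluate(n - m, depth - 1)

def best_move(n, depth=10, workers=3):
    """Split the root moves across worker processes and keep the best score."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(search_root_move, [(n, m, depth) for m in moves(n)])
    return max(results, key=lambda r: r[1])[0]

if __name__ == "__main__":
    # With 10 counters the winning reply is to take 2, leaving a pile of 8.
    print(best_move(10))
```

Because the subtrees under different first moves can be searched independently, the workers need almost no coordination, which is one reason managing many machines at once proved easier than the skeptics feared.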

The more powerful your computer, the more sophisticated the behavior it can imitate. In the long run I doubt if there is any kind of human behavior computers can’t fake, any kind of performance they can’t put on. It is conceivable that one day, computers will be better than humans at nearly everything. I can imagine that a person might someday have a computer for a best friend. That will be sad, like having a dog for your best friend but even sadder.

Computers might one day be capable of expressing themselves in vivid prose or fluent poetry, but unfortunately they will still be computers and have nothing to say. The gap between human and surrogate is permanent and will never be closed. Machines will continue to make life easier, healthier, richer and more puzzling. And human beings will continue to care, ultimately, about the same things they always have: about themselves, about one another and, many of them, about God. On those terms, machines have never made a difference. And they never will.

David Gelernter is professor of computer science at Yale and author of Drawing Life: Good, Evil and Mailbombs in Modern America, to be published in September by the Free Press.
