September 11, 2015 12:52 PM EDT

“Will robots need rights?” is like asking “Will robots be Jewish? Environmentalist? Libertarian?” At first such questions seem silly. Like all machines, robots will be whatever we make them. If robot makers become truly expert, the question might not even come up—a robot that seems entirely human will be taken for human and granted rights as a matter of course.

But there’s a different angle that makes such questions deep and important. Today’s robots have “minds” made of digital computers—ordinary, everyday computers—and this will continue to be true for quite a while. Such computers can think, in a way, but won’t ever be conscious. And it makes no sense to say that an unconscious, synthetic object (a lawn mower, say, or a bowling ball, or a pair of pants) has rights. Having rights implies that you are, were, or will be part of society. Only a “you” can have rights, not an “it.” Unconscious machines don’t qualify.

Why can’t digital computers be conscious? Because consciousness is like rust: it happens to certain physical objects under certain conditions. Rust happens to an unpainted steel cabinet that is left outside in the rain. Consciousness happens to a human or animal brain when it is alive and awake.

Digital computers aren’t candidates. Software isn’t intended to produce physical changes in the computer it’s running on; it produces information. If you want consciousness, digital computers are the wrong kind of stuff and software is the wrong tool to achieve it.

That doesn’t mean there will be no rules regarding robots. The American flag is unconscious and synthetic, yet there are rules against handling it disrespectfully. The flag itself couldn’t care less, but people care. It will be the same with robots.

It’s likely that some robots of the future will look and sound human. It would be wrong to abuse such a robot. Smashing it to bits with a hammer would look as if you were attacking a person. And even if every last onlooker knew it was a mere robot—knew that it could experience exactly nothing—you could not make such an attack without making real cruelty just a little less revolting, a little easier to stomach.

Will robots be treated kindly? Of course not. People aren’t. But fantasy doesn’t hurt, much.

Gelernter is professor of computer science at Yale University and the author of the forthcoming book The Tides of Mind: Uncovering the Spectrum of Consciousness
