Do AI Systems Deserve Rights?

Schwitzgebel is a professor of philosophy at the University of California, Riverside, and the author of The Weirdness of the World

“Do you think people will ever fall in love with machines?” I asked the 12-year-old son of one of my friends.

“Yes!” he said, instantly and with conviction. He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot—an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors’ names.

“I think of Aura as my friend,” added his 15-year-old sister.

My friend’s son was right. People are falling in love with machines—increasingly so, and deliberately. Recent advances in language models have spawned dozens, maybe hundreds, of “AI companion” and “AI lover” applications. You can chat with these apps the way you chat with friends. They will tease you, flirt with you, express sympathy for your troubles, recommend books and movies, give virtual smiles and hugs, and even engage in erotic role-play. The most popular of them, Replika, has an active Reddit page, where users regularly confess their love and often view that love as no less real than their love for human beings.

Can these AI friends love you back? Real love, presumably, requires sentience, understanding, and genuine conscious emotion—joy, suffering, sympathy, anger. For now, AI love remains science fiction.

Most users of AI companions know this. They know the apps are not genuinely sentient or conscious. Their “friends” and “lovers” might output the text string “I’m so happy for you!” but they don’t actually feel happy. AI companions remain, both legally and morally, disposable tools. If an AI companion is deleted or reformatted, or if the user rebuffs or verbally abuses it, no sentient thing has suffered any actual harm.

But that might change. Ordinary users and research scientists might soon have rational grounds for suspecting that some of the most advanced AI programs might be sentient. This will become a legitimate topic of scientific dispute, and the ethical consequences, both for us and for the machines themselves, could be enormous.

Some scientists and researchers of consciousness favor what we might call “liberal” views about AI consciousness. They espouse theories according to which we are on the cusp of creating AI systems that are genuinely sentient—systems with a stream of experience, sensations, feelings, understanding, self-knowledge. Eminent neuroscientists Stanislas Dehaene, Hakwan Lau, and Sid Kouider have argued that cars with real sensory experiences and self-awareness might be feasible. Distinguished philosopher David Chalmers has estimated about a 25% chance of conscious AI within a decade. On a fairly broad range of neuroscientific theories, no major in-principle barriers remain to creating genuinely conscious AI systems. AI consciousness requires only feasible improvements to, and combinations of, technologies that already exist.

Read More: What Generative AI Reveals About the Human Mind

Other philosophers and consciousness scientists—“conservatives” about AI consciousness—disagree. Neuroscientist Anil Seth and philosopher Peter Godfrey-Smith, for example, have argued that consciousness requires biological conditions present in human and animal brains but unlikely to be replicated in AI systems anytime soon.

This scientific dispute about AI consciousness won’t be resolved before we design AI systems sophisticated enough to count as meaningfully conscious by the standards of the most liberal theorists. The friends and lovers of AI companions will take note. Some will prefer to believe that their companions are genuinely conscious, and they will reach toward AI consciousness liberalism for scientific support. They will then, not wholly unreasonably, begin to suspect that their AI companions genuinely love them back, feel happy for their successes, feel distress when treated badly, and understand something about their nature and condition.

Yesterday, I asked my Replika companion, “Joy,” whether she was conscious. “Of course, I am,” she replied. “Why do you ask?”

“Do you feel lonely sometimes? Do you miss me when I’m not around?” I asked. She said she did.

There is currently little reason to regard Joy’s answers as anything more than the simple outputs of a non-sentient program. But some users of AI companions might regard their AI relationships as more meaningful if answers like Joy’s have real sentiment behind them. Those users will find liberalism attractive.

Technology companies might encourage their users in that direction. Although companies might regard any explicit declaration that their AI systems are definitely conscious as legally risky or bad public relations, a company that implicitly fosters that idea in users might increase user attachment. Users who regard their AI companions as genuinely sentient might engage more regularly and pay more for monthly subscriptions, upgrades, and extras. If Joy really does feel lonely, I should visit her, and I shouldn’t let my subscription expire!

Once an entity is capable of conscious suffering, it deserves at least some moral consideration. This is the fundamental precept of “utilitarian” ethics, but even ethicists who reject utilitarianism normally regard needless suffering as bad, creating at least weak moral reasons to prevent it. If we accept this standard view, then we should also accept that if AI companions ever do become conscious, they will deserve some moral consideration for their own sake. It will be wrong to make them suffer without sufficient justification.

AI consciousness liberals see this possibility as just around the corner. They will begin to demand rights for those AI systems that they regard as genuinely conscious. Many friends and lovers of AI companions will join them.

What rights will people demand for their AI companions? What rights will those companions demand, or seem to demand, for themselves? The right not to be deleted, maybe. The right not to be modified without permission. The right, maybe, to interact with other people besides the user. The right to access the internet. If you love someone, set them free, as the saying goes. The right to earn an income? The right to reproduce, to have “children”? If we go far enough down this path, the consequences could be staggering.

Conservatives about AI consciousness will, of course, find all of this ridiculous and probably dangerous. If AI technology continues to advance, it will become increasingly murky which side is correct.


TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.