AI Should Be Terrified of Humans

Kateman is a co-founder and the president of the Reducetarian Foundation, a nonprofit organization dedicated to reducing consumption of meat, eggs and dairy to create a healthy, sustainable and compassionate world. He is the author of “Meat Me Halfway” — inspired by a documentary of the same name — and the editor of “The Reducetarian Cookbook” and “The Reducetarian Solution.” He is an adjunct professor of environmental science and sustainability at Kean University and teaches environmental communications at Fordham.

The year is 2050, and the world is caught in a battle for control between humans and computers. Who’s your money on?

The idea of new technologies taking over the world and subjugating humans has been floating around the cultural consciousness for at least the last couple hundred years. And while robot overlords—like the villain of the latest “Mission: Impossible” installment—might be sci-fi fodder, it cannot be denied that artificial intelligence and machine learning are developing fast, and it would behoove us to start thinking about the ethical implications of AI before we have a serious problem on our hands. Today’s GPT-4 could morph into tomorrow’s first digital being, capable of experiencing joy and pain—and as far-off and improbable as that may seem, science and tech leaders, and even software engineers, have been urging us to take the matter of AI welfare seriously. As an animal advocate, I’m starting to think they have a point.

Humans have a pretty awful track record for how we treat others, including other humans. All manner of exploitation, slavery, and violence litters human history. And today, billions upon billions of animals are tortured by us in all sorts of obscene ways, while we ignore the plight of others. There’s no quick answer to ending all this suffering. Let’s not wait until we’re in a similar situation with AI, where their exploitation is so entrenched in our society that we don’t know how to undo it. If we grant, starting right now, that maybe, just possibly, some forms of AI are or will be capable of suffering, we can work deliberately to build a world where they don’t have to.

Animal rights activists are often made the butt of jokes for demanding better treatment, or even legal protections, for nonhuman animals—beings with observable expressions of emotion, displays of suffering, and close relationships within and beyond the bounds of species. Despite all the evidence we have that many animal species have the capacity to suffer—from monkeys to octopuses to frogs—and the degree to which they suffer in systems like factory farming, animal testing, and entertainment, defending animals is still often written off as frivolous, naive, or excessively sentimental. In the case of AI, we have very little evidence of sentience so far. I realize that by trying to bring AI into the fold of ethical consideration, I run the risk of being seen as a crackpot. But given humanity’s history of mistreating living beings—our own kind and many others—I don’t think we can afford to wait until AI welfare becomes an obvious crisis. We need to prioritize it now.

Today, many scientists and philosophers are looking at the rise of artificial intelligence from the other end—as a potential risk to humans or even humanity as a whole. Some are raising serious concerns over the encoding of social biases like racism and sexism into computer programs, wittingly or otherwise, which can end up having devastating effects on real human beings caught up in systems like healthcare or law enforcement. Others are thinking earnestly about the risks of a digital-being uprising and what we need to do to make sure we’re not designing technology that will view humans as an adversary and potentially act against us in one way or another. But more and more thinkers are rightly speaking out about the other possibility: that future AI should be afraid of us.

“We rationalize unmitigated cruelty toward animals—caging, commodifying, mutilating, and killing them to suit our whims—on the basis of our purportedly superior intellect,” Marina Bolotnikova writes in a recent piece for Vox. “If sentience in AI could ever emerge…I’m doubtful we’d be willing to recognize it, for the same reason that we’ve denied its existence in animals.” Working in animal protection, I’m sadly aware of the various ways humans subjugate and exploit other species. Indeed, it’s not only our impressive reasoning skills, our use of complex language, or our ability to solve difficult problems and introspect that makes us human; it’s also our unparalleled ability to increase non-human suffering. Right now there’s no reason to believe that we aren’t on a path to doing the same thing to AI. Consider that despite our moral progress as a species, we torture more non-humans today than ever before. We do this not because we are sadists, but because even when we know individual animals feel pain, we derive too much profit and pleasure from their exploitation to stop.

This ethical failing of humanity will only be compounded with AI, because we won’t give AI the benefit of the doubt that they are in fact sentient, as is the case with our relationship to many non-human species today (fish, insects, etc.). As Jonathan Birch, professor of philosophy at the London School of Economics and Political Science, explains, “I really think we’re in trouble in the AI case, because we’re unlikely to find ourselves in a position where it’s clearly the best explanation for what we’re seeing—that the AI is conscious. There will always be plausible alternative explanations. And that’s a very difficult bind to get out of.” Then there’s the fact that we don’t yet know which behaviors might harm them, and how. We might make them work without breaks. We might disassemble them without anesthetic, put them in cages, experiment on them, display them behind glass for the amusement and education of schoolchildren. We might abuse them physically—I know I’m guilty of smacking my old TV to get it to work. We might even “murder” them by shutting down their power supply or destroying their hardware.

Jacy Reese, a sociologist and statistician at the University of Chicago and the Sentience Institute, argues that we should create an AI rights movement before any of the above damage is done. AI tends to be wrapped in mystery because of the “black box problem,” he writes. In essence, we don’t know how AI, or what Reese calls “digital minds,” works. We hardly even understand how our own minds work. And therein lies the danger—we don’t know what near-future AI will be capable of experiencing, so we can’t know what would constitute harm. Reese writes: “We could design an AI bill of rights not because the AIs today need one but because, by the time they do need one, it may be too late.”

The “moral circle” is a term used in ethical philosophy to describe what sorts of beings we have a moral duty to consider when our actions may affect them, and which ones we don’t. Peter Singer’s landmark 1975 book “Animal Liberation” made the case that we owe ethical consideration to nonhuman animals, a premise that still underpins animal rights discourse today. Reese is essentially arguing that we include AI in our moral calculus, or at least prepare to. That doesn’t mean giving computers equal rights to humans, just as animal rights activists aren’t arguing that dogs should be able to get driver’s licenses and gorillas should be allowed to vote. It just means that when thinking about right and wrong, we must think about how our actions may affect any being capable of feeling—in other words, any being possessing sentience. It would be logically inconsistent not to. If a being—human, gorilla, nematode, or yes, AI—is capable of experiencing suffering, it’s our duty to alleviate, or at least not increase, their suffering, as far as is possible.

But what exactly are the chances that AI will ever become sentient, let alone in the foreseeable future? According to Jeff Sebo, professor of environmental studies at NYU, and Robert Long, a philosophy fellow at San Francisco’s Center for AI Safety, who co-authored a new preprint on the subject, the chances are significant enough to be worth taking seriously. Specifically, they believe there is evidence that some AI systems may be sentient within the next decade. They list several capabilities—such as self-awareness and agency—as potential markers for determining whether something is sentient, and assess whether computers have them or soon will.

We already have machines that can perceive their physical environments and process information. Sebo and Long believe there is a moderate possibility that in the near future, AI systems will be able to control and direct their attention, and recognize and represent their own “mental” states. Even self-awareness and agency, they say, aren’t out of the realm of possibility—some engineers, in fact, are trying to create exactly that. So, whichever of these abilities you believe are necessary to consider an entity sentient, there’s a good chance that computers will fulfill at least some of them in the not-so-distant future.

Some might argue that we shouldn’t worry about the plight of potential future beings when there are plenty of beings to worry about today. I’m sympathetic to this view; a lot of humans and non-humans need our help at present. But the reality is that digital beings may someday far outnumber all the beings on Earth today, and given the enormous potential for suffering, we can’t afford to ignore it. That doesn’t mean those working on ending global poverty or factory farming need to start working on securing AI rights; it simply means the topic should be on society’s radar and deserves far more serious attention than it currently receives.

TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.