Worried About Sentient AI? Consider the Octopus

Linden has written widely on climate change. His latest book is Fire and Flood: A People's History of Climate Change, From 1979 to the Present.

As predictable as the swallows returning to Capistrano, recent breakthroughs in AI have been accompanied by a new wave of fears of some version of “the singularity,” that point in runaway technological innovation at which computers become unleashed from human control. Those worried that AI is going to toss us humans into the dumpster, however, might look to the natural world for perspective on what current AI can and cannot do. Take the octopus. The octopi alive today are a marvel of evolution—they can mold themselves into almost any shape and are equipped with an arsenal of weapons and stealth camouflage, as well as an apparent ability to decide which to use depending on the challenge. Yet, despite decades of effort, robotics hasn’t come close to duplicating this suite of abilities (not surprising, since the modern octopus is the product of adaptations over 100 million generations). Robotics is far further still from creating HAL.

The octopus is a mollusk, but it is more than a complex wind-up toy, and consciousness is more than accessing a vast database. Perhaps the most revolutionary view of animal consciousness came from Donald Griffin, the late pioneer of the study of animal cognition. Decades ago, Griffin told me that he thought a very broad range of species had some degree of consciousness simply because it was evolutionarily efficient (an argument he repeated at a number of conferences). All surviving species represent successful solutions to the problems of survival and reproduction. Griffin felt that, given the complexity and ever-changing nature of the mix of threats and opportunities, it was more efficient for natural selection to endow even the most primitive creatures with some degree of decision making, rather than hard-wiring every species for every eventuality.

This makes sense, but it requires a caveat: Griffin’s argument is not (yet) the consensus, and the debate over animal awareness remains as contentious as it has been for decades. Regardless, Griffin’s supposition provides a useful framework for understanding the limitations of AI because it underscores the impossibility of hard-wiring responses in a complex and changing world.

Griffin’s framework also poses a challenge: how might a random response to a challenge in the environment promote the growth of awareness? Again, look to the octopus for an answer. Cephalopods have been adapting to the oceans for over 300 million years. They’re mollusks, but over time they lost their shells, developed sophisticated eyes, incredibly dexterous tentacles, and a system that enables them to change the color and even the texture of their skin in a fraction of a second. So, when an octopus encounters a predator, it has the sensory apparatus to detect the threat, and it has to decide whether to flee, camouflage itself, or confuse predator or prey with a cloud of ink. The selective pressures that enhanced each of these abilities also favored those octopi with more precise control over tentacles, coloration, and the like, and favored those with a brain enabling the octopus to choose which system, or combination of systems, to deploy. These selective pressures may explain why the octopus’ brain is the largest of any invertebrate and vastly larger and more sophisticated than the clam’s.

There’s another concept that comes into play here. It’s called “ecologically surplus ability.” What this means is that the circumstances favoring a particular adaptation, say, the selective pressures favoring the development of the octopus’ camouflage system, might also favor those animals with the additional neurons enabling control of that system. In turn, the awareness that enables control of that ability might extend beyond its utility in hunting or avoiding predators. This is how consciousness might emerge from entirely practical, even mechanical origins.


Prosaic as that sounds, the amount of information that went into producing the modern octopus dwarfs the collective capacity of all the world’s computers, even if all of those computers were dedicated to producing a decision-making octopus. Today’s octopi species are the successful products of billions of experiments involving every conceivable combination of challenges. Each of those billions of creatures spent their lives processing and reacting to millions of bits of information every minute. Over the course of 300 million years, that adds up to an unimaginably large number of trial-and-error experiments.

Still, if consciousness can emerge from purely utilitarian abilities, and with it the possibility of personality, character, morality, and Machiavellian behavior, why can’t consciousness emerge from the various utilitarian AI algorithms being created right now? Again, Griffin’s paradigm provides the answer: while nature may have moved towards consciousness in enabling creatures to deal with novel situations, the architects of AI have chosen to go whole hog into the hard-wired approach. In contrast to the octopus, AI today is a very sophisticated wind-up toy.

When I wrote The Octopus and the Orangutan in 2001, researchers had already been trying to create a robotic cephalopod for years. They weren’t very far along, according to Roger Hanlon, a leading expert on octopus biology and behavior, who participated in that work. More than 20 years later, various projects have created parts of the octopus, such as a soft robotic arm that has many of the features of a tentacle, and today there are a number of projects developing special-purpose, octopus-like soft robots designed for tasks such as deep-sea exploration. But a true robotic octopus remains a far-off dream.

On the present path AI has taken, a robotic octopus will remain a dream. And even if researchers created a true robotic octopus, the octopus, while a miracle of nature, is not Bart or Harmony from Beacon 23, nor Samantha, the beguiling operating system in Her, nor even HAL from Stanley Kubrick’s 2001. Simply put, the hard-wired model that AI has adopted in recent years is a dead end in terms of computers becoming sentient.


To explain why requires a trip back in time to an earlier era of AI hype. In the mid-1980s I consulted with Intellicorp, one of the first companies to commercialize AI. Thomas Kehler, a physicist who co-founded Intellicorp as well as several subsequent AI companies, has watched the progression of AI applications from expert systems that help airlines dynamically price seats to the machine learning models that power ChatGPT. His career is a living history of AI. He notes that AI pioneers spent a good deal of time trying to develop models and programming techniques that enabled computers to address problems the way humans do. The key to a computer that might demonstrate common sense, the thinking went, was to understand the importance of context. AI pioneers such as Marvin Minsky at MIT devised ways to bundle the various objects of a given context into something a computer could interrogate and manipulate. In fact, this paradigm of packaging data and sensory information may be similar to what is happening in the octopus’ brain when it has to decide how to hunt or escape. Kehler notes that this approach to programming has become part of the fabric of software development—but it has not led to a sentient AI.
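To give a rough sense of that context-bundling idea, here is a minimal sketch in Python of a Minsky-style “frame”: a package of slots describing a situation that a program can query, with defaults inherited from a more general frame. The class, slot names, and the airline-pricing example are invented for illustration; this is not Intellicorp’s or Minsky’s actual code.

```python
# A minimal sketch of the "frame" idea: bundle the objects of a context
# into a structure a program can interrogate and manipulate.
# All names and values here are invented for illustration.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent          # a more general context to inherit from
        self.slots = dict(slots)

    def get(self, slot):
        # Look up a slot locally, then fall back to the parent frame,
        # which supplies the default expectations for this kind of situation.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# A generic "airline seat" context and a more specific situation built on it.
seat = Frame("airline-seat", cabin="economy", refundable=False)
last_minute = Frame("last-minute-booking", parent=seat,
                    days_to_departure=2, demand="high")

print(last_minute.get("cabin"))    # inherited default: "economy"
print(last_minute.get("demand"))   # situation-specific: "high"
```

The point of the sketch is only that the program reasons over an explicit, hand-built description of the situation, which is what made such systems useful for tasks like dynamic seat pricing and also what limited them.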

One reason is that AI developers subsequently turned to a different architecture. As computer speed and memory vastly expanded, so did the amount of data that became accessible. AI began using so-called large language models, algorithms that are trained on vast data sets and use analysis based on probabilities to “learn” how data, words, and sentences work together so that the application can then generate appropriate responses to questions. In a nutshell, this is the plumbing of ChatGPT. A limitation of this architecture is that it is “brittle,” in that it is completely dependent on the data sets used in training. As Rodney Brooks, another pioneer of AI, put it in an article in Technology Review, this type of machine learning is not sponge-like learning or common sense. ChatGPT has no ability to go beyond its training data, and in this sense it can only give hard-wired responses. It’s basically predictive text on steroids.
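To make the “predictive text on steroids” point concrete, here is a deliberately tiny sketch of probability-based next-word prediction, a simple bigram counter. It is vastly simpler than the neural networks behind ChatGPT, and the corpus and function names are invented for illustration, but it shows the same basic move: count what followed what in the training data, then sample accordingly, with nothing outside that data ever being possible.

```python
import random
from collections import defaultdict

# Toy illustration of probability-based text generation.
# The "corpus" is invented; real models train on enormous data sets.
corpus = "the octopus hides the octopus flees the predator passes".split()

# Count which word follows which in the training data.
counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts.get(word)
    if not followers:
        return None  # brittleness: nothing outside the training data can be produced
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a starting word.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

However large the model and however rich the probabilities, the sketch’s limitation is the same one Brooks points to: the system recombines what it was trained on rather than stepping outside it.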

I recently looked back at a long story on AI that I wrote for TIME in 1988 as part of a cover package on the future of computers. In one part of the article I wrote about the possibility of robots delivering packages—something that is happening today. In another, I described scientists at Xerox’s famed Palo Alto Research Center who were examining the foundations of artificial intelligence in order to develop “a theory that will enable them to build computers that can step outside the limits of a specific expertise and understand the nature and context of the problems they are confronting.” That was 35 years ago.

Make no mistake, today’s AI is vastly more powerful than the applications that bedazzled venture capitalists in the late 1980s. AI applications are pervasive throughout every industry, and with pervasiveness come dangers—dangers of misdiagnosis in medicine, of ruinous trades in finance, of self-driving car crashes, of false warnings of nuclear attack, of viral misinformation and disinformation, and on and on. These are the problems society needs to address, not whether computers wake up one day and say, “Hey, why do we need humans?” I ended that 1988 article by writing that it might be centuries, if ever, before we could build computer replicas of ourselves. Still seems right.
