On the list of doomsday scenarios that could wipe out the human race, super-smart killer robots rate pretty high in the public consciousness. And in scientific circles, a growing number of artificial intelligence experts agree that humans will eventually create an artificial intelligence that can think beyond our own capacities. This moment, called the singularity, could create a utopia in which robots automate common forms of labor and humans relax amid bountiful resources. Or it could lead the artificial intelligence, or AI, to exterminate any creatures it views as competitors for control of the Earth—that would be us. Stephen Hawking has long seen the latter as more likely, and he made his thoughts known again in a recent interview with the BBC. Here are some comments by Hawking and other very smart people who agree that, yes, AI could be the downfall of humanity.
“The development of full artificial intelligence could spell the end of the human race,” the world-renowned physicist told the BBC. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Hawking has been voicing this apocalyptic vision for a while. In a May column in response to Transcendence, the sci-fi movie about the singularity starring Johnny Depp, Hawking criticized researchers for not doing more to protect humans from the risks of AI. “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI,” he wrote.
Known for cutting-edge tech businesses such as Tesla and SpaceX, Elon Musk is no fan of AI. At an MIT conference in October, Musk likened improving artificial intelligence to “summoning the demon” and called it the human race’s biggest existential threat. He has also tweeted that AI could be more dangerous than nuclear weapons. Musk called for the establishment of national or international regulations on the development of AI.
Nick Bostrom, a Swedish philosopher, is the director of the Future of Humanity Institute at the University of Oxford, where he has spent a lot of time thinking about the potential outcomes of the singularity. In his new book Superintelligence, Bostrom argues that once machines surpass human intellect, they could mobilize and decide to eradicate humans extremely quickly using any number of strategies (deploying unseen pathogens, recruiting humans to their side, or simple brute force). The world of the future would become ever more technologically advanced and complex, but we wouldn’t be around to see it. “A society of economic miracles and technological awesomeness, with nobody there to benefit,” he writes. “A Disneyland without children.”
James Barrat is a writer and documentarian who interviewed many AI researchers and philosophers for his new book, Our Final Invention: Artificial Intelligence and the End of the Human Era. He argues that intelligent beings are innately driven to gather resources and achieve goals, which would inevitably put a super-smart AI in competition with humans, the greatest resource hogs Earth has ever known. That means even a machine that was only supposed to play chess or fulfill other simple functions might get other ideas if it were smart enough. “Without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals,” he writes in the book.
A mathematician and science-fiction writer, Vernor Vinge is credited with coining the term “the singularity” to describe the inflection point when machines outsmart humans. He views the singularity as an inevitability, even if international rules emerge to control the development of AI. “The competitive advantage—economic, military, even artistic—of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first,” he wrote in a 1993 essay. As for what happens when we hit the singularity? “The physical extinction of the human race is one possibility,” he wrote.