An open letter signed by hundreds of the biggest names in tech, including Elon Musk, has urged the world’s leading artificial intelligence labs to pause the training of AI systems more powerful than GPT-4 for at least six months, saying that recent advances in AI present “profound risks to society and humanity.”
The letter comes just two weeks after the public release of OpenAI’s GPT-4, the most powerful AI system yet released, which has led some researchers to shorten their timelines for the arrival of AGI, or artificial general intelligence that surpasses human cognitive ability. Many experts fear that, as an AI arms race heats up, humanity is sleepwalking into catastrophe.
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter says. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The list of signatories includes more than 1,000 names alongside Musk’s, including Apple co-founder Steve Wozniak, Sapiens author Yuval Noah Harari, and some of AI’s most distinguished academics, responsible for multiple breakthroughs in machine learning. As of Tuesday, no OpenAI employees had signed the letter, although CEO Sam Altman’s name briefly appeared on, then disappeared from, the list of signatories. At least four Google employees, including three from its subsidiary AI lab DeepMind, also appeared, as did Emad Mostaque, CEO of Stability AI, and Tristan Harris, executive director of the Center for Humane Technology. TIME was not able to verify that all the signatures were genuine.
“Contemporary AI systems are now becoming human-competitive at general tasks,” states the letter, which was hosted on the Future of Life Institute’s website. “We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
The letter calls on all AI labs to use the proposed six-month pause to pursue safety research. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” it says. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.” Those behind the letter note that they are not calling for a pause on AI development in general, but rather “a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
Gary Marcus, a signatory of the open letter and author of the book Rebooting AI, told TIME he added his name because he felt a responsibility to speak out. “There are serious near-term and far-term risks and corporate AI responsibility seems to have lost fashion right when humanity needs it most,” he said. “Blundering so quickly into uncharted territory with so much at stake doesn’t seem like a great way to go. If the [corporations] won’t take these risks seriously enough, it is important for the rest of us to speak up.”
Simeon Campos, the CEO of AI safety startup SaferAI, told TIME he signed the letter because it is impossible to manage the risks of systems when even the inventors of those systems don’t know exactly how they work, don’t know what they’re capable of, and don’t know how to place limits on their behavior. “What are we currently doing?” Campos said. “We’re scaling such systems to unprecedented levels of capabilities in a race at full speed with transformative effects on society. We must slow down the development of these systems to let society adapt and accelerate alternative AGI architectures that are safe and formally verifiable by design.”
The open letter ends on a hopeful note. “Society has hit pause [before] on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”