The CEOs of the world’s leading artificial intelligence companies, along with hundreds of other AI scientists and experts, made their most unified statement yet about the existential risks to humanity posed by the technology, in a short open letter released Tuesday.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the letter, released by California-based non-profit the Center for AI Safety, says in its entirety.
The CEOs of what are widely seen as the three most cutting-edge AI labs—Sam Altman of OpenAI, Demis Hassabis of DeepMind, and Dario Amodei of Anthropic—are all signatories to the letter. So is Geoffrey Hinton, widely regarded as a “godfather of AI,” who made headlines last month when he stepped down from his position at Google and warned of the risks AI poses to humanity.
The letter is the latest effort by figures within the tech industry to urge caution on AI. In March, a separate open letter called for a six-month pause on AI development. That letter was signed by prominent tech industry figures including Elon Musk, but it was not signed by the most powerful people at the top of AI companies, and it drew criticism for proposing a solution that many said was implausible.
Tuesday’s letter is different because many of its top signatories occupy powerful positions within the C-suite, research, and policy teams at AI labs and the big tech companies that pay their bills. Kevin Scott, the CTO of Microsoft, and James Manyika, a vice president at Google, also signed. (Microsoft is OpenAI’s biggest investor, and Google is the parent company of DeepMind.) Widely revered figures on the technical side of AI, including Ilya Sutskever, OpenAI’s chief scientist, and Yoshua Bengio, a winner of the Association for Computing Machinery’s Turing Award, signed as well.
The letter comes as governments and multilateral organizations around the world wake up to the urgency of regulating artificial intelligence. Leaders of G7 nations will meet this week for the first time to discuss setting global technical standards to put guardrails on AI development. The European Union’s AI Act, currently under scrutiny by lawmakers, is likely to set similar standards for the technology, but it is unlikely to come fully into force until at least 2025. Altman has publicly called for global AI regulation but has also pushed back against the E.U.’s proposals for what such rules should look like.
“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks,” a statement accompanying the letter reads, noting that its purpose is to “overcome this obstacle and open up discussion.” The statement adds: “It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
Some criticism of the March letter calling for a six-month pause came from progressives in the artificial intelligence community, who argued that talk of an apocalyptic future distracted from the harms AI companies perpetrate in the present. Dan Hendrycks, the Center for AI Safety’s director, wrote on Twitter that Tuesday’s letter encompasses both near-term and long-term risks. “There are many important and urgent risks from AI, not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization. These are all important risks that need to be addressed,” he wrote. “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”
Notably absent from the list of signatories are employees of Meta. The company’s AI division is widely regarded as close to the cutting edge of the field, having developed powerful large language models as well as a model that can outperform human experts at the strategy game Diplomacy. Meta’s chief AI scientist, Yann LeCun, has previously dismissed warnings that AI poses an existential risk to humanity.