A group of current and former employees at leading AI companies OpenAI and Google DeepMind published a letter on Tuesday warning of the dangers of advanced AI and alleging that companies are prioritizing financial gains while avoiding oversight.
Thirteen employees, eleven of whom are current or former employees of OpenAI, the company behind ChatGPT, signed the letter, titled “A Right to Warn about Advanced Artificial Intelligence.” The two other signatories are current and former employees of Google DeepMind. Six of the signatories are anonymous.
The coalition cautions that AI systems are powerful enough to pose serious harms without proper regulation. “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” the letter says.
“We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk,” OpenAI spokeswoman Lindsey Held told the New York Times. “We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world.”
Google DeepMind has not commented publicly on the letter and did not respond to TIME’s request for comment.
Leaders of all three leading AI companies—OpenAI, Google DeepMind and Anthropic—have talked about the risks in the past. “If we build an AI system that’s significantly more competent than human experts but it pursues goals that conflict with our best interests, the consequences could be dire… rapid AI progress would be very disruptive, changing employment, macroeconomics, and power structures … [we have already encountered] toxicity, bias, unreliability, dishonesty,” AI safety and research company Anthropic said in a March 2023 statement, which is linked to in the letter. (One of the letter signatories who currently works at Google DeepMind used to work at Anthropic.)
Read More: Inside Anthropic, the AI Company Betting That Safety Can Be a Winning Strategy
The group behind the letter alleges that AI companies have information about the risks of the AI technology they are working on, but because they aren’t required to disclose much to governments, the real capabilities of their systems remain a secret. That means current and former employees are the only ones who can hold the companies accountable to the public, they say, and yet many have found their hands tied by confidentiality agreements that prevent workers from voicing their concerns publicly. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the group wrote.
“Employees are an important line of safety defense, and if they can’t speak freely without retribution, that channel’s going to be shut down,” the group’s pro bono lawyer Lawrence Lessig told the New York Times.
Eighty-three percent of Americans believe that AI could accidentally lead to a catastrophic event, according to research by the AI Policy Institute. Another 82% do not trust tech executives to self-regulate the industry. Daniel Colson, executive director of the Institute, notes that the letter has come out after a series of high-profile exits from OpenAI, including Chief Scientist Ilya Sutskever. Sutskever’s departure also made public the non-disparagement agreements that former employees would sign to bar them from speaking negatively about the company. Failure to abide by that rule would put their vested equity at risk. “There needs to be an ability for employees and whistleblowers to share what's going on and share their concerns,” says Colson. “Things that restrict the people in the know from speaking about what's actually happening really undermines the ability for us to make good choices about how to develop technology.”
The letter writers have made four demands of advanced AI companies: stop forcing employees into agreements that prevent them from criticizing their employer for “risk-related concerns,” create an anonymous process for employees to raise their concerns to board members and other relevant regulators or organizations, support a “culture of open criticism,” and not retaliate against former and current employees who share “risk-related confidential information after other processes have failed.”
“The thing that we haven't seen at all anywhere,” Colson says, “is requirements being placed upon these companies for things like safety testing, and any sort of limitation on companies being able to develop these models if they don't comply with cybersecurity requirements, or safety testing requirements.”
Governments around the world have moved to regulate AI, though regulation lags behind the speed at which the technology is advancing. Earlier this year, the E.U. passed the world’s first comprehensive AI legislation. Efforts at international cooperation have been pursued through AI Safety Summits in the U.K. and South Korea, and at the U.N. In October 2023, President Joe Biden signed an AI executive order that, among other things, requires AI companies to disclose their development and safety testing plans to the Department of Commerce.
-With additional reporting by Will Henshall/Washington