As companies race to build more powerful AI, safety measures are being left behind. A report published Wednesday takes a closer look at how companies including OpenAI and Google DeepMind are grappling with the potential harms of their technology. It paints a worrying picture: flagship models from every developer in the report were found to have vulnerabilities, and while some companies have taken steps to enhance safety, others lag dangerously behind.
The report was published by the Future of Life Institute, a nonprofit that aims to reduce global catastrophic risks. The organization’s 2023 open letter calling for a pause on large-scale AI model training drew unprecedented support from 30,000 signatories, including some of technology's most prominent voices. For the report, the Future of Life Institute brought together a panel of seven independent experts—including Turing Award winner Yoshua Bengio and Sneha Revanur from Encode Justice—who evaluated technology companies across six key areas: risk assessment, current harms, safety frameworks, existential safety strategy, governance & accountability, and transparency & communication. Their review considered a range of potential harms, from carbon emissions to the risk of an AI system going rogue.
“The findings of the AI Safety Index project suggest that although there is a lot of activity at AI companies that goes under the heading of ‘safety,’ it is not yet very effective,” said Stuart Russell, a professor of computer science at the University of California, Berkeley, and one of the panelists, in a statement.
Despite touting its “responsible” approach to AI development, Meta, Facebook’s parent company and developer of the popular Llama series of AI models, was rated the lowest, scoring an F grade overall. x.AI, Elon Musk’s AI company, also fared poorly, receiving a D- grade overall. Neither Meta nor x.AI responded to a request for comment.
The company behind ChatGPT, OpenAI—which earlier in the year was accused of prioritizing “shiny products” over safety by the former leader of one of its safety teams—received a D+, as did Google DeepMind. Neither company responded to a request for comment. Zhipu AI, the only Chinese AI developer to sign a commitment to AI safety during the Seoul AI Summit in May, was rated D overall. Zhipu could not be reached for comment.
Anthropic, the company behind the popular chatbot Claude, which has made safety a core part of its ethos, ranked the highest. Even so, the company received only a C grade, highlighting that there is room for improvement even among the industry’s safest players. Anthropic did not respond to a request for comment.
In particular, the report found that all of the flagship models evaluated are vulnerable to “jailbreaks,” techniques that override a system’s guardrails. Moreover, the review panel deemed the current strategies of all the companies inadequate for ensuring that hypothetical future AI systems that rival human intelligence remain safe and under human control.
“I think it's very easy to be misled by having good intentions if nobody's holding you accountable,” says Tegan Maharaj, assistant professor in the department of decision sciences at HEC Montréal, who served on the panel. Maharaj adds that she believes there is a need for “independent oversight,” as opposed to relying solely on companies to conduct in-house evaluations.
There is some “low-hanging fruit,” says Maharaj: relatively simple actions some developers could take to marginally improve their technology’s safety. “Some companies are not even doing the basics,” she adds. For example, Zhipu AI, x.AI, and Meta, which each rated poorly on risk assessments, could adopt existing guidelines, she argues.
However, other risks are more fundamental to the way AI models are currently produced, and overcoming them will require technical breakthroughs. “None of the current activity provides any kind of quantitative guarantee of safety; nor does it seem possible to provide such guarantees given the current approach to AI via giant black boxes trained on unimaginably vast quantities of data,” Russell said. “And it’s only going to get harder as these AI systems get bigger.” Researchers are studying techniques to peer inside the black box of machine learning models.
In a statement, Bengio, founder and scientific director of the Montreal Institute for Learning Algorithms, underscored the importance of initiatives like the AI Safety Index. “They are an essential step in holding firms accountable for their safety commitments and can help highlight emerging best practices and encourage competitors to adopt more responsible approaches,” he said.