Some Top AI Labs Have ‘Very Weak’ Risk Management, Study Finds

Some of the world’s top AI labs suffer from inadequate safety measures—and the worst offender is Elon Musk’s xAI, according to a new study. 

The French nonprofit SaferAI released its first ratings Wednesday evaluating the risk-management practices of top AI companies. Siméon Campos, the founder of SaferAI, says the purpose of the ratings is to develop a clear standard for how AI companies are handling risk as these nascent systems grow in power and usage. AI systems have already shown their ability to autonomously hack websites or help people develop bioweapons. Governments, meanwhile, have been slow to put regulatory frameworks in place: a California bill to regulate the AI industry was just vetoed by Governor Gavin Newsom.

“AI is extremely fast-moving technology, but AI risk management isn’t moving at the same pace,” Campos says. “Our ratings are here to fill a hole for as long as we don’t have governments who are doing assessments themselves.”

To grade each company, researchers for SaferAI assessed its “red teaming” procedures—technical efforts to find flaws and vulnerabilities in models—as well as its strategies to model threats and mitigate risk.

Of the six companies graded, xAI ranked last with a score of 0 out of 5. Meta and Mistral AI were also labeled as having “very weak” risk management. OpenAI and Google DeepMind received “weak” ratings, while Anthropic led the pack with a “moderate” score of 2.2 out of 5.

xAI received the lowest possible score because it has published almost nothing about risk management, Campos says. He hopes the company will turn its attention to risk now that its model Grok 2 is competing with ChatGPT and other systems. “My hope is that it’s transitory: that they will publish something in the next six months and then we can update their grade accordingly,” he says.

Campos says the ratings could put pressure on these companies to improve their internal processes, which might reduce bias in their models, curtail the spread of misinformation, and make the systems harder for malicious actors to misuse. He also hopes the companies will adopt some of the same principles followed by high-risk industries like nuclear power, biosafety, and aviation. “Despite these industries dealing with very different objects, they have very similar principles and risk management framework,” he says.

SaferAI’s grading framework was designed to be compatible with some of the world’s most important AI standards, including those set forth by the EU AI Act and the G7 Hiroshima Process. SaferAI is part of the U.S. AI Safety Institute Consortium, which was created by the White House in February. The nonprofit is primarily funded by the tech nonprofit Founders Pledge and the investor Jaan Tallinn.

Yoshua Bengio, one of the most respected figures in AI, endorsed the ratings system, writing in a statement that he hopes it will “guarantee the safety of the models [companies] develop and deploy…We can't let them grade their own homework.”

Correction, Oct. 2: The original version of this story misstated how SaferAI graded the companies. Its researchers assessed the “red teaming” procedures of the models; they did not conduct their own red teaming.
