Illustration by TIME; reference image courtesy of Dan Hendrycks

In 2021, Dan Hendrycks, at the time a Ph.D. candidate in computer science at the University of California, Berkeley, posted on an online forum, predicting that in 2023 Elon Musk would re-enter “the fight to build safe advanced AI.” Hendrycks was right—Musk launched xAI in July 2023. What he didn’t see coming was being named xAI’s safety adviser.

After completing his Ph.D., Hendrycks founded the Center for AI Safety (CAIS), a San Francisco–based nonprofit with more than a dozen employees. He spends his time researching ways to make AI safe, overseeing efforts to grow the number of people doing the same, and informing policymakers and the public about the hazards. It was CAIS that in May published a statement warning that mitigating the risk of extinction from AI should "be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It was signed by more than 500 prominent academics and industry leaders.

Around the same time, the Center for AI Safety newsletter, which Hendrycks curates and edits, responded to reports that Musk would found a new AI organization. The newsletter warned that Musk's entry would likely intensify competition between AI developers, leaving less time and attention for making sure AI systems are safe. Hendrycks emailed Igor Babuschkin, a senior xAI employee, to ask how xAI intended to approach AI safety, and the recruitment process proceeded from there. Hendrycks credits his graduate research on safety with helping him secure the xAI position and says he did not meet Musk until late in the interview process.

Hendrycks, now 28, says the warning still stands despite his decision to join xAI: "I think competitive pressures are the largest risk factor." He raised the issue in his first meeting with Musk, who "agreed we should not try to win the extinction race."

He hopes that governments or international institutions will step in to address these competitive pressures. In the meantime, Hendrycks expects xAI to be a "top-tier" AI company, given Musk's vast resources, and he would rather be the one advising it on how to make its systems as safe as they can be. (Hendrycks, who still runs CAIS, says he holds no equity in xAI and is paid an annual salary of $1, to avoid conflicts of interest.)

Ultimately, though, Hendrycks thinks that the most dangerous competition won't be between companies. "The corporate labs are potentially a bit of a distraction." He predicts that as AI systems become increasingly powerful, militaries will recognize their relevance to national security, and he worries that without strong international coordination, an arms race will ensue. Such a race, he estimates, would have an 80% chance of resulting in a catastrophe that could kill all or most of humanity. The urgency of that threat pushes him to work long hours, but he also comforts himself with the thought that he will have to work only another 20 years at most. Beyond that, he predicts, either his job will have been automated or he, like everyone else, will be dead.

Write to Will Henshall at will.henshall@time.com.
