What to Know About Elon Musk’s New AI Company, xAI

Elon Musk wants to “understand the true nature of the universe.” At least that’s what his new AI company, xAI, said on its website as he announced its formation on Wednesday.

Musk incorporated xAI in Nevada in March of this year and reportedly purchased “roughly 10,000 graphics processing units,” the hardware required to develop and run state-of-the-art AI systems. The company has not said how it is financed, but the Financial Times reported in April that Musk was discussing funding from investors in SpaceX and Tesla, two companies he runs.

The company has not shared much detail about its intentions, but said on its website that its team would be joining a Twitter Spaces call on July 14 to take questions. (Musk is the owner of Twitter.) xAI will work closely with Twitter, now called X Corp., and Tesla, as well as other companies, “to make progress towards our mission,” the company said.


xAI and Musk’s history with artificial intelligence

The team at xAI, led by Musk, includes former employees of prominent AI companies OpenAI and DeepMind, as well as Microsoft and Tesla. Dan Hendrycks, director of the Center for AI Safety, is listed as an adviser. The Center for AI Safety has been highly vocal about safety concerns. In May, the organization released a statement, signed by hundreds of AI scientists and experts as well as the CEOs of some of the top AI companies, which said that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Demis Hassabis, CEO of DeepMind; Sam Altman, CEO of OpenAI; and Dario Amodei, CEO of Anthropic, were among the signatories.

Musk did not sign the Center for AI Safety statement, but he did sign an open letter published in March by the Future of Life Institute, which called on AI companies to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Musk was one of OpenAI’s founding co-chairs, along with Altman. He was among a group of investors, including Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys, and YC Research, who committed $1 billion to fund OpenAI in 2015. Musk has said he contributed $100 million of that $1 billion, but has also put the figure at half that.

Read More: A Brief History of Elon Musk Saying One Thing and Doing Another at Twitter

The circumstances of Musk’s departure aren’t entirely clear. According to an OpenAI blog post and later tweets by Musk, he left OpenAI in 2018 to prevent conflicts of interest as Tesla became more focused on AI. Semafor reported in March that Musk had proposed that he assume leadership of OpenAI, and walked away after his proposal was rejected. The Financial Times reported in April that Musk’s departure was also due to clashes with other board members and staff over OpenAI’s approach to AI safety.

Since he left in 2018, Musk has been critical of the direction OpenAI has taken. In an interview with Tucker Carlson on Fox News in April, Musk said of the company, “they are now closed source, and they are obviously for-profit, and closely allied with Microsoft.” Microsoft and OpenAI have a multibillion-dollar partnership, which grants OpenAI use of Microsoft’s cloud computing capacity in exchange for OpenAI’s artificial intelligence systems powering Microsoft’s products.

In March, Musk tweeted “I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?” OpenAI has previously said that it moved from nonprofit status to a “hybrid of a for-profit and nonprofit” because the computational demands of training state-of-the-art AI systems mean it would need to raise more funding than would be available to a standard nonprofit.

In the April interview with Carlson, Musk also said he was worried that AI models are being trained to be “politically correct,” and pledged to create a “Truth GPT,” which he said would be a “maximum truth-seeking AI.”

Read More: Elon Musk Is Bringing the Culture Wars to AI

Musk, who has frequently spoken in the past about the risks posed by advanced AI systems, also said in the interview that he was founding a new AI organization to prevent “AI dystopia.” However, experts and researchers, including xAI adviser Hendrycks, have expressed concerns that adding another highly resourced company to the AI ecosystem could further fuel the race to develop powerful AI systems at the expense of efforts to make them safe.

Read More: The AI Arms Race Is Changing Everything

In April, in response to reports that Musk would likely start a new AI company, Hendrycks wrote that the “emergence of a new major AI developer will likely increase competitive pressures,” and that the “pressures to race ahead can cause actors to make sacrifices, especially when there are tradeoffs between safety and competitiveness.”

On Wednesday, in a discussion on Twitter Spaces with Congressmen Ro Khanna and Mike Gallagher, Musk reaffirmed that his approach to creating safe AI would be based on the AI being truth-seeking. “From an AI safety standpoint, a maximally curious AI, one that is trying to understand the universe, is going to be pro-humanity,” Musk said. “From the standpoint that humanity is just much more interesting than not humanity.”

Jess Whittlestone, head of AI policy at the Centre for Long-Term Resilience, a U.K.-based think tank, told TIME in an email that this is “a pretty unorthodox (and I think quite naive) approach to AI safety. I’m not sure we can even say clearly what it would mean for an AI to be ‘maximally curious’ and it’s a huge leap to assume that this means an AI will be pro-humanity. The whole challenge right now is that we cannot fully understand these AI models or predict their behavior, and I don’t see anything in Musk’s approach which suggests it will solve that.”

Write to Will Henshall at will.henshall@time.com