Big Tech Companies Were Investors in Smaller AI Labs. Now They’re Rivals


Amazon and Microsoft have, so far, stood slightly apart from the artificial intelligence arms race. While Google and Meta have made developing their own AI models a top priority, Microsoft and Amazon have instead invested in smaller technology companies, receiving in return access to those companies’ AI models, which they then incorporate into their products and services.

Microsoft has invested at least $13 billion in OpenAI, the company behind ChatGPT. As part of the agreement, OpenAI gives Microsoft exclusive access to the AI systems it develops, while Microsoft provides OpenAI with the computational power it needs. Anthropic has deals with both Amazon and Google, receiving $4 billion from Amazon and up to $2 billion from Google, in exchange for making its models available through the two companies’ cloud services platforms. (Investors in Anthropic also include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)

Now, there are signs that the two technology giants are wading deeper into the fray. In March, The Verge reported that Amazon has tasked its AGI team with building a model that outperforms Anthropic’s most capable AI model, Claude 3, by the middle of this year. Earlier this month, The Information reported that Microsoft is training a foundation model large enough to compete with frontier model developers such as OpenAI.

AI systems come in many forms and are used in a multitude of ways, but the big trend of the last couple of years is language models—the AI systems that can generate coherent prose and usable code, and that power chatbots such as ChatGPT. While the younger OpenAI and Anthropic, alongside the more established Google DeepMind, are in the lead for now, their new big tech rivals have advantages that will be hard to offset. And if the tech giants come to dominate the AI market, the implications—for corporate concentration of power and for whether the most powerful AI systems are being developed safely—could be troubling.

A change in strategy

Over the course of the 2010s, AI researchers began to realize that training their AI systems with more computational power would reliably make them more capable. Over the same period, the computational power used to train AI models increased rapidly, doubling every six months according to researchers at Epoch, an AI-focused research institute.

The specialized semiconductor chips required to do that much computational work are expensive, as is employing the engineers who know how to make use of them. OpenAI CEO Sam Altman has stated that GPT-4 cost over $100 million to train. That growing need for capital is why OpenAI, which was founded in 2015 as a nonprofit, changed its structure and went on to ink multibillion-dollar deals with Microsoft, and why Anthropic has signed similar agreements with Amazon and Google. Google DeepMind—the AI team within Google that develops Google’s most powerful AI systems—was formed last year when Google merged its elite AI group, Google Brain, with DeepMind. Much like OpenAI and Anthropic, DeepMind started out as a startup before it was acquired by Google in 2014.

Read More: Amazon’s Partnership With Anthropic Shows Size Matters in the AI Industry

These partnerships have paid off for all parties involved. OpenAI and Anthropic have been able to access the computational power they need to train state-of-the-art AI models—most commentators agree that OpenAI’s GPT-4 and Anthropic’s Claude 3 Opus, along with Google DeepMind’s Gemini Ultra, are the three most capable models currently available. Companies behind the frontier have so far tried alternative business strategies. Meta, for example, gives more thorough access to its AI models in order to benefit from outside developers improving them, and to attract talented researchers who prefer to be able to openly publish their work.

In their quarterly earnings reports in April, Microsoft and Amazon both reported bumper results, which they partly credited to AI. Both companies also benefit from the agreements because a large proportion of the money flows back to them, as it is used to purchase computational power from their cloud computing services units.

However, as the technical feasibility and commercial utility of training larger models has become apparent, it has become more attractive for Microsoft and Amazon to build their own large models, says Neil Thompson, who researches the economics of AI as the director of the FutureTech research project at the Massachusetts Institute of Technology. Building their own models should, if successful, be cheaper than licensing the models from their smaller partners and give the big tech companies more control over how they use the models, he says.

It’s not only the big tech companies that are moving onto their partners’ turf. OpenAI’s Altman has pitched his company’s products to a range of large firms that include Microsoft customers.

Who will win out?

The good news for OpenAI and Anthropic is that they have a head start. GPT-4 and Claude 3 Opus, alongside Google’s Gemini Ultra, are still in a different class from other language models such as Meta’s Llama 3, according to a popular chatbot ranking site. OpenAI notably finished training GPT-4 back in August 2022.

But maintaining this lead will be “a constant struggle,” writes Nathan Benaich, founder and general partner at venture capital firm Air Street Capital, in an email to TIME. “Labs are in the challenging position of being in constant fundraising mode to pay for talent and hardware, while lacking a plan to translate this model release arms race into a sustainable long-term business. As the sums of money involved become too high for US investors, they’ll also start having to navigate tricky questions around foreign sovereign wealth.” In February, the Wall Street Journal reported that Altman was in talks with investors including the U.A.E. government to raise up to $7 trillion for AI chip manufacturing projects.

Read More: The UAE Is on a Mission to Become an AI Power

Big technology companies, on the other hand, have ready access to computational resources—Amazon, Microsoft, and Google account for 31%, 24%, and 11% of the global cloud infrastructure market, respectively, according to data from market intelligence firm Synergy Research Group. This makes it cheaper for them to train large models. It also means that, even if further development of language models doesn’t pay off commercially for any company, the tech companies selling access to computational power via the cloud can still profit.

“The cloud providers are the shovel salesmen during the gold rush. Whether frontier model builders make money or lose it, cloud providers win,” writes Benaich. “Companies like Microsoft and Amazon sit in an enviable position in the value chain, combining both the resources to build their own powerful models with the scale that makes them an essential distribution partner for newer entrants.”

But while the big technology companies may have certain advantages, the smaller companies have their own strengths, such as greater experience training the largest models, and the ability to attract the most talented researchers, says Thompson.

Anthropic is betting that its talent density and proprietary algorithms will allow it to stay at the frontier while using fewer computational resources than many of its competitors, says Jack Clark, one of the company’s co-founders and its head of policy. “We're going to be on the frontier surprisingly efficiently relative to others,” he says. “For the next few years, I don't have concerns about this.”

If Big Tech wins

It is still very much an open question whether big technology companies will manage to outcompete their smaller investees. But if they were to, there could be implications for market competition and for efforts to ensure the development of powerful AI systems benefits society. 

While it could be argued that more companies entering the foundation model market would increase competition, it is more likely that the vertical integration will serve to increase the power of already powerful technology companies, argues Amba Kak, co-executive director of the AI Now Institute, a research institute that studies the social implications of artificial intelligence.

“Viewing this as ‘more competition’ would be the most inventive corporate spin that obscures the reality that all the versions of this world serve to consolidate the concentration of power in tech,” she writes to TIME. “We need to be wary of this kind of spin especially in the context of heightened antitrust scrutiny from the UK CMA, the FTC and European Commission.”

Read More: U.K. Competition Watchdog Signals Cautious Approach to AI Regulation

Larger companies coming to dominate could also be troubling because the smaller companies that currently lead were explicitly founded in order to ensure that the building of powerful AI systems goes well for humanity, says Anton Korinek, an economics professor at the University of Virginia. OpenAI’s founding goal was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole,” and Anthropic’s founding goal was “to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people.” 

“In some sense, you can say, the AGI labs—OpenAI, Anthropic, DeepMind—were all founded on the basis of idealism,” he says. “Large shareholder owned and controlled corporations, they just can't follow that strategy—they have to produce value for the shareholder ultimately.”

Even so, companies like OpenAI and Anthropic cannot act entirely in the public interest, because they’re also exposed to commercial incentives through the need to raise funds, says Korinek. “It's part of that broader movement, that capital in the form of [computational power] is becoming the most important input,” he says. “If your training runs are in the millions, it is much easier to raise philanthropic funding for this. But if your training runs are in the billions, you do need financial returns, in the way that our economy is currently organized.”

With reporting by Billy Perrigo/San Francisco

Write to Will Henshall at will.henshall@time.com