Jonathan Ross

CEO, Groq

Jonathan Ross had already made his name at Google, designing the custom chips that the company would go on to train its AI models on, when Amazon and Microsoft came calling. It was 2016, and both companies tried to poach him to help build their own chips, he says. One of the suitors—Ross won’t say which—privately told him that it would be bad for the world if Google and China were the only two entities in charge of the world’s most advanced AI, and that he should help them to become a check on their power. “Three’s not that much better than two,” Ross recalls thinking at the time. “But I like your pitch—I’m going to go do this and make it available for everyone.”

That’s the founding story, as Ross tells it, of Groq, now one of the buzziest AI chipmaking startups in Silicon Valley. Groq’s chips, called language processing units (LPUs), aren’t designed for the initial training of AI models. Instead, they’re optimized to run large language models as fast as possible once they’ve been trained. What sets LPUs apart, the company claims, is speed and cost: according to a Groq slide deck, they are 10 times faster and 10 times cheaper to run than industry-standard graphics processing units (GPUs). On stage at a conference in Dubai in February, Ross demonstrated the speed of a Meta chatbot running on a cluster of Groq LPUs. It spat out several paragraphs in seconds, far faster than the industry standard. The ground, however, is shifting fast: Cerebras Systems claimed on Aug. 27 that its cloud platform is twice as fast as Groq’s and 20 times as fast as GPU-based rivals.
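The speed on display in such demos is inference throughput, usually measured in output tokens per second. As a rough, hypothetical illustration of how that figure is obtained, the sketch below times a single chat-completion request against a generic OpenAI-style endpoint; the URL, model name, and response fields are placeholder assumptions for illustration, not Groq’s actual API.

```python
import os
import time

import requests  # third-party HTTP client: pip install requests

# Hypothetical OpenAI-compatible chat endpoint and model name;
# both are assumptions, not any specific provider's real values.
API_URL = "https://api.example.com/v1/chat/completions"
MODEL = "example-llm"


def tokens_per_second(prompt: str) -> float:
    """Time one chat completion and return output tokens per second."""
    start = time.monotonic()
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    elapsed = time.monotonic() - start
    # OpenAI-style responses report token counts under "usage";
    # this field layout is itself an assumption about the provider.
    out_tokens = resp.json()["usage"]["completion_tokens"]
    return out_tokens / elapsed


if __name__ == "__main__":
    print(f"{tokens_per_second('Explain LPUs in one paragraph.'):.1f} tok/s")
```

A cluster that generates hundreds of tokens per second, as in Ross’s demo, produces paragraphs of text in the time a slower system takes to produce a sentence.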

Groq was valued at $2.8 billion in its most recent funding round, led by BlackRock. But it’s still a minnow compared to Nvidia, its main competitor, which is worth around $3 trillion. Groq’s LPUs, critics point out, are faster but also less flexible than Nvidia’s GPUs, which restricts their potential customer base. But Ross sees space in the market for both. “Nvidia will sell every single GPU that they make, and we will sell every LPU we make,” he says. “The demand is insane.” Ross is also betting that making fast AI computing ubiquitous will only increase demand for AI, and for the chips it runs on. “As AI [chips] get cheaper, people are going to buy more,” he says, “because there's more things you'll be able to afford for them to do.”

Write to Billy Perrigo/Dubai at billy.perrigo@time.com