To run an AI model, computers must constantly shuttle vast amounts of data between separate memory and logic chips, a process that chokes performance. To solve this, Cerebras Systems in 2019 engineered a dinner plate–sized chip—the largest ever—that embeds both memory and logic. “People thought we were mad hatters,” says Andrew Feldman, Cerebras’s CEO and co-founder, given the huge technical hurdles. In March, the company released the third generation of the chip, the record-fast Wafer-Scale Engine 3 (WSE-3), which can train models 10 times bigger than OpenAI’s GPT-4 and will make up the Condor Galaxy 3, a supercomputer under construction in Texas.