Dario Amodei

CEO, Anthropic


As a young researcher, Dario Amodei helped prove the so-called “scaling laws” of AI: the observation that neural networks perform reliably better given more data and computing power. The finding underpins almost everything about today’s AI boom. It has given tech companies the confidence to invest previously unthinkable sums in training new models, with spectacular results. But as computing power and data become more widely available, it also means that holding back the tide of AI is that much harder—even if the technology proves to be dangerous.

Amodei spends a lot of time thinking about that duality as the CEO of AI company Anthropic. He’s worried that humans may one day lose control of advanced AI if it becomes smarter than us. He’s concerned that, even before that, AI could be used by non-state actors to make biological, chemical, or cyber weapons. And he’s painfully aware that superintelligent AI, if humanity is not careful, might not have our best interests at heart. Yet he believes that to solve all of those problems, labs need to build and experiment on the most powerful models: that, to have a chance at building safe AI, you need to tiptoe right up to the threshold of danger.

That argument may be true, but it’s also good for business. Anthropic’s latest model—Claude 3.5 Sonnet—was by some measures the most powerful publicly accessible AI upon its launch. The company expects to make nearly $850 million in revenue this year by selling its AIs directly to businesses and consumers. By all accounts, Claude is generally safe, and we are not yet at the stage where AI poses the “existential” threats that many at Anthropic (and elsewhere) are worried about. Under Amodei, Anthropic granted the U.K. government’s AI Safety Institute early access to a version of Claude 3.5 for safety-testing, becoming the first AI company to publicly state it had done so. It was also the first lab to commit to building its own institutional safety measures, proportionate to the risks of new models, before those models are trained. OpenAI and DeepMind later followed its lead, committing to similar policies. “Building all this stuff from the ground up is difficult,” Amodei told TIME in May. “We've tried to make commitment to safety a positive, rather than something that hurts us.”

*Disclosure: Investors in Anthropic include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.


Write to Billy Perrigo at billy.perrigo@time.com