“You ever watch Star Trek?” Sam Altman, the CEO who has become the most visible face of the current artificial-intelligence boom, has just called us an Uber. The 38-year-old serial entrepreneur has lately become known for talking up the risks of AI, but he is at his most animated in talking about its possibilities. So transformative is this new technology that responds naturally to our verbal commands that he envisions new hardware for it—something, eventually, like the Star Trek holodeck, in which characters use their voice to conjure and interact with 3D simulations of the world. An interface like that feels “fundamentally right,” he says.
Altman’s company, OpenAI, is only seven years old. It has fewer than 500 employees. Pipe some pan flutes and whale sounds into the airy, light-filled lobby of its headquarters in San Francisco’s Mission District, and it could almost be mistaken for a spa. But in the span of six months, the company—through its viral product ChatGPT—has vaulted AI into public consciousness. Few doubt it’s at the vanguard of a revolution that will, for better or worse and probably both, change the world.
ChatGPT is almost certainly the most rapidly adopted product in the history of technology. It’s also one of the more versatile, capable of responding to a vast array of user prompts, from “Tell me a joke” to “Draft 10 slides with ideas to grow revenue at a hair salon.” It can write poetry and explain scientific concepts. Altman says he uses it for routine tasks: to pull highlights from his overflowing email inbox, or to “draft a tweet that I was having a hard time with.” Essentially a super-powerful auto-complete tool trained to generate language by observing patterns in large quantities of data, it has its limits—including a disconcerting inability to separate truth from fiction. OpenAI’s warning about this, placed beneath the text input box, hasn’t stopped people from using it for homework, investment advice, and even therapy.
Consumer-facing AIs had hit the market before, but something about ChatGPT’s text-message-inspired, conversational interface clicked. In the days following the Nov. 30 release, OpenAI employees were glued to their screens, posting graphs in the company Slack channel as usage numbers took off. “It just kept going up and to the right at a steeper and steeper angle,” says Diane Yoon, OpenAI’s vice president of people. Two months later, ChatGPT had more than 100 million unique visitors, according to data from Similarweb. Instagram took 30 months to reach that level.
It was the start of an AI arms race. Google declared an internal “Code Red” and fused its two AI labs—Google Brain and DeepMind—into one organization. Microsoft, having already invested $3 billion in OpenAI, poured in an additional $10 billion. Billions more flowed into startups and the stocks of public companies that could plausibly (and implausibly) claim AI would supercharge their growth. In March, OpenAI upped the stakes again, releasing an even more powerful tool called GPT-4.
Tempering all the promise is real fear. There’s little doubt AI will make many jobs extinct, as new technology does, even as it creates new ones. It’s also enhancing the ability of bad actors to flood us with fake content masquerading as truth and fake voices that sound eerily like those of our loved ones. Can we trust what we see or hear? Altman acknowledges, with unsettling matter-of-factness, that the answer is probably no. “You can’t trust a voice you hear over the phone anymore,” he says. “We just all need to start telling people this is coming.”
In the wrong hands, these tools could cause even worse problems, launching cyberattacks or wreaking havoc in financial markets. And if AIs were to become capable of making plans on their own and acting on them—especially if those plans aren’t “aligned” to human values—it’s possible to imagine them deciding humans are obstacles to their goals. Altman himself joined dozens of other tech leaders and scientists recently to sign a statement that cast the development of AI as a risk on par with pandemics and nuclear war. The worst-case scenario, he said earlier this year, is “lights out for everyone.”
This has become Altman’s calling card: championing the possibilities of AI while urging policymakers to get going on rules of the road to mitigate the dangers. “I’m a Midwestern Jew,” says Altman, who grew up in St. Louis. “I think that fully explains my exact mental model—very optimistic, and prepared for things to go super wrong at any point.” A related theme of Altman’s is adaptability: the idea that success comes from the ability to reorient yourself to even radically new circumstances. In his professional life, adaptability is part of a skill set that guided Altman to great wealth in his late 20s and early 30s, as he helped launch thousands of new companies as a partner and later president of the renowned startup accelerator Y Combinator. It also informs his belief that we as a species can avoid the worst of what AI could bring. “Society is capable of adapting as people are much smarter and savvier than a lot of the so-called experts think,” he says. “We can manage this.”
The evangelist preaching about risks yet plowing ahead anyway is just one of the dualities that come through in conversation with Altman. He is an outspoken advocate for AI regulation, with his own opinions about which rules should apply to his company. (“You should be skeptical of any company calling for its own regulation,” he acknowledges.) He is an avowed capitalist who says he has no equity in OpenAI, structured his company to cap investors’ profits, and advocates a universal basic income program to temper inequality, which many believe AI will exacerbate. He professes faith in the ability of these models to continually improve, despite the fact that he and his colleagues concede they have limited insight into how the technology will evolve. “Even the people who create them don’t actually know what they can and can’t do,” says Helen Toner, a member of OpenAI’s board. “I expect that it’s going to be probably years before we really know all the things that GPT-4 can and can’t do.”
How much we’re able to trust the human beings who are “tuning” these powerful machine algorithms—both their intentions and their capabilities—will be one of the great recurring questions of the coming years. In conversation with OpenAI employees across a range of departments, a recognition of AI’s dangers is a near-universal talking point. It’s a far cry from the playbooks of tobacco, fossil-fuel, and social media executives who spent years denying possible harms before finally being forced to acknowledge reality.
The semantics start to sound a bit Orwellian. Yoon, the HR chief, says OpenAI doesn’t use the term competitors, a nod to the importance of collaboration with others in the field in order to avoid bad outcomes. When asked about the AI arms race, a company spokesman objected to the metaphor, saying “the whole arms race is a bit triggering for us.”
Of course, it’s hard to argue that OpenAI didn’t play a significant role in triggering what is now unfolding in the industry. “It is a race,” says Tristan Harris, the ethicist who co-founded the Center for Humane Technology, but he argues that collaboration among the major players will be key. “We need to coordinate because it’s not about getting OpenAI to more safety. That wouldn’t do anything because everyone else would just keep going faster.” Harris is concerned that “advances in capabilities are exponential and advances in safety measures are linear,” and worried about “the commercial incentives driving the show rather than conscious considerations about what world we want to see.”
Altman views the ChatGPT interface as an advance over the iPhone in its simplicity, and says it was inspired by his own love of texting as a kid. The decision to give ChatGPT a “horrible,” robotic name, Altman says, was a very deliberate one; he frets about the temptation to anthropomorphize AI tools, which can undermine the distinction between humans and machines. Here is another duality: ChatGPT is trained to remind users that it is incapable of having opinions. And yet ChatGPT’s human qualities—its conversational interface, its liberal use of the first-person pronoun—were a key part of what made it go viral.
Despite the massive investment from Microsoft and the shift to a profit model that, even capped, allows for 100 times returns, OpenAI still considers itself a research lab committed to its original mission of ensuring artificial general intelligence “benefits all of humanity.” The company’s culture is defined by that mission. “I think it’s important to push the technology as an antidote to the bad use,” says chief operating officer Brad Lightcap. “If this project had started 60, 70 years ago, it probably would’ve been a government-funded effort.”
OpenAI brought in a reported $28 million in revenue last year, which would be less than half of what a typical car dealership generates. But Altman says he feels little immediate pressure to bring the company’s commercial success to the level of its clout. Asked how much time he spends worrying about competition, he says, “You’re not going to believe me on this, but almost none at all.” What keeps him up at night, he says, is not surging competition from language models such as Google’s LaMDA, Meta’s LLaMA, and Anthropic’s Claude. “This is super different than who gets a little bit more or less market share,” Altman says. “We’ve got to figure out how to manage this and have this go well.”
A few days after our visit, Altman embarked on a five-week blitz across six continents. The trip, he says, was designed to get him out of the Silicon Valley echo chamber. To some extent, it was also a victory lap—and an attempt to encourage and influence global AI regulation as nation-states wake up to the power of the technology he steers. On his tour, Altman addressed the U.S. Senate, met the British and Indian prime ministers, opined on forthcoming E.U. AI regulation, and urged collaboration with China.
On May 24, Altman spoke at a university lecture hall in London. The queue to get in snaked down the road and around a corner. Instead of disappearing backstage after the talk, Altman hopped down into the crowd, where he was surrounded by students and journalists. He posed for selfies and gamely answered questions. After heading out through a revolving door, he had a short discussion with protesters who had come to picket, one carrying a sign that read “Stop the suicide AGI race.” With no bodyguard or PR handler in sight, the scene was starkly different from the earlier stage-managed tours of Mark Zuckerberg and others.
As with tech executives before him, there is some daylight between what Altman says and what happens behind the scenes. At the London event, Altman told reporters that OpenAI may decide to “cease operating” in the E.U. as a result of the bloc’s forthcoming AI regulation. In a previously unreported meeting with E.U. officials last year, OpenAI pushed back against wording that would have required “general purpose” AI models like ChatGPT to comply with the same rules as AI tools considered by the E.U. to be “high risk.”
In our conversations, Altman expressed deep optimism about society’s ultimate ability to adjust to AI’s risks. To ensure that the people you hear on the phone or see on video are who they say they are, for example, he foresees society coming to use a mix of technical and social measures, such as code words or cryptographic keys that verify identity. He sees the promise of AI eventually doing a huge amount of the mundane tasks that occupy our days while also grappling with prompts like “Discover the cure for cancer.” Says Altman: “The exciting parts are almost too long to list.”
And yet, he has also mused aloud about whether OpenAI did “something really bad” in creating ChatGPT. Altman has long been reported to be a doomsday prepper—with guns, medicines, and gas masks at the ready. He rolls his eyes at the characterization, which he considers overblown, but adds that he does find survivalism “an interesting hobby.”
“Look, if AGI goes wrong, no bunker’s going to help anyone,” he says, adding later: “The scary part is just sort of putting this lever into the world will for sure have unpredictable consequences.”