For this week’s TIME100 Most Influential Companies cover story about OpenAI and its CEO Sam Altman, TIME’s former editor-in-chief Edward Felsenthal sat down with a number of company executives in early May, including two sessions with Altman, transcribed below. The conversations have been condensed and edited for clarity.
Edward Felsenthal: What are you using ChatGPT for in your daily life?
Sam Altman: One thing I use it for every day is help with summarization. I can’t really keep up with my inbox anymore, but I made a little thing that helps summarize it for me and pull out important stuff from unknown senders, and that’s very helpful. I paste it in there every morning. I used it to translate an article for someone I’m meeting next week, to prepare for that. This is sort of a funny thing: I used it to help me draft a tweet that I was having a hard time with. That was all today.
Were you surprised by how viral the reaction to the launch was?
Not as much as it might have seemed from the outside. We thought it was going to excite people, but the people we spend a lot of time with in our bubble had already gotten pretty excited by the technology. And so in some sense it was like, “Wow, these numbers are going nuts. This is wild to watch.” But I remember a lot of the discussion that first week was, why hadn’t this happened before?
Other AIs had been in the world before.
I think the user experience mattered a lot. It’s not just the UI, it’s the way we tuned the model to have a particular conversational style. It’s very much inspired by texting. I was a huge early adopter and a super user of text messaging.
What will the interface look like as the technology integrates more deeply into our lives?
You’ll be able to do this with two-way voice, and it’ll feel real time. You’ll be able to talk the way two people talk in conversation, and that’ll be powerful. You can [eventually] imagine a world where, as you are talking, it’s like the Star Trek holodeck. But I think the thing that will matter most is how much of the stuff you want to happen can happen from a relatively small amount of conversation. As these models get to know you better and are capable of more, you can really imagine a world where you have a fairly simple and short conversation with the model, and a huge amount of things get done on your behalf.
And is that through our phone or is it everywhere?
I think it’s everywhere, all at once. Now people are still in the phase where they’re saying, “I’m an AI company.” But pretty soon we’ll just expect all the products and services we use to have some intelligence baked in, and it’ll just be expected like a mobile app is today.
You’ve described this technology as both the greatest threat to human existence and the greatest potential advancement for humanity.
Definitely one of the confusing parts of this technology is just the overall power: the good, the potential bad. I think we can do a lot to maximize the good and manage and mitigate the bad, but the scary part is that putting this lever into the world will for sure have unpredictable consequences. The exciting parts are almost too long to list, but I think this is transforming the way people do their work. It’s transforming the way people learn. It’s going to transform the way people interact with the world. In a deep sense, AI is the technology that the world, that people, have always wanted. Sci-fi has been talking about this for a very long time.
Thinking about it as a parent, one of the things that seems scary is, how do we know our kids are really our kids? You get a call: “I need money, I need help.”
That’s going to be a real problem, and a real problem soon. It’s not just as parents; it’s thinking about our own parents, who are already disproportionately victims of these ransom phone calls. I think we just all need to start telling people this is coming. You can’t trust a voice you hear over the phone anymore. And society is capable of adapting, as people are much smarter and savvier than I think a lot of the so-called experts think.
Do I need a code word with my kid?
I think it’ll be a stack of many solutions. People will use code words; they’ll verify over video. That’ll work for a while. There will be technology solutions that help. People will exchange cryptographic keys, and many other ideas too. We’ll just need a combination of technical and social solutions to operate in a different way. I am worried, but we will adapt. We’re good at this as a society.
Eric Schmidt and Jonathan Haidt argue that AI will make our problems with social media worse. Are you worried about that?
I think social media is in such a volatile place right now that I’m of course nervous about it. I can see a bunch of ways that AI makes it better too. I think these things are just hard to predict.
You’ve said the worst-case scenario for AI is “lights out for everyone.”
We can manage this, I am confident about that. But we won’t successfully manage it if we’re not extremely vigilant about the risks, and if we don’t talk very frankly about how badly it could go wrong.
You’ve been reported in the past to be a doomsday prepper.
Look, if AGI goes wrong, no bunker’s going to help anyone. So I think it’s gotten caricatured very wildly. I do think survivalism is an interesting hobby. And it was funny because during the early days of the pandemic, a lot of people were like, “Maybe that was all a good idea.” But it’s not something that I spend any serious amount of time or effort on. I think AGI is going to go fantastically well. I think there is real risk that we have to manage through, but I think a bunker is an irrelevant piece of any of it.
I’m a Midwestern Jew; I think that fully explains my exact mental model: very optimistic and prepared for things to go super wrong at any point.
Why should the public trust a for-profit company like OpenAI, or the broader for-profit industry, to put the good of the world ahead of profit?
Well, first of all, we’re not a for-profit company. We’re a nonprofit that has a capped-profit subsidiary. We thought really hard and designed a structure where our nonprofit has full control and governance over a capped-profit subsidiary that can make a certain amount of money for its investors and employees, to let us do what we need to do. These models are extraordinarily expensive.
Capped profit is still profit.
Sure. And I don’t think profit is bad. I’m very in favor of capitalism. I think it is the least bad system that we have yet invented. So I’m totally down for people to make profits. I think that’s great. I just think that the shape of this technology requires a different incentive system than the normal one.
Can you offer some specifics on what you see as government’s role in regulating AI?
I think that [we need regulation for] models that are above a certain power threshold. We could define that by capabilities, which would be the best, or we could define it by the amount of computing power that goes into creating the model, which would be the easiest but, I think, imperfect. I think models like that need to be reported to the government. They should have the oversight of the government. They should be audited by external organizations. They should be required to pass a set of evaluations for some of the safety issues. That would be a really great policy. I hope it becomes global at some point.
You’ve spoken about a global body, the idea that we need an oversight board like the one the world created for atomic energy, for example.
Yeah, look, I am deeply not an expert here. And also you should be skeptical of any company calling for its own regulation. But I think we are calling for something that would affect us the most. We’re saying you’ve got to regulate the people at the frontier the most strongly. But I think these systems are already quite powerful and will get tremendously more powerful. And we have come together as a global community before for very powerful technologies that pose substantial risks that one has to overcome in order to get to the tremendous upside.
Other specific proposals?
There are some minor short-term things that I think, hopefully, are very non-contentious. I think all generated content should have to be tagged as generated. And the fact that we can’t even agree on this yet seems like a miss. And I can go through other specifics that I think are good in the short term. But this thing that I really think the world needs, this IAEA-like international coordination on very powerful training hardware, is going to take a while and is super important to do. AI advocates need to just start advocating for that. That kind of coordination really hasn’t happened in a meaningful, with-teeth way since the IAEA.
What do you think could get done in the U.S.?
I think we can get short-term AI regulation done for sure. That example of let’s identify all generated content as such. Let’s require independent audits for these systems and safety standards they have to meet. I think that’s doable. I’m somewhat optimistic that the longer-term coordination is doable too.
There was a recent report that Microsoft, your partner, and Google are already lobbying in the EU to have some of the regulations not apply to general-purpose AI. How do we make sure the regulations, when they happen, are comprehensive and real?
It is our responsibility to educate policymakers and the public about what we think is happening and what we think may happen, and to put technology out into the world so people can see it. We think it’s very important to our mission to deploy things like ChatGPT so that people gain some experience and feeling of the capabilities and limitations of these systems. And it is the role of our institutions and civil society to figure out what we as a society want.
You’ve had an approach at OpenAI to bring a new product into the world early and let people engage with it rather than waiting until it’s more fully formed. This comes with risks. You discover things you didn’t expect. Is this “move fast and break things”?
No, but it is engage with the world, show people what’s happening and what’s going to happen, and get their input to build something that works as well as possible for as much of society as possible. And the alternative, which some AI research efforts do advocate for, is to say people can’t handle this. It’s too powerful, it’s too scary. We’re going to build it in secret and make all the decisions ourselves, and then push a button and drop a powerful AGI on the world. And I find that really deeply unappealing. I think the whole goal is to build something here that makes people’s lives better, in hopefully a gigantic way. And we’ve got our opinions. Sometimes we’re right; sometimes we’re wrong. But there is nothing like putting something out and then going to talk to people. That input from the world about what we’re doing and what we should do is super important.
You’ve supported universal basic income and expressed the concern that AI will deepen already severe inequality in the world. But you also said to me that you think that’s a ways off before we have the political will [to make it happen].
I hope AI can reduce inequality and more than that, I hope it can dramatically lift up the floor in the world. I am OK with a world that has trillionaires in it if no one is in poverty, if everyone’s life is getting better every year, if we can really raise the standard of living for people tremendously. I realize not everyone agrees with that. [But] I think AI very naturally, and we see this again and again through this one long technological revolution that we’ve all been living through, will raise the floor in a big way.
Are you in a different head space now than you were [before the release of ChatGPT]?
It’s wildly busy; that’s tough. And I get a lot of people’s anxiety projected onto me, and that’s tough too. But I have always thought that you can sort of surprise yourself by what you can adapt to. Humans are just unbelievably adaptable. So this just feels like the new normal now. And other than too much email, I’ve sort of gotten used to everything else.
What are we getting wrong?
I think one thing that people are getting wrong in the frame is: is this a tool or a creature? And even people who know it’s a tool, because it’s sort of easy to anthropomorphize, get caught up in over-creaturizing it. And I think that’s a mistake. And I think it leads to mistaken thinking. This is a tool.
Also, even though in the long term I think all of the hype is warranted, the short-term hype is pretty disconnected from the current reality of the technology. We need to start acting, for sure. We’ve been trying to raise the alarm on this for eight years now. But it’s very important to get it right, and the whole world is not getting disrupted this year. Acting with care and caution is important.
And yet it takes years to get actual policy in place.
For sure. I used to dutifully go to DC a couple of times a year and everyone was perfectly nice, they smile at you and they say, “Oh yeah, this AI thing, it might be important.” But they didn’t really care.
One of the reasons that we believe in this strategy is that for people to take this seriously, really engage with it, understand it, it’s not enough to tell them about it. You have to show people and people need to use the technology themselves, get a sense for it, the limits, the benefits, the risks. The conversation really needs to start now because it will take a while to figure this out, but every government is now paying serious attention.
You wrote in your Moore’s Law post from a couple of years ago that this is going to massively accelerate inequality, and you talked about a need to redistribute income in some form. When? Companies are already making fortunes from AI. What is the marker for knowing when that should happen?
It was funny: when I wrote that post, it just got panned. “You’re crazy, this stuff doesn’t make any sense. It’s totally impossible.” And now those same people [are saying], “You didn’t go far enough. We need to do this right away. We need to put this stuff in place.” My sense is we are still years away, I don’t know how many, but at a minimum a decent number of years, from AI affecting the economy enough that we need to, and are politically capable of, getting something like that done. But I don’t think we’re decades away.
I would still love for basic income to happen in the world today. I think it’s just good policy, AI aside, but it doesn’t seem politically feasible right now.
The analogy that we hear everybody using in these early days of generative AI is social media. Does that speak to you?
Actually, it doesn’t. The analogy on this is something closer to nuclear materials or synthetic biology in terms of how we have to think about it. Social media’s effects were inherently social: one person using social media without anyone to listen to them has extremely limited impact on the world. But one person with nuclear material, or one person making a pathogen, could have tremendous impact on the world. Now, one person can also do tremendous good; one person, or some small number of people, can cure cancer. [But] it’s not an inherently social experience.
What’s coming in six months, a year?
There’s a lot of stuff that’ll come. We’ll get images and audio and video in there at some point, and the models will get smarter. But the thing that I think people are really going to be happy about is this: right now, if you tried something 10,000 times and took the best one out of 10,000 responses, it’s pretty good for most questions, but not all the other ones… GPT-4 has the knowledge most of the time, but you don’t always get its best answer. How can we get you the best answer all the time, or almost all the time? If we can figure that out, and that’s an open research nut to crack, that’ll be a big deal.
You’ve written about and talked about points where a slowdown might be warranted.
100%.
Have we hit one yet? What are the markers? How do you know when it’s time to hit the brakes?
If the models are improving in ways that we don’t fully understand, that would be one. If there’s significant societal disruption, that would be another. If we don’t feel like we’re making sufficient progress on alignment technology for the projected capabilities of the next training run, that would be a third.
You’ve got a huge valuation now after the Microsoft investment. How much pressure do you feel to start to really kickstart the revenue engine?
Not much. We set up a really thoughtful deal with Microsoft. We’re a super mission-focused company.
Do you think it’ll happen? I mean, are you focused—
Do I think revenue will get big?
Yeah.
Yeah, I think so, but I don’t think we’ll squeeze it like other people would. I’m sure we’ll grow a lot slower than we could.
And what pressure do you feel from this explosion of investment in the space and startups everywhere you turn?
You’re not going to believe me on this, but almost none at all. I’ve at least been consistent about saying this for years. This is just different from anything else. Society is going to fundamentally change. This is super different from who gets a little bit more or less market share. We’ve got to figure out how to manage this and have this go well.