The entrepreneur Reid Hoffman has successfully bridged multiple generations of technology breakthroughs. He worked at Apple in the ’90s, was a founding member of PayPal’s board of directors, co-founded LinkedIn, and played a crucial role in Facebook’s founding.

Recently, Hoffman, 58, has turned his undivided attention to what he believes is the next great technological revolution. He was one of the first investors in OpenAI, and his venture-capital firm, Greylock Partners, has invested hundreds of millions of dollars in dozens more AI companies. (All his investments in the past two years have been related to AI.) Hoffman also co-founded Inflection AI, an AI chatbot startup, and wrote Impromptu, a book composed in collaboration with AI.

Even as other AI leaders call for a pause in AI development because of the technology's risks, Hoffman believes the world should take an accelerationist approach to AI. “Every month, every year, every week that there’s earlier [AI] access causes human goodness,” he says.

In a TIME interview, Hoffman defended his position in the face of risks, and talked about his desire to bring an AI assistant to every home and job. (This interview has been condensed and edited for clarity.)

TIME: Did you have a light-bulb moment that persuaded you to go all-in on AI?

Reid Hoffman: When I started seeing what DeepMind was doing with AlphaGo, and seeing what the scale [computing power] was enabling, I was like, “I actually think that unlike the previous AI waves, this one is going to create a lever that will move industries and move the world.” In the very near term, I probably got convinced when I was watching the progress between GPT-2 and GPT-3 while I was on the board of OpenAI. Applying scale to transformers was part of what OpenAI did to trigger the tsunami that everyone is now riding.

You co-wrote a book called Blitzscaling, in which you advocated a hyperaggressive approach to growing a business. Do you believe the principles of blitzscaling apply to AI companies?

The short answer is yes. Basically, what’s actually going on with AI is a scale revolution. There wasn’t a eureka moment, or a new algorithm. Many of the algorithms and compute techniques had been pioneered decades earlier. What occasioned the current AI revolution was building scale computers, with hundreds and thousands of GPUs in a dense computational network, using deep learning and other techniques.

Because we’re living in the mobile, internet-connected age, it makes economic sense to spend hundreds of millions of dollars, soon to be billions, on these large-scale computers, enabling billions of consumers and people working in companies to use these intelligent applications.

There are many critics of the blitzscaling approach, who argue that it imposes all sorts of externalities on society at large: that it leads to monopolies, introduces huge unnecessary risk, and encourages unethical behavior.

The truth of the matter is that in a globally connected, mobile-internet world, speed to scale determines who wins. Whether it’s winning a search engine, like Google, or winning various social networks, like Facebook and LinkedIn. That is just the pattern, to be descriptive, of what wins.

But that’s part of the reason I have a chapter in my book about responsible blitzscaling. There are a bunch of ethical blitzscalers: LinkedIn, Netflix, Airbnb, Google. I do think that there’s a bunch of good that comes from blitzscaling too, such as when you begin to get revolutions in enablement, like Airbnb. I think the release of ChatGPT has been an unalloyed positive so far in its rise.

I can see a lot of different ways that AI can be applied to climate change, to pandemics. When you ask, “Should you blitzscale AI?” I go, “Well, a lot of these things that are the major risks for humanity, AI could be in the positive ledger on them.” So the sooner it gets here, the better.

I can see a line of sight today to a medical assistant or tutor on every smartphone. Think about that elevation of humanity. So every month, every year, every week that there’s earlier [AI] access causes human goodness.

But generative AI models declaratively state falsehoods all the time. Meta’s Galactica allowed people to create real-looking scientific research papers that were completely bogus. Don’t science and medicine seem like a particularly bad fit for this technology?

We should not say, “We should just hand all of our medical care to GPT-4 today.” That would be the idiot position. Let’s apply it intelligently.

And there’s a whole bunch of very good R&D on how to massively reduce hallucinations [AI-generated inaccuracies] and get more factuality. Microsoft has been working on that pretty assiduously since last summer, as has Google. It is a solvable problem. I would bet you any sum of money that you can get hallucinations down to the human-expert rate within months. So I’m not really that worried about that problem overall.

With your company Inflection, you hope to build and release an AI assistant that could help you make dinner, solve conflicts, and give you advice. Since different people have different values—including within one country or even family—how are you thinking about instilling value systems into these AI companions?

I’m not of the belief that technology is neutral. Technology always embodies certain kinds of values, and you try to make them as broad and as universal and as humanity-positive as you can.

It’s true that one culture will value conformity with the government more, and another culture will value privacy more. You want to be clear, as you’re building technology, about what values you’re building in. You want to be in dialogue with your customers and with the society where your home base is and where you’re offering your product. And you want to accept feedback about where you need to upgrade your values, or where your implementation isn’t meeting your values.

Then, everyone—from individuals to societies—can decide how to inflect with that. Do they decide they will negotiate with the company and change it? Do they decide to regulate it or put it in the box or forbid it?

We set up Inflection as a public-benefit corporation. We’re trying to emphasize being kind and compassionate, and leaning against hatred and violence. So we put that in as a mission statement for a public-benefit corporation and did that from the beginning.

I think all tech companies and AI companies have one of the strongest obligations to articulate what values they’re building their technology toward, what values they should be held accountable to, how they’re working to hold themselves accountable, and whom they’re engaging to help them.

You counseled Vice President Kamala Harris on AI development. What did you tell her?

“Here are all the amazing ways it can amplify humanity. If you’re worried about job transition, AI can be built to help people with job transitions, find alternative work, and learn how to do that work.

“If we’re going to have these waves of transition of jobs in AI, like in customer service or truck drivers, we should be making sure that AI is also helping us with the human transition.”

Stability AI’s Emad Mostaque recently predicted that the AI industry “will be the biggest bubble of all time.” Does that phrase concern you?

If he were right, it would be a concerning phrase. I think that there will be a bunch of AI companies that will be foolishly invested in. Because when everyone goes, “Oh my God, we all know it’s a platform of the future,” you get a flood of capital into foolish things.

But I think that the potential for AI for every individual, for every job, for every company, for every industry, for every society, is as yet underdescribed, rather than over-. I think that for smart investors, this will be like investing in the internet, in mobile, cloud: all things we’ve done at Greylock fairly well.
