
The discourse around AI is awash with hyperbole. Those trying to sell products and services boast about what AI systems can already do. Those concerned about the risks of AI often make arguments grounded in fear of capabilities that do not yet exist. And those trying to secure investment rave about the benefits AI will bring.

In 2019, Arvind Narayanan, a professor of computer science at Princeton University, gave a talk titled “How to recognize AI snake oil.” The talk went viral: the slides were downloaded tens of thousands of times and his tweets were viewed by millions. Sensing an appetite, Narayanan teamed up with Sayash Kapoor, one of his Ph.D. students, to write a book on the topic, which will be published in 2024. They have been sharing their ideas as they develop and commenting on recent developments in AI on their Substack, AI Snake Oil.


TIME spoke with Narayanan and Kapoor about AI misinformation, snake oil, and existential risks. (This interview has been condensed and edited for clarity.)

TIME: Why is there so much AI snake oil going around?

Arvind Narayanan: I think there’s a long list of things. One thing I would start with is, rather than looking at the supply of bullsh-t, look at the demand for bullsh-t. If the demand exists, the supply will automatically materialize. If people are not buying it, it doesn’t matter if someone’s spinning hype out there.

And so where does the demand come from? Our thesis in the book is that broken AI is very appealing to broken institutions. The reason HR departments are desperate for AI that claims to predict which candidates are going to be good is that they're getting hundreds, perhaps thousands, of applications per open position. Evaluating them as thoroughly as one would hope is just not working. And that's because the hiring processes themselves seem to be broken.

Another thing I would point to is that, because [machine learning] is tricky technology in many ways, it's very, very easy to fool oneself. So a lot of the people pushing hype are not just fooling others, they're fooling themselves.

What makes this technology particularly tricky to understand?

Sayash Kapoor: Part of it is that it's a new technology in many senses. Before we had machine learning, we had tools that were well understood because they had been used for decades.

Narayanan: Compare machine learning, say a particular application like self-driving cars, to some other, better-understood technology, like microwaves. If you want to measure how well your microwave is going to work, you can do that in the lab pretty well: you can put in it the kinds of things that people are actually going to put in it. [Researchers] can't really do that with self-driving cars. So a lot of the time, people are making claims about the performance of machine learning in the real world based on simulated measurements with past data, as opposed to actually putting it out there in the world. The vastness of that gap is often not appreciated by machine-learning researchers and engineers.

In 2019, OpenAI said they weren't going to release GPT-2 because of concerns over its capacity to generate misinformation. They did end up releasing it, and since then we've had GPT-3, GPT-4, and a number of powerful open-source models. The flood of misinformation we were warned about has not yet proved as damaging as predicted. Do you still expect one somewhere down the line?

Kapoor: We've heard predictions about all of this catastrophic disinformation that will end our ability to discern fact from fiction. We don't expect that to happen anytime soon. That is not to say disinformation isn't an important problem; rather, our ability to contain it depends much more on containing its distribution. Platforms like Twitter, Instagram, and YouTube have a much bigger role to play.

What do you both make of the argument that AI could pose an existential risk to humanity? Is alignment—AI systems behaving as developers intend—something we should be worrying about?

Narayanan: Certainly, it could pose an existential risk. But we strongly dispute a lot of the arguments around that. One is the idea that you can put a probability number on this, and that [this number] should then guide policy. When I look at the methods behind these probability estimates, they're all complete bunk. There is this AI Impacts survey that gets cited all over the place as saying that 50% of AI researchers believe there is at least a 10% chance of existential risk. That survey had a very low response rate [Editor's note: 738 responses from 4,271 researchers contacted, a 17% response rate], and it's also going to be self-selected: the people who take this risk seriously are the ones who are going to respond to it. So there's a huge selection bias.

Maybe we should take existential risk seriously; I don't dispute that. But the interventions that are being proposed, whether it's finding some magic-bullet technical breakthrough, slowing down this tech, banning it, or limiting it to a very small number of companies, are all really problematic. I don't think alignment is going to come from some magic-bullet technical solution; it's going to come from looking at the ways in which a bad actor could use AI to harm others or our society, and defending all of those attack surfaces.

Do you think AI has been a net positive for society?

Narayanan: I've been working on tech accountability for 15-plus years … I think tech in general can be, and largely has been, a force for good in this world. But that can only continue to happen if there are strong countervailing forces that try to align incentives between companies and the rest of society. So it is in that sense that I do all this work.

Kapoor: As an engineer by training, I think [AI] has been super useful, and its impact can hardly be overstated. I think the reason a lot of what we write on the blog seems critical is that there is a pretty big vacuum in the space of people who have the capacity to respond to AI critically, but who also have the incentive to do so.

If either of us were in the industry, or if we were working at one of these companies, it would be much harder for us to talk about the harmful impacts of AI. It would be much harder to get outside the existential-risk bubble and get a realistic view.

Write to Will Henshall at will.henshall@time.com.
