Yann LeCun has long marched to the beat of his own drum. After the French computer scientist hypothesized in the 1980s that artificial neural networks could be designed to imitate the human brain, his ideas were widely mocked as fantastical for decades. But thanks to technical breakthroughs in the field, LeCun’s ideas now form the bedrock of the current generative-AI revolution.

These days, LeCun, now the chief AI scientist at Meta and a computer science professor at NYU, is still offering bold, controversial proclamations—and sparring with everyone and anyone he disagrees with. “I could stay silent, but it’s not so much my style,” LeCun says in a video interview with TIME.

For instance, LeCun has dismissed existential fears around AI as “preposterous” and akin to an “apocalyptic cult,” even tangling with the other pioneers of deep learning, Geoffrey Hinton and Yoshua Bengio, with whom he shared the 2018 Turing Award. He believes the current craze around large language models (LLMs)—including ChatGPT—is a misdirected fad that will soon hit a dead end. And in our interview, he also vociferously defended the effectiveness of AI systems in combating hate speech on Facebook, which has been the subject of intense criticism and scrutiny. (This interview has been condensed and edited for clarity.)


TIME: You’ve witnessed AI hype cycles. Can you put this moment into perspective compared with the decades that have come before it?

Yann LeCun: The major difference in this hype cycle is that people have something to play with. And I think that has contributed to capturing the imagination of a lot of people.

There’s no question that people in the field, including me, have been surprised by how well LLMs have worked. But it’s very far from perfect. And it’s not by making those systems bigger and training them on more data that we will reach human intelligence. I’ve been sort of trying to dampen the excitement a little bit by saying, “Wait a minute, this is not the end of the game here.” If we imagine that we are on a highway towards truly intelligent machines, this is interesting, but it’s actually an off-ramp.

Earlier this year, Meta decided to give away its AI tech as open-source software, which drew fierce criticism from many entities, including Google and OpenAI. A Stanford researcher even compared it to making “a grenade available to everyone in a grocery store.” Are their concerns unfounded?

Open-source generative models have been available for several years now, and we have not seen massive use of them for the nefarious purposes people are talking about—like mass generation of disinformation, or cyberattacks, or the design of lethal pathogens. To me, those are James Bond–villain scenarios. There are people who are in the business of being paranoid, and they imagine them.

In the end, can you trust society as a whole to use technology in a way that’s overwhelmingly positive, even if there are some nefarious uses? The printing press allowed a lot of people with nefarious intent to disseminate misinformation, just like the two-way internet, and just like AI systems, potentially. But the advantages and benefits to society are so overwhelming, you might want to live with those risks.

For people who believe the risks are existential, of course, the trade-off is not a trade-off. But that’s a big jump to make. There’s certainly no existential risk with current AI systems.

You’ve called AI doomers “preposterous.” Why are you so sure this technology will not threaten the world?

The Eliezer Yudkowskys of the world are saying, “If you get it slightly wrong and just turn on this superintelligent system, it will immediately take over the world and escape our control.” But that’s not the way it will work. It’s like if you said in 1960, “We’ll build the first transatlantic jet. We won’t test it beforehand. We’ll just build it, put a thousand people in it, fly it across the Atlantic, and hope it doesn’t blow up.” Of course, that would be stupid.

It took decades for airliners to become super safe: decades of fine-tuning and careful engineering. If you were to try to explain to people how we made turbojets safe, you would have to go into really, really complicated mechanical engineering, thermodynamics, and all kinds of stuff that nobody would understand. You don’t need to be very skilled to imagine hundreds of scenarios in which a turbojet can blow up or fail. But actually designing one that’s reliable requires real talent and real expertise. [Editor’s note: The trial-and-error process that achieved the current level of aviation safety resulted in thousands of passenger deaths over the years.]

So we’re in the same situation with AI. There are a lot of people who say AI is not safe. But they are naive. They have no idea. It’s a complicated engineering problem that we haven’t even begun to solve, because we don’t even have a good design for a superintelligent AI system yet.

Now, a lot of AI researchers themselves, like Geoffrey Hinton and Yoshua Bengio, cannot imagine how to make those systems safe. But I’ve thought about this for years. I think there’s a way to make them safe by designing them so that they have to abide by a certain number of objectives. You hard-wire those objectives so that, by construction, the system cannot produce an output that violates the guardrails in the process of accomplishing a task.

That poses the problem of designing those guardrails. But that’s not as difficult an issue as people have made it out to be, because it’s somewhat similar to designing laws for people.
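What LeCun is gesturing at here is sometimes described as objective-driven AI. Purely as an illustration of the general pattern, the sketch below filters candidate actions through hard guardrail checks before optimizing the task objective, so an output that violates a guardrail can never be selected. The cost values, guardrail predicates, and action names are hypothetical and are not drawn from any Meta design.

```python
# Minimal sketch of "objective-driven" action selection with hard guardrails.
# The task cost, guardrail checks, and candidate actions are hypothetical
# illustrations of the general idea, not an actual system design.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    task_cost: float        # lower is better for accomplishing the task
    guardrail_scores: dict  # named safety measurements for this action

# A guardrail is a hard constraint: an action is admissible only if every
# guardrail predicate accepts it. Constraints are checked by construction,
# before any optimization over the task objective happens.
Guardrail = Callable[[Action], bool]

def select_action(candidates: List[Action],
                  guardrails: List[Guardrail]) -> Optional[Action]:
    # Filter first: actions violating any guardrail are never produced.
    admissible = [a for a in candidates if all(g(a) for g in guardrails)]
    if not admissible:
        return None  # refuse rather than violate a constraint
    # Then optimize the task objective within the admissible set.
    return min(admissible, key=lambda a: a.task_cost)

if __name__ == "__main__":
    no_harm = lambda a: a.guardrail_scores.get("harm", 0.0) < 0.1
    truthful = lambda a: a.guardrail_scores.get("misinfo", 0.0) < 0.1

    plans = [
        Action("fast_but_risky", task_cost=1.0, guardrail_scores={"harm": 0.4}),
        Action("slower_and_safe", task_cost=2.5, guardrail_scores={"harm": 0.02}),
    ]
    chosen = select_action(plans, [no_harm, truthful])
    print(chosen.name if chosen else "no admissible action")
```

The only point of the sketch is the ordering: the constraints are enforced by construction, before the system optimizes for the task.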

Why do you think your viewpoint on this diverges so drastically from the other godfathers of deep learning, Geoffrey Hinton and Yoshua Bengio?

Geoff believes LLMs are smarter than I believe they are, and he’s a little more optimistic than I am about how they might get us to human-level AI. So he realized, all of a sudden, “We need to worry about superintelligent machines. If you have a more intelligent entity, it is going to want to take over the world.”

And I think that’s just wrong. There’s no correlation between being intelligent and wanting to take over. Even within the human species, it’s not the most intelligent among us who want to be the leaders. In fact, it’s quite the opposite, mostly.

The desire to dominate is really attached to species that are hierarchically organized and social. It’s really a consequence of human nature and the fact that evolution built us this way. But orangutans, for instance, are not a social species, and don’t have any desire to dominate anybody. They don’t need to. So we can be smart, like orangutans, without having any desire to dominate.

Are you satisfied with the percentage of hate speech that is correctly flagged by AI systems on Facebook nowadays?

It’s far from perfect. Any post that violates content policy and gets through is a failure. But it’s making a lot of progress, and that progress is totally due to AI.

Five years ago, the hate-speech detection systems weren’t that great. They worked mostly for languages for which we had a lot of training data—and not so much for the hundreds of languages for which we don’t have much training data. So a lot of hate speech was getting through. The automatic detection rate for hate speech was about 25%. And we didn’t have human moderators speaking every language of the world, so it created complicated situations in certain countries where people are fighting each other because of some ethnic conflict or whatever.

Last year, the proportion of hate speech taken down automatically was 95%. There are some false positives and some false negatives. But the progress was entirely due to the use of large-scale transformers that are pretrained with self-supervised learning and that are multilingual, so you don’t need tons of training data in every dialect in the world.
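For a concrete sense of the kind of pipeline LeCun is describing, here is a hedged sketch of multilingual text classification with a pretrained transformer. It assumes the Hugging Face transformers library and the public xlm-roberta-base checkpoint; the model choice and the label set are illustrative assumptions, since Meta’s production classifiers are not public.

```python
# Hedged sketch: policy-violation classification with a multilingual
# pretrained transformer. The model name (xlm-roberta-base) and the two
# labels are assumptions for illustration, not Meta's actual system.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # pretrained (self-supervised) on ~100 languages

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# A classification head is added on top of the pretrained encoder; in practice
# it would be fine-tuned on labeled policy-violation data before being used.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

LABELS = ["ok", "violates_policy"]  # hypothetical label set

def classify(text: str) -> str:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    return LABELS[int(probs.argmax())]

# Because the encoder was pretrained on many languages, the same fine-tuned
# head can be applied to text in languages with little labeled data.
print(classify("Example post in any supported language"))
```

The design point this illustrates is the one LeCun makes: the heavy lifting comes from multilingual pretraining, so relatively little labeled data is needed per language.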

Does the fact that you were right about neural networks after being dismissed for so many years give you stronger conviction about your current array of controversial opinions?

I think I’ve always been somewhat opinionated and not afraid of speaking. I’m a scientist. I could stay silent, but it’s not so much my style.

There’s no question working at Meta colors my opinion, to some extent. But it gives me a view of how the sausage is made. I see the motivation of people. And the motivations you might think are driving the company are not the ones I’m seeing.

So when people say, “You cannot be ethical and work at Meta, because it’s intrinsically unethical,” I have to say no. There are side effects you can’t predict sometimes, and mistakes you make. But you’re trying to do the right thing.
