Yoshua Bengio, one of the most important AI researchers of the past three decades, understands what AI can do better than most. Still, upon encountering ChatGPT, the AI chatbot released by OpenAI in November 2022, Bengio had a “visceral” reaction. It took him all winter and much of spring to begin to intellectually—and emotionally—adjust to his new understanding of how quickly AI could surpass humans.

“It’s very challenging psychologically to realize that what you’ve been working for, with the idea that it would be a great thing—for society, for humanity, for science—may actually be catastrophic,” Bengio says, speaking in late August from his home in Montreal. “It’s like you think you’d been acting well all your life and then somebody [tells you] you’ve been actually building a bomb that’s going to kill everyone.”

In March, Bengio, 59, spoke out about the risks AI poses, just weeks before his mentor Geoffrey Hinton—with whom he had won the prestigious Turing Award in 2018—left Google to sound the alarm. “Geoff and I came to the same conclusion, or very similar conclusions, without talking to each other,” says Bengio, who has been a professor at the University of Montreal for more than three decades. Bengio and Hinton separately estimate that AI that outperforms humans at all tasks will be developed at some point in the next 5–20 years.

Bengio has been part of teams that made multiple foundational breakthroughs underpinning the recent rapid advances in AI. In 2003, he laid the foundations for modern large language models by showing that neural networks could learn patterns in human language by predicting the next word, much like autocorrect. In 2014, working with Ian Goodfellow, Bengio came up with a way to train AIs by having two networks compete against each other, one generating content and the other judging its quality. And in 2018, Bengio helped develop the concept of attention, adapting neural networks to understand highly connected data, such as social networks, by focusing on the most relevant parts.
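To make the next-word idea concrete, here is a minimal sketch of next-word prediction using simple bigram counts. It illustrates only the general principle described above; it is not Bengio's 2003 neural language model, and the toy corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus; any list of sentences would do (illustrative only).
corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
    "the dog sat on the rug",
]

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus, or None."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (appears most often after 'the')
print(predict_next("sat"))  # 'on'
```

A modern language model replaces these raw counts with a neural network that assigns a probability to every possible next word, but the underlying training signal, predicting what comes next, is the same.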

Unlike most of his peers, Bengio has never been enticed by the higher salaries or greater computational resources available in industry. “I’m really an academic at heart—I want to be free in what I do,” he explains.

Now, Bengio intends to harness his remarkable intellectual horsepower to think hard about the current situation, which he says “looks desperate for humanity.”

In the future, says Bengio, AI could overpower us. But things could get tricky well before then. “The nearest thing is meddling with elections,” he says. “That could happen by the next U.S. election.” This year, he testified before the U.S. Senate about the dangers of AI, and he has written papers about AI policy and governance. He intends to start shifting his work toward technical AI-safety research.

“The real question when you’re in any situation is, What can I do to reduce those risks? I can’t bring them to zero. It’s not clear that anybody can,” Bengio says. “But if we can reduce the probability of bad things by a factor of 10, well, let’s do it.”

Correction, Sept. 8

The original version of this story misstated when Bengio spoke publicly about his concerns about the risks posed by AI. He initially spoke out in March, not May.

Write to Will Henshall at will.henshall@time.com.
