Illustration by TIME; reference image courtesy of Tristan Harris

Odds are you don’t intend to commit a murder in Estonia. But if you did plan to commit a murder in Estonia, you would certainly benefit from an AI system that could read and understand the entire body of Estonian law and find a loophole that would let you get away with your crime. This is the kind of scenario that keeps Tristan Harris up at night. Actually, in this boom time for the AI field, a lot of things keep Harris awake at night.

Co-founder of the Center for Humane Technology and host of the podcast Your Undivided Attention, Harris has been establishing himself as a man on the AI ramparts—a tech leader committed to ensuring that we get this generation of AI right, because we simply can’t afford to get it wrong. “We really have to have a public conversation and assess what AI will do to society,” he says, “because it really affects everything, everywhere all at once.”

This is not our first contact with AI, of course. That came in the first 20 years of this century, when social media and recommendation engines began to proliferate, steering eyes and minds toward often awful—and even dangerous—content. “It was a race to the bottom of the brain stem,” Harris says. “People were directed to the kinds of material that unravels democracies, that causes addiction, isolation, mental-health issues, online bullying, harassment, and sexualization of kids.” All of this was the subject matter of the 2020 documentary The Social Dilemma, in which Harris appears.

That kind of mess was created in what can be thought of as AI 1.0. Here in the age of AI 2.0, the dangers are even greater. Generative AIs like ChatGPT create language out of nothing—and there are countless types of language.

“Chemistry is language, biology is language, law is language, religion is language,” Harris says. “I could take the language of smallpox and move it in the direction of being more transmissible and more lethal. Someone could tell an AI system, ‘Write me a letter encouraging this kid to commit suicide.’ AI can spit out language in the form of fake news or fake laws or fake religious documents.”

Turbocharging all of this is unbridled ambition. Developers are in a race—with other companies, with other countries—to see who can build the best, the most powerful, the most profitable systems. A close analogy, Harris says, was the competition to develop the first nuclear weapons. “Designers are worried that if they don’t build it, far worse actors will,” Harris says. “So the logic they’re operating on is, ‘Let the good guys build it.’”

The problem, of course, is that while bad guys are everywhere, we all think we’re the good ones. In March, Harris released an open letter—co-signed by a host of leaders in the tech field, including Elon Musk; Apple co-founder Steve Wozniak; and Yoshua Bengio, founder of the AI research institute Mila—calling on all AI labs to pause their work for six months and take that time to reassess the direction of the industry. None did, though Harris was hardly surprised. His hope, however, is that at least it made them think.

“There really are a lot of insane benefits to AI,” he says. “But inseparable from those benefits, catastrophic risks.”

Write to Jeffrey Kluger at jeffrey.kluger@time.com.