Illustration by TIME; reference image courtesy of Stuart Russell

In his 2019 nonfiction book Human Compatible, the computer scientist Stuart Russell likens humanity’s attempts to build AI without thinking about the consequences to an email exchange between a superior alien civilization and humanity. The aliens email humanity to say, “Be warned: we shall arrive in 30–50 years.” They get an automatic response: “Humanity is currently out of office. We will respond to your message when we return.”

The U.K.-born Russell, who studied physics at the University of Oxford before completing his Ph.D. at Stanford, spent the first half of his career hastening the arrival of the metaphorical aliens. As a professor of electrical engineering and computer sciences at the University of California, Berkeley, Russell made foundational contributions to the field, and he has helped others enter it and make contributions of their own. Along with Peter Norvig, he wrote the authoritative text Artificial Intelligence: A Modern Approach, which its website says is used by 1,547 colleges and universities in 134 countries.

But about a decade ago, Russell became preoccupied with a question: What if we succeed? After receiving an email from Human Rights Watch in 2013, Russell became a vocal advocate of a ban on lethal autonomous weapons systems, which he worries could be cheaply produced to form swarms that could target populations based on skin color or political affiliation. In 2016, he founded the Center for Human-Compatible Artificial Intelligence, a research center at Berkeley, the primary focus of which is “to ensure that AI systems are beneficial to humans.”

Rapid advances in AI have made him more pessimistic. “The important thing is which happens first? The arrival of real general-purpose AI or figuring out how to make it safe? And it feels much more likely now that the first thing is going to happen. Which could be very problematic.”

Yet on balance, Russell feels more optimistic than he did a decade ago. Recent advances and expressions of concern by leading scientists, like the open letter he—along with more than 30,000 others—signed in March that called for a pause on giant AI experiments, have “made governments receptive to a message that we needed to do something about safety.”

The aliens may arrive sooner than we previously thought. But Russell is heartened by the reaction to the open letter. “At the end of March,” he says, “humanity returned to the office.”

Write to Will Henshall at will.henshall@time.com.
