Illustration by TIME; reference image courtesy of Shane Legg

When Shane Legg, co-founder of top AI lab DeepMind, interviews job applicants, he wants to make sure they know what they are getting into. Lila Ibrahim, chief operating officer at DeepMind, tells TIME that conversations with Legg led her to worry about the future for her kids, given the risks involved with the technologies the company is developing.

Legg had been DeepMind’s chief scientist since its founding and became chief AGI scientist after DeepMind merged with Google Brain in April to form Google DeepMind. He says he would often talk to applicants explicitly about how soon artificial general intelligence (AGI)—an AI that can do practically any cognitive task a human can do—could arrive, and about the risks that AGI could pose, to “see how they react to it. Because a lot of people found that sort of thing completely bonkers. I wanted to see how comfortable they were in thinking about things that were beyond what was currently state of the art … I thought that was an important quality.”

Today a lot of others are coming around to the thinking that has preoccupied Legg for more than two decades. In 2011, in an interview on blogging site LessWrong, he estimated that there was a 50% chance that human-level machine intelligence would be created by 2028. Legg tells TIME he actually made this prediction more than two decades ago while working as a software engineer, after reading The Age of Spiritual Machines by Ray Kurzweil, and that he has yet to change his mind. Until quite recently, his predictions were dismissed by most AI researchers, including the Turing Award winners Geoffrey Hinton and Yoshua Bengio. But both of them abruptly changed their minds on the issue earlier this year, and predict (with great trepidation) that human-level AI will be developed in the next five to 20 years. “Both of them thought I was pretty crazy having that prediction,” Legg, 49, says. “And now neither of them think that prediction is crazy.”

Legg took his own prediction seriously enough that he decided to return to university to learn more about AI—in 2003, he started a Ph.D. at the Dalle Molle Institute for Artificial Intelligence Research in Lugano, Switzerland, and won a notable prize for his thesis, titled “Machine Super Intelligence.” (Legg eventually recruited his supervisor, Marcus Hutter, to join DeepMind as a senior research scientist in 2019.)

In 2009, while working as a postdoctoral researcher at University College London, Legg met fellow researcher Demis Hassabis. Together with Hassabis’ childhood friend and progressive activist Mustafa Suleyman, they founded DeepMind in 2010 with the mission of solving intelligence by developing AGI and using that to solve humanity’s problems. (The term AGI was first used by the physicist Mark Gubrud in 1997, but Legg independently came up with the phrase in 2002 and popularized it with his former boss, Ben Goertzel.)

Since then, Legg, who keeps a low profile, has led DeepMind’s AGI technical safety team—which is trying to ensure that powerful AI systems, once developed, will behave as their creators intend, and to prevent the kind of catastrophe that could be caused by an AI system that develops harmful goals on its own. Legg estimates that there is a 70% chance that he and others will have solved this problem by 2028. “I have a feeling this is doable,” he says. “It may not be as hard as we think, and in hindsight, it may seem quite obvious.” Legg argues that many of the difficulties that early researchers foresaw in making AI systems behave, such as making sure they understand human values, have been addressed by work done since, and that many AI pessimists have failed to update their views in light of these developments.

But long before the hypothetical development of AGI, Google DeepMind will need to ensure that its AI systems behave. The company is reportedly going to release Gemini, its largest AI model yet, this fall. Hassabis, CEO of Google DeepMind, is leading that effort, and Google co-founder Sergey Brin has even come out of retirement to contribute. But Legg says there’s nothing special about the methods Google DeepMind has used to ensure Gemini behaves. “Like other big models, people who are making it, we’re using a range of alignment techniques, or variants on those techniques rather than anything particularly different. Another one or two generations after that, that’s when we may need some more interesting alignment techniques.”

In addition to his work on AI safety at DeepMind, Legg has started an AGI community, which has around 600 members—about 25% of all DeepMind employees. The group has an internal messaging channel; at meetings, people present new ideas or listen to external speakers. “It’s just a bunch of people at DeepMind who are passionate about AGI,” says Legg.

Legg is certainly one of them. He says he doesn’t know what life will look like if we survive the development of superintelligent AI. “We’re talking about something with beyond-human intelligence applying its intelligence to making the world a better and more ethical place,” says Legg. “It’s hard for us with just human intelligence to know what’s going to happen.”

Meanwhile, polling conducted by the AI Policy Institute (AIPI) in July found that 62% of Americans were concerned about AI, while only 21% were excited. Daniel Colson, co-founder and executive director of the AIPI, says that the negative effects of social media have increasingly led the American public to be “completely skeptical of that idea that scientific and technological progress is by default positive for society.” He argues that the risks posed by the development of ever more powerful AI systems, which Legg himself acknowledges are enormous, justify halting their development.

Legg thinks we can get it right. “If it is making the world a better and more ethical place, then that’s very exciting,” he says. “I think there are many problems in the world that could be helped by having an extremely capable and ethical intelligence system. The world could become a much, much better place.”

Write to Will Henshall at will.henshall@time.com.
