Geoffrey Irving

Research Director, U.K. AI Safety Institute


Geoffrey Irving has good news and bad news. The good news: he thinks AI alignment, or the task of “making the machines do what we want” as he puts it, is a solvable problem. The bad news: computer scientists are making slow progress, meaning they might build super-powerful AIs before they learn how to properly control them.

Irving would know. He is one of the most respected names in the field, having worked at Google DeepMind and OpenAI. He saw the snail’s pace of progress in AI alignment first-hand during his eight years in industry, compared to the warp speed advances in AI capabilities across the same time period. “Even if the labs want to prioritize safety over economic pressures, they are not in a position where that's easy to do,” Irving says of the conclusion he came to after years at the coalface of AI advances. “So I thought, in the absence of sufficient progress, I need to work on coordination instead.”

To do that, Irving took a job at the U.K.’s AI Safety Institute (AISI), a new government body dedicated to running safety tests on cutting-edge AIs. So far, evaluations by AISI and others of the most advanced, so-called “frontier,” models have found only limited risks. But Irving, who expects AI to keep growing more powerful, is leading a strand of work inside the institute on what to do if future evaluations find significant dangers within models. That might involve gathering robust evidence that technical safety measures are working, and issuing recommendations for how labs should deploy the models, Irving says.

So far, AI companies have made non-binding voluntary commitments to give the AISI access to their new models before releasing them to the public. But the AISI could in the future evolve into a body that looks more like a regulator, able to compel labs to share their models and follow its recommendations, should the new U.K. government choose to move in that direction. “There may be future developments on the regulatory side,” Irving says. “I think it behooves [the AI companies] to work with the government and provide information. I think that pressure is appropriate, and so far has motivated many of them to cooperate with us.”


Write to Billy Perrigo at billy.perrigo@time.com