Those who think about risks from AI tend to fall into one of two camps: either they are skeptical of the power of AI and the intentions of the companies developing it, or they believe that AI will be the most important technology in human history and that the companies developing it are acting in good faith.

Jess Whittlestone does not fit into either camp. As head of AI policy at the Centre for Long-Term Resilience, a U.K.-based think tank established in 2020 to improve global resilience to extreme risks, Whittlestone knows that many of the people working on AI safety and governance work in industry, making them vulnerable to conflicts of interest. She hopes to position herself as someone who takes the extreme risks seriously, but is free of such conflicts.

Whittlestone, who started to work on AI policy in 2018, studied math and philosophy before getting a Ph.D. in behavioral science. She says she’s “always been interested in how we make better decisions on a societal level.” She’s also conscious of who’s making those decisions. She argues, for instance, that companies often justify continued AI development on the basis of supposed benefits from AI that are not well substantiated. AI companies “keep saying the big societal benefits are just around the corner, if you keep letting us push forward,” she says. “I am somewhat skeptical of that.”

After working in academic positions at the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk, both at the University of Cambridge, Whittlestone joined the Centre for Long-Term Resilience. She has taken part in unofficial diplomacy initiatives with Chinese AI researchers and worked on AI safety policy with prominent AI companies. She spends much of her time advising U.K. government officials on the technical details that should inform AI policy—particularly important ahead of the AI summit the U.K. government is hosting this fall, which could set the terms of international cooperation on AI policy issues for years to come.

The voluntary commitments to follow responsible development practices that the White House secured from AI companies in July are a good starting point, says Whittlestone. “A really useful thing that the U.K. summit could do is really look at how we get some kind of commitment from countries and companies that goes beyond [voluntary commitments], that starts to move us in a direction of meaningful accountability, oversight, not just relying on pinky promises from companies.”

Whittlestone tries to engage with both those worried about present-day harms from AI, such as systems that exhibit racial bias, and those concerned about future risks, perhaps even existential ones, from powerful AI systems. Although she acknowledges that there are sometimes trade-offs between the two issues, she argues there are also many common solutions, such as the need for “oversight and accountability and more scrutiny of tech companies.” She co-authored a paper arguing that the distinction between near-term and long-term risks posed by AI is unhelpful, and she has encouraged U.K. government officials to engage AI ethics experts around the AI summit.

Ultimately, she believes a balance needs to be struck. “As someone who has mostly focused on the more extreme risks, I’m now seeing those get a lot more attention and almost worrying that that’s gone too far. One of the things that isn’t getting as much attention around the U.K. policy space at the moment is the more diffuse societal harms—like inequality, or disinformation, or impact on democracy, or power concentration. There’s a lot of focus on safety and security risks … where the solution is the kind of stuff in the White House voluntary commitment.”

Write to Will Henshall at will.henshall@time.com.
