Katja Grace

Co-founder and lead researcher, AI Impacts


Thanks to Katja Grace, co-founder of the research organization AI Impacts, we have some sense of how AI researchers are thinking about the future. The short answer: while views vary widely, there is broad agreement that AI safety research should be a priority.

In January, Grace, who leads AI Impacts’ research team, released the results of her latest survey of researchers in the field, the largest of its kind. Over 2,700 researchers who had recently published peer-reviewed papers in top AI venues were asked when they expected AI systems to be able to perform 39 different tasks, including building a payment-processing website and constructing video from a new angle. They were also asked how concerned they were about 11 different negative scenarios unfolding over the next 30 years, such as “authoritarian rulers use AI to control their population” and “AI systems worsen[ing] economic inequality by disproportionately benefiting certain individuals.”

While researchers expressed a wide range of views in response to almost every question, some interesting trends emerged. A majority of respondents thought AI posed at least a 5% chance of causing human extinction or similarly severe outcomes. Most milestones, like AI writing “NYT best-selling fiction” or beating pro gamers at “all Atari games,” were predicted to have better than even odds of happening within the next ten years.

Expectations for when AI might achieve “human-level machine intelligence” and replace every working individual also shifted significantly from the organization’s 2022 survey. If science keeps progressing smoothly, researchers now estimate a 10% chance that machines could outperform humans in almost every task by 2027, a decade earlier than the previous survey’s estimate.

Grace, who describes herself as being good at “paying attention to things that might be important but are tempting to not pay attention to,” says she doesn’t enjoy running the survey, but she still believes it’s important. “I think that maybe without the surveys, people were more wrong about what AI researchers think,” she says. “If you just look at the loudest five AI researchers on Twitter or something, you get quite a different picture of what the AI researchers think. So, yeah, that makes me more optimistic.”
