AI Experts Call For Policy Action to Avoid Extreme Risks

On Tuesday, 24 AI experts, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, released a paper calling on governments to take action to manage risks from AI. The policy document had a particular focus on extreme risks posed by the most advanced systems, such as enabling large-scale criminal or terrorist activities.

The paper makes a number of concrete policy recommendations, such as ensuring that major tech companies and public funders devote at least one-third of their AI R&D budgets to projects that promote safe and ethical use of AI. The authors also call for the creation of national and international standards.

Bengio, scientific director at the Montreal Institute for Learning Algorithms, says that the paper aims to help policymakers, the media, and the general public “understand the risks, and some of the things we have to do to make [AI] systems do what we want.”

The recommendations do not break new ground. Instead, the paper’s co-authors are putting their names behind the consensus view among AI policy researchers concerned by extreme risks (they closely match the most popular policies identified in a May survey of experts).

“We wanted to present what (we feel is) clear thinking on AI safety, reasonably free of influence from vested interests,” Stuart Russell, professor of computer science at the University of California, Berkeley, and a co-author of the letter, told TIME in an email.

This week’s statement differs from previous expert-led open letters, says Russell, because “Governments have understood that there are real risks. They are asking the AI community, ‘What is to be done?’ The statement is an answer to that question.”

Other co-authors include the historian and philosopher Yuval Noah Harari and Dawn Song, a MacArthur “genius” grantee and professor of computer science at the University of California, Berkeley, along with a number of other academics from various countries and fields.

The paper is the third prominent statement signed by AI experts this year, in a mounting effort to sound the alarm on potential risks of unregulated AI development. In March, an open letter calling on AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” was signed by tens of thousands of people, including Elon Musk, Bengio, and Russell.

In May, a statement organized by the Center for AI Safety declared that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement was signed by more than 500 prominent academics and industry leaders, again including Hinton, Bengio, and Russell, but also the CEOs of three of the most prominent AI companies: Sam Altman of OpenAI, Demis Hassabis of DeepMind, and Dario Amodei of Anthropic.

Pieter Abbeel, co-founder, president, and chief scientist at robotics company Covariant.ai, and professor of electrical engineering and computer sciences at the University of California, Berkeley, signed this week’s paper despite not signing earlier open letters. Abbeel told TIME that the cautiously optimistic tone of this most recent statement better fits his view than the more alarming tones of previous open letters. “If we do things right—and we have a lot of things to get right—we can be very optimistic about the future,” he says.

AI researchers have long been trying to draw attention to the potential risks posed by the technology they helped develop. In 2016, Hinton, Bengio, and Russell signed a letter organized by the Future of Life Institute, a nonprofit that aims to reduce global catastrophic and existential risks, calling for a “ban on offensive autonomous weapons beyond meaningful human control.”

Historically, scientists have sounded the alarm and been early advocates for issues related to their research. Climate scientists have been calling attention to the problem of global warming since the 1980s. And after he led the development of the atomic bomb, Robert Oppenheimer became a vocal advocate for international control, and even the complete abolition, of nuclear weapons.

Bengio says that his AI policy advocacy has evolved as his “understanding of the problem, and the politics around it has greatly improved.”

One of the policies recommended in the new paper is requiring companies to seek a license before developing “exceptionally capable future models.” Some AI developers and commentators, however, have warned that licensing would advantage large companies that can bear the regulatory burden required to attain a license.

Bengio calls this a “completely false argument,” pointing out that the burden imposed by licensing would fall exclusively on companies developing the largest, most capable AI models. Instead, Bengio argues, the real regulatory-capture risk to watch for is companies influencing legislation so that it ends up insufficiently robust.

Russell says the argument that large AI companies are cynically pushing for regulation to shut out smaller companies is “utter nonsense on stilts,” pointing out that sandwich shops face more regulation than AI companies do, yet tens of thousands of new cafés and restaurants open each year.

The new paper comes at a pivotal moment, with rules in major AI-developing jurisdictions at varying stages of maturity. China is furthest ahead—its rules governing AI chatbots, which build on previous rounds of AI regulation, came into force in August.

Western countries are further behind. The E.U. AI Act is still progressing through the E.U. regulatory process. In the U.S., the White House has secured voluntary commitments from 15 leading AI developers, but Congress remains a long way away from passing AI legislation.

Meanwhile, U.K. Prime Minister Rishi Sunak is attempting to play a key role in promoting international cooperation on AI issues, and U.N. Secretary-General António Guterres and his envoy on technology, Amandeep Gill, are also trying to advance the global governance of AI.

“If governments act now, with determination,” says Russell, “there is a chance that we will learn how to make AI systems safe before we learn how to make them so powerful that they become uncontrollable.”

Correction, Oct. 24

The original version of this story misstated the nature of the published document. It is a paper, not an open letter.

Write to Will Henshall at will.henshall@time.com