How the U.N. Plans to Shape the Future of AI

As the United Nations General Assembly gathered this week in New York, the U.N. Secretary-General’s envoy on technology, Amandeep Gill, hosted an event titled “Governing AI for Humanity,” where participants discussed the risks that AI might pose and the challenges of achieving international cooperation on artificial intelligence.

Secretary-General António Guterres and Gill have said they believe a new U.N. agency will be required to help the world cooperate in managing this powerful technology. But the issues the new entity would seek to address, and the structure it would take, are yet to be determined, and some observers say that ambitious plans for global cooperation like this rarely win the required support of powerful nations.

Gill has led efforts to make advanced technologies safer before. He was chair of the Group of Governmental Experts of the Convention on Certain Conventional Weapons when the Campaign to Stop Killer Robots, which sought to compel governments to outlaw the development of lethal autonomous weapons systems, failed to gain traction with global superpowers including the U.S. and Russia. Now he is shepherding an even more ambitious effort to foster international cooperation on AI.

AI has developed at a blistering pace in the last few years, and experts don’t expect this progress to slow any time soon. The impacts of AI will be felt far beyond the borders of the countries in which it is developed, leading world leaders and technologists to call for international cooperation on AI issues.

At a meeting of the U.N. Security Council in July, Guterres made the case for the U.N. as the appropriate forum for this cooperation. The High-Level Advisory Body on Artificial Intelligence, the membership of which will be announced in October, is the next step towards the establishment of a U.N. AI agency. 

The High-Level Advisory Body

Gill, who had previously served as executive director of the U.N. High-Level Panel on Digital Cooperation from 2018 to 2019, was appointed as the tech envoy in June 2022.

In August this year, the tech envoy’s office opened the selection process for experts to serve on a High-Level Advisory Body on Artificial Intelligence with a public call for nominations.

A separate call for nominations, seen by TIME, was sent to member states and contained the terms of reference for the Body. These specified that the Body would consist of up to 32 members drawn from a range of sectors, with a mix of genders, ages, geographic regions, and areas of focus.

Gill told TIME in an interview on Aug. 30 that his office had received more than 1,600 nominations through the public call for nominations. Combined with the nominations from member states, he expects there to be more than 2,000 nominations. The tech envoy’s office, with input from other U.N. organizations, will create a shortlist, from which the Secretary-General will select the 32 members, he said. The Body will meet for the first time in October.

The terms of reference in the document, seen by TIME and confirmed by Gill, specify that the Body will produce an interim report presenting “a high-level analysis of options for the international governance of artificial intelligence” by Dec. 31, 2023. A second report, to be submitted by Aug. 31, 2024, “may provide detailed recommendations on the functions, form, and timelines for a new international agency for the governance of artificial intelligence.”

Aki Enkenberg, team lead for innovation and digital cooperation at Finland’s Ministry for Foreign Affairs, described the decision to specify that the Body would provide recommendations concerning a new international agency as a “hasty move.” He argues that much of the international governance required for AI could be facilitated by existing bodies in the U.N. system, and says that analysis should have been conducted to assess whether there were gaps in the U.N. system before suggesting that a new agency be established. Gill denied that the terms of reference bias the Body’s findings towards recommending the establishment of a new agency.

When asked whether the size of the advisory body might make it unwieldy, and therefore give the secretariat outsized influence over its reports’ contents, Gill said that the secretariat would not seek to influence the Body’s findings.

In September 2024, the U.N. will host its Summit of the Future. By then, Gill hopes, the Body’s findings will have given member states the information they need to decide whether and how to support the establishment of a U.N. AI agency. “That would be the right moment for this agency to be created,” he said. “The summit of the future is a leaders-level opportunity.”

An atomic energy agency for AI

In a blog post published in May, senior leaders of OpenAI, the company behind ChatGPT, argued that something like an International Atomic Energy Agency (IAEA) for AI may be required in the future to safely govern more sophisticated artificial intelligence systems. “Any [superintelligence] effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security,” the blog post said.

The IAEA, founded in 1957, conducts inspections to help ensure that non-nuclear weapon states don’t develop nuclear weapons, and provides technical cooperation for the peaceful development of nuclear energy.

In June, Guterres showed enthusiasm for the concept of a similar body for AI, saying, “I would be favorable to the idea that we could have an artificial intelligence agency … inspired by what the international agency of atomic energy is today.”

But replicating the IAEA for artificial intelligence is just one option. A paper Gill referenced during his interview with TIME, published in July by researchers from the prominent AI labs Google DeepMind and OpenAI and from various academic and nonprofit research institutions, laid out four possible forms an international AI body could take, which are not mutually exclusive.

An IAEA-like agency that can develop industry-wide standards and monitor stakeholders to assess whether those standards are being met is one option. Another is an organization modeled after the Intergovernmental Panel on Climate Change (IPCC) that facilitates expert consensus on technical issues. Also proposed in the paper is an international partnership between the public and private sectors that would ensure equal access to beneficial AI, as Gavi, the Vaccine Alliance, does for vaccines. A final suggestion is an international AI safety research collaboration, similar to the European Organization for Nuclear Research (CERN).

While Gill says the High-Level Advisory Body must determine which model, if any, is most suitable, he notes that the fourth option—an international research effort—is a “classic kind of international collaboration problem where a U.N. agency might play a role.”

Some experts argue that an IAEA-like agency for AI would struggle to win support among policymakers. That’s because it would require countries such as the U.S. and China to give international inspectors full access to the most advanced AI labs in their jurisdictions in an effort to avoid risks that have not yet materialized.

International cooperation between countries aimed at managing dangerous technologies is at a “30-year low,” says Bill Drexel, an associate fellow at the Center for a New American Security, a military affairs think tank. “Trying to eke out a really meaningful agreement from that baseline with a technology that's still having its risks and advantages determined seems like a super tall order.”

Drexel thinks it may take a serious AI-related incident to generate the political will required for a substantial agreement between countries. In the meantime, he says, it could be prudent to set up an international body, whether through the U.N. or a smaller multilateral or bilateral group, that imposes only minimally on participants and could serve as a foundation for more substantive cooperation if political will arises.

Gill, who has studied nuclear history and written a book on nuclear security summits, agrees. “Having studied the history of the IAEA, I don't believe there's a cut and paste,” he says. “But who knows, at some stage, if there is a burgeoning risk, then you could accordion in the international control.”

One proposal for a more politically feasible governance model, which could be expanded in the future, comes from a white paper published in August by researchers from various academic and nonprofit organizations and the tech giant Microsoft. The paper proposes creating an International AI Organization (IAIO) to partner with national regulators on standards and to certify jurisdictions, rather than monitoring individual AI companies. Under this model, individual governments would enforce the standards by allowing only companies based in jurisdictions that meet those standards to operate in their domestic markets. They would also restrict exports of AI inputs, such as computing chips, to countries that don’t meet the standards.

Robert Trager, one of the lead authors of the white paper and international governance lead at the Centre for the Governance of AI, told TIME that certification at the jurisdictional level makes this arrangement less intrusive, meaning powerful countries are more likely to sign on.

AI safety risks may not be easily contained within borders, so it will be essential for all countries in which powerful AI systems are being developed—chiefly the U.S. and China—to be included in any international agreement. Sihao Huang, a researcher at the University of Oxford who spent the past year studying AI governance in Beijing, argues that a tight focus on shared problems, such as “understanding how we can evaluate models for biosecurity risks,” will be required if China is to assent to an international agreement.

It may be a harder sell with Russia. At the July meeting of the U.N. Security Council, Dmitry Polyanskiy, first deputy permanent representative of the Russian Federation to the U.N., said that Russia was “opposed to establishing supranational oversight bodies for AI.”

Although two of the organizations that inspired the idea of the IAIO—the International Civil Aviation Organization and the International Maritime Organization—are United Nations agencies, the U.N. is not the only forum that should be considered, says Trager. He argues that there are many models that would allow a broad set of stakeholders to be involved in governance.

International cooperation is “really clunky, slow and generally inefficient,” says Drexel. Instead, it may be possible to “come up with bilateral or more limited multilateral fora to try to govern [advanced AI systems] and even to scale with the expansion of companies that might be able to train frontier models.”

Gill says that the U.N., a uniquely universal intergovernmental organization with experience managing new technologies, is well-placed to host a treaty or organization for the governance of AI.

Idealism has resulted in the failure of previous attempts, like the Campaign to Stop Killer Robots, to manage AI internationally, says Trager. “This is kind of what happened in the lethal autonomous weapons discussions,” he says. While the effort was well-intentioned, with a clear purpose of prohibiting certain forms of development and use, Trager says, “That's just not going to happen. Was never going to happen.”

Gill, despite his experience with the lethal autonomous weapons systems negotiations, remains determined. “We have to try,” he says. “The world is complicated—has been and will always be. But I think I see a window of opportunity.”

Write to Will Henshall at will.henshall@time.com