Amandeep Singh Gill is at the forefront of global AI governance. As the United Nations Secretary-General’s Envoy on Technology, he coordinates digital cooperation among member states, industry, and civil society, as well as within the U.N. He implements Secretary-General António Guterres’ priorities on AI, helping stakeholders understand where the technology is headed and shaping international engagement.
In October 2023, the U.N. formed a 39-member “High-Level Advisory Body on Artificial Intelligence,” drawing members from governments, the private sector, and academia across the world’s regions. They include OpenAI’s Mira Murati; assistant professor Zhang Linghan, who has been leading China’s AI policy response; and Gill himself.
Gill sees this moment as a narrow window to head off another Cold War-style arms race, this time over AI. “In the mid-1950s—the London talks when Eisenhower was the president and Bulganin was the leader on the Soviet side—there was a very interesting opportunity that was missed to prevent an arms race that took off. Then you had the Cuban Missile Crisis, and the world came close to the brink of nuclear war,” he says. “Why not be more thoughtful about this today?”
While the Cold War may be a lesson for this moment, Gill highlights that traditional governance approaches fall short of addressing the rapid development of AI technologies. There is a need to update these global governance tools to ensure accountability. He also believes in bridging the gap between the private sector and regulators to develop a “shared understanding” of the consequences and benefits of the technology.
In December, the body put out an interim report laying the foundations for governing the proliferation of AI globally. Its key recommendations included bridging the gap in AI adoption between the West and the Global South and producing unbiased scientific reports on AI’s progress, similar to the work of the Intergovernmental Panel on Climate Change. The advisory body’s final report is expected by September, when the U.N. will hold its “Summit of the Future.”
Gill has said that without deliberate effort, AI will not advance the U.N.’s Sustainable Development Goals, which include ending poverty and reducing inequality, nor will it be useful in addressing climate change or public health challenges.
TIME spoke with Gill in late July to understand how he’s thinking about the future.
(This interview has been condensed and edited for clarity.)
TIME: One of the central problems in AI governance, particularly internationally, seems to be that the technology develops at a much quicker rate than governance responses. How do you think about this challenge?
Amandeep Gill: Absolutely, I agree with you. It is indeed one of the biggest challenges. At the U.N., we have a toolkit to respond.
I've spent decades in arms control—non-proliferation, disarmament—where you have a toolkit of treaties and of what we call “transparency and confidence-building measures” (TCBMs). But when you are confronted with these technologies, the traditional toolkit falls short.
So how do you respond? By updating the toolkit and by better leveraging a mix of soft and hard norms, because negotiating hard norms is tough these days. In the AI context, we have norms around human rights, on gender, on sustainability. But if these commitments are only voluntary, then the accountability gap increases.
I think another aspect of addressing this pacing problem is that you need to have almost constant “software reboots.” You need to update your understanding of what the technology is and what its implications are. So this idea of scientific assessments and the proposal of a new international scientific panel on AI—which would provide regular, unbiased assessments of AI development and its impacts—is part of that response.
A third aspect of this is we have to find ways to engage with those who are in the know. Often there's a gap between technology developers and regulators, particularly when the private sector is in the lead. When it comes to diplomats and civil servants and leaders and ministers, there's a further gap.
How can you involve different stakeholders, the private sector in particular, in a way that influences action? You need to have a shared understanding: “Ah, this is problematic.” Then people walk out of the room with some ideas about how to respond. They may not like to articulate it because they don't want governments to pick up the regulatory pen and start writing things down, but they may go and just do what needs to be done, so the quality of action improves. I think there are some ways to handle the pacing problem if we are smart and deliberate about it.
How can countries in the Global South benefit from the AI revolution without being left behind?
The most important thing is inclusion in governance. We need to broaden participation in the international governance of AI. A few countries, a few institutions with partial membership, can't do this. We need to bring in more diverse perspectives, particularly from the Global South, so their needs can be reflected—so it's not someone deciding “this is good for you. Please take it.”
Second, we need to be more deliberate about capacity building. That includes curating data sets, building cross-domain talent, providing access to computing power, and increasing the quantity and flow of data through digital public infrastructure and in the digital economy overall. At the end of the day, you'll have to train or tune your own models on local languages.
These broadly trained LLMs have limitations when it comes to working in local languages. India's Bhashini project addresses this by providing an interface for local languages, enabling farmers to use AI in their native languages with their own datasets. If you're a chili farmer or an avocado farmer, what are the problems you're dealing with? You can get the right advice there, without hallucination.
Those are the areas we need to focus on with the Global South. We need to help governments think strategically about these issues.
In a recent podcast appearance, you told listeners “there's a narrow window of opportunity to avoid a Cold War on AI.” What did you mean?
Today, you have a reasonable degree of optimism about international cooperation. In the United Nations, we've had two AI resolutions: one put forward by the U.S., another one put forward by China. Both of these resolutions have been adopted by consensus and the U.S. and China have supported each other on them. The framing is essentially “AI has issues. We need to manage them together. But there are also opportunities. We need to harness those opportunities together.”
So that is the window of opportunity. There will be an attempted reboot of multilateralism through the upcoming Summit of the Future. Practically, the summit will come out with a Pact for the Future, a shared vision for the kind of digital world we want, which global leaders will hopefully adopt in September.
Digital issues are also on the agenda through the Global Digital Compact, a U.N. initiative to establish shared values for governing digital technologies globally. Currently, countries have an opportunity to take a decision on some aspects of AI on which we will work together. If we do this, then there is less of a chance of geopolitical competition, a race to the bottom, in terms of AI use. The job will not be done by adopting the Global Digital Compact, but it will be a shared foundation for future cooperative action.
These summits have often been criticized for being too numerous, but they are channels of communication. They are opportunities for some announcements, for some networking, and hopefully for some agreement on what needs to be done in the future.
Otherwise, with many other technologies, we've seen opportunities missed. Nuclear is the case I'm most familiar with: there were two or three instances in the 1940s and 1950s that we missed out on. In the mid-1950s, during the London talks when Eisenhower was the president and Bulganin was the leader on the Soviet side, there was a very interesting opportunity that was missed to prevent an arms race that took off. Then you had the Cuban Missile Crisis, and the world came close to the brink of nuclear war, and there was a wake-up call. Responses came almost as emergency measures.
Why not be more thoughtful about this today? Why not, at least, with some bare minimum of collaboration, create some habits of working together? Create some frameworks for international engagement, so that there is some kind of balancing out of the competitive instinct.
There's going to be geopolitical competition, which will include AI. But I think the point is: can we balance that competition with some cooperation, so that there's no runaway competition and it doesn't end up being a Cold War type of situation?