Regulating AI Is Easier Than You Think

Scharre is vice president and director of studies at the Center for a New American Security and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence  

Artificial intelligence is poised to deliver tremendous benefits to society. But, as many have pointed out, it could also bring unprecedented new horrors. Because AI is a general-purpose technology, the same tools that will advance scientific discovery could also be used to develop cyber, chemical, or biological weapons. Governing AI will require widely sharing its benefits while keeping the most powerful AI out of the hands of bad actors. The good news is that there is already a template for doing just that.

In the 20th century, nations built international institutions to allow the spread of peaceful nuclear energy while slowing nuclear weapons proliferation by controlling access to the raw materials, namely weapons-grade uranium and plutonium, that underpin them. The risk has been managed through institutions such as the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. Today, 32 nations operate nuclear power plants, which collectively provide 10% of the world's electricity, and only nine countries possess nuclear weapons.

Countries can do something similar for AI today. They can regulate AI from the ground up by controlling access to the highly specialized chips needed to train the world's most advanced AI models. Business leaders and even U.N. Secretary-General António Guterres have called for an international governance framework for AI similar to that for nuclear technology.

The most advanced AI systems are trained on tens of thousands of highly specialized computer chips. These chips are housed in massive data centers where they churn on data for months to train the most capable AI models. These advanced chips are difficult to produce, the supply chain is tightly controlled, and large numbers of them are needed to train AI models. 

Governments can establish a regulatory regime in which only authorized computing providers are able to deploy large numbers of advanced chips in their data centers, and only licensed, trusted AI companies are able to access the computing power needed to train the most capable (and most dangerous) AI models.

This may seem like a tall order, but only a handful of nations are needed to put this governance regime in place. The specialized computer chips used to train the most advanced AI models are made only in Taiwan, and they depend on critical technology from three countries: Japan, the Netherlands, and the U.S. In some cases, a single company holds a monopoly on a key element of the chip production supply chain. The Dutch company ASML, for example, is the world's only producer of the extreme ultraviolet lithography machines used to make the most cutting-edge chips.


Governments are already taking steps to govern these high-tech chips. The U.S., Japan, and the Netherlands have placed export controls on their chip-making equipment, restricting its sale to China. The U.S. government has also prohibited the sale to China of the most advanced chips, which are made using U.S. technology. It has proposed requirements for cloud computing providers to know who their foreign customers are and to report when a foreign customer is training a large AI model that could be used for cyberattacks. And it has begun debating, though not yet imposing, restrictions on how widely the most powerful trained AI models can be shared. While some of these restrictions are driven by geopolitical competition with China, the same tools can be used to govern chips so that adversary nations, terrorists, and criminals cannot use the most powerful AI systems.

The U.S. can work with other nations to build on this foundation, putting in place a structure that governs computing hardware across the entire lifecycle of an AI model: chip-making equipment, chips, data centers, model training, and the trained models that result from this production cycle.

Japan, the Netherlands, and the U.S. can help lead the creation of a global governance framework under which these highly specialized chips are sold only to countries that have established regulatory regimes for governing computing hardware. That would include tracking chips and keeping account of them, knowing who is using them, and ensuring that AI training and deployment are safe and secure.

But global governance of computing hardware can do more than simply keep AI out of the hands of bad actors. It can also empower innovators around the world by bridging the divide between computing haves and have-nots. Because the computing requirements to train the most advanced AI models are so intense, the industry is moving toward an oligopoly. That kind of concentration of power is not good for society or for business.

In response, some AI companies have begun publicly releasing their models. This is great for scientific innovation, and it helps level the playing field with Big Tech. But once an AI model is open source, anyone can modify it, and its guardrails can be quickly stripped away.

Fortunately, the U.S. government has begun piloting national cloud computing resources as a public good for academics, small businesses, and startups. Powerful AI models could be made accessible through the national cloud, allowing trusted researchers and companies to use them without releasing the models openly on the internet, where they could be abused.

Countries could even come together to build an international resource for global scientific cooperation on AI. Today, 23 nations participate in CERN, the international physics laboratory that operates the world's most advanced particle accelerator. Nations should do the same for AI, creating a global computing resource that empowers scientists around the world to collaborate on AI safety.

AI’s potential is enormous. But to unlock AI’s benefits, society will also have to manage its risks. By controlling the physical inputs to AI, nations can securely govern AI and build a foundation for a safe and prosperous future. It’s easier than many think.

