President Biden this week signed a sweeping Executive Order on artificial intelligence that seeks to tackle threats posed by the technology, but some experts say the regulation has left questions unanswered about how it could work in practice.
The order tasks agencies with rethinking their approach to AI and aims to address threats relating to national security, competition and consumer privacy, while promoting innovation, competition, and the use of AI for public services.
One of the most significant elements of the order is the requirement for companies developing the most powerful AI models to disclose the results of safety tests. On Tuesday, Secretary of Commerce Gina Raimondo told CNBC that under the Executive Order “the President directs the Commerce Department to require companies to tell us: what are the safety precautions they’re putting in place and to allow us to judge whether that’s enough. And we plan to hold these companies accountable.”
The 63-page Executive Order doesn’t, however, set out the implications of a company reporting that its model could be dangerous. Experts are divided—some think the Executive Order solely improves transparency, while others believe the government might take action if a model were found to be unsafe.
This has prompted some experts to conclude that the White House has run up against the limits of executive power on addressing certain AI concerns.
Forward guidance
At a virtual briefing before the Order’s release, a senior official said that the President tasked his team with finding every lever and pulling them all, according to attendee Helen Toner, director of strategy and foundational research grants at the Center for Security and Emerging Technology, a think tank.
Much of the order consists of instructions to other agencies and bodies to carry out studies or develop more detailed guidance. For example, the Office of Management and Budget has 150 days to issue guidance to federal agencies for how they can advance innovation while managing risks posed by AI.
Executive Orders depend on successful implementation, but strong levels of political support from within the federal government mean that this one has a better chance than most of making an impact, says Divyansh Kaushik, associate director for emerging technologies and national security at think tank the Federation of American Scientists.
Former President Donald Trump issued his own AI-focused Executive Order in 2019 that aimed to maintain U.S. AI primacy. Kaushik says that it suffered from a lack of concerted support from senior officials, resulting in only one department—the Department of Health and Human Services—implementing AI policy in line with the Executive Order. In contrast, the Biden Administration’s Executive Order “has buy-in from the very top, which has buy-in from the President's office, the chief of staff's office, the Vice President's office,” says Kaushik.
Compute limits
Some parts of the Biden Administration’s order are likely to have an immediate impact, such as the changes to rules concerning high-skilled immigration. These changes aim to boost U.S. innovation by enlarging the pool of AI talent available in the U.S., and some of them are due to be made within the next 90 days.
Another provision likely to have a more immediate impact on the AI industry is a set of requirements imposed on companies that are developing or intend to develop dual-use foundation models. These models are capable across a number of tasks and could pose a threat to national security, as the U.K. government recently laid out in a paper published ahead of the AI Safety Summit. The companies will need to inform the U.S. government about their AI development plans, the security measures—both physical and cyber—they’ve taken to secure their AI models, and the results of any safety testing done.
The Secretary of Commerce has been tasked with defining the AI models that are sufficiently dangerous to qualify for these requirements. As it stands, experts don’t know how to do this, says Paul Scharre, executive vice president and director of studies at the Center for a New American Security, a military-affairs think tank.
In the meantime, the requirements will apply to models trained using an amount of computational power above a set threshold of 100 million billion billion (10^26) operations. No AI model has yet been trained with this much computing power. OpenAI’s GPT-4, the most capable publicly available AI model, is estimated by research organization Epoch to have been trained with roughly one-fifth of that amount. However, the amount of computing power used to train AI models has been doubling every six months for the last decade, according to Epoch.
A Biden Administration official said that the threshold was set such that current models wouldn’t be captured but the next generation of state-of-the-art models likely would, according to Scharre, who also attended the briefing.
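To make that timeline concrete, here is a minimal back-of-the-envelope sketch, assuming only the figures cited above: a reporting threshold of 10^26 operations, Epoch’s rough estimate of about 2 x 10^25 operations for GPT-4, and a six-month doubling time for training compute. The numbers are illustrative assumptions drawn from this article, not official projections.

```python
import math

# Back-of-the-envelope sketch using only the figures cited in this article
# (treated as rough assumptions, not official numbers): the Executive Order's
# reporting threshold of 10^26 operations, Epoch's estimate of roughly
# 2 x 10^25 operations for GPT-4, and a training-compute doubling time of
# about six months.
THRESHOLD_OPS = 1e26            # reporting threshold in the Executive Order
GPT4_OPS_ESTIMATE = 2e25        # Epoch's rough estimate for GPT-4
DOUBLING_PERIOD_MONTHS = 6      # observed doubling time, per Epoch

# How many doublings separate today's largest models from the threshold,
# and how long that would take if the current trend continues.
doublings_needed = math.log2(THRESHOLD_OPS / GPT4_OPS_ESTIMATE)
months_to_threshold = doublings_needed * DOUBLING_PERIOD_MONTHS

print(f"Doublings needed: {doublings_needed:.2f}")                 # about 2.3
print(f"Months at the current trend: {months_to_threshold:.0f}")   # about 14
```

On those assumptions, a model trained at the frontier of the current trend would cross the threshold in a little over a year, which is consistent with the administration official’s expectation that the next generation of state-of-the-art models would likely be covered.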
Computational power is a “crude proxy” for the thing policymakers are really concerned about—the model’s capabilities—says Scharre. But Kaushik points out that setting a compute threshold could create an incentive for AI companies to develop models that achieve similar performance while keeping computational power under the threshold, particularly if the reporting requirements threaten to compromise trade secrets or intellectual property.
Limits to presidential power
Even for models that surpass the computational threshold, the Executive Order only explicitly requires companies to inform the government of the results of red-teaming safety tests, in which auditors adversarially probe AI models for flaws. To provide legal justification, the Biden Administration invoked the Defense Production Act, a law that allows the President to influence domestic industry in order to promote national security.
It is “totally unclear” what would happen if a company were to report that its AI model had failed the required safety tests, says Toner, of the Center for Security and Emerging Technology.
“The basic philosophy here is there's a small number of companies that are building these very sophisticated AI systems, and those companies are telling the government: ‘We don't know what our systems can do, and we think they might be dangerous.’ And that's kind of crazy,” says Toner. “And so the government is saying: ‘Okay, you have to tell us more, we have to be better informed.’”
Samuel Hammond, a senior economist at the Foundation for American Innovation, believes that the government would step in, either to prevent the model’s deployment or even to order that it be deleted. “The Defense Production Act has been deployed in recent years to force companies to produce things that they didn’t want to produce, and can force companies to stop production of things they don’t want to produce,” says Hammond. “Its powers are pretty broad under the national security umbrella.”
The use of the U.S. Defense Production Act to require disclosure is already “very aggressive” and could be open to legal challenge from AI developers, says Charles Blanchard, a partner at law firm Arnold & Porter and former general counsel of the U.S. Air Force and the Army. However, he notes that almost all the companies this regulation could apply to are already working with the government voluntarily on AI safety issues, and are therefore unlikely to bring a challenge.
Blanchard says if the government took action against unsafe AI model developers under the Defense Production Act it might be on weaker legal footing. “I think that would be a stretch and that's one where you might get a legal challenge,” he says.
The White House did not respond to a request for comment in time for publication.
Toner says that the ambiguity around post-disclosure enforcement is just one of many examples of the Biden Administration bumping up against the limits of executive power, citing the use of AI in law enforcement and criminal justice as another area where the power to act is “to a large extent, out of the executive branch's hands.”
“This is really putting the ball in Congress's court to reinforce some of the things in here that the White House can only do kind of a tentative initial version of, and also to handle some kinds of problems that the executive branch just can't handle on their own,” she says.
Write to Will Henshall at will.henshall@time.com