Exclusive: California Bill Proposes Regulating AI at State Level


A senior California lawmaker will introduce a new artificial intelligence (AI) bill to the state’s senate on Wednesday, adding to national and global efforts to regulate the fast-accelerating technology.

Although there are several attempts in Congress to draft AI legislation, the state of California—home to Silicon Valley, where most of the world’s top AI companies are based—has a role to play in setting guardrails on the industry, according to state Senator Scott Wiener (D-San Francisco), who drafted the bill.

“In an ideal world we would have a strong federal AI regulatory scheme,” Wiener said in an interview with TIME on Tuesday, adding that he supports attempts in Congress and the White House to regulate the technology. “But California has a history of acting when the federal government is moving either too slowly or not acting.”

He added: “We need to get ahead of these risks, not do what we’ve done in the past around social media or other technology, where we do nothing before it’s potentially too late.”



The bill targets “frontier” AI systems at the cutting edge of capability. It proposes that systems requiring more than a certain quantity of computing power to train—a threshold the bill does not specify—be subject to transparency requirements. It proposes establishing legal liability for “those who fail to take appropriate precautions” to prevent unintended consequences and malicious uses of advanced AI systems. It also suggests mandating security measures to prevent cutting-edge AI systems from falling into the hands of foreign states. The bill calls for California to set up “CalCompute,” a proposed state research cloud that would provide the computing infrastructure necessary for groups outside of big industry, like academia and startups, to do advanced AI work.

“The Legislature is concerned about the potential for dangerous or even catastrophic unintended consequences to arise from the development or deployment of future frontier AI models,” the bill says. “The rapid pace of technical advance in AI requires a legislative approach that is proactive in anticipating the risks that current and future variants of the technology present to public safety in order to enable the safe harnessing of the technology’s full potential for public benefit.”


Still, like many efforts so far at AI regulation, the bill is light on details. As a so-called intent bill, it is less than three pages long and only gives broad brush-strokes of what a piece of California AI legislation would look like. The aim, according to a separate fact-sheet issued by Wiener’s office, is to establish the intent of the California legislature “to enact sweeping rules governing AI development,” while allowing the proposal “to generate discussion and feedback for a period of time before being amended with full legislative text and moving through the formal legislative process.” The plan is for the full text to be amended with specifics by January, ready for the bill to progress through the legislature (where Democrats have a supermajority in both houses) in 2024 and become law, at the earliest, at the beginning of 2025.


Wiener told TIME he wanted to leave it up to state lawmakers to decide the appropriate computing power thresholds for the bill’s provisions to take effect, and whether liability for failing to take adequate precautions should fall upon corporations or individual staff. “That is to be discussed and decided,” he said. “I don’t want to prejudge that. What I want is a system in place that creates a real incentive for companies and labs to take this seriously and do it right.”

Wiener acknowledged that the California government does not currently have the capacity to audit AI systems or fully enforce the bill. “I don’t think there’s an agency that, tomorrow, could implement this,” he said. “Absolutely part of this conversation needs to be: which agency? Is it an existing agency whose mandate could be increased? Is it a new agency?” That decision, he said, would ultimately rest with California Governor Gavin Newsom.

“So much AI innovation is happening in California,” Wiener said. “So when California sets rules around AI, that will have a global impact.”


But being home to AI companies while seeking to set a regulatory framework for them could create a conflict for policymakers keen to foster innovation and economic growth. Separately, earlier in September, Newsom issued an executive order calling for a “measured approach” to AI, mitigating its risks while “remaining the world’s AI leader.” The executive order echoed a policy proposal earlier this year from the U.K., where leading AI lab Google DeepMind is based, which emphasized the need for the government to avoid a “heavy-handed” approach to AI regulation that would stifle innovation.


Write to Billy Perrigo at billy.perrigo@time.com