U.K. Competition Watchdog Signals Cautious Approach to AI Regulation


A report published this week by the U.K.’s Competition & Markets Authority (CMA) raises concerns about the ways the artificial intelligence industry could become monopolized or harm consumers in the future, but stresses that it is too soon to tell whether these scenarios will materialize.

The issues raised by the report highlight the difficulties policymakers face in governing AI, a source of both huge potential commercial value and many risks. Rishi Sunak, the British Prime Minister, is pushing for the U.K. to occupy a central role in international AI policy discussions, with a particular focus on risks from advanced AI systems. If the U.K. competition watchdog decides to start taking action against AI developers, tech companies around the world could be affected.

Regulatory concerns

The report, published on Monday, focuses on foundation models, which the CMA defines as “a type of AI technology that are trained on vast amounts of data that can be adapted to a wide range of tasks and operations.” Examples include text-generating AI models, such as GPT-3.5, the model that powers OpenAI’s ChatGPT, as well as image-generating AI models, such as Stable Diffusion.

The paper raises concerns that the increasing amounts of computational power and data used to train AI models could create barriers to entry, reducing competition in the sector.

“Access to computing power already determines the competitive dynamics of this market,” Anton Korinek, an economics professor at the University of Virginia, told TIME over email. “The report is spot-on at recognizing that the market for foundation models would be competitive if a large number of models pushed towards the frontier of capabilities; a market where only a few models are at the frontier is not competitive even if hundreds of other models are behind the frontier.” 

The CMA also says in the paper that some stakeholders argued that those with a lead in developing foundation models might use those models’ capabilities to improve them further and pull away from their competitors, a process known as “recursive self-improvement.” The report notes, however, that this scenario is speculative.


In 2021, Sam Altman, CEO of prominent AI developer OpenAI, predicted that AI would create “a recursive loop of innovation,” in which “smart machines themselves help us make smarter machines.” In April, TechCrunch reported that a funding document produced by rival AI developer Anthropic contained statements suggesting there could soon be a winner-takes-all dynamic in the development of foundation models.

The open-source debate

The CMA report also says that if powerful models such as Meta’s Llama 2 remain open-source, this would reduce the ability of other AI developers to abuse market power and could protect consumers. The report acknowledges a potential tension between ensuring public safety and promoting competition: if powerful AI models are broadly available, they could be used to cause harm.

U.S. Senators Richard Blumenthal and Josh Hawley released a framework for a U.S. AI bill outlining a different approach on Sept. 7. Under the framework, AI developers would need licenses to develop the most powerful systems and would have to take steps to limit the transfer of advanced AI models to “China, Russia, and other adversary nations.” Both of these measures are strongly in tension with promoting powerful open-source models.

“Competition policy needs to be coupled with proactive regulation of other risks, including safety risks,” said Korinek. “If that type of regulation lags behind, encouraging competition could in fact have adverse effects on safety.”

Ultimately, because the most powerful open-source AI models are developed outside the U.K., British regulators would have little control over the supply of open-source AI systems if the U.S. and other governments took action to prevent their release.


The companies that develop foundation models will need to be held accountable to ensure consumers are protected, the report says. 

The Federal Trade Commission (FTC), the U.S. competition and consumer protection agency, has already taken action on consumer protection. In July, the FTC opened an investigation into OpenAI over whether ChatGPT, the AI-powered chatbot, has harmed consumers through its data collection and publication of false information.

The FTC, under the leadership of Lina Khan, has taken an aggressive approach to regulating AI. In a blog post published in June, the agency highlighted potential competition concerns raised by generative AI, systems that create new content rather than analyze existing data. (Foundation models often perform generative tasks, but not all foundation models are generative, and not all generative systems are foundation models.)

The blog post touched on many of the same issues the CMA report explores. For example, the post expressed concerns about the market power of cloud computing providers. Korinek noted that competition regulators appear to be making greater efforts to coordinate to prevent firms from skirting regulations. 

The European Union’s AI Act, which is currently progressing through the bloc’s legislative process, would impose transparency requirements and risk assessments on foundation models. The companies behind these models, such as OpenAI, Google, and Microsoft, would be required to disclose whether copyrighted material was used to train their AIs.

Cautious next steps

The CMA’s purview as a competition regulator extends beyond just the U.K., as demonstrated when it blocked tech giant Microsoft’s attempt to acquire video game producer Activision Blizzard for $69 billion.

The regulator noted in its foundation model paper that it “will be vigilant of any competition concerns that arise in markets where [foundation models] play a role and will not hesitate to use its powers where appropriate.”

But the report also states that the next steps are a “significant programme of engagement” with stakeholders, culminating in the publication of an update in early 2024, signaling a cautious approach.

In a tweet, Cristina Caffarra, an expert at economics consultancy Keystone Strategy, said the CMA is being overly cautious, arguing that the structure of the market will allow foundation model developers to abuse their market power. Competition regulators made the same mistake with digital platforms, she added.

“We recognise that the foundation models market is rapidly evolving and are fully aware of the risks,” a CMA spokesperson said in an emailed statement. “That’s why we have set out these proposed principles now so they can help guide the market to more positive outcomes and maximise the potential of these technologies.”

“By working with stakeholders, our aim is to develop these principles to ensure effective competition and consumer protection are at the heart of responsible development and use of foundation models. But as we have been absolutely clear, we stand ready to intervene where necessary,” the statement added.

A bill currently working its way through the U.K. legislative process, which is expected to pass next year, would give the CMA new powers to regulate digital markets and the firms active within them, including the ability to levy fines of up to 10% of an offending company’s global revenue (billions of dollars in the case of big tech companies). These powers could be applied to AI developers if the CMA believed they were abusing market power or otherwise harming consumers.

“The mere fact that the CMA has published such a report will probably make a positive difference by signaling to companies in the sector that their behavior will be closely monitored for any potential anticompetitive practices,” said Korinek. “In a sense, the ball is now in the court of the companies working on foundation models to show that they do not engage in anticompetitive practices.” 


Write to Will Henshall at will.henshall@time.com