E.U.’s AI Regulation Could Be Softened After Pushback From Biggest Members

A key aspect of the E.U.’s landmark AI Act could be watered down after the French, German, and Italian governments advocated for limited regulation of the powerful models—known as foundation models—that underpin a wide range of artificial intelligence applications.

A document shared with officials from the European Parliament and the European Commission over the weekend by the bloc’s three biggest economies, and seen by TIME, proposes that AI companies working on foundation models regulate themselves by publishing certain information about their models and signing up to codes of conduct. There would initially be no punishment for companies that didn’t follow these rules, though sanctions could be introduced in the future for companies that repeatedly violate the codes of conduct.

Foundation models, such as GPT-3.5—the large language model that powers OpenAI’s ChatGPT—are trained on vast amounts of data and are able to carry out a wide range of tasks across many different use cases. They are some of the most powerful, valuable, and potentially risky AI systems in existence. Many of the most prominent and hyped AI companies—including OpenAI, Google DeepMind, Anthropic, xAI, Cohere, Inflection AI, and Meta—develop foundation models. Accordingly, governments have increasingly focused on these models—the Biden Administration’s recent Executive Order requires any lab developing a very large foundation model to run safety tests and inform the government of the results, and discussions at the recent U.K. AI Safety Summit focused heavily on risks associated with the most advanced foundation models.

The Franco-German-Italian document proposes that AI primarily be regulated based on how it’s used. Foundation model developers would be required to publish certain kinds of information, such as the kinds of testing done to ensure their models are safe. No sanctions would initially be applied to companies that didn’t publish this information, although the proposal suggests a sanction system could be set up in the future.

The document states that the three countries are opposed to a “two-tier” approach to foundation model regulation, originally proposed by the European Commission. The two-tier approach would similarly apply light-touch regulation to most foundation models, but would impose stricter requirements on the most capable models expected to have the largest impact.

Aware that some countries were resistant to the more onerous two-tier approach, the European Commission presented a new two-tier approach on Nov. 19, in a proposal seen by TIME, that would impose only an additional non-binding code of conduct on the most powerful foundation models. The proposal was discussed at a meeting of Members of the European Parliament and senior officials from the Commission and the Council on Nov. 21. While no formal agreement was reached, negotiations are expected to center on this proposal going forward, according to two officials who were present. This represents a setback for the European Parliament, which largely favors stricter regulation of all foundation models.

Big tech companies, largely headquartered in the U.S., have been lobbying to weaken the proposed E.U. legislation throughout its development. Now, calls to weaken certain aspects of the regulation have come from the French, German, and Italian governments, which are eager to promote AI innovation. France and Germany are home to two of Europe’s most prominent AI companies: Aleph Alpha and Mistral AI, both of which have advocated against regulating foundation models.

The E.U.’s AI Act was first proposed in 2021, and talks are now in the final “trilogue” stage of the E.U. legislative process, during which the European Parliament and the member states negotiate to find a version of the Act they can agree on. The aim is to finalize the AI Act before February 2024; otherwise, the 2024 European Parliament elections could delay its passage until early 2025. If passed, the E.U. AI Act would be one of the most stringent and comprehensive AI regulations in the world. But disagreements remain over how foundation models should be regulated.

Foundation models and the E.U. AI Act

The dispute centers on how strictly foundation models, also known as general-purpose AI, should be regulated.

The initial regulatory framework for the E.U.’s AI Act, published in April 2021, proposed imposing differing levels of regulatory scrutiny on AI systems depending on their intended use. Under the proposal, higher-risk use cases, such as law enforcement, would require measures such as risk assessment and mitigation.

In May 2022, the French Presidency of the Council of the E.U.—the legislative body representing the E.U. member states—proposed regulating foundation models regardless of how they are used, imposing additional guardrails and setting requirements for their training data.

After OpenAI released ChatGPT in November 2022, some policymakers and civil society organizations raised concerns about general-purpose AI systems. Earlier this year, the U.S.-based research organization AI Now Institute published a report signed by more than 50 experts and institutions arguing that general-purpose AI systems “carry serious risks and must not be exempt under the forthcoming E.U. AI Act.” The report argues that there are risks inherent in the development of foundation models, such as potential privacy violations committed in order to collect the data required to train a model, that can only be addressed by regulating the models themselves rather than their applications.

In June 2023, the European Parliament, the legislative body comprising directly elected officials from across the continent, approved a version of the Act that would regulate all foundation models regardless of their expected impact. Since then, the three parties to the trilogue—the European Commission, the Council of the E.U., and the European Parliament—have been negotiating to find a compromise.

Amid concerns from the Council over how broad the Act’s foundation model provisions were, the European Commission—the E.U.’s executive branch, tasked with acting as an ‘honest broker’ in the trilogue negotiations—proposed the compromise “two-tier” approach, according to a European Parliament official. This is the approach the French, German, and Italian governments rejected in their document.

Promoting innovation

In the Franco-German-Italian “non-paper,” the countries advocate for a “balanced and innovation-friendly” approach to regulating AI that is risk-based but also reduces “unnecessary administrative burdens on Companies that would hinder Europe’s ability to innovate.”

The French and German governments have both made statements and taken steps demonstrating their desire to foster innovation in their domestic AI industries. In June, President Emmanuel Macron announced €500 million in funding to support AI “champions.” Similarly, the German government announced in August that it will almost double public funding for AI research, to nearly €1 billion (around $1.1 billion), over the next two years.

Both governments have expressed concern that regulation could stifle their domestic AI industries. Speaking at the U.K. AI Safety Summit in November, French Finance Minister Bruno Le Maire said “before regulating, we must innovate,” citing Mistral AI as a promising company and suggesting that the E.U. AI Act should regulate the uses of AI rather than the underlying models.

After a Franco-German cabinet meeting in October, German Chancellor Olaf Scholz said that the two countries “will work together on European regulation,” and that the pair “don’t want to affect the development of models in Europe.” Macron also warned of the danger of overregulation. “We don’t want regulation that would stifle innovation,” he said.

In late October, French, German, and Italian business and economic ministers met in Rome to discuss their joint approach to artificial intelligence. A press release about the meeting said that the countries are committed to “reducing unnecessary administrative burdens on Companies that would hinder Europe’s ability to innovate.”

A spokesperson for the German permanent representation in Brussels said in an emailed statement: “Germany and France think that regulating the foundational models too much too early would hinder and inhibit innovation and the future development of AI at large. This is particularly true for the companies who are at the forefront of developing these systems.”

A spokesperson for the French permanent representation in Brussels said in an emailed statement: “Since the beginning of the negotiations on this issue, as on many others, France has defended proportionate and balanced regulation, which takes into account both the need to support innovation and to guarantee the protection of fundamental rights.”

The Italian permanent representation to the E.U. did not respond in time for publication.

The U.K. is similarly hesitant about regulating its domestic AI industry. Its Minister for AI and Intellectual Property, Viscount Jonathan Camrose, said on Nov. 16 that the U.K. would not regulate AI in the short term over concerns that new rules could harm innovation.

Prospective national champions

Germany and France are both home to AI developers that stand to benefit from a relaxed approach to regulating foundation models. Executives at Germany’s Aleph Alpha and France’s Mistral AI have both publicly spoken out against foundation model regulation.

In October, Aleph Alpha founder and CEO Jonas Andrulis was joined by Robert Habeck, the German Federal Minister for Economic Affairs and Climate Action, on a panel about AI. At the event, Andrulis argued against regulation of general-purpose AI systems. “Personally, I believe we don’t need to regulate foundational technology at all,” Andrulis said. “Use cases yes, but foundational technology not at all.” At the same event, Habeck warned that the E.U. AI Act could over-regulate in a way that very large companies could comply with but smaller companies, such as Aleph Alpha, might struggle to manage.

Habeck also recently joined an Aleph Alpha press conference where the company announced it had raised $500 million in funding. “The thought of having our own sovereignty in the AI sector is extremely important,” Habeck said at the press conference, according to Bloomberg. “If Europe has the best regulation but no European companies, we haven’t won much.”

Aleph Alpha’s products are increasingly being used by the German government. The German state of Baden-Württemberg uses Aleph Alpha’s technology as part of an administrative support system. At an event in August, Germany’s Federal Minister for Digital and Transport, Volker Wissing, said he hopes the federal administration will start using the Baden-Württemberg system “as quickly as possible.” In May, German IT service provider Materna announced a partnership with Aleph Alpha under which the company’s language models will be used for public-sector administration tasks.

Aleph Alpha has participated in a number of public hearings with the official bodies of the E.U. and the German Government relating to AI regulation, where it has advised on “technological concepts and capabilities underlying the architecture and functioning” of large language models, a spokesperson said in an emailed statement. “We gave recommendations on the technological capabilities which should be considered by lawmakers when formulating a sensible and technology-based approach to AI regulation.”

France’s Mistral AI counts Cédric O, President Emmanuel Macron’s former Secretary of State for the Digital Economy, as one of its owners and an adviser.

Alongside Mistral AI’s CEO and co-founder Arthur Mensch, O is a member of the French Generative Artificial Intelligence Committee, which was launched in September and will provide recommendations to the French government. 

In June 2023, O and Jeannette zu Fürstenberg, founding partner of La Famiglia VC, an investor in Mistral AI, helped organize an open letter signed by more than 150 executives warning that the draft text approved by the European Parliament would regulate foundation models too heavily, resulting in the E.U. falling behind the U.S. And in October, O warned that the E.U. AI Act could “kill” Mistral, and argued that European policymakers should focus on ensuring European companies can develop.

“We have publicly said that regulating foundational models did not make sense and that any regulation should target applications, not infrastructure,” Mensch told TIME over email. “This would be the only enforceable regulation and, in Europe, the only way to prevent US regulatory capture. We are happy to see that the regulators are now realising it.”

Close to the wire

With an unofficial February 2024 deadline looming and a transfer of the presidency of the Council of the E.U. coming up in January, policymakers in Brussels had hoped to finalize the Act at a meeting scheduled for Dec. 6.

At the meeting on Nov. 21, it seemed that the Commission’s proposed two-tier approach, with a non-binding code of conduct for the largest foundation models, would be the basis for further discussions about the final foundation model regulation in the Act, according to two officials in the meeting. But the new direction of discussions is likely to face opposition from some in the European Parliament who want to see stricter regulation and will vote on the final draft of the legislation. 

Axel Voss, a German Member of the European Parliament, said in a post on X that the European Parliament cannot accept the French, German, and Italian proposal. (Members of the European Parliament are directly elected by voters across the continent, whereas the Council represents the E.U.’s constituent national governments.) AI experts Yoshua Bengio and Gary Marcus have also expressed concern over moves to water down regulation of foundation models.

“Right now it seems like the Council wants basically nothing for the smaller models and transparency, perhaps, for the bigger ones,” Kim van Sparrentak, a Dutch Member of the European Parliament from the GroenLinks political party, told TIME on Nov. 14. “That’s an absolute no go.”

Write to Will Henshall at will.henshall@time.com