Big Tech Is Already Lobbying to Water Down Europe’s AI Rules

European lawmakers are putting the finishing touches on a set of wide-ranging rules designed to govern the use of artificial intelligence that, if passed, would make the E.U. the first major jurisdiction outside of China to enact targeted AI regulation. That has made the forthcoming legislation the subject of fierce debate and lobbying, with opposing sides battling to ensure that its scope is either widened or narrowed.

Lawmakers are close to agreeing on a draft version of the law, the Financial Times reported last week. After that, the law will progress to negotiations between the bloc’s member states and executive branch.

The E.U. Artificial Intelligence Act is likely to ban controversial uses of AI like social scoring and facial recognition in public, as well as force companies to disclose whether copyrighted material was used to train their AIs.

The rules could set a global standard for how companies build and deploy their AI systems, since many may find it easier to comply with E.U. rules worldwide than to build different products for different regions—a phenomenon known as the “Brussels effect.”

“The E.U. AI Act is definitely going to set the regulatory tone around: what does an omnibus regulation of AI look like?” says Amba Kak, the executive director of the AI Now Institute, a policy research group based at NYU.

One of the Act’s most contentious points is whether so-called “general purpose AI”—of the kind that ChatGPT is based on—should be considered high-risk, and thus subject to the strictest rules and penalties for misuse. On one side of the debate are Big Tech companies and a conservative bloc of politicians, who argue that to label general purpose AIs as “high risk” would stifle innovation. On the other is a group of progressive politicians and technologists, who argue that exempting powerful general purpose AI systems from the new rules would be akin to passing social media regulation that doesn’t apply to Facebook or TikTok.

Those calling for general purpose AI models to be regulated argue that only the developers of general purpose AI systems have real insights into how those models are trained, and therefore the biases and harms that can arise as a result. They say that the big tech companies behind artificial intelligence—the only ones with the power to change how these general purpose systems are built—would be let off the hook if the onus for ensuring AI safety were shifted onto smaller companies downstream.

In an open letter published earlier this month, more than 50 institutions and AI experts argued against general purpose artificial intelligence being exempted from the E.U. regulation. “Considering [general purpose AI] as not high-risk would exempt the companies at the heart of the AI industry, who make extraordinarily important choices about how these models are shaped, how they’ll work, who they’ll work for, during the development and calibration process,” says Meredith Whittaker, the president of the Signal Foundation and a signatory of the letter. “It would exempt them from scrutiny even as these general purpose AIs are core to their business model.”

Big Tech companies like Google and Microsoft, which have plowed billions of dollars into AI, are arguing against the proposals, according to a report by the Corporate Europe Observatory, a transparency group. Lobbyists have argued that it is only when general purpose AIs are applied to “high risk” use cases—often by smaller companies tapping into them to build more niche, downstream applications—that they become dangerous, the Observatory’s report states.

“General-purpose AI systems are purpose neutral: they are versatile by design, and are not themselves high-risk because these systems are not intended for any specific purpose,” Google argued in a document that it sent to the offices of E.U. commissioners in the summer of 2022, which the Corporate Europe Observatory obtained through freedom of information requests and made public last week. Categorizing general-purpose AI systems as “high risk,” Google argued, could harm consumers and hamper innovation in Europe.

Microsoft, the biggest investor in OpenAI, has made similar arguments through industry groups that it is a member of. “There is no need for the AI Act to have a specific section on GPAI [general purpose AI],” an industry group letter co-signed by Microsoft in 2022 states. “It is … not possible for providers of GPAI software to exhaustively guess and anticipate the AI solutions that will be built based on their software.” Microsoft has also lobbied against the E.U. AI Act “unduly burdening innovation” through The Software Alliance, an industry lobby group it founded in 1988. Obligations under the forthcoming regulations, it argues, should be “assigned to the user that may place the general purpose AI in a high-risk use [case],” rather than to the developer of the general purpose system itself.

A spokesperson for Microsoft declined to comment. Representatives for Google did not respond to requests for comment in time for publication.

The E.U. AI Act was first drafted in 2021, at a time when AIs were primarily narrow tools applied to narrow use-cases. But in the last two years, Big Tech companies have begun to successfully develop and launch powerful “general purpose” AI systems that can perform harmless tasks—like writing poetry—while equally having the capacity for much riskier behaviors. (Think OpenAI’s GPT-4 or Google’s LaMDA.) Under the business model that has since emerged, these big companies license their powerful general purpose AIs to other businesses, which often adapt them to specific tasks and make them available to the public through an app or interface.

Some argue that the E.U. has placed itself in a bind by structuring the AI Act in an outdated fashion. “The underlying problem here is that the whole way they structured the E.U. Act, years ago at this point, was by having risk categories for different uses of AI,” says Helen Toner, a member of OpenAI’s board and the director of strategy at Georgetown’s Center for Security and Emerging Technology. “The problem they are now coming up against is that large language models—general purpose models—don’t have an inherent use case. This is a big shift in how AI works.”

“Once these models are trained, they’re not trained to do one specific thing,” Toner says. “Even the people who create them don’t actually know what they can and can’t do. I expect that it’s going to be, probably, years before we really know all the things that GPT-4 can and can’t do. This is very difficult for a piece of legislation that is structured around categorizing AI systems according to risk levels based on their use case.”

Write to Billy Perrigo at billy.perrigo@time.com