The European Union’s flagship artificial intelligence regulation took a major step toward becoming law on Wednesday, after lawmakers voted to approve a draft that would ban real-time facial recognition and place new transparency requirements on generative AI tools like ChatGPT.
The regulation—known as the E.U. AI Act—will now progress to the final “trilogue” stage of the E.U.’s regulatory process. There, officials will attempt to reach a compromise between the draft of the law just approved by the E.U. Parliament, a different version preferred by the bloc’s executive branch, and the desires of member states. That process will begin on Wednesday night and must be completed by January if the law is to come into force before E.U. elections next year.
“This moment is hugely significant,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office. “What the European Union says poses an unacceptable risk to human rights will be taken as a blueprint around the world.”
The version of the rules approved by E.U. lawmakers on Wednesday states that AI systems applied to “high risk” use cases like employment, border control, and education must comply with a list of safety requirements including risk assessments, transparency, and logging. The Act does not automatically consider “general purpose” AI systems like ChatGPT to be high risk, but it does impose transparency requirements and risk assessments on so-called “foundation models,” or powerful AI systems trained on large quantities of data. Purveyors of foundation models, including the likes of OpenAI, Google, and Microsoft, will for example be required to declare whether copyrighted material has been used to train their AIs. There is no similar requirement, however, to declare whether personal data has been used in training.
A last-minute attempt by a bloc of right-wing lawmakers in the European Parliament to remove the Act’s proposed ban on real-time facial recognition was rejected in Wednesday’s vote.
In the forthcoming trilogue stage, the Council of the European Union—the arm of the bloc representing member state governments—is expected to argue strongly for AI tools used by law enforcement and border forces to be exempt from the requirements placed on “high risk” systems, according to Access Now’s Leufer.
“After today’s AI Act plenary vote, the Parliament’s negotiating team have a strong mandate to demand that these vital protections are not undermined during trilogue negotiations,” Leufer says. “The Parliament must hold the line against industry lobbying and state overreach to keep these victories secure and expand them to people at the E.U.’s borders, and try to close the gaps and loopholes that remain.”
Some experts worried about the safety risks posed by increasingly powerful AI models believe the bill is missing limits on the amount of computing power AI systems can use. Experts frequently refer to this computing power, and the physical machines that provide it, as “compute.”
Large language models like ChatGPT have used exponentially more compute with each new version released. This has dramatically improved their capabilities and performance, but advocates worry that continued exponential growth could lead to safety issues. Currently, the E.U. law does not place extra safety burdens on AI systems based on the amount of compute they use.
“The more compute is used to train an AI system, the more powerful the AI will be. With more power comes more risks and potential for danger,” says Andrea Miotti, head of strategy and governance at the AI safety start-up Conjecture.
According to Miotti, it is relatively easy for researchers to measure the total computing power of their systems because the chips used to train most cutting-edge AIs—known as graphics processing units, or GPUs—are a physical resource.
“Looking at the amount of compute is much easier to regulate because it’s a physical thing,” says Miotti. Regulating compute may become especially important if the U.S. implements its own AI law in the future, since that is where most compute is located, he notes.
“Most of it is located in data centers in the U.S. and monitors could inspect that physically, so it would be easy for the U.S. government to check this if they wanted to,” says Miotti.
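To make Miotti’s point concrete, here is a minimal sketch of how a training run’s total compute might be estimated from the physical hardware he describes. Everything in it, from the GPU figures to the regulatory threshold, is a hypothetical illustration rather than anything specified in the AI Act.

```python
# Hypothetical sketch: estimating a training run's total compute ("FLOP")
# from physical hardware. All numbers are illustrative, not from the AI Act.

def training_flop(num_gpus: int, days: float, flop_per_gpu_per_second: float,
                  utilization: float) -> float:
    """Rough total floating-point operations used by a training run."""
    seconds = days * 24 * 60 * 60
    return num_gpus * seconds * flop_per_gpu_per_second * utilization

# Example: 1,000 GPUs running for 30 days at 40% of a peak ~3e14 FLOP/s each.
total = training_flop(num_gpus=1_000, days=30,
                      flop_per_gpu_per_second=3e14, utilization=0.4)

# A purely hypothetical threshold above which extra safety requirements
# could apply; the current draft of the Act sets no such limit.
THRESHOLD_FLOP = 1e25

print(f"Estimated training compute: {total:.2e} FLOP")
print("Above threshold" if total > THRESHOLD_FLOP else "Below threshold")
```

Because GPU counts, run times, and chip specifications are physical and auditable, a check along these lines would in principle be straightforward for a regulator to verify, which is the point Miotti makes about data centers.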
Write to Billy Perrigo at billy.perrigo@time.com