
Why Europe Must Not Let AI Firms Put Profits Before People 

Kaljulaid served as President of Estonia between 2016 and 2021.

The soap opera-like ousting and swift return of OpenAI CEO Sam Altman produced plenty of fodder for ironic quips online, but it also exposed some serious fault lines. One quip I enjoyed: “How are we supposed to solve the AI alignment problem if aligning just a few board members presents an insurmountable challenge?”

As the company behind ChatGPT, OpenAI may be one of the more recognizable names, but artificial intelligence is bigger than any one company. It is a technology of immense consequence, yet it remains almost entirely unregulated. The E.U. has a chance to meaningfully tackle that challenge—but not if it caves to Big Tech’s ongoing onslaught. Inspirational Members of the European Parliament have so far stood firm in the face of incredible pressure in an effort to save this landmark legislation. Over the weekend, E.U. Commissioner Thierry Breton spoke out against what he called self-serving lobbying efforts by France’s Mistral AI and other AI companies that do not serve the public interest. These lawmakers need and deserve our support at this crucial moment.

Europe is poised to lead in a world that is waking up to the need to regulate AI. From the U.S. Executive Order to the recent AI Safety Summit hosted by the U.K. at Bletchley Park, countries everywhere are recognizing that if we are going to share in the benefits of this incredible technology, we must mitigate its risks. The E.U. AI Act will be the first comprehensive legal framework aimed at doing precisely this, but a handful of technology firms are holding the political process hostage, threatening to sink the ship unless their systems are exempt from regulation. To capitulate would damage European innovation, put profits before public safety, and represent an affront to democracy. Our lawmakers must not bend the knee.

On Nov. 10, negotiations broke down after France and Germany pushed back against proposed regulation of “foundation models.” Together with Italy, they subsequently released a nonpaper articulating these demands, asking that companies building foundation models be subject only to voluntary commitments. Foundation models are general-purpose machine learning systems, such as OpenAI’s GPT-4 (which underpins ChatGPT), that can subsequently be applied to a wide range of downstream applications and functions. Regulating these foundation models would force AI corporations to ensure their systems are safe before deployment, rather than waiting to act until after dangerous systems are released, with the attendant risk of public harm. Given growing concern about the potential risks posed by these advanced systems, including mass misinformation, enabled bioterrorism, hacking of critical infrastructure, large-scale cyberattacks, and more, this is a sensible provision to include.


We have seen first-hand the need for codified legal protections rather than reliance on corporate self-regulation. For example, the psychological harm wrought by social media on young women and girls has become increasingly apparent. The companies that ran the platforms and channels hosting harmful content were aware of this damage for years yet failed to act. Voluntary commitments are neither sufficient nor reliable. We need prevention rather than cure if we want to stop people from getting hurt. We need enforceable safety standards and risk mitigation for powerful AI from the start.

So why the objection? Holdouts claim that regulation will hinder innovation for businesses that want to adopt and use AI, but this simply isn’t true. Regulating foundation models is essential for innovation because it will shield smaller downstream European users from compliance requirements, and from liability if things go wrong. Only a handful of very well-resourced companies are developing the most impactful foundation models, but thousands of small companies in the E.U. have already adopted them for concrete business applications, and many more plan to do so. We need balanced obligations across the value chain—the broadest shoulders should bear the biggest load.

This is reflected in the makeup of the opposing sides. The European DIGITAL SME Alliance, comprising 45,000 business members, wants to regulate foundation models. Two European AI corporations (France’s Mistral AI and Germany’s Aleph Alpha), along with a handful of giant U.S. firms, do not. Their argument is also not borne out by real-world experience. My own country, Estonia, is bound by exactly the same E.U. rules and regulations as Germany, yet it has a vibrant and thriving startup ecosystem. If those who oppose regulation of foundation models, like Mistral’s Cédric O, are looking to point the finger, they must look elsewhere. In truth, while opponents of regulation claim to be protecting the E.U.’s innovation ecosystem, such a climbdown would more likely transfer financial and legal burdens away from large corporations and onto startups, which have neither the ability nor the resources to change the underlying models.

France and Germany also claim that regulating foundation models will stifle Europe’s ability to compete in AI on the global stage. This doesn’t hold up. The proposed tiered approach, already a compromise between the Parliament and the Council of the E.U., allows for targeting so that competitors to major AI companies can emerge without onerous restrictions. European lawmakers should close their ears to the fearmongering pushed by Big Tech and its newest allies, and remember the Act’s purpose: to achieve a fair and balanced framework that safeguards innovation while preventing harm. It must not become a legislative device for anointing a few Silicon Valley-backed AI leaders with sectoral supremacy and zero requirements, while preventing thousands of European businesses from maximizing the technology’s potential.

The Parliament supports regulating foundation models, as do many in the Commission and the Council. The business community endorses it, as do the thousands of AI experts who have profound concerns about the dangers of these increasingly powerful systems if left unchecked. A handful of tech firms should not be allowed to hold our political process to ransom, threatening to detonate this landmark legislation and throw away three years of work. They must not be allowed to put their profits before our safety, and to put market capture before European innovation. 

