MIT Technology Review

Why the EU AI Act was so hard to agree on

Three key issues that jeopardized the EU AI Act

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

Update: On December 8, 2023, the EU AI Act was agreed on, after this story was written and sent as MIT Technology Review’s weekly tech policy newsletter, The Technocrat. The full text of the new law is not yet available, but we will have a full breakdown in The Algorithm, arriving in inboxes later today. Make sure you’re signed up!

Three governing bodies of the European Union have been intensely negotiating the final version of the EU AI Act, a major package of laws regulating the industry that was first proposed back in 2021. The initial deadline for a final package, December 6, has now come and gone, though lawmakers have not given up and were debating into the early hours of Thursday morning and again on Friday. 

Just a few months ago, it seemed as though the EU AI Act was on its way to getting all the necessary votes and setting the benchmark for AI regulation far beyond the European bloc. But now France, Germany, and Italy in the EU Council, which represents the governments of the member states, have contested some of the package’s main tenets, and the legislation seems in real danger of failing—which would open the door for other countries outside Europe to set the global AI agenda. 

To better understand the key sticking points and what’s next, I spoke with our senior AI reporter Melissa Heikkilä and Connor Dunlop, a policy expert at the Ada Lovelace Institute. I’ll warn you: it’s all pretty complex, and it’s still a moving target. As Connor tells me, “The most surprising thing has been the level of drafting and redrafting across all three EU institutions,” which he describes as “unprecedented.” But here, with their help, I’ll do my best to answer some of the biggest questions.

What is the basic outline of this law? 

As a refresher, the EU AI Act seeks to establish a risk-based framework for regulating artificial-intelligence products and applications. The use of AI in hiring, for example, is more tightly regulated and requires more transparency than a “lower-risk” application, like AI-enabled spam filters. (I wrote about the package back in June, if you want more background information.) 

Why has this been so hard to finalize?

First, Melissa tells me, there is a lot of disagreement about foundation models, which has taken up most of the energy and space during the latest debates. There are several definitions of the term “foundation model” floating around, which is part of what’s causing the discord, but the core concept has to do with general-purpose AI that can do many different things for various applications. 

You’ve probably played around with ChatGPT; that interface is essentially powered by a foundation model, in this case a large language model from OpenAI. Making this more complex, though, is that these technologies can also be plugged into various other applications with more narrow uses, like education or advertising.

Initial versions of the EU AI Act didn’t explicitly consider foundation models, but Melissa notes that the proliferation of generative AI products over the past year pushed lawmakers to integrate them into the risk framework. In the version of the legislation passed by Parliament in June, all foundation models would be tightly regulated regardless of their assigned risk category or how they are used. This was deemed necessary in light of the vast amount of training data required to build them, as well as IP and privacy concerns and the overall impact they have on other technologies. 

But of course, tech companies that build foundation models have disputed this and advocate for a more nuanced approach that considers how the models are used. France, Germany, and Italy have flipped their positions and gone so far as to say that foundation models should be largely exempt from AI Act regulations. (I’ll get at why below.)

The latest round of EU negotiations has introduced a two-tier approach in which foundation models are, at least in part, sorted on the basis of the computational resources they require, Connor explains. In practice, this would mean that “the vast majority of powerful general-purpose models will likely only be regulated by light-touch transparency and information-sharing obligations,” he says, including models from Anthropic, Meta, and others. “This would be a dramatic narrowing of scope [of the EU AI Act],” he adds. Connor says OpenAI’s GPT-4 is the only model on the market that would definitely fall into the higher tier, though Google’s new model, Gemini, might as well. (Read more about the just-released Gemini from Melissa and our senior AI editor Will Douglas Heaven here.)

This debate over foundation models is closely tied to another big issue: industry-friendliness. The EU is known for its aggressive digital policies (like its landmark data privacy law, GDPR), which often seek to protect Europeans from American and Chinese tech companies. But in the past few years, as Melissa points out, European companies have started to emerge as major tech players as well. Mistral AI in France and Aleph Alpha in Germany, for instance, have recently raised hundreds of millions in funding to build foundation models. It’s almost certainly not a coincidence that France, Germany, and Italy have now started to argue that the EU AI Act may be too burdensome for the industry. Connor says this means that the regulatory environment could end up relying on voluntary commitments from companies, which may only later become binding.

“How do we regulate these technologies without hindering innovation? Obviously there’s a lot of lobbying happening from Big Tech, but as European countries have very successful AI startups of their own, they have maybe moved to a slightly more industry-friendly position,” says Melissa. 

Finally, both Melissa and Connor talk about how hard it’s been to find agreement on biometric data and AI in policing. “From the very beginning, one of the biggest bones of contention was the use of facial recognition in public places by law enforcement,” says Melissa. 

The European Parliament is pushing for stricter restrictions on biometrics over fears the technology could enable mass surveillance and infringe on citizens’ privacy and other rights. But European countries such as France, which is hosting the Olympics next year, want to use AI to fight crime and terrorism; they are lobbying aggressively and placing a lot of pressure on the Parliament to relax its proposed policies, she says.

What’s next?

The December 6 deadline was essentially arbitrary, as negotiations have already continued past that date. But the EU is creeping up to a harder deadline. 

Melissa and Connor tell me the key stipulations need to be settled several months before EU elections next June to prevent the legislation from withering completely or getting delayed until 2025. It’s likely that if no agreement is reached in the next few days, the discussion will resume after Christmas. And keep in mind that beyond solidifying the text of the actual law, there’s still a lot that needs to be ironed out regarding implementation and enforcement. 

“Hopes were high for the EU to set the global standard with the first horizontal regulation on AI in the world,” Connor says, “but if it fails to properly assign responsibility across the AI value chain and fails to adequately protect EU citizens and their rights, then this attempt at global leadership will be severely diminished.” 

What I learned this week

Google’s CEO, Sundar Pichai, spoke with our editor in chief on the eve of the company’s release of Gemini, Google’s response to ChatGPT. There are lots of good bits from the interview, but I was drawn to the exchange about the future of intellectual property and AI. Pichai said that he expects it to be “contentious,” though Google “will work hard to be on the right side of the law and make sure we also have deep relationships with many providers of content today.” “We have to create that win-win ecosystem for all of this to work over time,” he said.
