Artificial intelligence law: European Parliament finalizes text ahead of key committee vote

EU lawmakers finalized the text of the AI regulation ahead of a vote in the main parliamentary committees on Thursday (11 May).

The AI Act is a landmark piece of legislation regulating artificial intelligence based on its potential to cause harm. On Friday (5 May), the Members of the European Parliament (MEPs) leading the dossier shared a refined version of the compromise amendments.

The compromises, seen by EURACTIV, reflect a broader political agreement reached at the end of April, but also include last-minute changes and important details on how the agreement has been operationalised.

Foundation models

The original AI Act proposal did not cover AI systems without a specific intended purpose. The meteoric success of ChatGPT and other generative AI models upended the discussions, prompting lawmakers to reconsider how best to regulate such systems.

The agreement reached imposes a stricter regime on so-called foundation models, powerful AI systems that can power other AI applications.

Specifically on generative AI, MEPs agreed that providers must publish a summary of the training data covered by copyright law. The refined text specifies that this summary must be sufficiently detailed.

Additionally, generative foundation models should make it transparent that their content is AI-generated rather than human-generated.

Fines for foundation model providers who violate the AI rules have been set at up to €10 million or 2% of annual turnover, whichever is higher.

High risk systems

The Artificial Intelligence Act establishes a strict regime for AI solutions with a high risk of causing harm. Originally, the proposal automatically classified as high risk all systems that fell into certain critical areas or use cases listed in Annex III.

However, EU lawmakers have added an extra layer, which means that the categorization will not be automatic. Systems will also need to pose a significant risk to qualify as high risk.

A new paragraph has been introduced to better define what is meant by significant risk. The risk must be assessed by considering, on the one hand, its overall combined level of severity, intensity, probability of occurrence and duration, and, on the other, whether it may affect an individual, a plurality of people or a particular group of people.

In addition, some last-minute changes were made to Annex III. MEPs agreed to include the recommender systems of very large online platforms, as designated under the Digital Services Act, as a high-risk category. The latest compromise limits this high-risk category to social media.

Artificial intelligence systems used to influence the outcome of elections or voting behaviour are considered high-risk. However, an exception has been introduced for AI models whose output is not directly seen by the general public, such as tools for organising political campaigns.

A new provision has been added to the requirements for these systems, mandating that high-risk AI systems also meet accessibility requirements.

In terms of transparency, the text specifies that data subjects should always be informed that they are subject to the use of a high-risk AI system whenever operators use such a system to assist in decision-making or to make decisions relating to natural persons.

At the request of the centre-left, Parliament's text obliges those deploying a high-risk system in the EU to carry out a fundamental rights impact assessment, which includes consulting the competent authority and relevant stakeholders.

In a new addition to the text, SMEs have been exempted from this consultation provision.

Prohibited practices

The AI Act prohibits applications deemed to pose an unacceptable risk. Progressive lawmakers secured an extension of the ban on biometric identification systems to cover both real-time and ex post use, with an exception for ex post use in cases of serious crime, subject to prior authorisation.

The ban on biometric identification is a hard pill to swallow for the centre-right European People's Party, which counts a strong pro-law-enforcement faction. The conservative group secured a split vote on the biometric bans, separate from the rest of the compromises, according to a draft voting list seen by EURACTIV.

Furthermore, a carve-out for therapeutic purposes has been introduced into the prohibition on biometric categorisation.

Governance and enforcement

MEPs introduced the AI Office, a new EU body to support the harmonised enforcement of the AI regulation and cross-border investigations.

Wording has been added referring to the possibility of strengthening the Office in the future to better support cross-border enforcement, the idea being to eventually transform it into an agency, a solution the current EU budget does not allow.

In a last-minute change, EU lawmakers gave national authorities the power to request access to both the trained and training models of AI systems, including foundation models. Access could take place on site or, in exceptional cases, remotely.

Furthermore, the document mentions a proposal to add a professional secrecy provision for national authorities, modelled on the EU's General Data Protection Regulation.

The list of elements the European Commission must consider when evaluating the AI Act has been extended to include sustainability requirements, the legal regime for foundation models, and unfair contract terms unilaterally imposed on SMEs and start-ups by providers of general-purpose AI.

[Edited by Nathalie Weatherald]


