A Look at What's in the EU's Newly Proposed Regulation on AI

Posted by Jetty Tielemans

On April 21, 2021, the European Commission unveiled its long-awaited proposal for a regulation laying down harmonized rules on artificial intelligence and amending certain Union legislative acts. The proposal is the result of several years of preparatory work by the commission and its advisers, including the publication of a "White Paper on Artificial Intelligence." It is a key piece in the commission’s ambitious European Strategy for Data.

The regulation applies to (1) providers that place on the market or put into service AI systems, irrespective of whether those providers are established in the European Union or in a third country; (2) users of AI systems in the EU; and (3) providers and users of AI systems that are located in a third country where the output produced by the system is used in the EU. 

The term “AI system” is broadly defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The commission takes a risk-based but overall cautious approach to AI. It recognizes the potential of AI and the many benefits it presents but, at the same time, is keenly aware of the dangers these new technologies pose to European values and fundamental rights and principles.

This explains why the proposal starts by listing four types of AI practices that are prohibited:

  1. Placing on the market, putting into service or using an AI system that deploys subliminal techniques beyond a person’s consciousness to materially distort a person’s behavior in a manner that causes that person or another person physical or psychological harm.
  2. Placing on the market, putting into service or using an AI system that exploits vulnerabilities of a specific group of persons due to their age, physical or mental disability to materially distort the behavior of a person pertaining to the group in a manner that causes that person or another person physical or psychological harm.
  3. Placing on the market, putting into service or using an AI system by public authorities, or on their behalf, for the evaluation or classification of the trustworthiness of natural persons (social scoring), with the social score leading to detrimental or unfavorable treatment that is either unrelated to the contexts in which the data was originally generated, or unjustified or disproportionate.
  4. Use of “real-time” remote biometric identification (read: facial recognition) systems in publicly accessible spaces for law enforcement purposes, subject however to broad exemptions that, in turn, are subject to additional requirements, including prior authorization for each individual use to be granted by a judicial authority or an independent administrative body in the member state where the system is used.

Unlike in an earlier leaked draft of the proposal, which was widely commented on in the trade press, the use of facial recognition systems in public spaces for law enforcement purposes is now listed among the prohibited AI practices. Countries, like France and Italy, that already impose restrictions on the use of facial recognition will need to align their national laws with the new EU-wide rules.

The bulk of the proposal focuses on high-risk AI systems. 

The term is not defined, but the proposal indicates in Articles 6 and 7 the criteria to be used to determine whether a system should be considered high risk. Article 6 refers to products or components covered by existing EU product safety legislation listed in Annex II to the proposal, such as EU legislation on machinery, toys, lifts, pressure equipment or medical devices, to name a few. Article 7 refers to AI systems used in areas set out in Annex III that the commission considers high risk, as well as the criteria to take into account when updating the Annex. Examples of areas in Annex III include biometric identification and categorization of natural persons, management and operation of critical infrastructure, education and vocational training, employment, law enforcement, migration, asylum and border control, and administration of justice and democratic processes.

In its proposal, the commission adopted a cradle-to-grave approach: High-risk AI systems are subject to scrutiny before they are placed on the market or put into service and throughout their life cycle, including through a mandatory risk management system, strict data and data governance requirements, technical documentation and record-keeping requirements, and post-market monitoring and reporting of incidents requirements. 

A conformity assessment by a third party or the provider itself and, for stand-alone AI systems, registration in a central database set up and maintained by the commission, are at the center of the proposal. For the conformity assessment, different procedures apply depending on the type of system and whether the system is already covered by existing product safety legislation listed in Annex II of the proposal. The interaction between existing requirements in EU sectoral product safety laws listed in Annex II and the requirements in the proposal is a constant that runs throughout the text of the proposal. The commission went to great lengths to avoid inconsistencies and duplication and aims to minimize additional burdens for all concerned. To that effect, the final sections of the proposal also identify existing EU legislation that will need to be amended in light of the proposal.

The obligations under the proposal affect all parties involved: the provider, importer, distributor and user. There are special provisions relating to transparency to ensure people know they are dealing with an AI system (Article 52), but also to enable users to interpret the system's output and use it appropriately (Article 13).

The proposal emphasizes in Article 14 that AI systems shall be designed and developed in such a way that human oversight is guaranteed while in use.

In an effort to demonstrate the commission is aware of the opportunities presented by AI technology, the proposal also contains a few provisions in Title V outlining measures in support of innovation. They include regulatory sandboxing schemes and an obligation on member states to provide certain services and facilities for small-scale providers and users.

In line with positions taken in other data-related legislative initiatives, enforcement of the AI regulation is left to the member states. The regulation foresees steep administrative fines for various types of violations, ranging for companies from 2% to 6% of total annual worldwide turnover. The proposal also provides for the creation of a European AI Board with various tasks, including assisting the national supervisory authorities and the commission in ensuring the consistent application of the regulation, issuing opinions and recommendations, and collecting and sharing best practices among member states.

The regulation, once adopted, will come into force 20 days after its publication in the Official Journal. It will apply 24 months after that date, but some provisions will apply sooner. This long “grace period” increases the risk that, notwithstanding the efforts from the commission to make the regulation future-proof, some of its provisions will be overtaken by technological developments before they even apply.

The proposal now goes to the European Parliament and Council for further consideration and debate. Given the controversial nature of AI and the large number of stakeholders and interests involved, it seems fair to assume the road to adoption will be long and bumpy. There will likely be many amendments, and hopefully some further clarifications, in the European Parliament and in discussions with member states.

Jetty Tielemans

Senior Westin Research Fellow, IAPP


Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.
