Authored by Adam Billen (AI & Democracy Advisor) and Shobha Dasari (AI & Health Advisor) with support from Titouan Tardivent (Director of Encode Justice France)

On March 13th, 2024, the European Union passed the Artificial Intelligence Act, a sweeping regulatory framework for all forms of artificial intelligence (AI). In this report, we cover the key components of the EU AI Act, as well as its implications for the future of AI development and regulation.

The European Union began to grapple with how to respond to rapid advances in AI systems soon after the passage of the General Data Protection Regulation (GDPR) in 2016. This process began with the launch of the European AI Alliance in 2018, continued with the AI Act’s initial proposal in April 2021, and reached its climax in December 2023, when a hotly contested draft compromise was reached. The EU’s goal has from the beginning been to mitigate the risks posed by certain applications of the technology without needlessly over-regulating lower-risk forms. The Act’s risk-based system of categorization reflects this goal. AI systems are labeled as having a minimal, limited, high, or unacceptable level of risk and are regulated based on their category.

A version of the Act utilizing this system was close to completion in late 2022, but the public release of ChatGPT sparked a fierce debate in the EU around how to regulate general purpose AI (GPAI) systems, also known as foundation models, such as OpenAI’s GPT or Google’s Gemini. Lawmakers initially considered categorizing all GPAI models as high risk, but France, Italy, and Germany quickly pushed to exclude such systems from the Act entirely. These three countries represent some of Europe’s largest economies and were concerned that tough restrictions would reduce innovation and harm their local startup ecosystems. OpenAI, Google, Microsoft, and other large technology companies also heavily lobbied the EU to loosen rules targeting GPAI models. In May 2023, OpenAI CEO Sam Altman stated that the company might cease operating in Europe if it could not comply with the version of the AI Act then being proposed; he later walked back that statement.

The final version of the AI Act leaves room for GPAI systems to be treated as high-risk systems themselves, or as high-risk when integrated into a larger AI system which is itself high-risk. Regardless of these criteria, they are subject to a unique set of requirements. Some of these requirements may be waived if a model is released under a free and open license, unless the model is systemic. A model is systemic if the cumulative amount of compute used for its training exceeds 10²⁵ floating-point operations (FLOPs).

Other hotly debated issues included the Act’s implementation and enforcement mechanisms, broad concerns about hampering European innovation and competition with the United States, and the question of which systems should be placed in the unacceptable-risk category. The debate around biometric identification systems was particularly intense and resulted in a set of exceptions to their prohibition.

 

Risk Classification System

The core of the AI Act is its risk hierarchy wherein systems are subject to different levels of regulation based on how much potential harm they pose.

Minimal Risk: This tier encapsulates AI systems that pose no threat to the public, such as spam filters or AI-enabled video games. Systems in this tier are exempt from obligations under the EU AI Act and can be built and deployed without intervention. The vast majority of AI systems fall into this category.

Limited Risk: Systems in this tier may pose a notable but non-severe risk. These systems are subject to transparency requirements; e.g., a user must be informed that they are interacting with a chatbot rather than a human.

High Risk: Systems are considered high risk based on a list of applicable areas of use and a set of criteria. The AI Act was written with a focus on this tier, and its criteria and regulations are the most extensively detailed. Applicable areas include:

 

  • Systems affecting critical infrastructure
  • Biometrics
  • Education and vocational training
  • Employment, workers management, and access to self-employment
  • Access to essential public and private services (e.g., credit scoring or eligibility for social programs)
  • Law enforcement and administration of justice
  • Migration, asylum and border management
  • Democratic processes (e.g., systems that influence voter behavior or election outcomes)

 

Such systems are subject to a host of requirements under the AI Act. Developers of high-risk systems must establish risk-management systems, use high-quality datasets, log all activity, provide detailed documentation, and provide instructions for use to downstream developers. They must also ensure human oversight and a high level of robustness, accuracy, and cybersecurity. These systems are subject to mandatory fundamental rights impact assessments.

Under the AI Act, consumers also have the right to lodge complaints about high-risk AI systems, receive meaningful explanations of decisions that impact their rights, and obtain human intervention in order to challenge a system’s decision. Users also have transparency rights: they must be made aware when they are interacting with an AI chatbot or viewing deepfakes or other AI-generated content. These systems must also be designed so that synthetic audio, video, text, and images carry machine-readable markings and are detectable as artificially generated. Users must likewise be made aware when biometric categorization and emotion recognition systems are being used.
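The Act does not prescribe a particular marking scheme. As a minimal illustration only, the sketch below embeds and reads back a provenance flag in a PNG’s metadata using the Pillow library; the field names are hypothetical, and real deployments lean on harder-to-strip mechanisms such as watermarking or signed provenance manifests.

```python
# Minimal sketch: embed and read back a machine-readable provenance flag
# in a PNG's metadata using Pillow. Illustration only; the field names
# are hypothetical, and plain metadata is trivially stripped, which is
# why real systems use watermarks or signed provenance manifests.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Save a copy of the image carrying an 'AI-generated' metadata flag."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical field name
    meta.add_text("generator", generator)   # hypothetical field name
    img.save(dst_path, pnginfo=meta)


def is_tagged_ai_generated(path: str) -> bool:
    """Check for the illustrative provenance flag."""
    return Image.open(path).text.get("ai_generated") == "true"


tag_as_ai_generated("model_output.png", "model_output_tagged.png", "example-model-v1")
print(is_tagged_ai_generated("model_output_tagged.png"))  # True
```

Plain metadata like this is easy to strip, which is precisely why the Act’s “detectable” requirement points toward more robust techniques.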

Unacceptable Risk: Systems that are considered a threat to the fundamental rights of people are banned entirely. These systems include:

 

  • Systems deploying “subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques” intended to “materially distort” a person or group’s decision making
  • Systems exploiting vulnerabilities among a specific group in order to cause harm
  • Biometric categorization systems that deduce or infer sensitive information about a person*
    • *except for the labeling or filtering of lawfully acquired biometric datasets in the area of law enforcement
  • Social scoring systems
  • Real-time biometric identification systems in publicly accessible spaces for law enforcement*
    • *except for the targeted search for victims of human trafficking, the prevention of an imminent threat to the public, or the identification of a person suspected of having committed a serious criminal offense
  • Systems used to predict the likelihood that a person will commit a crime in the future
  • Systems used to infer the emotions of a person in the workplace or an educational setting*
    • *except for systems used for medical or safety reasons

 

The exclusions in this section are of particular note. AI systems used solely for military, defense, or national security purposes are explicitly and entirely excluded from the regulation. The carve-outs for real-time biometric identification in law enforcement were also highly controversial and emerged as a political compromise.

GPAI Models: Due to intense debate and lobbying, General Purpose AI (GPAI) models were given their own unique set of restrictions and requirements in the legislation. They can, however, still be categorized within the risk framework. GPAI models were initially defined as very powerful models (trained using more than 10²⁵ floating-point operations) that may pose systemic risks. In the final text, however, GPAI models are models which “display significant generality” and are capable of performing a “wide range of distinct tasks.” Very powerful models (again, those trained using more than 10²⁵ floating-point operations) are now placed in a separate category of models posing a “systemic risk” and are subject to a stricter set of regulations.
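To give a sense of scale for the 10²⁵ FLOP cutoff, the sketch below estimates training compute with the widely used 6 × parameters × training-tokens rule of thumb and checks it against the threshold. The estimation method and the example figures are illustrative assumptions; the Act fixes only the cutoff itself.

```python
# Illustrative check against the AI Act's "systemic risk" compute cutoff.
# Training compute is estimated with the common rule of thumb
# FLOPs ~ 6 * N * D (N = parameters, D = training tokens). The Act fixes
# only the 1e25 FLOP threshold, not any particular estimation method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute, per the Act


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the Act's cutoff."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical model: 70B parameters trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)                # ~8.4e23 FLOPs
print(f"{flops:.1e}", presumed_systemic_risk(70e9, 2e12))   # 8.4e23 False
```

By this rough estimate, such a model would sit an order of magnitude below the threshold; the cutoff targets only the largest frontier training runs.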

Under the AI Act, all GPAI model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the data used for training. If, however, a model is provided freely under an open license, its providers must only comply with the Copyright Directive and publish the training data summary. A model is considered free and open license if its provider:

 

  • Permits the public to “freely access, use, modify and redistribute them or modified versions thereof,” meaning that users may run, copy, distribute, study, change, and improve the software and data*
    • *provided that the original provider is credited and that identical or comparable terms of distribution are respected
  • Makes the model’s parameters, including weights, model architecture, and information on model usage, publicly available

 

Models which pose a systemic risk, regardless of whether they are closed or open source, must track, document, and report serious incidents, ensure adequate cybersecurity protections, conduct model evaluations, and perform adversarial testing.

 

Compliance and Enforcement

The AI Act creates a new European AI Office within the European Commission, which will coordinate the supervision, implementation, and enforcement of the new rules on GPAI at the European level. The AI Act’s provisions on GPAI will also be enforced with the support of a scientific panel of independent experts, which will play a central role in identifying systemic risks, issuing alerts, and contributing to the classification and testing of models.

The EU has set stringent consequences for violations of the AI Act: those who deploy banned AI applications will be fined €35 million or 7% of global annual turnover, whichever is higher. For violations other than deploying banned AI systems, the responsible party will be fined the higher of €15 million or 3% of global annual turnover. After the Act’s entry into force, deployers must comply by the following deadlines:

 

  • 6 months for prohibited systems
  • 12 months for GPAI systems
  • 24 months for high-risk systems under Annex III
  • 36 months for high-risk systems under Annex II*
    • *A subset of systems already regulated under existing EU legislation
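Returning to the penalty rule above, a short sketch makes the “whichever is higher” arithmetic concrete; the turnover figure is a made-up example.

```python
# Sketch of the AI Act's "whichever is higher" penalty rule: up to
# EUR 35M or 7% of global annual turnover for prohibited practices,
# and up to EUR 15M or 3% for most other violations.

def max_fine_eur(global_turnover_eur: float, prohibited_practice: bool) -> float:
    """Maximum applicable fine for a violation, in euros."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * global_turnover_eur)
    return max(15_000_000, 0.03 * global_turnover_eur)


# Hypothetical company with EUR 2 billion in global annual turnover:
print(max_fine_eur(2e9, prohibited_practice=True))   # 140000000.0 (7% > EUR 35M)
print(max_fine_eur(2e9, prohibited_practice=False))  # 60000000.0  (3% > EUR 15M)
```

For any large provider, the percentage branch dominates: the fixed floors exist so that smaller firms cannot treat fines as a negligible cost.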

 

In addition to the regulation’s provisions, the EU will create an AI Pact to convene AI developers from Europe and around the world. These developers would commit, on a voluntary basis, to implementing key obligations of the AI Act ahead of the legal deadlines.

 

Analysis and Implications

The EU AI Act is a crucial step forward for global AI governance. The law effectively creates guardrails for both narrow and general-purpose AI systems. For narrow systems, the risk-based framework allows innovation to continue while reining in high-risk use cases that demand oversight. Article 7 builds flexibility into the Act’s implementation by empowering the Commission to add or remove categories of high-risk systems from the framework. The Act also struck a delicate balance between addressing short-term and long-term risks; had it failed to balance these concerns, it could easily have lost unified support from within the AI expert community.

The primary downside of the risk system is that it is a product-based rather than a rights-based framework. This has been a key argument among critics of the Act and may affect how future-proof the Act is. The sudden public attention on GPAI systems, for example, exposed some of the cracks in a product-based framework. From the beginning the Act has been messaged as future-proof, but this new form of an existing technology showed how rapid progress will challenge any AI-focused regulation; the partial separation of GPAI systems in the final compromise is an example of this difficulty. Still, the Act succeeded in creating key restrictions and reporting requirements for general-purpose AI models while leaving room for innovation through open-source exemptions, and Article 7’s amendment power gives the framework room to adapt to new types of AI systems.

Law enforcement represented another critical voice of opposition in the Act’s political process. Its lobbying resulted in serious exceptions to restrictions on some controversial use cases of AI, such as real-time biometric identification. Still, the core of the legislation survived the trilogue process, and its success in appeasing multiple oppositional voices should be an example for other governments.

The future of compliance will be determined by two key factors. One is whether other Western countries such as the US and Canada pass laws that match or exceed the AI Act’s level of stringency. The second is whether providers of AI systems, particularly GPAI models, will choose to keep their products uniform across regulatory landscapes or ‘splinter’ their offerings based on the relevant market. The GDPR, for example, led many American tech companies to bring their practices in line with its requirements even outside European markets in order to maintain product consistency. It also led California, where many powerful tech companies are based, to pass similar legislation in the form of the CCPA. It seems reasonable to assume that if the US does not pass serious legislation on AI in the next two years, the AI Act will have a “Brussels Effect” comparable to that of the GDPR.

One of the most significant sections of the Act states that providers of general-purpose AI must respect existing EU copyright law. It additionally specifies that previous exceptions in EU law related to text and data mining, such as Article 4(3) of Directive (EU) 2019/790, remain in place for GPAI providers. These exceptions allow actors to make and retain “reproductions and extractions of lawfully accessible works and other subject matter for the purposes of text and data mining” so long as rights holders have not expressly reserved their works in a form readable to text and data mining software. That exception should cover a large share of the works used by popular GPAI models. It is still conceivable that some of the data currently used to train these models may be called into question by the Act. If, for example, books owned by publishing companies but available online are found not to be fair game under existing law, GPAI providers may be obligated to remove some data from future training runs. Groups like the European Authors’ Societies (GESAC) have embraced the Act for this reason. The newly established AI Office will be responsible for ensuring that model providers meet their obligation to publish summaries of training data as mandated under the Act. The AI Office will not, however, conduct a consistent, work-by-work analysis of every provider’s training data summaries.
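The Directive does not mandate a single opt-out format, but robots.txt is one widely used machine-readable reservation mechanism. As a hedged sketch, the snippet below shows how a training-data crawler might honor it before fetching a page; the user-agent name is a hypothetical example.

```python
# Sketch: a training-data crawler honoring a machine-readable opt-out by
# checking robots.txt before fetching a page. robots.txt is only one
# possible reservation mechanism; the Directive does not mandate a format.
from urllib import robotparser
from urllib.parse import urlparse


def may_mine(url: str, user_agent: str = "ExampleTDMBot") -> bool:
    """True if the site's robots.txt permits fetching `url` for this agent."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(user_agent, url)


print(may_mine("https://example.com/some-article"))
```

Whether such conventions satisfy the Directive’s reservation standard remains an open legal question, which is part of the lack of clarity discussed below.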

The significance of these restrictions, even given the current lack of clarity on some key questions, cannot be overstated. Until now, AI systems have generally not been targeted directly by regulation. Broader legislation, such as the GDPR, the Digital Services Act (DSA), and existing copyright law, has had unintentionally significant implications for the technology but was not written to target current systems. New legislation like the AI Act that targets AI specifically will force companies to make significant changes to their models and practices in order to comply. Research based on a draft version of the Act from June 2023 found that none of the 12 largest AI systems evaluated would have been in compliance with that version of the Act, and all but one failed to meet even half of its mandates.

As the AI Act passes through its final formal processes, uncertainty remains about its future. New advances in technology may pose questions and risks that the regulation does not account for. Companies may choose to splinter their products across global markets. Influential countries like the US, Canada, or the United Kingdom may pass regulation that differs from the Act, leaving companies to make sense of an uncertain and inconsistent regulatory landscape. What is clear is that the AI Act has set the tone for, and transformed, the regulatory landscape for AI.
