The EU AI Act’s Implementation Timeline: Key Milestones for Enforcement

By Morgan Sullivan

Senior Content Marketing Manager II

February 4, 2025

5 min read


At a glance

  • With an implementation timeline that stretches from 2024 to 2030, the European Union’s AI Act will shape the future of global AI governance for years to come.
  • Introducing critical rules on AI transparency, safety, and ethics, the Act will primarily affect AI developers and businesses that use AI tools.
  • Keep reading for a breakdown of the EU AI Act’s main implementation milestones, plus key details to help businesses prepare.

EU AI Act implementation timeline

August 1, 2024: EU AI Act enters into force

The EU AI Act officially went into effect on August 1, 2024. The Act’s foundational rules, which govern the development, deployment, and regulation of AI systems—focusing on transparency and clear definitions of AI system categories—have been in effect since this date.

Resource: Field Trips Ep. 1, Sitting down with Dan Nechita, Head of Cabinet for Dragos Tudorache

February 2, 2025: Prohibitions on “Unacceptable AI” + AI literacy requirements

Ban on prohibited AI practices

As of February 2, 2025, AI systems deemed to pose “unacceptable risks” became strictly prohibited. These systems include:

  • Manipulative AI: AI systems that deploy subliminal, manipulative, or deceptive techniques to materially distort the behavior of individuals or groups, impairing their ability to make informed decisions.
  • Predictive Policing: AI systems used to predict criminal behavior by profiling individuals based on personal data, such as inferred or predicted personality traits.
  • Social Scoring: AI systems that evaluate or classify individuals based on their social behavior or inferred personal characteristics over time, where the resulting score leads to detrimental or unfavorable treatment, such as in access to jobs, credit, or public services.
  • Biometric Identification: AI systems that use biometric data (e.g., facial recognition) for real-time remote identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.
  • Emotion Recognition: AI systems that infer the emotions of individuals in workplace or educational settings, except where used for medical or safety reasons.

Organizations must ensure their AI systems do not engage in these prohibited practices or face severe penalties: fines of up to €35 million or 7% of global annual turnover, whichever is higher.
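To make the list above concrete, here is a minimal, hypothetical screening sketch in Python. The flag names and their mapping to prohibited-practice labels are illustrative assumptions, not definitions from the Act, and a lookup table is no substitute for legal analysis of how a system actually behaves in context.

```python
# Hypothetical screening sketch. The category labels mirror the Act's
# prohibited-practice list; the feature flags and matching logic are
# illustrative assumptions only, not a legal assessment.

PROHIBITED_PRACTICES = {
    "subliminal_manipulation": "Manipulative AI",
    "predictive_policing_profiling": "Predictive policing based on profiling",
    "social_scoring": "Social scoring",
    "realtime_biometric_id_public": "Real-time biometric identification in public spaces",
    "emotion_recognition_work_edu": "Emotion recognition in workplace/education",
}

def screen_system(feature_flags: set[str]) -> list[str]:
    """Return the prohibited-practice labels matched by a system's flags."""
    return [label for flag, label in PROHIBITED_PRACTICES.items()
            if flag in feature_flags]

# A system flagged for social scoring trips the screen; a plain chatbot does not.
hits = screen_system({"social_scoring", "chatbot"})
```

In practice, a screen like this would only surface candidates for deeper legal review, since whether a given deployment falls under Article 5 depends on intent, context, and effect.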

AI literacy requirements

Article 4 of the EU AI Act mandates that both AI providers and deployers ensure their staff and stakeholders have sufficient AI literacy, which includes understanding the opportunities, risks, and potential harms of AI. This requirement is not limited to staff but extends to all relevant actors in the AI value chain, including affected persons.

Organizations must assess the technical knowledge, experience, and training of their staff, taking into account the context of the AI system’s use. While no specific penalties are outlined for non-compliance with AI literacy, breaches of this requirement could influence penalties for other violations under the Act.

Resource: Field Trips Ep. 2, Navigating AI and EU lawmaking with Kai Zenner

May 2, 2025: Codes of Practice for General-Purpose AI (GPAI) Models

By May 2, 2025, the European Commission’s AI Office will release a code of practice for General-Purpose AI (GPAI) models. These models, including Large Language Models (LLMs), are defined as:

“an AI model [...] that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market…”

The code will define guidelines for the development and deployment of GPAI models and, ideally, will help standardize compliance across the EU. If the codes of practice are not finalized by this date, the EU Commission will step in to adopt common rules for GPAI model providers.

August 2, 2025: General-Purpose AI governance obligations

On August 2, 2025, governance obligations for GPAI model providers will become applicable. While GPAI models are not automatically classified as high-risk, their providers must still adhere to various compliance requirements, such as:

  • Providing technical documentation
  • Implementing copyright law compliance policies
  • Providing detailed information on the training datasets used
  • Cooperating with the EU Commission
  • Demonstrating compliance, for example by adhering to an approved code of practice

GPAI models deemed to have “systemic risks” will be subject to additional obligations, including:

  • Systematically assessing and addressing risks
  • Tracking and reporting serious incidents to the AI Office and relevant national authorities
  • Ensuring effective cybersecurity measures

August 2, 2026: Full application to "high-risk AI"

On August 2, 2026, the EU AI Act will become generally applicable and requirements for “high-risk AI systems” will go into effect. Listed in Annex III of the Act, high-risk AI applications include those used in law enforcement, healthcare, education, critical infrastructure, and more.

At this stage, organizations must:

  • Implement human oversight for high-risk AI systems
  • Conduct detailed risk assessments to categorize AI systems and evaluate potential harms
  • Keep comprehensive documentation to demonstrate compliance with the Act's requirements

By this point, each EU member state must also have established at least one operational AI regulatory sandbox.
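As a rough illustration of the categorization step described above, the sketch below maps a use-case area to one of the Act's broad risk tiers. The area names and tier logic are simplified assumptions for illustration; Annex III's actual categories are more granular, and classification ultimately depends on the specific deployment context.

```python
# Illustrative triage only: the area names and tier mapping below are
# simplified assumptions, not a legal determination of risk category.

ANNEX_III_AREAS = {
    "law_enforcement", "healthcare", "education",
    "critical_infrastructure", "employment", "essential_services",
}

def risk_tier(use_area: str, prohibited: bool = False) -> str:
    """Rough triage of an AI use case into the Act's main tiers."""
    if prohibited:
        return "unacceptable"           # banned outright since Feb 2, 2025
    if use_area in ANNEX_III_AREAS:
        return "high"                   # full high-risk obligations apply
    return "limited_or_minimal"         # lighter transparency duties, if any
```

A triage like this only tells an organization which compliance track to investigate; the resulting obligations (human oversight, risk assessments, documentation) still have to be implemented per system.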

August 2, 2027: Comprehensive application

From August 2, 2027, the full scope of the EU AI Act applies, including to high-risk AI systems that are safety components of products covered by the EU harmonisation legislation listed in Annex II (e.g., machinery, medical devices, toys). By this stage, all in-scope AI systems must comply with:

  • Human oversight mechanisms: Ensuring that high-risk AI applications remain under human control and open to intervention when needed
  • Clear and transparent documentation: Including evidence of ongoing risk mitigation efforts and compliance strategies

This phase marks the full maturity of the AI regulatory framework, applying to virtually all AI systems deployed within the EU.

2030: Large-scale IT systems

By 2030, AI systems integrated into large-scale IT infrastructure (e.g., the Schengen Information System and other EU security systems) will be subject to additional governance and compliance measures. These systems will need to align with EU laws governing freedom, security, and justice.

How Transcend helps businesses and AI providers comply with the EU AI Act

The EU AI Act is setting the bar high for AI transparency, ethics, and safety—and businesses need to move quickly to meet its requirements. Transcend enables AI companies, and any business using these tools, to drive growth and innovation while ensuring industry-leading compliance.

Simplify data access, erasure, and opt-outs

The EU AI Act mandates that businesses provide users with rights like data access, deletion, and opting out of training data usage. Transcend automates these data subject requests at scale, replacing clunky manual forms with seamless, real-time workflows that ensure compliance with privacy regulations and build user trust.

Learn more about Transcend DSR Automation

Manage novel AI-specific opt-outs

With generative AI’s unique challenges—like inference suppression and machine unlearning—Transcend’s next-gen consent management solution helps AI providers and businesses using these tools handle AI-specific opt-outs with ease. This ensures compliance with the Act’s evolving requirements and demonstrates your commitment to responsible data usage.

Explore Transcend Consent Management

Smarter risk assessments and documentation

The Act requires businesses to continuously assess AI system risks, especially for high-risk applications in sectors like healthcare and law enforcement. Transcend’s pre-built AI Risk Assessment template provides proactive risk management through streamlined, up-to-date assessments and documentation.

Transcend Pathfinder governs data flows into and out of AI systems, applying governance policies at the code level. Meanwhile, Transcend Consent Management automates and enforces consent preferences across your stack, ensuring compliance with user consent requirements and future-proofing your AI systems as EU AI Act provisions continue to come into effect.

Learn more about Transcend's approach to AI governance

Create a single source of truth

Transcend’s real-time data discovery and classification tools simplify the process of tracking and documenting data usage across your systems. This helps you maintain a single source of truth, ensuring compliance with the EU AI Act’s transparency requirements.

Future-proof your EU AI Act compliance

With key deadlines stretching from 2024 to 2030, the EU AI Act’s requirements will continue to evolve. Transcend provides flexible, scalable tools that grow with your compliance needs, ensuring you remain on track and prepared for future regulations.

Ready to streamline your compliance process? Request a demo to see how Transcend can help you meet the EU AI Act’s requirements with ease.
