Senior Content Marketing Manager II
February 4, 2025
The EU AI Act officially entered into force on August 1, 2024. Since that date, the Act's foundational rules have applied, governing the development, deployment, and regulation of AI systems with a focus on transparency and clear definitions of AI system categories.
Resource: Field Trips Ep. 1, Sitting down with Dan Nechita, Head of Cabinet for Dragos Tudorache
As of February 2, 2025, AI systems deemed to pose "unacceptable risks" are strictly prohibited. These include practices such as social scoring, AI that manipulates behavior or exploits the vulnerabilities of specific groups, and certain uses of real-time remote biometric identification in publicly accessible spaces.
Organizations must ensure their AI systems are free from these prohibited practices or face severe penalties.
Article 4 of the EU AI Act mandates that both AI providers and deployers ensure their staff and stakeholders have sufficient AI literacy, which includes understanding the opportunities, risks, and potential harms of AI. This requirement is not limited to staff but extends to all relevant actors in the AI value chain, including affected persons.
Organizations must assess the technical knowledge, experience, and training of their staff, taking into account the context of the AI system’s use. While no specific penalties are outlined for non-compliance with AI literacy, breaches of this requirement could influence penalties for other violations under the Act.
Resource: Field Trips Ep. 2, Navigating AI and EU lawmaking with Kai Zenner
By May 2, 2025, the European Commission's AI Office will release a code of practice for General-Purpose AI (GPAI) models. These models, including Large Language Models (LLMs), are defined as:
“an AI model [...] that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market…”
The code will define guidelines for the development and deployment of GPAI models and, ideally, will help standardize compliance across the EU. If the code of practice is not finalized by this date, the European Commission will step in to adopt common rules for GPAI model providers.
On August 2, 2025, governance obligations for GPAI model providers will become applicable. Though these models are generally subject to lighter obligations than high-risk systems, providers must still meet compliance requirements such as maintaining up-to-date technical documentation, sharing information with downstream providers, putting in place a policy to comply with EU copyright law, and publishing a summary of the content used for model training.
GPAI models deemed to have "systemic risks" will be subject to additional obligations, including model evaluations, adversarial testing, tracking and reporting serious incidents, and ensuring adequate cybersecurity protections.
On August 2, 2026, the EU AI Act will become generally applicable and requirements around "high-risk AI systems" will go into effect. Listed in Annex III of the Act, high-risk AI applications include those used in law enforcement, healthcare, education, critical infrastructure, and more.
At this stage, organizations deploying high-risk systems must meet obligations such as implementing risk management and data governance processes, completing conformity assessments, ensuring human oversight, and registering their systems in the EU database.
By this point, each EU member state must also have established at least one AI regulatory sandbox.
From August 2, 2027, the full scope of the EU AI Act applies to all risk categories, including those in Annex II, which covers medium-to-high-risk AI systems. At this stage, all AI systems, regardless of risk level, must comply with the Act's applicable transparency, governance, and risk management obligations.
This phase marks the full maturity of the AI regulatory framework, applying to virtually all AI systems deployed within the EU.
By 2030, AI systems integrated into large-scale IT infrastructure (e.g., the Schengen Information System and other EU security systems) will be subject to additional governance and compliance measures. These systems will need to align with EU laws governing freedom, security, and justice.
The EU AI Act is setting the bar high for AI transparency, ethics, and safety—and businesses need to move quickly to meet its requirements. Transcend enables AI companies, and any business using these tools, to drive growth and innovation while ensuring industry-leading compliance.
The EU AI Act mandates that businesses provide users with rights like data access, deletion, and opting out of training data usage. Transcend automates these data subject requests at scale, replacing clunky manual forms with seamless, real-time workflows that maintain compliance with privacy regulations and build user trust.
Learn more about Transcend DSR Automation
With generative AI’s unique challenges—like inference suppression and machine unlearning—Transcend’s next-gen consent management solution helps AI providers and businesses using these tools handle AI-specific opt-outs with ease. This ensures compliance with the Act’s evolving requirements and demonstrates your commitment to responsible data usage.
Explore Transcend Consent Management
The Act requires businesses to continuously assess AI system risks, especially for high-risk applications in sectors like healthcare and law enforcement. Transcend’s pre-built AI Risk Assessment template provides proactive risk management through streamlined, up-to-date assessments and documentation.
Transcend Pathfinder governs data flows into and out of AI systems, ensuring governance policies are applied at the code level. Meanwhile, Transcend Consent Management automates and enforces consent preferences, ensuring compliance with user consent requirements and future-proofing your AI systems as EU AI Act provisions continue to come into effect.
Learn more about Transcend's approach to AI governance
Transcend’s real-time data discovery and classification tools simplify the process of tracking and documenting data usage across your systems. This helps you maintain a single source of truth, ensuring compliance with the EU AI Act’s transparency requirements.
With key deadlines stretching from 2024 to 2030, the EU AI Act’s requirements will continue to evolve. Transcend provides flexible, scalable tools that grow with your compliance needs, ensuring you remain on track and prepared for future regulations.
Ready to streamline your compliance process? Request a demo to see how Transcend can help you meet the EU AI Act’s requirements with ease.