Cleo

AI-powered regulatory intelligence.

contact@cleolabs.co


Β© 2026 Cleo Labs. All rights reserved.


EU AI Act Compliance Guide 2026

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024, and its obligations apply in stages through August 2027. This guide covers everything compliance teams need to know to prepare.

What is the EU AI Act?

The EU AI Act is a risk-based regulatory framework that classifies AI systems into four categories and imposes obligations proportional to the level of risk they pose. It applies to providers, deployers, importers, and distributors of AI systems operating in the EU market, regardless of where the organization is headquartered.

The regulation was adopted by the European Parliament on March 13, 2024 and published in the Official Journal on July 12, 2024. It is enforced by national competent authorities in each EU member state, coordinated by the newly established EU AI Office.

Risk classification system

The AI Act defines four risk levels, each with different regulatory requirements:

Unacceptable risk

AI systems that are outright prohibited: social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), manipulative techniques that exploit the vulnerabilities of specific groups, and emotion recognition in workplaces and educational institutions.

High risk

AI systems used in critical areas: biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential services (credit scoring, insurance), law enforcement, migration and border control, and administration of justice. These require conformity assessments, risk management systems, data governance, transparency, human oversight, and registration in the EU database.

Limited risk

AI systems that interact with people (chatbots, deepfake generators, emotion recognition). Subject to transparency obligations: users must be informed they are interacting with AI.

Minimal risk

All other AI systems (spam filters, AI in video games, inventory management). No specific regulatory requirements, but voluntary codes of conduct are encouraged.
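The four tiers above can be sketched as a simple triage lookup. The use-case lists below are illustrative placeholders only, not a legal determination — real classification requires analysis against Annexes I–III of the regulation:

```python
# Illustrative sketch of the AI Act's four-tier triage logic.
# Use-case labels are simplified examples, not legal categories.
RISK_TIERS = {
    "unacceptable": {"social scoring", "workplace emotion recognition"},
    "high": {"credit scoring", "cv screening", "border control"},
    "limited": {"chatbot", "deepfake generator"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to minimal risk."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"
```

The default branch mirrors the regulation's structure: anything not captured by the three named tiers falls into minimal risk with no specific requirements.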

Key obligations for high-risk AI systems

  • β†’Risk management system: continuous identification and mitigation of risks throughout the AI system lifecycle
  • β†’Data governance: training, validation, and testing datasets must meet quality criteria (relevance, representativeness, completeness)
  • β†’Technical documentation: detailed documentation of the system's design, development, and intended purpose
  • β†’Record-keeping: automatic logging of events during AI system operation for traceability
  • β†’Transparency: clear instructions of use for deployers, including system capabilities and limitations
  • β†’Human oversight: measures enabling human intervention, including the ability to override or stop the AI system
  • β†’Accuracy, robustness, and cybersecurity: appropriate levels of performance and protection against adversarial attacks
  • β†’Conformity assessment: self-assessment or third-party assessment depending on the use case
  • β†’EU database registration: mandatory registration in the EU AI Act public database before market placement

Compliance timeline

August 1, 2024

AI Act enters into force

February 2, 2025

Prohibitions on unacceptable-risk AI systems apply

August 2, 2025

Rules for general-purpose AI (GPAI) models apply; governance structures must be established

August 2, 2026

Full application of high-risk AI system requirements (Annex III)

August 2, 2027

Requirements for high-risk AI embedded in regulated products (Annex I) apply
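The phased dates above lend themselves to a small helper that reports which milestones already apply on a given reference date. This is a date-handling sketch only; it carries no legal nuance:

```python
from datetime import date

# The AI Act's phased application dates, as data.
MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibitions on unacceptable-risk systems",
    date(2025, 8, 2): "GPAI model rules and governance structures",
    date(2026, 8, 2): "high-risk requirements (Annex III)",
    date(2027, 8, 2): "high-risk AI in regulated products (Annex I)",
}

def applicable(on: date) -> list:
    """Milestones already in application on the given date, in order."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= on]

# e.g. on March 1, 2025, the first two milestones apply
assert len(applicable(date(2025, 3, 1))) == 2
```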

Penalties for non-compliance

The AI Act establishes a tiered penalty structure: up to €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited AI practices; up to €15 million or 3% for non-compliance with other obligations, including high-risk requirements; and up to €7.5 million or 1% for supplying incorrect or misleading information to authorities. For SMEs and startups, the lower of the two amounts in each tier applies.
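As a rough sketch of how each tier combines a fixed cap with a turnover percentage (the `max_fine` helper is illustrative; actual fines are set case by case by national authorities):

```python
# Tiered penalty caps: (fixed amount in EUR, share of global turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, turnover_eur: float, sme: bool = False) -> float:
    """Maximum fine for a tier: the higher of the two caps,
    or the lower of the two for SMEs and startups."""
    fixed, pct = TIERS[tier]
    amounts = (fixed, pct * turnover_eur)
    return min(amounts) if sme else max(amounts)

# A firm with 1bn EUR turnover violating a prohibition: 7% = 70m EUR
assert max_fine("prohibited_practice", 1_000_000_000) == 70_000_000
```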

How Cleo helps with AI Act compliance

Cleo Labs monitors the EU AI Act and all implementing regulations in real time. The platform automatically identifies which of your AI systems fall under high-risk classification, maps them to specific obligations (Articles 9-15), tracks enforcement deadlines across EU member states, and alerts your compliance team when new guidance is published by the EU AI Office or national supervisors.

Cleo scans 3,500+ regulatory sources across 60+ jurisdictions, ensuring you never miss a regulatory development. Each signal is risk-scored (0-100) with full source traceability for audit-ready documentation.

Map your AI Act obligations automatically

Enter your company domain and Cleo identifies applicable AI Act requirements in minutes.

Scan your company
or request a demo