
Anaelle Guez
Co-founder & CEO, Compliance

EU AI Act Compliance Guide 2026: What You Need to Know Now
With high-risk AI system requirements taking effect in August 2026, compliance teams have months, not years, to prepare. Here is everything you need to know.
The AI Act timeline: where we are
The EU AI Act entered into force on August 1, 2024. Its phased implementation has already delivered two major milestones: prohibited AI practices were banned in February 2025, and general-purpose AI (GPAI) transparency obligations took effect in August 2025. The next, and most consequential, deadline lands on August 2, 2026: full obligations for high-risk AI systems.
February 2, 2025: Unacceptable-risk AI practices banned
August 2, 2025: GPAI transparency obligations
August 2, 2026: High-risk AI system requirements
August 2, 2027: Obligations for high-risk AI embedded in regulated products
Risk classification: the foundation
The AI Act defines four risk tiers. Your compliance obligations depend entirely on where your AI systems fall. Unacceptable risk systems are outright banned. High-risk systems face the heaviest requirements: conformity assessments, risk management systems, data governance, human oversight, and registration in the EU database.
Unacceptable risk
Banned outright. Social scoring, manipulative AI, real-time remote biometric ID in public spaces.
High risk
Full conformity assessment required. AI in hiring, credit, critical infrastructure, education, law enforcement.
Limited risk
Transparency obligations only. Chatbots must disclose AI nature. Deepfakes must be labeled.
Minimal risk
No specific obligations. AI in games, spam filters, inventory management.
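The tier logic above can be sketched as a simple triage helper. This is an illustrative Python sketch only: the keyword buckets and function names are our own, and real classification requires legal analysis against Article 5 and Annex III of the Act, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # full conformity assessment required
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific obligations

# Illustrative use-case buckets, loosely mirroring the examples above.
BANNED_USES = {"social scoring", "manipulative ai", "real-time biometric id"}
HIGH_RISK_USES = {"hiring", "credit scoring", "critical infrastructure",
                  "education", "law enforcement"}
TRANSPARENCY_USES = {"chatbot", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Map a use-case label to a risk tier (simplified triage, not legal advice)."""
    use_case = use_case.lower()
    if use_case in BANNED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("hiring").value)       # high
print(classify("spam filter").value)  # minimal
```

Even a rough first-pass triage like this is useful for prioritizing which systems in your inventory need a full legal review first.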
What high-risk compliance requires
For high-risk AI systems, organizations must:
Implement a risk management system throughout the AI lifecycle
Establish data governance practices
Maintain technical documentation for conformity assessment
Build logging and monitoring capabilities
Ensure human oversight mechanisms
Achieve appropriate levels of accuracy, robustness, and cybersecurity
Penalties for non-compliance are severe: up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices, and up to €15 million or 3% for other violations. The EU AI Office is building enforcement infrastructure now.
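To make the exposure concrete, here is a small worked example of the fine ceilings, assuming the "fixed cap or percentage of turnover, whichever is higher" structure described above. The function name and inputs are illustrative.

```python
def max_fine(turnover_eur: float, prohibited: bool) -> float:
    """Upper bound on an AI Act fine: the fixed cap or the
    turnover percentage, whichever is higher."""
    cap, pct = (35_000_000, 0.07) if prohibited else (15_000_000, 0.03)
    return max(cap, pct * turnover_eur)

# A company with €1bn global annual turnover:
print(max_fine(1_000_000_000, prohibited=True))   # 70000000.0 (7% > €35M cap)
print(max_fine(1_000_000_000, prohibited=False))  # 30000000.0 (3% > €15M cap)
```

Note that for any company with more than €500 million in turnover, the percentage, not the fixed cap, sets the ceiling for prohibited practices.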
How to prepare now
Inventory all AI systems deployed or developed by your organization
Classify each system according to the AI Act risk categories
Gap-assess current documentation and governance against requirements
Implement conformity assessment processes for high-risk systems
Use regulatory intelligence tools to track evolving guidance from the EU AI Office
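The inventory and gap-assessment steps above can be sketched as a minimal data structure and report. This is a hypothetical sketch: the record fields and the simplified list of required artifacts are our own shorthand, not an official template.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One row in an AI system inventory (illustrative fields)."""
    name: str
    risk_tier: str                  # "unacceptable" | "high" | "limited" | "minimal"
    docs: set = field(default_factory=set)

# Artifacts a high-risk system needs before August 2026 (simplified shorthand).
HIGH_RISK_DOCS = {"risk management", "data governance", "technical documentation",
                  "logging", "human oversight"}

def gap_report(inventory):
    """Return the missing compliance artifacts for each high-risk system."""
    return {s.name: HIGH_RISK_DOCS - s.docs
            for s in inventory if s.risk_tier == "high"}

inventory = [
    AISystem("resume-screener", "high", {"logging", "human oversight"}),
    AISystem("support-chatbot", "limited"),
]
print(gap_report(inventory))  # only the high-risk system appears, with its gaps
```

Running the report regularly, as systems and guidance evolve, turns the one-off checklist above into an ongoing compliance process.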
Frequently asked questions
When does the EU AI Act take effect?
The EU AI Act entered into force on August 1, 2024, with a phased implementation: prohibited AI practices banned since February 2025, GPAI transparency requirements since August 2025, and high-risk AI system obligations taking effect on August 2, 2026. Penalties for non-compliance can reach €35 million or 7% of global turnover.
How are AI systems classified under the AI Act?
The AI Act defines four risk categories: (1) Unacceptable risk, banned (social scoring, real-time biometric identification), (2) High risk, subject to conformity assessment (AI in critical infrastructure, employment, education, law enforcement), (3) Limited risk, transparency obligations (chatbots, deepfakes), (4) Minimal risk, no obligations. Many enterprise AI systems fall into the high-risk or limited-risk categories.
Try Cleo: free regulatory risk scan
See your regulatory landscape mapped in minutes. No signup, no credit card.