
What Is the EU AI Act? A Complete Guide for Businesses


The EU AI Act is the world's first comprehensive AI regulation. This guide explains what it is, who it applies to, what the risk tiers mean, and what your business needs to do before the August 2026 deadline.

Source: EuroComply Editorial (2026-04-14). Reviewed by the EuroComply Team, EU regulatory specialists, against official EUR-Lex texts.

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework governing artificial intelligence. It entered into force on August 1, 2024 and applies to any business that develops, deploys, or uses AI systems affecting people in the European Union — regardless of where that business is headquartered.

This guide covers everything you need to know: what the regulation requires, who it applies to, how AI systems are classified, what the deadlines are, and what happens if you miss them.

What Is the EU AI Act?

The EU AI Act establishes a risk-based framework for AI. Rather than treating all AI systems the same, it classifies them by the level of risk they pose to people — and attaches different compliance obligations to each tier.

The regulation applies to:

  • Providers — companies that develop and place AI systems on the EU market
  • Deployers — businesses that use AI systems in their operations (including off-the-shelf tools like ChatGPT or Copilot)
  • Importers and distributors — companies that bring AI systems from outside the EU into the EU market

If your business uses AI tools to interact with customers, process applications, screen candidates, score individuals, or make decisions that affect people — you are a deployer and the AI Act applies to you.

The Four Risk Tiers

Tier 1: Prohibited AI (Article 5)

Some AI applications are banned outright. These prohibitions have been enforceable since February 2, 2025:

  • Social scoring — systems that score people based on their social behavior or personal characteristics
  • Real-time biometric identification in publicly accessible spaces (with narrow law enforcement exceptions)
  • Subliminal manipulation — AI that exploits psychological vulnerabilities to alter behavior
  • Emotion recognition in workplace and educational settings
  • Biometric categorization to infer sensitive attributes (race, political opinion, religion, sexual orientation)
  • Predictive policing for individuals based on profiling
  • Untargeted scraping of facial images from the internet or CCTV footage

Deploying a prohibited AI system carries fines of up to €35M or 7% of global turnover.

Tier 2: High-Risk AI (Annex III)

High-risk AI systems face the most stringent requirements. A system is high-risk if it operates in one of eight regulated sectors and makes or materially influences decisions about natural persons:

| Sector | Examples |
|--------|----------|
| Biometrics | Remote identification, emotion recognition |
| Critical infrastructure | Safety components in water, energy, transport |
| Education | Admissions, grading, proctoring |
| Employment | CV screening, performance evaluation, termination |
| Essential services | Credit scoring, insurance risk, social benefits |
| Law enforcement | Risk assessment, evidence evaluation, profiling |
| Migration & asylum | Risk assessment, document verification |
| Democratic processes | Election influence, voter targeting |

Compliance requirements for high-risk systems include:

  • A risk management system maintained throughout the lifecycle
  • Data governance procedures for training and validation data
  • Technical documentation (Annex IV)
  • Automatic logging of system activity
  • Transparency — users must be told they are interacting with or affected by a high-risk AI
  • Human oversight — a human must be able to intervene, override, or stop the system
  • Accuracy, robustness, and cybersecurity testing
  • A conformity assessment before deployment (or third-party audit for Annex I products)

The deadline for high-risk AI compliance is August 2, 2026.

Tier 3: Limited-Risk AI (Article 50)

Limited-risk systems have transparency obligations only:

  • Chatbots and conversational AI must inform users they are interacting with a machine
  • AI-generated content (images, audio, video, text) must be clearly labeled as synthetic
  • Deepfakes must be disclosed as artificially created or manipulated
  • Emotion recognition systems must notify the people being analyzed

Most commercial AI tools — customer support bots, content generators, recommendation engines — fall here.
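For a deployer, the chatbot disclosure in particular can be a one-line change. A minimal illustrative sketch — the message wording and function names are assumptions, since Article 50 requires disclosure but does not prescribe any specific text:

```python
# Illustrative only: Article 50 requires that users be informed,
# not that any particular wording be used.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

def open_chat_session(greeting: str) -> list[str]:
    """Start a chat transcript with the AI disclosure shown first,
    so the user is informed before any AI-generated output."""
    return [AI_DISCLOSURE, greeting]

print(open_chat_session("Hi! How can I help today?"))
```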

Tier 4: Minimal-Risk AI

All other AI systems. No mandatory obligations beyond the general AI literacy requirement. Voluntary codes of conduct apply. This covers the majority of AI use cases: spam filters, AI-assisted search, recommendation engines, fraud detection (non-financial-sector), and similar tools.

The Timeline

| Date | What Happened or Happens |
|------|--------------------------|
| August 1, 2024 | Regulation entered into force |
| February 2, 2025 | Prohibited practices and AI literacy (Articles 4 and 5) — already in force. Eight categories of AI are banned, and all businesses deploying AI must ensure staff have sufficient AI literacy |
| August 2, 2025 | GPAI obligations — already in force. General-purpose AI providers face new documentation requirements |
| August 2, 2026 | High-risk AI systems deadline. Full compliance required for all Annex III systems |
| August 2, 2027 | Full enforcement for AI embedded in products covered by EU harmonized legislation (medical devices, machinery, toys) |

What Is AI Literacy (Article 4)?

Since February 2, 2025, any organization that provides or deploys AI systems must ensure its staff have sufficient AI literacy — defined as the skills, knowledge, and understanding to make informed use of AI systems.

This applies to employees who use AI in their day-to-day work. It does not require certification but does require documented training. Key areas include:

  • Understanding what AI systems do and how they make decisions
  • Knowing the limitations and risks of the AI tools used
  • Recognizing when AI output requires human review
  • Understanding data protection obligations when using AI

The EU has not mandated a specific format, so structured onboarding, e-learning modules, or documented briefings all satisfy the requirement.

What Are the Fines?

| Violation | Maximum Fine |
|-----------|--------------|
| Prohibited AI practices | €35M or 7% of global annual turnover |
| High-risk AI violations | €15M or 3% of global annual turnover |
| Providing false information | €7.5M or 1.5% of global annual turnover |

Fines are applied at whichever figure is higher; for SMEs and start-ups, Article 99 caps the fine at whichever figure is lower. For a €10M-turnover SME, the prohibited-practices ceiling is therefore 7% of turnover, i.e. €700K — still a material risk.
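The higher/lower rule can be made concrete with a few lines of arithmetic. A sketch under the reading of Article 99 described above (the function name and percent-based signature are illustrative):

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float,
                 is_sme: bool = False) -> float:
    """Maximum administrative fine: the higher of a fixed cap and pct% of
    global annual turnover; for SMEs and start-ups, the lower of the two."""
    pct_amount = turnover_eur * pct / 100
    if is_sme:
        return min(fixed_cap_eur, pct_amount)
    return max(fixed_cap_eur, pct_amount)

# Prohibited-practice tier (EUR 35M / 7%) for a EUR 10M-turnover SME:
print(fine_ceiling(10_000_000, 35_000_000, 7, is_sme=True))  # 700000.0
```

For a large enterprise the percentage figure dominates instead: at €1B turnover, 7% is €70M, well above the €35M fixed cap.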

Member states are establishing national AI authorities responsible for enforcement. In Germany, this role falls to the Bundesnetzagentur. France is establishing its own authority. Coordinated EU-level enforcement is handled by the European AI Office.

What Does This Mean for SMEs?

Most SMEs are deployers, not providers. They buy and use AI tools rather than building them. Deployer obligations are narrower than provider obligations, but they are real:

  1. AI literacy — already required
  2. Fundamental rights impact assessment — for high-risk systems in public bodies or regulated sectors
  3. Human oversight — for any high-risk AI in use
  4. Transparency to users — for limited-risk AI
  5. Incident reporting — serious incidents involving high-risk AI must be reported to national authorities

The practical first step for any SME is an AI system inventory: list every AI tool in use, classify it under the Act, and identify which (if any) is high-risk. Most SMEs will find their tools fall into the limited or minimal tiers — but this needs to be confirmed, not assumed.
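An inventory can start as nothing more than a structured list with one classification per tool. A minimal sketch — the tool names, fields, and example classifications are illustrative, not taken from the Act:

```python
from dataclasses import dataclass

# The Act's four risk tiers, used here as classification labels.
TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystem:
    name: str     # tool name
    purpose: str  # what the business uses it for
    role: str     # "provider" or "deployer"
    tier: str     # one of TIERS, per Article 5 / Annex III / Article 50

    def __post_init__(self):
        if self.tier not in TIERS:
            raise ValueError(f"unknown risk tier: {self.tier}")

# Illustrative entries; real classification requires checking Annex III.
inventory = [
    AISystem("support chatbot", "customer service", "deployer", "limited"),
    AISystem("CV screening tool", "recruitment", "deployer", "high"),
]

high_risk = [s.name for s in inventory if s.tier == "high"]
print(high_risk)
```

Even this simple structure forces the question that matters most: which tier does each tool actually fall into, and can you defend that answer.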

How to Get Started

  1. Inventory your AI tools — list every AI system your business uses or deploys
  2. Classify each one — use the Annex III list to check for high-risk applicability
  3. Train your staff — satisfy the Article 4 AI literacy requirement
  4. Document your high-risk systems — if any apply, start building Annex IV documentation
  5. Set a compliance roadmap — August 2, 2026 is the high-risk deadline; start now

The AI Act is not designed to prevent businesses from using AI. It is designed to ensure that AI affecting people's lives is used responsibly, transparently, and with human oversight. For most SMEs, compliance is achievable — it requires documentation and process, not a compliance team.

Summary

The EU AI Act is the most significant technology regulation since GDPR. It applies to every business using AI in or affecting the EU. Risk classification is the first step: identify whether your AI systems are prohibited, high-risk, limited-risk, or minimal-risk, then build your compliance program accordingly. The high-risk deadline is August 2, 2026 — under four months from the date of publication of this guide.


Last updated: April 2026. For informational purposes only — not legal advice. Consult qualified legal counsel for compliance decisions.


EuroComply Editorial Team

EU regulatory compliance specialists covering the AI Act, GDPR, NIS2, and related legislation. Content reviewed against official EU regulation texts and enforcement guidance.

