EuroComply

EU AI Act

The EU AI Act classifies AI systems by risk level and imposes obligations on providers and deployers. High-risk systems face mandatory conformity assessments, documentation, and human oversight requirements.

What does the AI Act require, and when does it apply?

The AI Act applies to organisations across all EU member states, with technology, healthcare and financial services among the sectors most affected. The key deadline is August 2, 2026 (high-risk systems). Non-compliance carries a maximum penalty of €35M or 7% of global turnover. Core obligations include classifying AI systems by risk tier and implementing risk management systems.

  • Classify AI systems by risk tier
  • Implement risk management systems
  • Ensure transparency and human oversight
  • Register high-risk systems in EU database
  • Conduct fundamental rights impact assessments
Deadline: August 2, 2026 (high-risk systems)
Max fine: €35M or 7% of global turnover
Primary sectors: Technology, Healthcare, Financial Services
Source: Official Journal of the EU — EU AI Act

The EU AI Act applies to SMEs that provide or deploy AI systems affecting people in the EU. Most SMEs start as deployers: they must inventory AI use, train staff, classify risk, keep evidence, and meet high-risk obligations where Annex III applies.

2026-08-02: High-risk AI obligations

Most Annex III high-risk AI obligations apply, including documentation, oversight, logs and risk management.

Source: Regulation (EU) 2024/1689, Article 113

AI Act SME action checklist

Build an AI system inventory

List every internal and customer-facing AI tool, owner, vendor, purpose, data categories, user group and deployment status.

Articles 3, 4, 26
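The inventory fields above can be kept in machine-readable form from day one. The sketch below is illustrative only: the field names and the example record are our own invention, not a format the AI Act prescribes.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an SME's AI system inventory (illustrative fields)."""
    name: str                   # internal or customer-facing tool
    owner: str                  # accountable human owner
    vendor: str                 # provider of the system
    purpose: str                # intended use as actually deployed
    data_categories: list[str]  # e.g. ["applicant CVs", "contact details"]
    user_group: str             # who operates it internally
    status: str                 # "pilot", "production" or "retired"

# Hypothetical example record
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        owner="Head of HR",
        vendor="ExampleVendor GmbH",
        purpose="Rank incoming job applications",
        data_categories=["applicant CVs"],
        user_group="HR team",
        status="pilot",
    ),
]
```

A spreadsheet works just as well; the point is that every system has an owner, a purpose and a status on record before classification starts.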

Classify each use case by risk tier

Separate prohibited, high-risk, limited-risk and minimal-risk use. Pay special attention to Annex III areas such as employment, education, credit, health and essential services.

Articles 5, 6, 50 and Annex III
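A first-pass triage across the four tiers can be expressed as a simple lookup. The area lists below are abbreviated illustrations, not the full Article 5, Annex III or Article 50 text, and a real classification decision needs legal review.

```python
# Abbreviated, illustrative area lists -- not the full legal text.
PROHIBITED = {"social scoring", "subliminal manipulation"}           # Art. 5
HIGH_RISK_AREAS = {"employment", "education", "credit", "health",
                   "essential services"}                             # Annex III
LIMITED_RISK = {"chatbot", "content generation"}                     # Art. 50

def risk_tier(area: str) -> str:
    """Map a use-case area to a first-pass AI Act risk tier."""
    if area in PROHIBITED:
        return "prohibited"
    if area in HIGH_RISK_AREAS:
        return "high-risk"
    if area in LIMITED_RISK:
        return "limited-risk"
    return "minimal-risk"
```

Anything that lands in the prohibited or high-risk buckets should go straight to counsel; the lookup only tells you where to spend review effort first.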

Document deployer responsibilities

Assign a human owner, define intended use, keep logs where available, follow provider instructions and record monitoring decisions.

Article 26

Train staff and keep evidence

Provide AI literacy training to staff who procure, use, supervise or govern AI tools. Retain completion records and training content.

Article 4

Request vendor documentation

Collect provider instructions, risk classification, data information, transparency notices, security controls and incident handling commitments.

Articles 13, 15, 16, 26
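One way to track the vendor paperwork listed above is a per-vendor gap check. The item labels below are our own shorthand for the documents named in the checklist, not official terms.

```python
# Shorthand labels for the documents to request from each provider.
VENDOR_DOC_CHECKLIST = [
    "instructions for use",           # Art. 13
    "risk classification",
    "data information",
    "transparency notices",
    "security controls",              # Art. 15
    "incident handling commitments",
]

def missing_docs(received: set[str]) -> list[str]:
    """Return checklist items not yet received from a vendor."""
    return [item for item in VENDOR_DOC_CHECKLIST if item not in received]
```

Running this against what each vendor has actually supplied gives a concrete follow-up list for procurement.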

Prepare high-risk evidence

For Annex III systems, document human oversight, accuracy monitoring, data governance, incident escalation and fundamental-rights impact assessment triggers.

Articles 9-15, 26, 27, 73

EU AI Act application timeline

Updated 2026-05-12: The AI Act originally set many obligations to apply from 2 August 2026. Some high-risk obligations may be subject to amendment. See per-obligation status below.
Status legend: In force · Adopted · Political agreement · Expected · Subject to formal adoption

AI Act enters into force

Governance · In force · Art. 113

Regulation (EU) 2024/1689 entered into force on the twentieth day following its publication in the Official Journal on 12 July 2024. The Regulation applies in phases over subsequent years pursuant to Article 113.

Applies to: All providers, deployers, importers and distributors of AI systems and GPAI models in the EU market

Prohibited AI practices ban applies

Prohibited practices · In force · Art. 5, Art. 113(a)

Article 5 prohibitions on unacceptable-risk AI systems become enforceable: subliminal manipulation, exploitation of vulnerabilities, social scoring, real-time remote biometric identification in public spaces (with limited exceptions), emotion recognition in the workplace and education, AI-based profiling to predict criminal offences, and untargeted facial image scraping.

Applies to: All providers and deployers of AI systems in the EU

Note: Already in force since 2 February 2025. Verified against Article 113, point (a): six months after entry into force.

AI literacy obligation applies

AI literacy · In force · Art. 4, Art. 113(a)

Article 4 requires providers and deployers to take measures to ensure a sufficient level of AI literacy for their staff and persons operating AI systems on their behalf. This obligation became binding on 2 February 2025 alongside the prohibited practices prohibition.

Applies to: All providers and deployers of AI systems in the EU

Note: Already in force since 2 February 2025. Verified against Article 113, point (a): six months after entry into force.

GPAI model obligations apply

GPAI · In force · Arts. 53, 55, 56 and others

Chapter V provisions for providers of general-purpose AI models become applicable: technical documentation (Annex XI/XII), transparency information, copyright summary, and — for systemic-risk models — adversarial testing, incident notification, and cybersecurity measures. Codes of practice for GPAI model providers must also be finalised under Article 56.

Applies to: Providers of general-purpose AI models made available in the EU; providers of GPAI models with systemic risk

Note: Verified against Article 113, point (b): Chapter V obligations apply from 2 August 2025, twelve months after entry into force.

High-risk AI obligations — Annex III systems

High-risk AI · Expected · Arts. 6, 9, 10 and others

Full obligations for high-risk AI systems listed in Annex III become applicable: risk management system (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency (Art. 13), human oversight (Art. 14), accuracy and robustness (Art. 15), quality management (Art. 17), conformity assessment (Art. 43), EU database registration (Art. 71), and post-market monitoring (Art. 72). Annex III categories include biometrics, critical infrastructure, employment/HR tools, education/vocational training, essential private and public services, law enforcement, migration and asylum, and administration of justice.

Applies to: Providers and deployers of high-risk AI systems listed in Annex III (biometrics, critical infrastructure, employment, education, essential services, law enforcement, migration, justice)

Note: Verification needed. The original date (2 August 2026) is the general application date set by Article 113, 24 months after entry into force. The European Commission's Omnibus simplification package (COM(2025)87, February 2025) proposed amendments. A political agreement on certain Omnibus elements was reported but, as of 2026-05-12, formal adoption of amendments to Regulation (EU) 2024/1689 that would alter this date has not been confirmed in the Official Journal. This milestone remains at 2026-08-02 until a revised regulation is published. Human review required before asserting postponement.

High-risk AI obligations — Annex I product-safety AI

High-risk AI · Expected · Art. 6(1), Annex I, Annex III and others

AI systems that are safety components of products covered by the Union harmonisation legislation listed in Annex I (e.g. machinery, medical devices, automotive) must comply. These systems must also pass conformity assessment under the applicable sectoral legislation.

Applies to: Providers of AI systems that are safety components of products under Annex I sectoral legislation (machinery, medical devices, lifts, radio equipment, etc.)

Note: Same caveats as Annex III milestone. Verification needed for any Commission-proposed postponement prior to asserting a revised date.

GPAI models already on market before Aug 2025 must comply

GPAI · Expected · Art. 111(3), Chap. V

General-purpose AI models that were placed on the market before 2 August 2025 must comply with the Chapter V GPAI obligations by 2 August 2027. This is the transitional grace period for legacy GPAI models.

Applies to: Providers of general-purpose AI models that were on the EU market before 2 August 2025 and have not yet complied with Chapter V obligations

Note: Verified against Article 111(3): legacy GPAI models must comply by 2 August 2027, 36 months after entry into force.

Source: Regulation (EU) 2024/1689, Article 113 · Last checked: 2026-05-12
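The phased dates above can be encoded for internal deadline tracking. The dates are taken straight from the timeline; the helper below is an illustrative sketch, not legal logic, and deliberately ignores any pending Omnibus amendments.

```python
from datetime import date

# Application dates per Regulation (EU) 2024/1689, Articles 111 and 113
MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibited practices (Art. 5) and AI literacy (Art. 4)"),
    (date(2025, 8, 2), "GPAI model obligations (Chapter V)"),
    (date(2026, 8, 2), "High-risk obligations (Annex III and Annex I)"),
    (date(2027, 8, 2), "Legacy GPAI models and Art. 6(1) obligations"),
]

def upcoming(today: date) -> list[tuple[date, str]]:
    """Milestones that have not yet started to apply as of `today`."""
    return [(d, label) for d, label in MILESTONES if d > today]
```

As of the article's last-checked date (2026-05-12), the helper would flag the 2026 and 2027 milestones as still ahead.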



Does AI Act apply to your business?

Find out in 2 minutes with our free regulation checker.

Check now — free


For informational purposes only. This is not legal advice — consult qualified legal counsel.
