EU AI Act
The EU AI Act classifies AI systems by risk level and imposes obligations on providers and deployers. High-risk systems face mandatory conformity assessments, documentation, and human oversight requirements.
What does the AI Act require, and when does it apply?
The AI Act applies to organisations in every EU member state that provide or deploy AI systems; Technology, Healthcare and Financial Services are among the most affected sectors. The key deadline is 2 August 2026, when obligations for high-risk systems begin to apply. Non-compliance carries a maximum penalty of €35M or 7% of global annual turnover, whichever is higher. Core obligations include classifying AI systems by risk tier and implementing risk management systems.
- Classify AI systems by risk tier
- Implement risk management systems
- Ensure transparency and human oversight
- Register high-risk systems in EU database
- Conduct fundamental rights impact assessments
| Deadline | 2 August 2026 (high-risk systems) |
| Max fine | €35M or 7% of global turnover |
| Primary sectors | Technology, Healthcare, Financial Services |
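The penalty ceiling above is a "whichever is higher" rule: €35M or 7% of total worldwide annual turnover. A minimal sketch of that arithmetic (the function name is ours, not from the Act):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound of a fine for the most serious infringements:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A company with EUR 1bn turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For any turnover below €500M, the €35M floor dominates; above it, the 7% figure does.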
Source: Official Journal of the European Union — EU AI Act
The EU AI Act applies to SMEs that provide or deploy AI systems affecting people in the EU. Most SMEs start as deployers: they must inventory AI use, train staff, classify risk, keep evidence, and meet high-risk obligations where Annex III applies.
Where a use case falls within Annex III, most high-risk obligations apply, including technical documentation, human oversight, logging and risk management.
AI Act SME action checklist
List every internal and customer-facing AI tool, with owner, vendor, purpose, data categories, user group and deployment status.
Articles 3, 4, 26
Separate prohibited, high-risk, limited-risk and minimal-risk use. Pay special attention to Annex III areas such as employment, education, credit, health and essential services.
Articles 5, 6, 50 and Annex III
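The triage step above can be sketched as a first-pass sorting rule. This is illustrative only: real classification requires legal analysis of Articles 5 and 6 and Annex III, not a keyword lookup, and the area names below are simplified labels of ours.

```python
# Simplified labels for the Annex III areas named in the checklist.
ANNEX_III_AREAS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

def risk_tier(use_area: str, prohibited: bool = False) -> str:
    """Rough first-pass triage of an AI use case into the Act's tiers."""
    if prohibited:          # Article 5 practices (e.g. social scoring)
        return "prohibited"
    if use_area.lower() in ANNEX_III_AREAS:
        return "high-risk"  # Annex III: full high-risk obligations
    return "limited/minimal-risk"  # transparency duties may still apply

print(risk_tier("employment"))  # high-risk
```

Anything flagged high-risk here still needs a proper Article 6 assessment before obligations are assigned.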
Assign a human owner, define intended use, keep logs where available, follow provider instructions and record monitoring decisions.
Article 26
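Recording monitoring decisions, as the step above requires, works best as a structured log. A minimal sketch of one entry; the field names are ours, not prescribed by the Act:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OversightLogEntry:
    """One recorded monitoring decision for a deployed AI system."""
    system: str
    owner: str            # the accountable human owner
    decision: str         # e.g. "output overridden", "use suspended"
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = OversightLogEntry(
    system="cv-screening-tool", owner="hr-lead",
    decision="output overridden",
    rationale="ranking contradicted manual review")
print(entry.system, entry.decision)
```

Keeping entries like this in an append-only store gives you the evidence trail the later checklist steps ask for.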
Provide AI literacy training to staff who procure, use, supervise or govern AI tools. Retain completion records and training content.
Article 4
Collect provider instructions, risk classification, data information, transparency notices, security controls and incident handling commitments.
Articles 13, 15, 16, 26
For Annex III systems, document human oversight, accuracy monitoring, data governance, incident escalation and fundamental-rights impact assessment triggers.
Articles 9-15, 26, 27, 73
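The inventory in step one of the checklist can be kept as structured records. A sketch under our own field names (they mirror the checklist, not any official template):

```python
from dataclasses import dataclass

@dataclass
class AIInventoryItem:
    """One row of the SME AI inventory described in the checklist."""
    name: str
    owner: str
    vendor: str
    purpose: str
    data_categories: list
    user_group: str
    deployment_status: str   # e.g. "pilot", "production", "retired"
    risk_tier: str = "unclassified"

inventory = [
    AIInventoryItem("support-chatbot", "cs-lead", "AcmeAI",
                    "customer support", ["contact data"], "customers",
                    "production", "limited/minimal-risk"),
]
# Pull out the systems that trigger the Annex III obligations above.
high_risk = [i.name for i in inventory if i.risk_tier == "high-risk"]
print(high_risk)  # []
```

Filtering on `risk_tier` like this is how the inventory feeds the later steps: only the high-risk subset needs the Articles 9-15 documentation work.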
EU AI Act application timeline
AI Act enters into force
Regulation (EU) 2024/1689 entered into force on 1 August 2024, the twentieth day following its publication in the Official Journal on 12 July 2024. The Regulation applies in phases over the subsequent years pursuant to Article 113.
Applies to: All providers, deployers, importers and distributors of AI systems and GPAI models in the EU market
Prohibited AI practices ban applies
Article 5 prohibitions on unacceptable-risk AI practices become enforceable: subliminal manipulation, exploitation of vulnerabilities, social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), emotion recognition in the workplace and education, predicting criminal offences based solely on profiling, and untargeted scraping of facial images.
Applies to: All providers and deployers of AI systems in the EU
Note: In force since 2 February 2025, six months after entry into force, as set by Article 113, point (a), which covers Chapters I and II.
AI literacy obligation applies
Article 4 requires providers and deployers to take measures to ensure a sufficient level of AI literacy for their staff and persons operating AI systems on their behalf. This obligation became binding on 2 February 2025 alongside the prohibited practices prohibition.
Applies to: All providers and deployers of AI systems in the EU
Note: In force since 2 February 2025, six months after entry into force, as set by Article 113, point (a), which covers Chapters I and II.
GPAI model obligations apply
Chapter V provisions for providers of general-purpose AI models become applicable: technical documentation (Annexes XI/XII), transparency information for downstream providers, a copyright policy with a public summary of training content, and, for systemic-risk models, adversarial testing, serious-incident notification, and cybersecurity measures. Codes of practice for GPAI model providers must also be finalised under Article 56.
Applies to: Providers of general-purpose AI models made available in the EU; providers of GPAI models with systemic risk
Note: Applies from 2 August 2025, twelve months after entry into force (1 August 2024 + 12 months), as set by Article 113, point (b), which covers Chapter V.
High-risk AI obligations — Annex III systems
Full obligations for high-risk AI systems listed in Annex III become applicable: risk management system (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency (Art. 13), human oversight (Art. 14), accuracy and robustness (Art. 15), quality management (Art. 17), conformity assessment (Art. 43), EU database registration (Art. 71), and post-market monitoring (Art. 72). Annex III categories include biometrics, critical infrastructure, employment/HR tools, education/vocational training, essential private and public services, law enforcement, migration and asylum, and administration of justice.
Applies to: Providers and deployers of high-risk AI systems listed in Annex III (biometrics, critical infrastructure, employment, education, essential services, law enforcement, migration, justice)
Note: The date of 2 August 2026 is the general application date set by Article 113, 24 months after entry into force. The European Commission's Omnibus simplification package (COM(2025)87, February 2025) proposed amendments, and a political agreement on certain Omnibus elements has been reported, but as of 2026-05-12 no amendment to Regulation (EU) 2024/1689 altering this date has been published in the Official Journal. The milestone therefore remains 2 August 2026 until a revised regulation is published.
High-risk AI obligations — Annex I product-safety AI
AI systems that are safety components of products covered by the Union harmonisation legislation listed in Annex I (e.g. machinery, medical devices, automotive) must comply. These systems must also pass conformity assessment under the applicable sectoral legislation.
Applies to: Providers of AI systems that are safety components of products under Annex I sectoral legislation (machinery, medical devices, lifts, radio equipment, etc.)
Note: Under Article 113, point (c), Article 6(1) and the corresponding obligations apply from 2 August 2027, one year after the Annex III milestone. The same caveat about proposed Commission amendments applies; verify any postponement before asserting a revised date.
GPAI models already on market before Aug 2025 must comply
General-purpose AI models that were placed on the market before 2 August 2025 must comply with the Chapter V GPAI obligations by this date. This is the transitional grace period for legacy GPAI models.
Applies to: Providers of general-purpose AI models that were on the EU market before 2 August 2025 and have not yet complied with Chapter V obligations
Note: Under Article 111(3), general-purpose AI models placed on the market before 2 August 2025 must comply with the Chapter V obligations by 2 August 2027, 36 months after entry into force.
Source: Regulation (EU) 2024/1689, Article 113 · Last checked: 2026-05-12
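The phased timeline above can be held as a small lookup table. A sketch with abbreviated descriptions of ours, keyed by the Article 113 dates (plus the Article 111(3) legacy-GPAI deadline):

```python
from datetime import date

# Application milestones, abbreviated from the timeline above.
MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibitions (Art. 5) and AI literacy (Art. 4)",
    date(2025, 8, 2): "GPAI model obligations (Chapter V)",
    date(2026, 8, 2): "high-risk obligations, Annex III systems",
    date(2027, 8, 2): "Annex I product-safety AI (Art. 6(1)); "
                      "legacy GPAI deadline (Art. 111(3))",
}

def applicable_by(on: date) -> list:
    """Milestones already in application on a given date."""
    return [desc for d, desc in sorted(MILESTONES.items()) if d <= on]

print(len(applicable_by(date(2026, 1, 1))))  # 3
```

A compliance tracker can diff this list against the inventory's risk tiers to see which obligations are live for which systems.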
How do I comply with the AI Act?
- Classify AI systems by risk tier
- Implement risk management systems
- Ensure transparency and human oversight
- Register high-risk systems in EU database
- Conduct fundamental rights impact assessments
Related Regulations
GDPR
GDPR governs the processing of personal data of EU residents. It requires lawful basis for processing, data subject rights, breach notification, and accountability measures.
NIS2
NIS2 expands cybersecurity obligations to essential and important entities across critical sectors. It mandates risk management, incident reporting, and supply chain security.
CRA
The CRA establishes cybersecurity requirements for products with digital elements sold in the EU. Manufacturers must ensure security by design and provide vulnerability handling.
Next step — classify
Classify your AI systems
Use the free regulation checker to find out exactly which AI Act obligations apply to your business in 2 minutes.
For informational purposes only. This is not legal advice — consult qualified legal counsel.