EU AI Act
What EU AI Act risk tier is my AI system?
Seven questions to classify your AI system as prohibited, high-risk, limited-risk, or minimal-risk under the EU AI Act.
Last updated: 1 May 2025
Which EU AI Act risk tier does my AI system fall into?
- Strictest outcome: Prohibited — e.g. subliminal manipulation (Art. 5(1)(a))
- Lightest outcome: Minimal-risk — no mandatory obligations
- Use the step-by-step decision tree below for your exact situation
For informational purposes only. Consult qualified legal counsel before making compliance decisions.
Decision tree questions
Does your AI system use techniques that operate below conscious awareness to manipulate users' behaviour?
This covers systems designed to influence decisions through means the user cannot perceive or resist — subliminal audio/visual triggers, hidden persuasion patterns, neuromarketing AI.
- Yes: Prohibited — subliminal manipulation (Art. 5(1)(a))
- No: Continue to: Does your AI system assign social scores to individuals that are then used to restrict their access to services or opportunities?
Does your AI system assign social scores to individuals that are then used to restrict their access to services or opportunities?
Social scoring by public authorities for general purposes is prohibited. Private credit scoring in regulated financial contexts is handled separately under Annex III.
- Yes: Prohibited — social scoring (Art. 5(1)(c))
- No: Continue to: Does your AI system perform real-time remote biometric identification (e.g. face recognition) in publicly accessible spaces?
Does your AI system perform real-time remote biometric identification (e.g. face recognition) in publicly accessible spaces?
This is prohibited except in narrow law-enforcement contexts with prior judicial authorisation. Private CCTV inside a company's own premises is not in scope of this prohibition.
- Yes: Prohibited (with narrow exceptions) — real-time biometric ID
- No: Continue to: Is your AI system used in any of these 8 areas: biometric categorisation, critical infrastructure safety, education/vocational training, employment/worker management, essential private services (credit, insurance), law enforcement, migration/asylum, or justice administration?
Is your AI system used in any of these 8 areas: biometric categorisation, critical infrastructure safety, education/vocational training, employment/worker management, essential private services (credit, insurance), law enforcement, migration/asylum, or justice administration?
Annex III is the definitive list. Employment includes: CV screening, interview assessment tools, task allocation, performance monitoring. Essential services: credit scoring, life/health insurance risk tools.
- Yes: Continue to: Is the AI system a safety component of a product already covered by EU product safety legislation (e.g. Machinery Regulation, MDR, aviation)?
- No: Continue to: Is your system a general-purpose AI model — a foundation model capable of performing a wide variety of tasks?
Is the AI system a safety component of a product already covered by EU product safety legislation (e.g. Machinery Regulation, MDR, aviation)?
AI embedded in medical devices, machinery, vehicles, or aviation equipment that functions as a safety component is automatically high-risk.
- Yes: High-risk — safety component in regulated product
- No: High-risk AI system — Annex III obligations apply
Is your system a general-purpose AI model — a foundation model capable of performing a wide variety of tasks?
LLMs (GPT, Claude, Llama, Mistral), multimodal models, and large image/audio generation models are GPAIs. Narrow task-specific models are not.
- Yes: General-Purpose AI Model — Art. 53 obligations
- No: Continue to: Does your system interact with humans (chatbot), generate synthetic media, or perform emotion recognition?
Does your system interact with humans (chatbot), generate synthetic media, or perform emotion recognition?
Systems that produce content users might mistake for human-created, or that detect emotional states, fall into the limited-risk category with transparency obligations.
- Yes: Limited-risk AI system — transparency obligations only
- No: Minimal-risk — no mandatory obligations
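For teams that triage many systems against this flow, the same logic can be written down programmatically. The sketch below is an illustrative Python encoding of the seven questions above; the function name and answer keys are hypothetical, and the output is a rough first-pass label, not a legal determination.

```python
# Illustrative sketch only: encodes the seven-question decision tree above.
# Keys in `answers` are hypothetical names for each question; True means "yes".
# Not legal advice; confirm any classification with qualified counsel.

def classify_eu_ai_act_tier(answers: dict[str, bool]) -> str:
    if answers["subliminal_manipulation"]:
        return "Prohibited: subliminal manipulation (Art. 5(1)(a))"
    if answers["social_scoring"]:
        return "Prohibited: social scoring (Art. 5(1)(c))"
    if answers["realtime_remote_biometric_id"]:
        return "Prohibited (with narrow exceptions): real-time biometric ID"
    if answers["annex_iii_area"]:
        if answers["safety_component_of_regulated_product"]:
            return "High-risk: safety component in regulated product"
        return "High-risk: Annex III obligations apply"
    if answers["general_purpose_ai_model"]:
        return "General-Purpose AI Model: Art. 53 obligations"
    if answers["chatbot_synthetic_media_or_emotion_recognition"]:
        return "Limited-risk: transparency obligations only"
    return "Minimal-risk: no mandatory obligations"


# Example: a CV-screening tool (Annex III, employment) that is not a
# safety component of a regulated product.
print(classify_eu_ai_act_tier({
    "subliminal_manipulation": False,
    "social_scoring": False,
    "realtime_remote_biometric_id": False,
    "annex_iii_area": True,
    "safety_component_of_regulated_product": False,
    "general_purpose_ai_model": False,
    "chatbot_synthetic_media_or_emotion_recognition": False,
}))
# -> High-risk: Annex III obligations apply
```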