How to Classify Your AI System Under the EU AI Act: A Step-by-Step Guide
The EU AI Act classifies AI systems into four risk tiers: Prohibited, High, Limited, and Minimal. Getting this classification right determines your entire compliance roadmap. Here's how to do it.
Step 1: Check for Prohibited Practices (Article 5)
First, verify your system doesn't fall into any of the eight prohibited categories. Most business AI won't, but check anyway. Among the questions to ask:

- Does it score individuals based on their social behavior?
- Does it use subliminal or manipulative techniques?
- Does it exploit vulnerabilities of specific groups?
- Does it perform real-time remote biometric identification in public spaces?
- Does it recognize emotions in workplaces or schools?
If the answer to any of these is yes, stop. Outside a few narrow exceptions (notably certain law-enforcement uses of real-time biometric identification), the system cannot be deployed in the EU. A minimal screening sketch follows below.
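If you want to make this screen repeatable across a portfolio of systems, a checklist in code forces an explicit answer per question. The sketch below is illustrative only: the question wording paraphrases Article 5, and the name `screen_prohibited` is our own, not anything defined by the Act.

```python
# Minimal screening sketch for Article 5. Question wording paraphrases the
# Act; consult the actual legal text before relying on any answer.
PROHIBITED_QUESTIONS = [
    "Does it score individuals based on social behaviour or personal traits?",
    "Does it use subliminal or manipulative techniques to distort behaviour?",
    "Does it exploit vulnerabilities of specific groups?",
    "Does it perform real-time remote biometric identification in public?",
    "Does it infer emotions in workplaces or educational institutions?",
]

def screen_prohibited(answers: dict[str, bool]) -> list[str]:
    """Return the prohibited-practice questions answered 'yes'.

    `answers` maps each question to True/False. Any True means the
    system likely falls under Article 5 and needs legal review.
    """
    return [question for question, hit in answers.items() if hit]

if __name__ == "__main__":
    answers = {q: False for q in PROHIBITED_QUESTIONS}
    hits = screen_prohibited(answers)
    print("Prohibited-practice flags:", hits or "none")
```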
Step 2: Check the High-Risk List (Annex III)
Annex III lists eight sectors where AI is high-risk when it significantly affects individuals:
- Biometrics — remote identification systems
- Critical infrastructure — safety components in utilities, transport
- Education — admissions, grading, monitoring, cheating detection
- Employment — recruitment, CV screening, performance evaluation, termination decisions
- Essential services — credit scoring, insurance pricing, social benefits
- Law enforcement — risk assessment, evidence evaluation, profiling
- Migration, asylum and border control — risk assessment, document verification
- Justice and democratic processes — judicial decision support, systems meant to influence elections or voting
The key question: does your AI system make or materially influence decisions about natural persons in these sectors?
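One way to make Step 2 auditable is to record, per system, which Annex III area applies and whether the system makes or materially influences decisions about people. A rough sketch follows; the area keys and the `is_high_risk` helper are our own shorthand, not terms from the Act.

```python
# Illustrative Annex III screen: a system is flagged here when it falls in a
# listed area AND materially influences decisions about natural persons.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration",
    "justice_and_democracy",
}

def is_high_risk(area: str | None, influences_decisions: bool) -> bool:
    """Rough Annex III check: area match plus material influence.

    This ignores the Annex I product-safety route and the Article 6(3)
    derogation, so treat a True result as 'needs legal review', not a
    final classification.
    """
    return area in ANNEX_III_AREAS and influences_decisions

print(is_high_risk("employment", influences_decisions=True))  # True
print(is_high_risk("gaming", influences_decisions=True))      # False
```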
Step 3: Check for Transparency Obligations (Article 50)
Even if your system isn't high-risk, it might have transparency obligations:

- Chatbots: must inform users they're interacting with AI
- Content generation: AI-generated text, images, audio, or video must be labeled
- Emotion recognition: subjects must be informed
- Deep fakes: must be clearly labeled as synthetic
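It can help to keep the Article 50 triggers in data rather than prose, so each new feature gets checked against the same list. A minimal sketch, with trigger keys that are our own shorthand:

```python
# Sketch of Article 50 transparency triggers -> required measure.
# Keys are our own shorthand, not terms from the Act.
TRANSPARENCY_TRIGGERS = {
    "chatbot": "Inform users they are interacting with an AI system",
    "generated_content": "Mark AI-generated text/image/audio/video as such",
    "emotion_recognition": "Inform the people whose emotions are inferred",
    "deep_fake": "Clearly label the content as artificially generated",
}

def transparency_duties(features: set[str]) -> list[str]:
    """Return the transparency measures triggered by a system's features."""
    return [TRANSPARENCY_TRIGGERS[f] for f in sorted(features)
            if f in TRANSPARENCY_TRIGGERS]

print(transparency_duties({"chatbot", "generated_content"}))
```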
Step 4: Minimal Risk
Everything else falls here. There are no specific obligations, just voluntary codes of conduct and the general AI literacy requirement (Article 4).
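Putting the four steps together, the whole procedure is a short decision cascade: prohibited first, then Annex III, then Article 50, else minimal. The sketch below wires the steps into one function; all names are illustrative, the inputs are the outcomes of the earlier screens, and the checks are deliberately coarse.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify(prohibited_hit: bool,
             annex_iii_area: bool,
             influences_decisions: bool,
             transparency_trigger: bool) -> RiskTier:
    """Coarse decision cascade mirroring Steps 1-4 of this guide.

    A real assessment would also handle the Annex I product-safety
    route and the Article 6(3) derogation. Note that high-risk and
    transparency duties can stack in practice; this sketch returns
    only the dominant tier.
    """
    if prohibited_hit:
        return RiskTier.PROHIBITED          # Step 1: Article 5
    if annex_iii_area and influences_decisions:
        return RiskTier.HIGH                # Step 2: Annex III
    if transparency_trigger:
        return RiskTier.LIMITED             # Step 3: Article 50
    return RiskTier.MINIMAL                 # Step 4: everything else

# Example: a CV screening tool (employment, influences hiring decisions)
print(classify(False, True, True, False))   # RiskTier.HIGH
```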
Practical Examples
| System | Classification | Why |
|---|---|---|
| Customer support chatbot | LIMITED | Interacts with people, transparency obligation |
| CV screening tool | HIGH | Employment sector, influences hiring decisions |
| Content recommendation engine | MINIMAL | No significant individual impact |
| Credit scoring for loans | HIGH | Essential services, determines access to credit |
| AI image generator | LIMITED | Generates synthetic content, labeling required |
| Predictive maintenance sensor | MINIMAL | No impact on individuals |
What Happens After Classification
For high-risk systems, you need: a risk management system, data governance procedures, technical documentation, automated logging, transparency information for users, human oversight mechanisms, and accuracy/robustness testing. Plus a conformity assessment and registration in the EU database before deployment.
For limited-risk systems, you need transparency measures — inform users, label content.
For minimal-risk systems, you're encouraged to follow voluntary codes of conduct, but there are no mandatory obligations beyond AI literacy.
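To keep the roadmap per tier explicit, the obligations above can live in a simple lookup that feeds your compliance backlog. This summarizes the lists in this section only; the Act's full requirements are broader.

```python
# Per-tier obligations, summarising the section above. Illustrative only;
# the Act's full requirements (Articles 8-27, 49-50, ...) go further.
OBLIGATIONS = {
    "high": [
        "risk management system",
        "data governance procedures",
        "technical documentation",
        "automated logging",
        "transparency information for users",
        "human oversight mechanisms",
        "accuracy/robustness testing",
        "conformity assessment and EU database registration",
    ],
    "limited": ["inform users", "label AI-generated content"],
    "minimal": ["voluntary codes of conduct", "AI literacy (Article 4)"],
}

for tier, duties in OBLIGATIONS.items():
    print(f"{tier}: {len(duties)} obligations")
```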
Start classifying now. The prohibited-practice rules already apply (since February 2, 2025), the main high-risk deadline is August 2, 2026, and building compliance infrastructure takes time.
For informational purposes only. Consult qualified legal counsel.