
How to Classify Your AI System Under the EU AI Act: A Step-by-Step Guide

The EU AI Act classifies AI systems into four risk tiers: Prohibited, High, Limited, and Minimal. Getting this classification right determines your entire compliance roadmap. Here's how to do it.

Step 1: Check for Prohibited Practices (Article 5)

First, verify your system doesn't fall into the eight prohibited categories. Most business AI won't, but check anyway. Questions to ask include:

- Does it score individuals based on their social behavior?
- Does it use subliminal manipulation?
- Does it exploit vulnerabilities of specific groups?
- Does it perform real-time remote biometric identification in publicly accessible spaces?
- Does it recognize emotions in workplaces or schools?

If yes to any of these, stop. The system cannot be deployed in the EU.

Step 2: Check the High-Risk List (Annex III)

Annex III lists eight areas where AI is high-risk when it significantly affects individuals:

- Biometrics (e.g., remote biometric identification, biometric categorisation)
- Critical infrastructure
- Education and vocational training
- Employment and worker management
- Access to essential private and public services (including credit scoring and insurance)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes

The key question: does your AI system make or materially influence decisions about natural persons in these sectors?

Step 3: Check for Transparency Obligations (Article 50)

Even if your system isn't high-risk, it might have transparency obligations:

- Chatbots: must inform users they're interacting with AI
- Content generation: AI-generated text, images, audio, or video must be labeled
- Emotion recognition: subjects must be informed
- Deep fakes: must be clearly labeled as synthetic

Step 4: Minimal Risk

Everything else falls here. No specific obligations, just voluntary codes of conduct and the general AI literacy requirement.

Practical Examples

| System | Classification | Why |
| --- | --- | --- |
| Customer support chatbot | LIMITED | Interacts with people; transparency obligation |
| CV screening tool | HIGH | Employment sector; influences hiring decisions |
| Content recommendation engine | MINIMAL | No significant individual impact |
| Fraud detection for loans | HIGH | Essential services; affects access to credit |
| AI image generator | LIMITED | Generates synthetic content; labeling required |
| Predictive maintenance sensor | MINIMAL | No impact on individuals |

What Happens After Classification

For high-risk systems, you need: a risk management system, data governance procedures, technical documentation, automated logging, transparency information for users, human oversight mechanisms, and accuracy/robustness testing. Plus a conformity assessment before deployment.

For limited-risk systems, you need transparency measures: inform users they are dealing with AI and label generated content.

For minimal-risk systems, you're encouraged to follow voluntary codes of conduct, but there are no mandatory obligations beyond AI literacy.

Start classifying now. The high-risk deadline is August 2, 2026, and building compliance infrastructure takes time.

For informational purposes only. Consult qualified legal counsel.
