
How to Classify Your AI System Under the EU AI Act: A Step-by-Step Guide


Not sure if your AI system is high-risk? Walk through the official classification process with practical examples for common business AI tools.

By the EuroComply Editorial Team · 2025-02-15
EU regulatory specialists. Content reviewed against official EUR-Lex texts.

The EU AI Act classifies AI systems into four risk tiers: Prohibited, High, Limited, and Minimal. Getting this classification right determines your entire compliance roadmap. Here's how to do it.

Step 1: Check for Prohibited Practices (Article 5)

First, verify your system doesn't fall into any of the eight prohibited categories. Most business AI won't, but check anyway. The most common triggers:

  • Does it score individuals for social behavior?
  • Does it use subliminal manipulation?
  • Does it exploit vulnerabilities of specific groups?
  • Does it perform real-time biometric identification in public?
  • Does it recognize emotions in workplaces or schools?

If yes to any of these, stop. The system cannot be deployed in the EU.
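
To make this screening repeatable across an AI inventory, here is a minimal sketch. The questions paraphrase the checklist above (they are not the full legal text), and the constant and function names are illustrative, not an official API:

```python
# Minimal sketch of the Article 5 screening step. The questions paraphrase
# the checklist above; the names are illustrative assumptions.

ARTICLE_5_QUESTIONS = [
    "Does it score individuals for social behaviour?",
    "Does it use subliminal manipulation?",
    "Does it exploit vulnerabilities of specific groups?",
    "Does it perform real-time biometric identification in public?",
    "Does it recognise emotions in workplaces or schools?",
]

def passes_article_5_screen(answers: list[bool]) -> bool:
    """Return True only if every prohibited-practice question is answered 'no'."""
    return not any(answers)

# Example: a customer support chatbot answers 'no' to every question.
print(passes_article_5_screen([False] * len(ARTICLE_5_QUESTIONS)))  # True
```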

Step 2: Check the High-Risk List (Annex III)

Annex III lists eight sectors where AI is high-risk when it significantly affects individuals:

  1. Biometrics – remote identification systems
  2. Critical infrastructure – safety components in utilities, transport
  3. Education – admissions, grading, monitoring, cheating detection
  4. Employment – recruitment, CV screening, performance evaluation, termination decisions
  5. Essential services – credit scoring, insurance pricing, social benefits
  6. Law enforcement – risk assessment, evidence evaluation, profiling
  7. Migration, asylum, and border control – risk assessment, document verification
  8. Justice and democratic processes – judicial decision support, voter influence systems

The key question: does your AI system make or materially influence decisions about natural persons in these sectors?
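
Here is the same check as a sketch, assuming you record a sector tag and an "influences individuals" flag for each system in your inventory; the dataclass, field names, and sector keys are assumptions for illustration, not terms from the Act:

```python
# Hypothetical sketch of the Annex III screen. The sector keys mirror the
# list above; the dataclass and field names are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

ANNEX_III_SECTORS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration",
    "justice_and_democratic_processes",
}

@dataclass
class AISystem:
    name: str
    sector: Optional[str]          # one of ANNEX_III_SECTORS, or None
    influences_individuals: bool   # makes or materially influences decisions about natural persons

def is_high_risk(system: AISystem) -> bool:
    """High-risk if it operates in an Annex III sector AND affects natural persons."""
    return system.sector in ANNEX_III_SECTORS and system.influences_individuals

cv_screener = AISystem("CV screening tool", sector="employment", influences_individuals=True)
print(is_high_risk(cv_screener))  # True
```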

Step 3: Check for Transparency Obligations (Article 50)

Even if your system isn't high-risk, it might have transparency obligations:

  • Chatbots: Must inform users they're interacting with AI
  • Content generation: AI-generated text, images, audio, or video must be labeled
  • Emotion recognition: Subjects must be informed
  • Deep fakes: Must be clearly labeled as synthetic
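
Expressed as a sketch, assuming four boolean feature flags that map one-to-one to the bullets above (the flag and function names are illustrative assumptions):

```python
# Hypothetical sketch of the Article 50 transparency screen. The flags map
# one-to-one to the bullet list above; nothing here is an official term.

from dataclasses import dataclass

@dataclass
class SystemFeatures:
    interacts_with_users: bool   # chatbot or other conversational interface
    generates_content: bool      # produces text, images, audio, or video
    recognises_emotions: bool
    produces_deep_fakes: bool

def transparency_obligations(f: SystemFeatures) -> list[str]:
    """Return the transparency measures that apply, if any."""
    obligations = []
    if f.interacts_with_users:
        obligations.append("Inform users they are interacting with AI")
    if f.generates_content:
        obligations.append("Label AI-generated content")
    if f.recognises_emotions:
        obligations.append("Inform subjects that emotion recognition is in use")
    if f.produces_deep_fakes:
        obligations.append("Clearly label deep fakes as synthetic")
    return obligations

chatbot = SystemFeatures(True, False, False, False)
print(transparency_obligations(chatbot))  # ['Inform users they are interacting with AI']
```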

Step 4: Minimal Risk

Everything else falls here. There are no specific obligations beyond the general AI literacy requirement; voluntary codes of conduct are encouraged.
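
Putting the four steps together, the order matters: prohibited first, then high-risk, then limited, then minimal. A minimal sketch, with illustrative names:

```python
# Minimal sketch tying the four steps together in the order described above.
# The function and parameter names are illustrative assumptions.

def classify(prohibited: bool, high_risk: bool, needs_transparency: bool) -> str:
    """Apply the four-step order and return the risk tier."""
    if prohibited:
        return "PROHIBITED"   # Step 1: cannot be deployed in the EU
    if high_risk:
        return "HIGH"         # Step 2: Annex III sector + impact on natural persons
    if needs_transparency:
        return "LIMITED"      # Step 3: Article 50 transparency obligations
    return "MINIMAL"          # Step 4: everything else

print(classify(prohibited=False, high_risk=False, needs_transparency=True))  # LIMITED
```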

Practical Examples

| System | Classification | Why |
|--------|---------------|-----|
| Customer support chatbot | LIMITED | Interacts with people, transparency obligation |
| CV screening tool | HIGH | Employment sector, influences hiring decisions |
| Content recommendation engine | MINIMAL | No significant individual impact |
| Fraud detection for loans | HIGH | Essential services, affects access to credit |
| AI image generator | LIMITED | Generates synthetic content, labeling required |
| Predictive maintenance sensor | MINIMAL | No impact on individuals |
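
As a sanity check, the table can be reproduced by running the classification order from Step 4 over each example. The flag values below simply restate the "Why" column for these hypothetical systems; the logic is repeated so the snippet runs on its own:

```python
# Hypothetical sketch reproducing the table above. Each tuple restates the
# 'Why' column as (prohibited, high_risk, needs_transparency) flags.

def classify(prohibited: bool, high_risk: bool, needs_transparency: bool) -> str:
    if prohibited:
        return "PROHIBITED"
    if high_risk:
        return "HIGH"
    if needs_transparency:
        return "LIMITED"
    return "MINIMAL"

EXAMPLES = [
    ("Customer support chatbot",      False, False, True),
    ("CV screening tool",             False, True,  False),
    ("Content recommendation engine", False, False, False),
    ("Fraud detection for loans",     False, True,  False),
    ("AI image generator",            False, False, True),
    ("Predictive maintenance sensor", False, False, False),
]

for name, prohibited, high_risk, transparency in EXAMPLES:
    print(f"{name}: {classify(prohibited, high_risk, transparency)}")
```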

What Happens After Classification

For high-risk systems, you need: a risk management system, data governance procedures, technical documentation, automated logging, transparency information for users, human oversight mechanisms, and accuracy/robustness testing. Plus a conformity assessment before deployment.
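
One way to keep track is a per-system checklist. The keys below paraphrase the paragraph above as reminders; they are assumptions for illustration, not the article-by-article legal requirements:

```python
# Hypothetical sketch: the high-risk obligations above as a trackable
# checklist. Keys paraphrase the paragraph; they are not legal definitions.

HIGH_RISK_CHECKLIST = {
    "risk_management_system": False,
    "data_governance_procedures": False,
    "technical_documentation": False,
    "automated_logging": False,
    "transparency_information_for_users": False,
    "human_oversight_mechanisms": False,
    "accuracy_and_robustness_testing": False,
    "conformity_assessment_before_deployment": False,
}

def outstanding(checklist: dict[str, bool]) -> list[str]:
    """Return the obligations not yet satisfied."""
    return [item for item, done in checklist.items() if not done]

print(len(outstanding(HIGH_RISK_CHECKLIST)))  # 8 items still open
```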

For limited-risk systems, you need transparency measures – inform users, label content.

For minimal-risk systems, you're encouraged to follow voluntary codes of conduct, but there are no mandatory obligations beyond AI literacy.

Start classifying now. Obligations for Annex III high-risk systems apply from August 2, 2026, and building compliance infrastructure takes time.


EuroComply Editorial Team

EU regulatory compliance specialists covering the AI Act, GDPR, NIS2, and related legislation. Content reviewed against official EU regulation texts and enforcement guidance.

For informational purposes only. Consult qualified legal counsel.

