EuroComply


Does the EU AI Act apply to my product?

Answer six questions to find out whether your product or organisation falls under the EU AI Act's scope and which obligations you face.

Last updated: 1 May 2025

Possible outcomes

  • EU AI Act does not apply (no AI component, or no EU market placement)
  • Prohibited AI practice
  • High-risk AI system (full compliance obligations)
  • General-purpose AI model (Art. 53 obligations)
  • Limited-risk AI system (transparency obligations)
  • Minimal-risk AI system (no mandatory obligations)

Use the step-by-step decision tree below for your exact situation.
For informational purposes only. Consult qualified legal counsel before making compliance decisions.

Decision tree questions

  1. Does your product include any AI or machine learning component?

    AI systems under the AI Act include ML models (including deep learning), logic/knowledge-based systems, and statistical approaches that generate outputs like predictions, recommendations, decisions, or content.

    • Yes: Continue to: Do you place this AI system on the EU market or put it into service for EU users?
    • No: EU AI Act does not apply — no AI component
  2. Do you place this AI system on the EU market or put it into service for EU users?

    This includes selling, licensing, or making available the AI system to third parties in the EU, or deploying it in the EU under your own authority.

    • Yes: Continue to: Does your AI system use subliminal manipulation, exploit vulnerabilities of specific groups, or involve real-time biometric identification in public spaces?
    • No: EU AI Act does not apply — not placed on the EU market or put into service for EU users
  3. Does your AI system use subliminal manipulation, exploit vulnerabilities of specific groups, or involve real-time biometric identification in public spaces?

    Prohibited practices include: social scoring by public authorities, real-time remote biometric ID in public spaces (with narrow exceptions), and systems that manipulate behaviour below conscious awareness.

    • Yes: Your AI system is prohibited under the EU AI Act
    • No: Continue to: Is your AI system used in one of these high-risk areas: biometric ID, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice?
  4. Is your AI system used in one of these high-risk areas: biometric ID, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice?

    Annex III lists eight areas. Employment examples: CV screening, interview analysis, performance management. Essential services: credit scoring, insurance risk assessment.

    • Yes: High-risk AI system — full compliance obligations apply
    • No: Continue to: Is your product a general-purpose AI model — a large model trained to perform a wide range of tasks (like an LLM or multimodal foundation model)?
  5. Is your product a general-purpose AI model — a large model trained to perform a wide range of tasks (like an LLM or multimodal foundation model)?

    GPAIs include models like GPT-4, Claude, Llama, Mistral, Gemini — models not designed for one specific task but used across many downstream applications.

    • Yes: General-Purpose AI Model — Art. 53 obligations apply
    • No: Continue to: Does your AI system interact directly with users, generate synthetic content, or perform emotion recognition?
  6. Does your AI system interact directly with users, generate synthetic content, or perform emotion recognition?

    Limited-risk systems: chatbots, AI-generated text/images/audio, deepfakes, emotion recognition systems. They trigger transparency obligations but not the full high-risk framework.

    • Yes: Limited-risk AI system — transparency obligations apply
    • No: Minimal-risk AI system — no mandatory obligations
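
The six questions above form a simple decision cascade: the first "yes" on a gating question routes you to an outcome, otherwise you fall through to the next check. As a rough illustration only (every identifier below is invented for this sketch; none is a term defined in the EU AI Act, and real classification requires legal analysis, not six booleans), the flow could be coded as:

```python
from dataclasses import dataclass

@dataclass
class Screening:
    """Answers to the six screening questions (names are illustrative)."""
    has_ai_component: bool          # Q1: ML, logic/knowledge-based, or statistical system
    placed_on_eu_market: bool       # Q2: sold, licensed, or deployed in the EU
    prohibited_practice: bool       # Q3: subliminal manipulation, exploiting vulnerabilities,
                                    #     real-time biometric ID in public spaces
    annex_iii_high_risk_area: bool  # Q4: employment, credit scoring, law enforcement, ...
    general_purpose_model: bool     # Q5: LLM or other foundation model
    limited_risk_interaction: bool  # Q6: chatbot, synthetic content, emotion recognition

def classify(s: Screening) -> str:
    """Walk the decision tree top to bottom; first match wins."""
    if not s.has_ai_component:
        return "Out of scope: no AI component"
    if not s.placed_on_eu_market:
        return "Out of scope: no EU market placement"
    if s.prohibited_practice:
        return "Prohibited under the EU AI Act"
    if s.annex_iii_high_risk_area:
        return "High-risk: full compliance obligations apply"
    if s.general_purpose_model:
        return "GPAI: Art. 53 obligations apply"
    if s.limited_risk_interaction:
        return "Limited-risk: transparency obligations apply"
    return "Minimal-risk: no mandatory obligations"
```

For example, an EU-deployed customer-service chatbot that is none of the earlier categories would land on the limited-risk branch: `classify(Screening(True, True, False, False, False, True))` returns the transparency-obligations outcome. Note the ordering matters: a prohibited practice is checked before high-risk, mirroring the tree above.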