EuroComply

EU AI Act

What are my obligations if I deploy (not develop) AI?

If you use an AI system built by someone else, you are a 'deployer' under the EU AI Act. 7 questions to find your specific obligations.

Last updated: 1 May 2025

Do EU AI Act deployer obligations apply to you?

If you use an AI system built by someone else for a professional purpose, you are a 'deployer' under the EU AI Act, and deployer obligations apply to you. How demanding those obligations are depends on the system's risk tier:

  • High-risk systems: ensure human oversight, complete any required FRIA, and retain logs for at least 6 months
  • Limited-risk systems: clearly inform users that they are interacting with AI
  • Use the step-by-step decision tree below for your exact situation

For informational purposes only. Consult qualified legal counsel before making compliance decisions.

Decision tree questions

  1. Do you use an AI system under your own authority for a professional purpose — even if you did not build it?

    Deployer = any natural or legal person who uses an AI system in the course of a professional activity (not for personal use). Using ChatGPT, Workday AI, or a vendor's model to process customer data makes you a deployer.

    • Yes: Continue to: Is the AI system classified as high-risk under the EU AI Act?
    • No: Not a deployer — AI Act deployer obligations do not apply
  2. Is the AI system classified as high-risk under the EU AI Act?

High-risk systems under Annex III include: biometric ID, critical infrastructure, education, employment/HR, financial services, law enforcement, migration, and justice. If unsure, run the Risk Tier tree first.

    • Yes: Continue to: Do you have a human in the loop who can review, override, or stop the AI system's outputs before they take effect?
    • No: Continue to: Does the AI system interact directly with your customers or employees in a conversational or content-generation way?
  3. Do you have a human in the loop who can review, override, or stop the AI system's outputs before they take effect?

Human oversight (Art. 14) means a competent person monitors the system in real time or reviews outputs before they affect individuals. Rubber-stamping does not count.

    • Yes: Continue to: Have you conducted a Fundamental Rights Impact Assessment (FRIA) for this high-risk AI system?
    • No: High-risk deployer — human oversight required
  4. Have you conducted a Fundamental Rights Impact Assessment (FRIA) for this high-risk AI system?

Art. 27 requires certain deployers of high-risk AI to carry out a FRIA before deployment: bodies governed by public law, private entities providing public services, and deployers of credit-scoring or insurance risk-assessment systems. If your organisation falls into one of these categories, complete the FRIA before the system goes live.

    • Yes: Continue to: Do you keep the logs automatically generated by the AI system for at least 6 months?
    • No: High-risk deployer — FRIA required before deployment
  5. Do you keep the logs automatically generated by the AI system for at least 6 months?

    Art. 26(6) requires deployers to keep logs generated by high-risk AI systems for at least 6 months (or longer if national law requires). These logs allow post-hoc review of decisions.

    • Yes: Good deployer compliance baseline — confirm provider DPA
    • No: High-risk deployer — implement log retention immediately
  6. Does the AI system interact directly with your customers or employees in a conversational or content-generation way?

Chatbots, AI email writers, AI phone assistants, and image generators deployed to users all require transparency disclosures, even at the limited-risk tier.

    • Yes: Continue to: Do you clearly inform users that they are interacting with an AI system at the start of each interaction?
    • No: Minimal-risk deployer — limited mandatory obligations
  7. Do you clearly inform users that they are interacting with an AI system at the start of each interaction?

    Art. 50(1) requires that natural persons interacting with an AI system are informed of this fact in a clear, intelligible, and timely manner — unless it is obvious from the context.

    • Yes: Limited-risk deployer — transparency obligations met
    • No: Limited-risk deployer — add AI disclosure immediately
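The seven questions above can be sketched as a single decision function. This is a simplified illustration of the tree's flow, not legal advice: the function name, parameter names, and boolean flags are our own, and the outcome strings simply mirror the endpoints listed above.

```python
# Illustrative sketch of the 7-question deployer decision tree above.
# All names are hypothetical; outcomes quote the tree's endpoint labels.

def deployer_obligations(
    professional_use: bool,        # Q1: used under your authority, professionally?
    high_risk: bool,               # Q2: high-risk under the EU AI Act?
    human_oversight: bool = False, # Q3: human can review/override/stop outputs?
    fria_done: bool = False,       # Q4: FRIA conducted before deployment?
    logs_6_months: bool = False,   # Q5: logs kept for at least 6 months?
    user_facing: bool = False,     # Q6: conversational/content-generating to users?
    ai_disclosed: bool = False,    # Q7: users told they are interacting with AI?
) -> str:
    """Walk the decision tree and return the matching outcome label."""
    if not professional_use:
        return "Not a deployer — AI Act deployer obligations do not apply"
    if high_risk:
        if not human_oversight:
            return "High-risk deployer — human oversight required"
        if not fria_done:
            return "High-risk deployer — FRIA required before deployment"
        if not logs_6_months:
            return "High-risk deployer — implement log retention immediately"
        return "Good deployer compliance baseline — confirm provider DPA"
    if not user_facing:
        return "Minimal-risk deployer — limited mandatory obligations"
    if ai_disclosed:
        return "Limited-risk deployer — transparency obligations met"
    return "Limited-risk deployer — add AI disclosure immediately"

# Example: a customer-facing chatbot that does not yet disclose it is AI
print(deployer_obligations(True, False, user_facing=True))
```

Note how the high-risk branch (Q3 to Q5) never reaches the transparency questions: in the tree above, Q6 and Q7 are only asked when the system is not high-risk.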