EU AI Act Annex IV documentation: the 9 sections you must write
Annex IV technical documentation is what regulators ask for if your high-risk AI system is audited. Here are the 9 required sections in plain English, with a worked example.
TL;DR
Annex IV of Regulation (EU) 2024/1689 lists nine sections of technical documentation that providers of high-risk AI systems must maintain. Notified bodies and market surveillance authorities ask for this file during conformity assessments and post-market surveillance checks. The obligation applies from 2 August 2026. This article maps each section to what you actually write, with a worked example for a recruitment AI system.
Who must produce Annex IV documentation
The obligation falls on the provider of a high-risk AI system as defined in Article 6 and Annex III. A provider develops the system, or has it developed, and places it on the EU market or puts it into service under its own name or trademark. Distributors, importers, and deployers have lighter obligations.
A system is high-risk under Annex III if it falls within one of eight listed areas: biometrics (§1), critical infrastructure (§2), education (§3), employment (§4), access to essential services (§5), law enforcement (§6), migration (§7), and administration of justice and democratic processes (§8). Article 6(1) separately classifies an AI system as high-risk where it is a safety component of a product covered by the EU harmonisation legislation listed in Annex I and that product requires third-party conformity assessment.
The technical documentation must be ready before the system is placed on the market or put into service. Article 11(1) requires it to be drawn up so as to demonstrate that the high-risk AI system complies with the requirements set out in Chapter III, Section 2, and to be kept up to date. Article 18 requires the provider to keep it at the disposal of the authorities for ten years after the system is placed on the market or put into service.
The 9 required sections (in plain English)
Annex IV sets out nine numbered points. Each point maps to a section of the file you write; the plain-English labels below follow that official numbering.
Section 1 — General description of the AI system. The intended purpose, the provider's name, the system version, the interaction with hardware and software outside the AI system, the form in which the system is placed on the market, the relevant CE marking, and the instructions for use. Be precise about intended purpose: regulators read the rest of the file against this statement.
Section 2 — Detailed description of the elements and development process. The methods used, the design choices including key assumptions, the system architecture, the data requirements, the human oversight measures, the predetermined changes to the system, and the validation and testing procedures. This is the longest section in practice.
Section 3 — Monitoring, functioning, and control. The system's capabilities and limitations in performance, the expected level of accuracy, the foreseeable unintended outcomes and sources of risk, the human oversight measures in operational terms, and the input data specifications. This is where the Article 15 requirements on accuracy, robustness, and cybersecurity surface as concrete figures.
Section 4 — Appropriateness of the performance metrics. A description of why the performance metrics you report fit the specific AI system and its intended purpose. If you measure a ranking system with nDCG, say why nDCG reflects how the system is actually used.

Section 5 — Risk management system. A description of the risk management system required under Article 9. The system must be a continuous iterative process running through the entire lifecycle. Identify foreseeable risks to health, safety, and fundamental rights. Document the mitigations.

Section 6 — Lifecycle changes. A description of relevant changes made by the provider to the system through its lifecycle. Include version numbers, change reasons, and re-evaluation outcomes.

Section 7 — Harmonised standards applied. A list of the harmonised standards applied in full or in part, with the references published in the Official Journal of the European Union. Where no harmonised standard is applied, describe the technical solutions adopted to meet the Chapter III Section 2 requirements.

Section 8 — EU declaration of conformity. A copy of the declaration referred to in Article 47, drawn up in a written machine-readable, physical, or electronically signed form, and kept up to date.

Section 9 — Post-market monitoring plan. A description of the system in place to evaluate performance in the post-market phase, including the post-market monitoring plan referred to in Article 72. The plan describes how the provider collects, documents, and analyses data on system performance after deployment.

Training methodology and data governance do not get a section of their own: the detailed description of training methods and the Article 10 data governance measures belong inside Section 2 (point 2(d) on data requirements), and Article 11(1) requires the whole file to be kept up to date.
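Seen as a checklist, the nine points translate directly into a file skeleton. Below is a minimal sketch in Python: the section titles follow Annex IV, but the AnnexIVFile class, its field names, and the completeness check are illustrative choices of ours, not an official schema.

```python
from dataclasses import dataclass, field

# Annex IV section titles per Regulation (EU) 2024/1689. The class and
# field names below are illustrative, not an official schema.
ANNEX_IV_SECTIONS = {
    1: "General description of the AI system",
    2: "Detailed description of elements and development process",
    3: "Monitoring, functioning and control",
    4: "Appropriateness of the performance metrics",
    5: "Risk management system (Article 9)",
    6: "Changes made through the lifecycle",
    7: "Harmonised standards applied, or technical solutions adopted",
    8: "Copy of the EU declaration of conformity (Article 47)",
    9: "Post-market monitoring plan (Article 72)",
}

@dataclass
class AnnexIVFile:
    """One technical documentation file per high-risk AI system."""
    system_name: str
    system_version: str
    provider: str
    # Maps section number -> drafted content (empty string = not drafted).
    sections: dict[int, str] = field(
        default_factory=lambda: {n: "" for n in ANNEX_IV_SECTIONS}
    )

    def missing_sections(self) -> list[str]:
        """Return the titles of sections that still have no content."""
        return [
            f"{n}. {ANNEX_IV_SECTIONS[n]}"
            for n, text in sorted(self.sections.items())
            if not text.strip()
        ]

doc = AnnexIVFile("Acme CV Ranker", "2.4", "Acme GmbH")
doc.sections[1] = "Ranks candidate CVs against job descriptions ..."
print("\n".join(doc.missing_sections()))  # sections 2-9 still to write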
How long should each section be?
The regulation does not prescribe lengths. Three reference points help.
A typical Annex IV file for a single high-risk system runs 80-150 pages once complete. Section 2 (development process) accounts for 30-50 pages because it includes data sheets, training methodology, and validation results. Section 5 (risk management) runs 15-25 pages with a structured risk register. Sections 1, 4, 7, 8, and 9 are usually short. Section 3 sits in the middle at 10-20 pages.
The German Federal Office for Information Security (BSI) AI assurance catalogue and the NIST AI Risk Management Framework provide useful structural references. The CEN-CENELEC JTC 21 standardisation work will supply the harmonised standards that Section 7 eventually points to; until their references appear in the Official Journal, most providers cite the published ISO/IEC 42001:2023 on AI management systems and describe their technical solutions directly.
Common mistakes from current AI Act preparation work
Five mistakes recur in technical documentation drafts.
Vague intended purpose. Writing "an AI system for HR support" instead of "an AI system that ranks candidate CVs against job descriptions for the recruiting team at Acme GmbH, used as a triage step before human review of the top 20 candidates per role." The narrower statement makes the rest of the file enforceable. The wider statement attracts wider risk.
Treating risk management as a one-time document. Article 9 requires a continuous iterative process. A static PDF created at certification time and never updated will fail Article 9(2) on review.
Missing data governance evidence. Article 10 requires training, validation, and testing data sets to be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. Providers regularly skip the evidence trail. Data sheets, dataset version control, and pre-processing logs are necessary; a manifest sketch follows this list.
Confusing log retention with post-market monitoring. Article 12 requires automatic event logging. Article 72 requires a separate post-market monitoring plan. Both are obligations. Auditors check the plan, not just the logs.
Skipping the quality management system. Article 17 requires a documented quality management system covering the strategy for regulatory compliance, design control, data management, risk management, post-market monitoring, communication with authorities, and an accountability framework. A common omission is the accountability framework — who signs off, with what evidence.
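One concrete way to close the evidence gap from the third mistake is to hash and record every training data snapshot. The sketch below is a hypothetical helper, assuming datasets live as files on disk; the write_manifest name, the manifest fields, and the paths are our inventions, not anything the Act or a standard prescribes.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash of one dataset file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_dir: Path, version: str, preprocessing_notes: str) -> Path:
    """Record a hash for every file in the dataset directory.

    The manifest is an Article 10 evidence artefact: it ties a model
    release to an exact, verifiable training data snapshot.
    """
    manifest = {
        "dataset_version": version,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "preprocessing": preprocessing_notes,
        "files": {
            str(p.relative_to(data_dir)): sha256_of(p)
            for p in sorted(data_dir.rglob("*"))
            if p.is_file()
        },
    }
    # Write next to, not inside, the data directory so the manifest
    # never ends up hashing itself on the next run.
    out = data_dir.parent / f"manifest-{version}.json"
    out.write_text(json.dumps(manifest, indent=2))
    return out

# Hypothetical usage for the worked example's applications snapshot:
# write_manifest(Path("data/applications"), "2024-12", "PII stripped; outliers removed")
```

Checked into version control alongside the training code, such manifests give an auditor a direct line from any model release back to its data.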
Worked example: a recruitment AI system
Acme GmbH builds a CV-ranking system for employment recruitment. The system is high-risk under Annex III §4 (employment). The technical documentation file looks like this.
Section 1. "Acme CV Ranker v2.4. Provider: Acme GmbH, Berlin. Purpose: ranks candidate CVs against job descriptions for use by Acme recruiters as a triage step. Input: structured CV data (PDF parsed by upstream component) plus job description text. Output: a ranked list of up to 50 candidates per role with a numerical score and three explanation tokens per candidate. Form on market: SaaS, accessed via REST API. CE marking: yes."
Section 2. Architecture (a transformer-based ranking model fine-tuned on internal data plus a calibration head), design choices (excluded features: protected attributes from GDPR Article 9, demographic proxies removed by feature audit), training data (200,000 historical applications from 2018-2024, processed under GDPR Article 6(1)(f) legitimate interest with explicit DPIA reference), validation methodology (5-fold cross-validation plus a held-out test set of 10,000 applications from 2025), human oversight (Article 14 measures: recruiters see the rationale tokens and can override; an output review is required before any rejection email).
Section 3. Accuracy target: nDCG@10 ≥ 0.78 measured monthly. Robustness: tested against adversarial CV reformulations. Cybersecurity: input validation, prompt injection defences, rate limiting. Foreseeable unintended outcomes: demographic skew on protected attributes; mitigation = monthly fairness audit with equal opportunity difference and disparate impact metrics, thresholds at 0.10 and 0.20 respectively (a computation sketch follows this example).
Section 4. Appropriateness of metrics: nDCG@10 fits a triage task where recruiters only review the head of the ranked list, and the fairness metrics accompany it because the system operates in the Annex III employment context.

Section 5. Risk register with 23 identified risks. Top three: discriminatory ranking against protected groups (mitigation: feature audit, monthly fairness metrics, human override), data poisoning of feedback loops (mitigation: anomaly detection, training data versioning), over-reliance by recruiters (mitigation: UX prompts, mandatory rationale review).
Section 6. Version history v2.0 through v2.4 with change reasons and re-evaluation outcomes.
Section 7. Standards applied (pending OJ-cited harmonised references): ISO/IEC 42001:2023 (AI management system, applied in full), ISO/IEC 23894:2023 (AI risk management, applied in full), ISO/IEC 5259 series (data quality for ML, applied in part).
Section 8. EU declaration of conformity, signed by the head of engineering, with references to Articles 6(2), 8, 9, 10, 11, 12, 13, 14, 15, 16, and 17.
Section 9. Post-market monitoring plan: daily aggregate metrics, weekly fairness audit, monthly review by the AI compliance committee, quarterly external audit. Reporting trigger: any serious incident under Article 73 reported within 15 days.
Closing note. Training methodology and Article 10 data governance are evidenced inside Section 2, with a cross-reference to the Article 10 evidence pack. The full file is updated on every model release.
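The fairness thresholds cited in Sections 3 and 9 are concrete enough to compute. A minimal sketch follows; the function names are ours, and we read the example's 0.20 disparate impact threshold as the maximum deviation of the selection rate ratio from parity (the familiar four-fifths rule restated), which is an assumption rather than anything the Act prescribes.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates advanced past triage (1 = advanced)."""
    return sum(outcomes) / len(outcomes)

def equal_opportunity_difference(qualified_a: list[int], qualified_b: list[int]) -> float:
    """Difference in advancement rates between groups A and B,
    computed over candidates known to be qualified."""
    return selection_rate(qualified_a) - selection_rate(qualified_b)

def disparate_impact_ratio(outcomes_a: list[int], outcomes_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (<= 1.0)."""
    ra, rb = selection_rate(outcomes_a), selection_rate(outcomes_b)
    return min(ra, rb) / max(ra, rb)

def monthly_fairness_check(outcomes_a, outcomes_b, qualified_a, qualified_b) -> list[str]:
    """Return breached thresholds, per the worked example's limits:
    |EOD| <= 0.10 and selection-rate ratio within 0.20 of parity."""
    breaches = []
    if abs(equal_opportunity_difference(qualified_a, qualified_b)) > 0.10:
        breaches.append("equal opportunity difference > 0.10")
    if 1.0 - disparate_impact_ratio(outcomes_a, outcomes_b) > 0.20:
        breaches.append("disparate impact deviation > 0.20")
    return breaches  # any breach feeds the Section 9 monitoring review

# Toy month: group A advanced 30/100, group B advanced 22/100;
# the same toy lists stand in for the qualified subsets here.
a = [1] * 30 + [0] * 70
b = [1] * 22 + [0] * 78
print(monthly_fairness_check(a, b, a, b))
```

In the toy numbers above, the equal opportunity check passes (difference 0.08) while the disparate impact check breaches (ratio 0.73, deviation 0.27), which is exactly the kind of result the monthly review in Section 9 exists to catch.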
Tooling and templates
Annex IV is structurally repetitive across systems within the same organisation. Most of Sections 1, 7, 8, and 9 are reusable. Sections 2, 3, 4, 5, and 6 are system-specific.
EuroComply maintains the open-source ai-act-implementation-toolkit on GitHub. It ships Annex IV templates, an Article 10 data governance checklist, an Article 14 human oversight playbook, and a risk register template aligned with Article 9. The repository is MIT-licensed.
For automated generation against your own system, the EuroComply AI X-Ray parses your model documentation, training data manifests, and code repositories and produces a structured Annex IV draft. The output is informational and requires legal review before submission to a notified body.
FAQ
Does Annex IV apply if my AI system is built in-house and not sold?
Yes, if you put it into service in the EU and it is high-risk. Article 3(11) defines "putting into service" as the supply of an AI system for first use, including for the provider's own use, in the Union. The technical documentation obligation applies whether or not money changes hands.
Can I use a foundation model under the hood and skip Annex IV?
No. If your system is high-risk, you are the provider of that system regardless of which model you use. The foundation model provider has separate obligations under Chapter V (general-purpose AI). You still owe Annex IV for the high-risk system you build on top.
When does the obligation start?
Article 113 sets staggered dates. Prohibitions in Article 5 applied from 2 February 2025. General-purpose AI obligations applied from 2 August 2025. High-risk AI obligations under Annex III apply from 2 August 2026. High-risk AI under product safety legislation (Annex I) applies from 2 August 2027.
Who reads the technical documentation?
Notified bodies during ex-ante conformity assessment (where the system requires third-party assessment under Article 43), and market surveillance authorities during post-market checks. Upon a reasoned request, the documentation must be provided in a language which can be easily understood by the authority of the Member State concerned (Article 21(1)).
Is there an EU template for Annex IV?
Article 11(1) requires the Commission to provide a simplified technical documentation form for small and micro enterprises, and the Commission may issue further templates. As of May 2026, no official template is published. Providers use industry frameworks (ISO 42001, NIST AI RMF) and structure their files against the Annex IV section order directly.
The 9 sections of Annex IV are not optional after 2 August 2026. Start the file now. Reference templates and the AI X-Ray generator are at /dashboard/ai-act.
EuroComply Editorial Team
EU regulatory compliance specialists covering the AI Act, GDPR, NIS2, and related legislation. Content reviewed against official EU regulation texts and enforcement guidance.
For informational purposes only. Consult qualified legal counsel.