EU AI Act vs NIST AI RMF: Mandatory vs Voluntary AI Governance
The EU AI Act (Regulation (EU) 2024/1689) is binding EU law with fines up to €35M or 7% of global annual turnover; the NIST AI Risk Management Framework is a voluntary US guidance document with no enforcement mechanism. Both take a risk-based approach, but the AI Act mandates conformity assessments for high-risk AI systems and prohibits certain uses outright. Organisations operating in the EU must comply with the AI Act; those outside the EU may adopt NIST voluntarily.
- EU AI Act is binding EU law, with fines up to €35M or 7% of global turnover
- NIST AI RMF is voluntary: no fines, no enforcement, adopted globally
- AI Act mandates conformity assessments and Annex IV documentation for high-risk AI
- NIST compliance does not satisfy EU AI Act legal obligations
- High-risk AI deadline: 2 August 2026
Side-by-side comparison
| Aspect | EU AI Act (EU · mandatory) | NIST AI RMF (US · voluntary) |
|---|---|---|
| Legal status | Binding EU regulation; mandatory for in-scope organisations | Voluntary US framework; no legal obligation |
| Enforcement | National market surveillance authorities (e.g. BNetzA, AMF) | None; no enforcement mechanism |
| Maximum fine | €35M or 7% global turnover (prohibited practices); €15M or 3% (other violations) | No fines; voluntary adoption |
| Geographic scope | Any AI system placed on the EU market or affecting EU persons | Global, voluntary; primarily used in US federal and regulated sectors |
| Risk approach | Four tiers: Prohibited / High-risk (Annex III) / Limited-risk / Minimal-risk | Four functions: Govern, Map, Measure, Manage; organisation-defined risk appetite |
| Prohibited uses | Explicit list in Article 5 (social scoring, real-time biometrics in public, manipulative AI, etc.) | No prohibited list; framework is risk-neutral |
| Conformity assessment | Required for high-risk AI: self-assessment or third-party audit (Art. 43) | Voluntary; no required assessment process |
| Documentation | Mandatory Annex IV technical documentation for high-risk AI providers | Suggested AI RMF profiles and playbooks |
| Human oversight | Mandatory for high-risk AI systems (Art. 14) | Recommended governance practice |
| Incident reporting | Serious incidents with high-risk AI: 15 days (Art. 73) | Not required |
Key deadline: 2 August 2026
High-risk AI system obligations under the EU AI Act fully apply from 2 August 2026. Article 4 AI literacy obligations have been in force since 2 February 2025. GPAI model obligations apply from 2 August 2025. There are no equivalent deadlines under NIST AI RMF.
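The three application dates above can be sketched as a small lookup. This is an illustrative snippet, not a compliance tool: the milestone labels are simplified summaries of the source's dates, and `obligations_in_force` is a hypothetical helper name.

```python
from datetime import date

# Key application dates under the EU AI Act (Regulation (EU) 2024/1689).
# Labels are simplified summaries, not official wording.
AI_ACT_MILESTONES = [
    (date(2025, 2, 2), "Article 4 AI literacy obligations"),
    (date(2025, 8, 2), "GPAI model obligations"),
    (date(2026, 8, 2), "High-risk AI system obligations"),
]


def obligations_in_force(on: date) -> list[str]:
    """Return labels of obligations whose application date has passed."""
    return [label for start, label in AI_ACT_MILESTONES if on >= start]


# Between the GPAI and high-risk deadlines, two milestones apply:
print(obligations_in_force(date(2026, 1, 1)))
# ['Article 4 AI literacy obligations', 'GPAI model obligations']
```

NIST AI RMF has no counterpart to this table, since it imposes no deadlines.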
Frequently asked questions
What is the difference between the EU AI Act and NIST AI RMF?
The EU AI Act is mandatory EU law that applies to any AI system placed on the EU market, with fines up to €35M or 7% of global turnover. NIST AI RMF is a voluntary US framework with no enforcement or fines. Both are risk-based, but the AI Act specifies prohibited uses, mandatory conformity assessments, and specific documentation requirements; NIST provides a flexible governance structure organisations can adapt.
Does NIST AI RMF satisfy EU AI Act requirements?
No. NIST AI RMF alignment does not satisfy EU AI Act compliance obligations. The AI Act requires specific conformity assessments, technical documentation (Annex IV), registration in the EU AI Act database for some high-risk systems, and incident reporting. NIST RMF can serve as a useful internal governance framework but must be supplemented with AI Act-specific controls for EU market access.
Does ISO 42001 help with AI Act compliance?
ISO 42001 (AI Management System standard) covers governance and risk management practices that overlap with many AI Act Article 9 risk management requirements. Like NIST AI RMF, it does not replace the legal obligations: conformity assessments, prohibited use checks, and Annex IV documentation remain mandatory under the AI Act regardless of ISO 42001 certification.
Who must comply with the EU AI Act?
Providers (developers placing AI systems on the EU market), deployers (organisations using AI systems in the EU), importers, and distributors. Most SMEs are deployers and face obligations under Article 4 (AI literacy, since Feb 2025) and Article 26 (deployer obligations for high-risk AI). The high-risk AI compliance deadline is 2 August 2026.
Classify your AI systems under the EU AI Act
The AI Act Compliance Checker asks 13 questions to classify one AI system by risk tier, with links to the relevant Article 4, 5, and 6, Annex III, and Article 50 obligations.
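The tiered logic such a checker applies can be sketched as a simple decision cascade. This is a hypothetical simplification under stated assumptions: the three boolean inputs stand in for the checker's fuller questionnaire, and a real classification also involves Article 6 exemptions and qualified legal review.

```python
def classify_risk_tier(
    prohibited_practice: bool,   # Article 5 use, e.g. social scoring
    annex_iii_use_case: bool,    # falls in an Annex III high-risk category
    transparency_trigger: bool,  # Article 50 trigger, e.g. chatbot output
) -> str:
    """Map simplified yes/no answers to the AI Act's four risk tiers.

    Illustrative only; not a substitute for a legal assessment.
    """
    if prohibited_practice:
        return "prohibited"      # banned outright under Article 5
    if annex_iii_use_case:
        return "high-risk"       # conformity assessment + Annex IV docs
    if transparency_trigger:
        return "limited-risk"    # Article 50 transparency duties
    return "minimal-risk"        # no specific AI Act obligations


print(classify_risk_tier(False, True, False))  # prints "high-risk"
```

The cascade order matters: prohibition is checked first because an Article 5 practice cannot be "cured" by documentation, whereas a high-risk system can be brought into conformity.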
Informational only. Not legal advice; consult qualified legal counsel for your specific situation.