ISO 42001 and the EU AI Act are frequently described as complementary. They are — but the complementarity is more nuanced than most technology organisations have been told. ISO 42001 is a management system standard that tells you how to govern AI. The EU AI Act is binding law that tells you what specific obligations apply to specific AI systems. Neither can substitute for the other, and the organisations building governance programmes that try to address both with a single framework are discovering the gaps when it matters most — in regulatory scrutiny, in customer due diligence, and in the conformity assessment process.
The Distinction That Technology Leaders Are Getting Wrong
Across technology organisations building AI products, deploying AI systems, or developing foundation models, the same misunderstanding appears consistently: that pursuing ISO 42001 certification addresses the EU AI Act, or conversely, that EU AI Act compliance work makes ISO 42001 unnecessary. Both assumptions produce governance gaps — and in a regulatory environment where enforcement is beginning and customer due diligence is intensifying, those gaps carry real consequences.
The technology sector has additional complexity beyond what other sectors face. Technology organisations are frequently both providers of AI systems (developing and placing AI on the market) and deployers of AI systems (using AI in their own operations). The EU AI Act treats providers and deployers differently — with more extensive obligations on providers. Technology organisations need to assess their obligations in both capacities, not just one.
Where the Frameworks Differ — And Why It Matters
ISO 42001 is principles-based; the EU AI Act is prescriptive
ISO 42001 Clause 6.1 requires organisations to assess AI-related risks and implement appropriate treatment. The standard does not specify what that risk assessment must contain, what methodologies must be used, or what specific controls must result. This flexibility is valuable for implementing a management system — it allows proportionate responses to different AI risk levels.
The EU AI Act is prescriptive for high-risk AI. It specifies exactly what a risk management system must cover (Article 9), exactly what data governance measures are required (Article 10), exactly what technical documentation must contain (Annex IV), and exactly what human oversight capabilities must be implemented (Article 14). An organisation that has implemented ISO 42001 and believes it has therefore addressed EU AI Act requirements has not: the EU AI Act's prescriptive requirements go significantly beyond ISO 42001's principles-based controls.
ISO 42001 covers the entire AI portfolio; the EU AI Act has tiered obligations
ISO 42001 applies across all AI systems within an organisation's scope — a uniform management system approach regardless of AI risk level. This is appropriate for management system purposes: you need consistent governance regardless of whether the AI system carries high or low risk.
The EU AI Act applies different obligations based on risk tier. Minimal-risk AI faces essentially no new requirements; limited-risk AI faces transparency-only requirements. High-risk AI faces the full compliance regime. GPAI models above capability thresholds face additional requirements. An ISO 42001 management system that applies the same governance controls across all risk tiers may be over-engineering governance for low-risk AI and under-engineering it for high-risk AI that requires the EU AI Act's prescriptive controls.
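The tiered structure described above can be sketched as a simple mapping from risk tier to obligation set. This is an illustrative simplification, not a legal mapping: the tier names, obligation labels, and article references in comments are assumptions chosen for readability.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers (simplified, illustrative naming)."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

# Illustrative obligation sets per tier -- not exhaustive, not legal advice.
OBLIGATIONS = {
    RiskTier.MINIMAL: set(),
    RiskTier.LIMITED: {"transparency_disclosure"},
    RiskTier.HIGH: {
        "risk_management_system",    # Article 9
        "data_governance",           # Article 10
        "technical_documentation",   # Article 11 / Annex IV
        "transparency_to_deployers", # Article 13
        "human_oversight",           # Article 14
        "accuracy_and_robustness",   # Article 15
        "conformity_assessment",
        "eu_database_registration",
    },
}

def obligations_for(tier: RiskTier) -> set:
    """Return the obligation set for a tier; prohibited practices have none
    because they may not be placed on the market at all."""
    if tier is RiskTier.PROHIBITED:
        raise ValueError("Prohibited practices may not be placed on the market")
    return OBLIGATIONS[tier]
```

The asymmetry is the point: a governance programme that applies the `HIGH` obligation set uniformly is doing eight workstreams' worth of work for systems that legally require zero or one.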
Conformity assessment mechanisms are different
ISO 42001 certification is performed by an accredited certification body through a Stage 1 (documentation review) and Stage 2 (implementation assessment) audit process — the same process used for ISO 27001, ISO 9001 and other management system certifications. The certificate demonstrates that your AI management system meets the standard's requirements.
EU AI Act conformity assessment for high-risk AI systems is a different process with different outcomes. For some high-risk AI systems, self-assessment with appropriate technical documentation is sufficient. For others — particularly AI components in products covered by existing EU safety legislation — a notified body assessment is required. The notified body is not assessing your management system: it is assessing whether a specific AI system meets the technical requirements for its risk category. These two processes address different questions and produce different evidence.
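The routing logic in the paragraph above can be sketched as a decision function. This is a deliberate simplification of the real rules, assuming only the two distinctions the text draws (Annex III classification, and AI components embedded in products under existing EU safety legislation); the function name and return strings are illustrative.

```python
def conformity_route(is_annex_iii: bool, embedded_in_regulated_product: bool) -> str:
    """Illustrative sketch of high-level conformity-assessment routing
    for an AI system -- a simplification, not legal advice."""
    if embedded_in_regulated_product:
        # AI components in products covered by existing EU safety
        # legislation generally require a notified body assessment.
        return "notified body assessment"
    if is_annex_iii:
        # Many standalone Annex III high-risk systems: self-assessment
        # (internal control) with Annex IV technical documentation.
        return "self-assessment with technical documentation"
    return "no conformity assessment required"
```

Note what is absent from this function: nothing about the organisation's management system. That is the structural point of this section -- the ISO 42001 certification audit and the EU AI Act conformity route answer different questions.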
Technology organisations that have ISO 42001 certification and believe their EU AI Act obligations are covered are making a mistake that their legal team, their largest enterprise clients, and eventually their national market surveillance authority will identify. The certification and the compliance are different things.
The Integrated Governance Programme for Technology Organisations
- Separate your AI portfolio into provider and deployer categories. Technology organisations developing and selling AI products are EU AI Act providers. Technology organisations using AI tools built by others (GitHub Copilot, Salesforce Einstein, OpenAI APIs) are EU AI Act deployers. Each category has different obligations. Many technology organisations are both simultaneously — a company that develops its own AI products and uses third-party AI tools must manage both sets of obligations.
- Apply EU AI Act risk classification to your entire AI portfolio — specifically and technically. The Annex III high-risk list must be assessed by someone with regulatory knowledge of the categories and their boundaries. Generic descriptions of your AI systems — "we use AI to improve customer experience" — are not sufficient for classification. The classification must address: what decisions or recommendations does the system make or contribute to, in what context, affecting which persons, with what consequences if the system is wrong?
- Build ISO 42001 as your governance architecture — not your compliance solution. Use ISO 42001 to establish the management framework: AI policy, risk assessment process, roles and responsibilities, documentation requirements, monitoring and measurement, audit programme, and management review. This architecture creates the organisational discipline that makes EU AI Act compliance sustainable rather than a one-time project.
- Layer EU AI Act prescriptive requirements on top of ISO 42001 for high-risk systems. For each AI system classified as high-risk under Annex III, implement the EU AI Act prescriptive requirements: the Article 9 risk management system, Article 10 data governance, Article 11 technical documentation (Annex IV format), Article 13 transparency, Article 14 human oversight, and Article 15 accuracy, robustness, and cybersecurity specifications. These requirements should be documented and maintained within your ISO 42001 management system — they are the specific controls your system generates for high-risk AI.
- Implement EU AI Act-specific obligations that have no ISO 42001 equivalent. Several EU AI Act requirements have no equivalent in ISO 42001: registration of high-risk AI systems in the EU AI Act database before deployment, conformity assessment procedures (self-assessment or notified body depending on system type), CE marking for high-risk AI in regulated products, and the specific GPAI model obligations if you develop or deploy models above capability thresholds. These must be addressed as EU AI Act-specific workstreams, not through ISO 42001 controls.
- Build customer transparency into your AI governance programme. Enterprise customers are increasingly asking technology vendors to demonstrate their AI governance maturity as part of procurement due diligence. ISO 42001 certification provides a credible, third-party validated signal of management system maturity. EU AI Act Annex IV technical documentation provides the system-specific evidence that sophisticated customers will request for high-risk applications. Both need to be available and accessible — not a discovery exercise when a customer due diligence request arrives.
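The first two steps above — provider/deployer categorisation and system-level classification — amount to keeping a structured inventory record per AI system that answers the classification questions (what decisions, in what context, affecting whom, with what consequences). A minimal sketch, assuming hypothetical field names of our own choosing:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an AI portfolio inventory (field names illustrative)."""
    name: str
    capacity: str                  # "provider", "deployer", or "both"
    decisions_made: str            # what the system decides or recommends
    context: str                   # operational context of use
    persons_affected: str          # who is affected by its outputs
    failure_consequences: str      # what happens if the system is wrong
    annex_iii_category: Optional[str] = None  # matched category, if any

    @property
    def is_high_risk(self) -> bool:
        """High-risk iff a specific Annex III category has been matched."""
        return self.annex_iii_category is not None

# Example entry -- specific enough to support classification, unlike
# "we use AI to improve customer experience".
cv_screen = AISystemRecord(
    name="CV screening model",
    capacity="provider",
    decisions_made="ranks job applicants for interview shortlisting",
    context="recruitment for enterprise customers",
    persons_affected="job applicants",
    failure_consequences="qualified candidates wrongly rejected",
    annex_iii_category="employment and worker management",
)
```

The record forces the specificity the classification step demands: an entry cannot be marked high-risk without naming the Annex III category it matches, and the `capacity` field makes dual provider/deployer status explicit per system rather than per organisation.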
| Dimension | ISO 42001 | EU AI Act (High Risk) |
|---|---|---|
| Legal basis | Voluntary standard | Binding EU law |
| Scope | All AI in organisational scope | AI systems classified under Annex III |
| Risk approach | Principles-based, proportionate | Prescriptive requirements by category |
| Conformity mechanism | Certification body audit | Self-assessment or notified body |
| Documentation | Management system documentation | Annex IV technical file — specific content |
| Market access | No direct requirement | CE marking + EU database registration |
| Enforcement | Certification suspension/withdrawal | National market surveillance — fines up to €15M / 3% turnover (€35M / 7% for prohibited practices) |
| GPAI coverage | General AI management principles | Specific GPAI model obligations above capability thresholds |
How AjaCertX Helps
AjaCertX delivers integrated ISO 42001 and EU AI Act compliance programmes for technology organisations — software companies, AI developers, SaaS providers, and enterprises deploying AI at scale.
- AI portfolio assessment — provider vs deployer categorisation and EU AI Act Annex III classification
- ISO 42001 gap assessment and management system implementation
- EU AI Act Annex IV technical documentation development for high-risk AI systems
- GPAI model obligation assessment and compliance programme design
- EU AI Act database registration support
- Customer-facing AI governance documentation package development
- Third-party AI vendor due diligence framework design
- ISO 42001 certification support — Stage 1 and Stage 2 preparation
AI Governance specialists. Integrated ISO 42001 and EU AI Act programme. Proposal within 48 hours.
Conclusion
Technology organisations that treat ISO 42001 and the EU AI Act as alternatives are building governance programmes with gaps. Those that treat them as layers — ISO 42001 as the management system architecture, EU AI Act as the prescriptive compliance layer for high-risk systems — are building programmes that will satisfy regulators, enterprise customers, and the conformity assessment processes that will increasingly determine market access.
The window for building before enforcement pressure arrives is closing. The technology organisations that establish robust, dual-framework AI governance now will be demonstrably better positioned for the regulatory environment of 2026 and beyond — and will have the customer-facing documentation to prove it.