Article · 9 min read

ISO 42001 vs EU AI Act: The Distinction Technology Organisations Cannot Afford to Miss

ISO 42001 certification demonstrates you have a systematic approach to AI management. The EU AI Act requires something specific: that high-risk AI systems meet prescriptive technical requirements, documented in a precise format, assessed by the appropriate conformity route. These are different claims. Most technology organisations making the first are not making the second.

Published May 2026 · Technology & AI · ISO 42001 · EU AI Act · AI Governance · Technology
Executive Summary

ISO 42001 and the EU AI Act are frequently described as complementary. They are — but the complementarity is more nuanced than most technology organisations have been told. ISO 42001 is a management system standard that tells you how to govern AI. The EU AI Act is binding law that tells you what specific obligations apply to specific AI systems. Neither can substitute for the other, and the organisations building governance programmes that try to address both with a single framework are discovering the gaps when it matters most — in regulatory scrutiny, in customer due diligence, and in the conformity assessment process.

Aug 2026: EU AI Act high-risk AI system obligations fully applicable — organisations without compliant governance programmes face enforcement risk from this date
ISO 42001: First international standard for AI management systems — published December 2023, available for certification, and already referenced in public sector procurement frameworks across the EU and UK
7%: Maximum EU AI Act fine as a percentage of global annual turnover for the most serious violations — making this one of the highest-consequence regulatory frameworks in the technology sector

The Distinction That Technology Leaders Are Getting Wrong

Across technology organisations building AI products, deploying AI systems, or developing foundation models, the same misunderstanding appears consistently: that pursuing ISO 42001 certification addresses the EU AI Act, or conversely, that EU AI Act compliance work makes ISO 42001 unnecessary. Both assumptions produce governance gaps — and in a regulatory environment where enforcement is beginning and customer due diligence is intensifying, those gaps carry real consequences.

The technology sector has additional complexity beyond what other sectors face. Technology organisations are frequently both providers of AI systems (developing and placing AI on the market) and deployers of AI systems (using AI in their own operations). The EU AI Act treats providers and deployers differently — with more extensive obligations on providers. Technology organisations need to assess their obligations in both capacities, not just one.
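This dual capacity is easiest to manage when it is modelled explicitly in the AI inventory. A minimal sketch, assuming a simple per-system role flag (our own construct, not terminology from the Act):

```python
from enum import Flag, auto

class ActorRole(Flag):
    """EU AI Act capacities an organisation can hold for a given AI system.
    Flag supports the common case of holding both roles at once.
    Illustrative construct, not a term defined by the Act."""
    PROVIDER = auto()  # develops the system and places it on the EU market
    DEPLOYER = auto()  # uses the system in its own operations

# A company that sells its own AI product and also uses third-party AI tools
# holds both roles, and therefore both sets of obligations:
roles = ActorRole.PROVIDER | ActorRole.DEPLOYER
assert ActorRole.PROVIDER in roles and ActorRole.DEPLOYER in roles
```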

Where the Frameworks Differ — And Why It Matters

ISO 42001 is principles-based; the EU AI Act is prescriptive

ISO 42001 Clause 6.1 requires organisations to assess AI-related risks and implement appropriate treatment. The standard does not specify what that risk assessment must contain, what methodologies must be used, or what specific controls must result. This flexibility is valuable for implementing a management system — it allows proportionate responses to different AI risk levels.

The EU AI Act is prescriptive for high-risk AI. It specifies exactly what a risk management system must cover (Article 9), exactly what data governance measures are required (Article 10), exactly what technical documentation must contain (Annex IV), and exactly what human oversight capabilities must be implemented (Article 14). An organisation that has implemented ISO 42001 and believes it has therefore addressed EU AI Act requirements has not — the EU AI Act's prescriptive requirements go significantly beyond ISO 42001's principles-based controls.

ISO 42001 covers the entire AI portfolio; the EU AI Act has tiered obligations

ISO 42001 applies across all AI systems within an organisation's scope — a uniform management system approach regardless of AI risk level. This is appropriate for management system purposes: you need consistent governance regardless of whether the AI system carries high or low risk.

The EU AI Act applies different obligations based on risk tier. Minimal-risk AI faces essentially no obligations; limited-risk AI faces transparency-only requirements. High-risk AI faces the full compliance regime. GPAI models above capability thresholds face additional requirements. An ISO 42001 management system that applies the same governance controls across all risk tiers may be over-engineering governance for low-risk AI and under-engineering it for high-risk AI that requires the EU AI Act's prescriptive controls.
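A minimal sketch of that tiering logic (an illustration of structure only; actual classification requires legal analysis against Annex III and the Act's detailed provisions, and the tier names and obligation labels here are our own shorthand):

```python
# Illustrative only: tier names and obligation labels are this article's
# shorthand, not terms of art from the EU AI Act.
OBLIGATIONS_BY_TIER: dict[str, list[str]] = {
    "minimal": [],
    "limited": ["transparency requirements"],
    "high_risk": [
        "Art. 9 risk management system",
        "Art. 10 data governance",
        "Art. 11 / Annex IV technical documentation",
        "Art. 13 transparency",
        "Art. 14 human oversight",
        "Art. 15 accuracy and robustness",
        "conformity assessment",
        "EU database registration",
    ],
    "gpai_above_threshold": ["additional GPAI model obligations"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the illustrative obligation set for a risk tier."""
    return OBLIGATIONS_BY_TIER[tier]
```

The point of the structure is the asymmetry: a uniform ISO 42001 control set maps poorly onto obligation sets that differ this much by tier.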

Conformity assessment mechanisms are different

ISO 42001 certification is performed by an accredited certification body through a Stage 1 (documentation review) and Stage 2 (implementation assessment) audit process — the same process used for ISO 27001, ISO 9001 and other management system certifications. The certificate demonstrates that your AI management system meets the standard's requirements.

EU AI Act conformity assessment for high-risk AI systems is a different process with different outcomes. For some high-risk AI systems, self-assessment with appropriate technical documentation is sufficient. For others — particularly AI components in products covered by existing EU safety legislation — a notified body assessment is required. The notified body is not assessing your management system: it is assessing whether a specific AI system meets the technical requirements for its risk category. These two processes address different questions and produce different evidence.
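As a hedged sketch of the routing described above (a simplification: the real determination depends on which Annex III category applies and which sectoral product legislation the system falls under, not on a boolean flag):

```python
def conformity_route(is_high_risk: bool,
                     in_regulated_product: bool) -> str:
    """Illustrative EU AI Act conformity routing (simplified).

    `in_regulated_product` stands in for "AI component in a product
    covered by existing EU safety legislation"; the real test is a
    legal analysis of the system and the applicable product law.
    """
    if not is_high_risk:
        return "no conformity assessment; transparency duties may still apply"
    if in_regulated_product:
        return "notified body assessment"
    return "self-assessment with Annex IV technical documentation"
```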

Technology organisations that have ISO 42001 certification and believe their EU AI Act obligations are covered are making a mistake that their legal team, their largest enterprise clients, and eventually their national market surveillance authority will identify. The certification and the compliance are different things.

AjaCertX AI Governance Practice

The Integrated Governance Programme for Technology Organisations

  1. Separate your AI portfolio into provider and deployer categories. Technology organisations developing and selling AI products are EU AI Act providers. Technology organisations using AI tools built by others (GitHub Copilot, Salesforce Einstein, OpenAI APIs) are EU AI Act deployers. Each category has different obligations. Many technology organisations are both simultaneously — a company that develops its own AI products and uses third-party AI tools must manage both sets of obligations.
  2. Apply EU AI Act risk classification to your entire AI portfolio — specifically and technically. The Annex III high-risk list must be assessed by someone with regulatory knowledge of the categories and their boundaries. Generic descriptions of your AI systems — "we use AI to improve customer experience" — are not sufficient for classification. The classification must address: what decisions or recommendations does the system make or contribute to, in what context, affecting which persons, with what consequences if the system is wrong?
  3. Build ISO 42001 as your governance architecture — not your compliance solution. Use ISO 42001 to establish the management framework: AI policy, risk assessment process, roles and responsibilities, documentation requirements, monitoring and measurement, audit programme, and management review. This architecture creates the organisational discipline that makes EU AI Act compliance sustainable rather than a one-time project.
  4. Layer EU AI Act prescriptive requirements on top of ISO 42001 for high-risk systems. For each AI system classified as high-risk under Annex III, implement the EU AI Act prescriptive requirements: the Article 9 risk management system, Article 10 data governance, Article 11 technical documentation (Annex IV format), Article 13 transparency, Article 14 human oversight, and Article 15 accuracy and robustness specifications. These requirements should be documented and maintained within your ISO 42001 management system — they are the specific controls your system generates for high-risk AI (see the register sketch after this list).
  5. Implement EU AI Act-specific obligations that have no ISO 42001 equivalent. Several EU AI Act requirements have no equivalent in ISO 42001: registration of high-risk AI systems in the EU AI Act database before deployment, conformity assessment procedures (self-assessment or notified body depending on system type), CE marking for high-risk AI in regulated products, and the specific GPAI model obligations if you develop or deploy models above capability thresholds. These must be addressed as EU AI Act-specific workstreams, not through ISO 42001 controls.
  6. Build customer transparency into your AI governance programme. Enterprise customers are increasingly asking technology vendors to demonstrate their AI governance maturity as part of procurement due diligence. ISO 42001 certification provides a credible, third-party validated signal of management system maturity. EU AI Act Annex IV technical documentation provides the system-specific evidence that sophisticated customers will request for high-risk applications. Both need to be available and accessible — not a discovery exercise when a customer due diligence request arrives.
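The six steps converge on a single system-of-record. A minimal register sketch, using hypothetical field names of our own choosing (an inventory pattern, not a format mandated by either framework):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One AI portfolio register entry: ISO 42001 governance metadata
    with the EU AI Act prescriptive layer for high-risk systems.
    All field names are illustrative."""
    name: str
    role: str                       # "provider", "deployer", or "both"
    risk_tier: str                  # e.g. "minimal", "limited", "high_risk"
    annex_iii_category: str | None  # None if not high-risk
    # ISO 42001 governance architecture (management system fields)
    owner: str = ""
    risk_assessment_ref: str = ""
    last_management_review: str = ""
    # EU AI Act prescriptive layer (populated for high-risk systems only)
    article_9_to_15_controls: list[str] = field(default_factory=list)
    annex_iv_file: str | None = None
    conformity_route: str | None = None
    eu_database_registered: bool = False

# A deployer-side entry for a third-party coding assistant:
assistant = AISystemRecord(
    name="Internal coding assistant",
    role="deployer",
    risk_tier="minimal",
    annex_iii_category=None,
)
```

A register like this makes the gaps visible: a record with annex_iii_category set but annex_iv_file empty is precisely the kind of finding a customer due diligence request or a market surveillance authority will surface.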
Dimension | ISO 42001 | EU AI Act (High Risk)
Legal basis | Voluntary standard | Binding EU law
Scope | All AI in organisational scope | AI systems classified under Annex III
Risk approach | Principles-based, proportionate | Prescriptive requirements by category
Conformity mechanism | Certification body audit | Self-assessment or notified body
Documentation | Management system documentation | Annex IV technical file — specific content
Market access | No direct requirement | CE marking + EU database registration
Enforcement | Certification suspension/withdrawal | National market surveillance — fines up to €35M / 7% turnover
GPAI coverage | General AI management principles | Specific GPAI model obligations above capability thresholds
AI Governance Programme Readiness Checklist for Technology Organisations
  • We have assessed our EU AI Act obligations both as a provider (AI we develop and sell) and as a deployer (AI tools we use)
  • Every AI system has been classified against EU AI Act Annex III by someone with specific regulatory knowledge of the categories
  • For each high-risk AI system, EU AI Act Article 9-15 requirements have been implemented and documented
  • EU AI Act Annex IV technical documentation exists and is maintained for each high-risk system
  • High-risk AI systems are registered in the EU AI Act database before deployment
  • GPAI model obligations have been assessed if we develop or deploy models above defined capability thresholds
  • An ISO 42001 management system is implemented and provides the governance architecture for our entire AI portfolio
  • Our third-party AI vendor due diligence process requests EU AI Act technical documentation from providers
  • Customer-facing AI governance documentation is available for enterprise procurement due diligence requests

Frequently Asked Questions

We are a UK-based technology company. Does the EU AI Act apply to us?
Yes, if you develop AI systems that are placed on the EU market or whose outputs are used within the EU — regardless of where your company is based. The EU AI Act follows the same extraterritorial logic as GDPR. UK-based AI developers selling to EU enterprise clients, providing AI through EU data centres, or building AI into products sold in the EU are in scope. The UK is developing its own AI regulation but has not yet enacted an AI Act equivalent — UK-based companies need to comply with the EU AI Act for their EU market activities independently of whatever UK regulation emerges.
We are an AI startup with limited resource. How should we prioritise?
Prioritise EU AI Act classification first — this determines whether you have high-risk obligations at all. If none of your AI systems fall within the Annex III high-risk categories and you are not developing GPAI models above capability thresholds, your EU AI Act obligations are relatively light (transparency requirements for limited-risk AI). In that case, ISO 42001 implementation as a voluntary governance signal for enterprise customers is a commercially valuable investment that is proportionate to your resource. If you do have high-risk AI, the EU AI Act prescriptive requirements are non-negotiable — prioritise those before pursuing ISO 42001 certification.
Can ISO 42001 certification be used as evidence of EU AI Act compliance in customer contracts?
It can be referenced as evidence of AI management system maturity — not as evidence of EU AI Act compliance. These are different claims. A customer contract clause that requires "ISO 42001 certification" is asking for evidence of management system maturity. A customer contract clause that requires "compliance with the EU AI Act" requires evidence that your AI systems meet the specific legal requirements of the Act — which requires EU AI Act-specific documentation, not just an ISO certificate. Make sure your contracts and your marketing materials are precise about which claim you are making.

How AjaCertX Helps

AjaCertX delivers integrated ISO 42001 and EU AI Act compliance programmes for technology organisations — software companies, AI developers, SaaS providers, and enterprises deploying AI at scale.

  • AI portfolio assessment — provider vs deployer categorisation and EU AI Act Annex III classification
  • ISO 42001 gap assessment and management system implementation
  • EU AI Act Annex IV technical documentation development for high-risk AI systems
  • GPAI model obligation assessment and compliance programme design
  • EU AI Act database registration support
  • Customer-facing AI governance documentation package development
  • Third-party AI vendor due diligence framework design
  • ISO 42001 certification support — Stage 1 and Stage 2 preparation
Building your technology AI governance programme?

AI Governance specialists. Integrated ISO 42001 and EU AI Act programme. Proposal within 48 hours.

Conclusion

Technology organisations that treat ISO 42001 and the EU AI Act as alternatives are building governance programmes with gaps. Those that treat them as layers — ISO 42001 as the management system architecture, EU AI Act as the prescriptive compliance layer for high-risk systems — are building programmes that will satisfy regulators, enterprise customers, and the conformity assessment processes that will increasingly determine market access.

The window for building before enforcement pressure arrives is closing. The technology organisations that establish robust, dual-framework AI governance now will be demonstrably better positioned for the regulatory environment of 2026 and beyond — and will have the customer-facing documentation to prove it.

About AjaCertX
AjaCertX is a specialist compliance, certification and assurance partner serving technology organisations globally. Our AI Governance practice delivers ISO 42001 implementation, EU AI Act compliance programmes, and integrated AI governance frameworks for software companies, AI developers, SaaS providers and enterprises deploying AI at scale.