Whitepaper · 9 pages · Free

AI Governance Failures: What the First Wave of ISO 42001 Implementations Revealed

The first wave of ISO 42001 certifications has revealed six consistent implementation failures. Most are structural — gaps in inventory methodology, risk assessment proportionality, and governance monitoring — not gaps in the quality of individual AI systems. This whitepaper analyses each failure and the implementation approach that avoids it.

Published May 2026 · AI Governance · ISO 42001 · EU AI Act

What Early ISO 42001 Implementations Have Revealed

ISO/IEC 42001 was published in December 2023 — the first international standard for artificial intelligence management systems. By end of 2024, several hundred organisations globally had achieved certification. The audit findings from these early certifications reveal consistent patterns: not random gaps but structural weaknesses in how organisations are approaching AI governance implementation.

  • ISO 42001 — first international AI management system standard, published December 2023
  • 38% of early ISO 42001 Stage 2 audits recorded a significant finding related to AI risk assessment scope or methodology
  • December 2023 publication date — most implementers are working without the benefit of established industry practice

The Six Most Common ISO 42001 Audit Findings

Finding 1 — AI system inventory incomplete

The most consistent finding across early ISO 42001 certifications is an incomplete AI system inventory. Organisations identify bespoke AI systems they have built but fail to identify AI embedded in commercial software platforms — AI-powered features in CRM systems, AI-based anomaly detection in IT security tools, AI-assisted features in collaboration platforms. Auditors probe whether the organisation has a methodology for identifying AI beyond the systems its technology team built itself.
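One way to make an inventory methodology concrete is a structured record per system that forces the origin question to be answered. The sketch below is purely illustrative — the schema, field names, and example systems (`AISystemRecord`, "ExampleCRM Inc.") are assumptions, not part of the standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI inventory (hypothetical schema)."""
    name: str
    origin: str            # "bespoke", "third_party", or "embedded"
    vendor: Optional[str]  # supplier, for third-party / embedded AI
    use_case: str
    risk_tier: str         # e.g. "low", "medium", "high"

# A methodology must capture AI beyond bespoke builds, including AI
# embedded in commercial platforms (hypothetical vendors shown):
inventory = [
    AISystemRecord("churn-model", "bespoke", None, "customer retention scoring", "high"),
    AISystemRecord("CRM lead scoring", "embedded", "ExampleCRM Inc.", "sales prioritisation", "medium"),
    AISystemRecord("SIEM anomaly detection", "embedded", "ExampleSec Ltd.", "IT security monitoring", "medium"),
]

# Quick audit-style check: an inventory listing only bespoke systems is suspect.
embedded_count = sum(1 for r in inventory if r.origin != "bespoke")
assert embedded_count > 0, "Inventory lists only bespoke systems - likely incomplete"
```

The point of the `origin` field is that it cannot be left blank: every system must be classified, which surfaces the embedded-AI category that early implementations missed.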

Finding 2 — AI risk assessment not proportionate

ISO 42001 Clause 6.1 requires a risk assessment appropriate to the nature and context of AI systems. Early implementations frequently apply a uniform methodology to all AI systems regardless of risk level — creating over-engineering for low-risk AI and under-engineering for high-risk AI. The proportionality principle requires different depth of assessment for different risk levels.
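The proportionality principle can be expressed as a simple tiering rule: the risk level drives the assessment method and review cycle. The tiers, methods, and cycle lengths below are illustrative assumptions, not prescribed by Clause 6.1:

```python
def assessment_depth(risk_tier: str) -> dict:
    """Map a system's risk tier to a proportionate assessment scope (illustrative values)."""
    tiers = {
        "low":    {"method": "screening questionnaire",
                   "review_cycle_months": 24},
        "medium": {"method": "standard risk workshop",
                   "review_cycle_months": 12},
        "high":   {"method": "full impact assessment incl. bias and human-oversight analysis",
                   "review_cycle_months": 6},
    }
    return tiers[risk_tier]
```

Encoding the rule this way avoids both failure modes the auditors flagged: low-risk AI gets a lightweight screening rather than a full assessment, while high-risk AI gets deeper analysis on a shorter cycle.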

Finding 3 — AI policy does not reflect actual AI use

An AI policy describing responsible AI principles without reference to the specific AI systems, use cases and contexts in the organisation's scope reads as a generic document rather than an organisational commitment. Auditors test whether the AI policy is connected to actual AI operations.

Finding 4 — Monitoring covers performance but not governance

ISO 42001 requires monitoring of both AI system performance AND AI governance effectiveness. Early implementations typically put technical performance monitoring in place — accuracy metrics, drift detection, availability — without monitoring governance itself: whether risk assessments are maintained, AI incidents are investigated, and governance training is conducted as planned.
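Governance monitoring can be automated alongside model metrics. The checks below are a minimal sketch; the indicators, thresholds (12-month review cycle, 90% training completion), and function signature are assumptions chosen to mirror the three gaps named above:

```python
from datetime import date, timedelta

def governance_checks(last_risk_review: date,
                      open_incidents_over_sla: int,
                      training_completion_pct: float,
                      today: date) -> list:
    """Return governance-effectiveness findings, separate from model performance metrics."""
    findings = []
    # Are risk assessments maintained? (assumed 12-month review cycle)
    if today - last_risk_review > timedelta(days=365):
        findings.append("risk assessment not maintained within 12 months")
    # Are AI incidents investigated within their SLA?
    if open_incidents_over_sla > 0:
        findings.append(f"{open_incidents_over_sla} AI incident(s) past investigation SLA")
    # Is governance training happening as planned? (assumed 90% target)
    if training_completion_pct < 0.9:
        findings.append("governance training below 90% completion target")
    return findings
```

An empty result would feed into management review as evidence that the governance layer, not just the models, is operating as planned.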

Finding 5 — Management review receives data but does not generate decisions

ISO 42001 Clause 9.3 requires management review to generate decisions about AI governance improvements. Auditors consistently find management review minutes that acknowledge AI performance data without generating specific decisions or actions. A review that notes "AI performance was satisfactory" without generating any actions is not demonstrating continual improvement.

Finding 6 — EU AI Act requirements not integrated

For EU market organisations, auditors increasingly ask whether ISO 42001 implementation addresses EU AI Act obligations. An ISO 42001 programme that does not reference the EU AI Act and does not demonstrate how high-risk AI systems are managed to meet both frameworks represents a gap — particularly where Annex III high-risk AI systems are in scope.

What Successful Implementations Do Differently

  • Start with a comprehensive AI inventory methodology before defining scope — covering third-party and embedded AI
  • Apply risk assessment depth proportionate to system risk — more rigorous for customer-facing, decision-making AI
  • Connect AI policy to specific systems, use cases and risk levels — not generic principles
  • Implement both technical performance monitoring and governance effectiveness monitoring
  • Design management review to generate specific AI governance decisions — with assigned responsibilities and timelines
  • Build EU AI Act technical documentation requirements into the ISO 42001 documentation framework from the outset
ISO 42001 Implementation Readiness Assessment

  • AI inventory includes AI embedded in commercial software — not only bespoke systems
  • Risk assessment depth proportionate to system risk level
  • AI policy references specific use cases and contexts in scope
  • Monitoring covers both technical AI performance and AI governance effectiveness
  • Management review generates specific AI governance improvement decisions
  • EU AI Act obligations identified and integrated for applicable high-risk systems
Implementing ISO 42001 and avoiding these gaps?

AI Governance specialists. Implementation programme within 48 hours.

About AjaCertX
AjaCertX is a specialist compliance, certification and assurance partner. Our AI Governance practice delivers ISO 42001 implementation, EU AI Act compliance, and integrated AI governance programmes.