What Early ISO 42001 Implementations Have Revealed
ISO/IEC 42001 was published in December 2023 — the first international standard for artificial intelligence management systems. By the end of 2024, several hundred organisations worldwide had achieved certification. Audit findings from these early certifications reveal consistent patterns: not random gaps, but structural weaknesses in how organisations approach AI governance implementation.
The Six Most Common ISO 42001 Audit Findings
Finding 1 — AI system inventory incomplete
The most consistent finding across early ISO 42001 certifications is an incomplete AI system inventory. Organisations identify bespoke AI systems they have built themselves but miss AI embedded in commercial software platforms: AI-powered features in CRM systems, AI-based anomaly detection in IT security tools, and AI-assisted features in collaboration platforms. Auditors probe whether the organisation has a methodology for identifying AI beyond the systems its own technology team built.
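As a minimal sketch of what such a methodology might record, the snippet below models inventory entries that capture origin (in-house, vendor-embedded, third-party service) so embedded AI is not overlooked. The record fields, origin labels, and example systems are illustrative assumptions, not terminology from the standard.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; field names are illustrative, not mandated by ISO 42001.
@dataclass
class AISystemRecord:
    name: str
    origin: str            # "in-house", "vendor-embedded", or "third-party-service"
    business_owner: str
    use_cases: list = field(default_factory=list)

inventory = [
    AISystemRecord("credit-scoring-model", "in-house", "Risk", ["loan decisions"]),
    AISystemRecord("CRM lead scoring", "vendor-embedded", "Sales", ["prioritisation"]),
    AISystemRecord("SIEM anomaly detection", "vendor-embedded", "IT Security", ["alerting"]),
]

# The check an auditor effectively probes: does the inventory capture any AI
# beyond systems built in-house?
embedded = [r.name for r in inventory if r.origin != "in-house"]
assert embedded, "Inventory lists only in-house AI; embedded/third-party AI likely missed"
```

Recording origin as a first-class field makes the gap auditable: an inventory containing only "in-house" entries is itself a finding.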
Finding 2 — AI risk assessment not proportionate
ISO 42001 Clause 6.1 requires a risk assessment appropriate to the nature and context of AI systems. Early implementations frequently apply a uniform methodology to all AI systems regardless of risk level, over-engineering controls for low-risk AI and under-engineering them for high-risk AI. The proportionality principle requires a different depth of assessment at different risk levels.
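Proportionality can be sketched as a simple tiering table: assessment depth is a function of risk tier. The tier names, review cycles, and control flags below are assumptions for illustration; the standard does not prescribe them.

```python
# Illustrative proportionality sketch; tier names and criteria are assumptions,
# not drawn from ISO 42001 itself.
def assessment_depth(risk_tier: str) -> dict:
    """Map a risk tier to the depth of assessment applied to systems in it."""
    tiers = {
        "low":    {"review_cycle_months": 12, "impact_assessment": False, "independent_validation": False},
        "medium": {"review_cycle_months": 6,  "impact_assessment": True,  "independent_validation": False},
        "high":   {"review_cycle_months": 3,  "impact_assessment": True,  "independent_validation": True},
    }
    return tiers[risk_tier]
```

Under this sketch, a customer-facing decision-making system tiered "high" gets quarterly review and independent validation, while a low-risk internal tool gets an annual review only.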
Finding 3 — AI policy does not reflect actual AI use
An AI policy describing responsible AI principles without reference to the specific AI systems, use cases and contexts in the organisation's scope reads as a generic document rather than an organisational commitment. Auditors test whether the AI policy is connected to actual AI operations.
Finding 4 — Monitoring covers performance but not governance
ISO 42001 requires monitoring of both AI system performance AND AI governance effectiveness. Early implementations build technical performance monitoring (accuracy metrics, drift detection, availability) without governance monitoring: whether risk assessments are kept current, AI incidents are investigated, and governance training is conducted as planned.
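The two monitoring streams can sit side by side, as in the sketch below. The metric names, thresholds (180-day review age, 90% training completion), and dates are illustrative assumptions, not requirements from the standard.

```python
from datetime import date

# Performance metrics (what early implementations usually already track).
performance_metrics = {"accuracy": 0.93, "drift_score": 0.02, "availability": 0.999}

# Governance metrics (what is usually missing); values and thresholds are illustrative.
governance_metrics = {
    "risk_assessment_last_reviewed": date(2024, 3, 1),
    "open_ai_incidents_overdue": 0,
    "governance_training_completion": 0.85,
}

def governance_findings(g: dict, today: date = date(2024, 10, 1)) -> list:
    """Flag governance-effectiveness gaps alongside technical performance monitoring."""
    findings = []
    if (today - g["risk_assessment_last_reviewed"]).days > 180:
        findings.append("risk assessment review overdue")
    if g["open_ai_incidents_overdue"] > 0:
        findings.append("overdue AI incident investigations")
    if g["governance_training_completion"] < 0.9:
        findings.append("governance training below target")
    return findings
```

With the sample values above, performance looks healthy while governance monitoring surfaces two findings, which is exactly the gap auditors report.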
Finding 5 — Management review receives data but does not generate decisions
ISO 42001 Clause 9.3 requires management review to generate decisions about AI governance improvements. Auditors consistently find management review minutes that acknowledge AI performance data without generating specific decisions or actions. A review that notes "AI performance was satisfactory" without generating any actions is not demonstrating continual improvement.
Finding 6 — EU AI Act requirements not integrated
For EU market organisations, auditors increasingly ask whether ISO 42001 implementation addresses EU AI Act obligations. An ISO 42001 programme that does not reference the EU AI Act and does not demonstrate how high-risk AI systems are managed to meet both frameworks represents a gap — particularly where Annex III high-risk AI systems are in scope.
What Successful Implementations Do Differently
- Start with a comprehensive AI inventory methodology before defining scope — covering third-party and embedded AI
- Apply risk assessment depth proportionate to system risk — more rigorous for customer-facing, decision-making AI
- Connect AI policy to specific systems, use cases and risk levels — not generic principles
- Implement both technical performance monitoring and governance effectiveness monitoring
- Design management review to generate specific AI governance decisions — with assigned responsibilities and timelines
- Build EU AI Act technical documentation requirements into the ISO 42001 documentation framework from the outset