Life science organisations deploying AI in GxP-regulated environments face a regulatory intersection that most AI governance programmes are not designed for: EU GMP Annex 22 (GxP-specific AI governance) and the EU AI Act (horizontal AI regulation) apply simultaneously to the same systems — with different requirements, different enforcement authorities, and different but overlapping technical documentation obligations. Organisations that address these as a single regulatory challenge will miss Annex 22-specific requirements. Those that address them as entirely separate challenges will duplicate effort unnecessarily. This article explains the intersection and the integrated programme that addresses both.
Two Regulatory Clocks Running Simultaneously
The publication of EU GMP Annex 22 in draft form for public consultation in July 2025, arriving shortly after the EU AI Act entered into force, created an immediate challenge for pharmaceutical, biotech and medical device organisations: how do two AI regulatory frameworks — one GxP-specific and one horizontal — interact for life science AI systems?
The answer is that they do not simply overlap — they apply from different regulatory angles to the same systems, creating obligations that must be satisfied simultaneously but to different authorities. An AI-based visual inspection system at a pharmaceutical manufacturer is subject to Annex 22 as a GxP computerised system, and to the EU AI Act as a high-risk AI system deployed in a safety-critical manufacturing context. The EMA and the national market surveillance authority enforcing the EU AI Act are both potentially interested in how that system is governed — and they have different questions.
Understanding where the two frameworks' requirements align, where they diverge, and how to build a single validation and governance programme that satisfies both is now an operational necessity for any life science organisation with AI in its GxP environment.
Where Annex 22 and the EU AI Act Create Different Demands
Different concepts of risk
Annex 22 risk assessment is aligned to GxP risk management methodology — assessing the impact of AI system failure on product quality, patient safety, and data integrity. The risk categorisation follows GxP logic: what is the direct impact on the product or patient if this system produces an incorrect output?
The EU AI Act risk classification is based on the intended use context — Annex III defines high-risk categories by the domain in which the AI is used, not by the technical characteristics of the AI system itself. A batch release AI is high-risk under the EU AI Act because batch release is a safety-critical pharmaceutical manufacturing step, regardless of how sophisticated or reliable the AI system is. The risk concepts are related but distinct — and both risk assessments must be documented separately.
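The distinction can be made concrete with a minimal sketch of a dual classification record that keeps the two assessments side by side but separate. All names and category values here are illustrative assumptions, not terms from either regulation:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DualRiskClassification:
    """Records both risk assessments for one GxP AI system.

    gxp_impact is the Annex 22 / GxP view (illustrative values:
    "direct", "indirect", "none"); annex_iii_high_risk is the
    EU AI Act view (is the system used in an Annex III context?).
    """
    system_name: str
    gxp_impact: str
    annex_iii_high_risk: bool
    annex_iii_rationale: str

    def governance_required(self) -> bool:
        # Either framework alone is enough to trigger formal governance,
        # but the two conclusions are tracked independently.
        return self.gxp_impact == "direct" or self.annex_iii_high_risk


batch_release_ai = DualRiskClassification(
    system_name="Batch release decision support",
    gxp_impact="direct",
    annex_iii_high_risk=True,
    annex_iii_rationale="Safety-critical manufacturing context",
)
print(batch_release_ai.governance_required())  # True
```

The design point is that neither field derives from the other: a system can be high-risk under Annex III while having only indirect GxP impact, and vice versa, which is why both conclusions are stored and documented separately.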
Different validation requirements
Annex 22 requires validation aligned to the GxP computerised system validation framework — IQ, OQ, PQ extended with AI-specific elements: intended use boundary definition, training data governance, model performance validation against GxP acceptance criteria, and continuous performance monitoring with drift detection. The GAMP 5 AI Supplement provides the methodology for implementing these requirements.
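The continuous-monitoring element can be sketched as a rolling comparison of model performance against a GxP acceptance criterion, with escalation when the rolling mean drifts below it. The thresholds, window size and escalation label below are illustrative assumptions, not values from Annex 22 or the GAMP 5 AI Supplement:

```python
from collections import deque


class DriftMonitor:
    """Flags when rolling model performance falls below a GxP
    acceptance criterion (illustrative thresholds only)."""

    def __init__(self, acceptance_threshold: float, window: int = 50):
        self.acceptance_threshold = acceptance_threshold
        self.scores = deque(maxlen=window)  # rolling window of recent scores

    def record(self, score: float) -> str:
        self.scores.append(score)
        rolling_mean = sum(self.scores) / len(self.scores)
        if rolling_mean < self.acceptance_threshold:
            # In practice this would raise a GxP deviation / CAPA record.
            return "ESCALATE"
        return "OK"


monitor = DriftMonitor(acceptance_threshold=0.95, window=5)
for score in [0.99, 0.98, 0.97, 0.96, 0.97]:
    status = monitor.record(score)
print(status)               # OK (rolling mean 0.974)
print(monitor.record(0.70))  # ESCALATE (rolling mean drops to 0.916)
```

A production monitor would track multiple GxP metrics (defect escape rate, process capability indicators) and feed escalations into the existing deviation workflow rather than returning a string, but the control logic is the same.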
The EU AI Act requires conformity assessment — either self-assessment or notified body assessment depending on whether the AI system is embedded in a regulated product. For pharmaceutical manufacturing AI that is a standalone system (not embedded in a CE-marked medical device), self-assessment with Annex IV technical documentation is typically sufficient. For AI embedded in a medical device, the medical device notified body assessment may be required.
Different documentation requirements
Annex 22 documentation is within the GxP document management framework — controlled, version-managed, subject to change control, maintained as part of the quality management system. The documentation covers the full GxP validation lifecycle: user requirements specification, risk assessment, validation plan, IQ/OQ/PQ protocols and reports, validation summary report, ongoing monitoring plan.
EU AI Act Annex IV technical documentation has a prescribed content structure that is different from GxP validation documentation: general description of the AI system, description of design and development process, information on training, validation and testing data, information on monitoring, functioning and control, and a description of any change management procedure. This documentation is maintained by the provider and made available to national market surveillance authorities and notified bodies on request. It is not the same as GxP validation documentation — but there is significant content overlap that should be exploited.
The life science organisations that manage Annex 22 and EU AI Act compliance most efficiently are those that design a single documentation framework that produces both GxP validation records and EU AI Act technical documentation from the same evidence base. The overlap is large enough to make this achievable — but only with deliberate design, not by accident.
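One way to make that deliberate design tangible is to tag each item in the shared evidence base with the output it feeds in each framework, then assemble each regulator-facing package from the same map. The evidence item names and document labels below are hypothetical examples of such a mapping:

```python
# One shared evidence base; each item is tagged with the regulatory
# output it feeds under each framework (labels are illustrative).
EVIDENCE_MAP = {
    "intended_use_boundary": {"annex22": "URS / validation plan",
                              "ai_act": "Annex IV general description"},
    "training_data_lineage": {"annex22": "ALCOA+ data integrity records",
                              "ai_act": "Article 10 data governance file"},
    "performance_validation": {"annex22": "PQ report vs acceptance criteria",
                               "ai_act": "accuracy and robustness evidence"},
    "monitoring_plan": {"annex22": "ongoing monitoring plan",
                        "ai_act": "Annex IV monitoring description"},
}


def build_package(framework: str) -> dict:
    """Assemble one regulator-facing package from the shared evidence base."""
    return {item: outputs[framework] for item, outputs in EVIDENCE_MAP.items()}


gxp_package = build_package("annex22")
ai_act_package = build_package("ai_act")
print(len(gxp_package), len(ai_act_package))  # 4 4
```

Because both packages are generated from the same map, an evidence item cannot silently exist in one package and not the other, which is the cross-referencing property the article argues for.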
The Integrated Annex 22 and EU AI Act Compliance Programme
- Conduct a dual classification of every AI system in your GxP environment. For each system: apply Annex 22 GxP risk assessment (impact on product quality, patient safety, data integrity) AND apply EU AI Act Annex III risk classification (is this system used in a context specified in Annex III?). For most GxP AI systems, both assessments will conclude that significant governance is required — but the specific requirements they generate are different and must be tracked separately.
- Design an integrated validation protocol that produces both Annex 22 and EU AI Act evidence. Structure your validation documentation to serve both regulatory purposes simultaneously. The user requirements specification should document the intended use boundary (Annex 22) and the system description for EU AI Act Annex IV. The training data governance section should satisfy Annex 22 data integrity requirements and EU AI Act Article 10 data governance. The performance validation section should satisfy Annex 22 GxP acceptance criteria and EU AI Act accuracy and robustness requirements. One validation exercise, two regulatory outputs.
- Implement continuous monitoring that addresses both Annex 22 and EU AI Act requirements. Annex 22 requires continuous performance monitoring against GxP acceptance criteria with drift detection and escalation. The EU AI Act requires human oversight capability for high-risk AI systems (Article 14) and ongoing monitoring through the risk management system and the post-market monitoring plan (Articles 9 and 72). These requirements are substantively similar — but Annex 22 is more prescriptive about the GxP-specific metrics that must be monitored (product quality outcomes, process capability indicators) while the EU AI Act is more prescriptive about the human oversight capability that must be maintained.
- Establish separate but linked change control procedures for Annex 22 and EU AI Act. Any change to an AI system in a GxP environment requires GxP change control — documented, impact assessed, validated as required before implementation. For EU AI Act purposes, significant changes to high-risk AI systems must also be assessed for their impact on the conformity of the system with EU AI Act requirements, and significant modifications may require re-assessment or updated conformity documentation. These are separate procedures with different triggers and different outputs — but they should be linked so that a GxP change control automatically triggers an EU AI Act impact assessment.
- Prepare for dual regulatory scrutiny. Annex 22 compliance will be assessed by GMP inspectors from medicines regulators as part of GxP inspections: EU national competent authorities (coordinated by the EMA) enforce Annex 22 directly, while authorities such as the MHRA, FDA and TGA apply comparable expectations under their own frameworks. EU AI Act compliance will be assessed by national market surveillance authorities as part of EU AI Act enforcement. These are different inspection programmes with different inspector expertise and different evidence requirements. Prepare documentation packages for both (the GxP validation package and the EU AI Act Annex IV technical file) as separate but cross-referenced documents.
- Align your ISO 42001 AI management system to both frameworks. ISO 42001 provides the management system governance — risk assessment, documentation, monitoring, change management, management review — that supports both Annex 22 and EU AI Act compliance. For life science organisations, the ISO 42001 AI management system should be scoped to cover all AI systems in the GxP environment and should explicitly reference Annex 22 and EU AI Act requirements as the specific content requirements that the management system controls must produce.
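The linked change control step above can be sketched as a single intake routine in which opening a GxP change record for an AI system automatically raises the EU AI Act impact assessment, so the link cannot be forgotten. Function names, dictionary keys and action strings are illustrative assumptions about how such a workflow might be modelled:

```python
def gxp_change_control(change: dict) -> list[str]:
    """Open a GxP change record; for AI systems, automatically raise
    the linked EU AI Act conformity impact assessment (illustrative)."""
    actions = [f"GxP change record opened: {change['description']}"]
    if change.get("is_ai_system"):
        # The GxP trigger always carries the AI Act assessment with it.
        actions.append("EU AI Act conformity impact assessment raised")
        if change.get("significant"):
            # Significant modifications may require re-assessment or
            # updated conformity documentation.
            actions.append("Flagged for possible conformity re-assessment")
    return actions


for action in gxp_change_control({
    "description": "Retrain visual inspection model on new defect classes",
    "is_ai_system": True,
    "significant": True,
}):
    print(action)
```

The two procedures remain separate (different triggers, different outputs), but the GxP intake is the single entry point, which is what keeps them linked in practice.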
| Requirement | EU GMP Annex 22 | EU AI Act (High Risk) | Integration opportunity |
|---|---|---|---|
| Risk assessment | GxP impact assessment — product quality, patient safety, data integrity | Annex III classification + Article 9 risk management system | Single risk assessment with dual output — GxP impact AND Annex III classification |
| Training data | ALCOA+ data integrity for training datasets | Article 10 data governance — quality, representativeness, bias | Single training data governance framework satisfying both |
| Validation / conformity | GxP IQ/OQ/PQ + AI-specific performance validation | Self-assessment or notified body — Annex IV technical documentation | Integrated validation protocol producing both GxP records and Annex IV documentation |
| Monitoring | Continuous GxP performance monitoring with drift detection | Article 14 human oversight and Article 9 risk monitoring | Single monitoring programme with GxP and AI Act outputs |
| Change control | GxP change control for all system modifications | Assessment of impact on EU AI Act conformity for significant changes | Linked change control — GxP trigger includes AI Act impact assessment |
| Documentation | GxP validation package in QMS | Annex IV technical file maintained by provider | Cross-referenced documentation packages serving both regulatory programmes |
How AjaCertX Helps
AjaCertX delivers integrated Annex 22 and EU AI Act compliance programmes for pharmaceutical, biotech, medical device and clinical research organisations. Our GxP and AI Validation practice combines deep GxP regulatory expertise with EU AI Act regulatory knowledge — the combination required to address both frameworks simultaneously.
- Dual classification — Annex 22 GxP risk assessment AND EU AI Act Annex III classification for every GxP AI system
- Integrated validation documentation framework design — producing GxP validation records and EU AI Act Annex IV technical documentation from a single validation exercise
- Training data governance programme satisfying both ALCOA+ and EU AI Act Article 10 requirements
- Continuous performance monitoring programme design covering both GxP and EU AI Act requirements
- Gap assessment for legacy AI systems validated before Annex 22 and EU AI Act requirements took effect
- ISO 42001 AI management system implementation for life science organisations
- Inspection readiness for both health authority GxP inspection and EU AI Act market surveillance
- GAMP 5 AI Supplement implementation methodology training for GxP teams
GxP and AI Validation specialists. Integrated compliance programme proposal within 48 hours.
Conclusion
EU GMP Annex 22 and the EU AI Act create parallel, simultaneous obligations for life science organisations with AI in their GxP environments. The frameworks address the same AI systems from different regulatory angles — Annex 22 from the GxP quality and safety perspective, the EU AI Act from the horizontal AI risk perspective — and both will be enforced in 2026 and beyond.
The organisations that manage this most efficiently are building integrated programmes from the outset — designing validation documentation that serves both regulatory purposes, implementing monitoring that addresses both frameworks' requirements, and maintaining cross-referenced documentation packages for the different inspection and assessment programmes that will assess compliance. The organisations that manage Annex 22 and EU AI Act as entirely separate programmes will spend significantly more resource on compliance and still risk gaps where the two frameworks interact.