Frequently Asked Questions
GAMP 5 AI, Annex 22, EU AI Act, ISO 42001 — answers to the questions QA and regulatory professionals ask most.
What is GAMP 5 AI?
GAMP 5 AI refers to the ISPE GAMP AI Guide (2025) — the pharmaceutical industry's validation standard for artificial intelligence and machine learning systems used in GxP-regulated environments. It extends the existing GAMP 5 methodology to address the specific challenges of AI: data governance, model validation, ongoing monitoring, and change control for model updates. It is the framework FDA and EMA inspectors expect organisations to follow when auditing AI systems.
Do I need GAMP 5 AI validation for my system?
If your organisation uses any AI or ML system in a GxP-regulated activity — batch release, visual inspection, pharmacovigilance signal detection, clinical decision support, quality control — then yes. The GAMP AI Guide applies wherever the output of an AI system is used to make or influence a GxP decision. If in doubt, a gap assessment will determine whether your specific systems are in scope.
What does EU GMP Annex 22 require?
EU GMP Annex 22 on Artificial Intelligence requires that AI systems used in GxP pharmaceutical manufacturing be validated using a risk-based approach consistent with GAMP AI Guide methodology. Key requirements: data governance documentation, pre-defined acceptance criteria, model version control, ongoing performance monitoring, and a change control procedure for AI model updates. Annex 22 entered into force in March 2026.
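The pre-defined acceptance criteria requirement is the one inspectors probe most directly: thresholds must be fixed and approved before testing, and the pass/fail comparison should be mechanical. A minimal Python sketch of that comparison, using hypothetical metric names and thresholds (not taken from Annex 22 itself):

```python
# Hypothetical pre-defined acceptance criteria, fixed and approved before testing.
ACCEPTANCE_CRITERIA = {"sensitivity": 0.98, "specificity": 0.95}

def evaluate(observed):
    """Return pass/fail per metric: the observed value must meet its pre-set threshold."""
    return {metric: observed[metric] >= threshold
            for metric, threshold in ACCEPTANCE_CRITERIA.items()}

# One failing metric means this model version cannot be released.
results = evaluate({"sensitivity": 0.99, "specificity": 0.94})
print(results)  # specificity fails here
```

The point is that the thresholds exist, dated and signed off, before any test data is seen — retrofitting criteria to observed results is the classic inspection finding.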
What is the EU AI Act and does it apply to pharma?
The EU AI Act is the European Union's binding regulation for artificial intelligence. It classifies AI systems by risk level. Pharmaceutical AI — including batch release algorithms, diagnostic AI, clinical decision support, and visual inspection systems — is classified as High Risk under Annex III. High Risk AI systems must undergo a conformity assessment before deployment; systems already in use must be brought into conformity by the August 2026 enforcement date.
What are the EU AI Act penalties for pharma?
For prohibited AI practices: up to €35 million or 7% of global annual turnover, whichever is higher. For non-compliance with the requirements for High Risk AI (the category that covers most pharma AI) and most other obligations: up to €15 million or 3%. For supplying incorrect or misleading information to authorities: up to €7.5 million or 1%. These apply from August 2026.
How long does an AI gap analysis take?
Our AI gap assessment takes 10 working days from kick-off to delivery of the written report, on a fixed timeline. It covers the 8-area GAMP AI maturity model across all AI systems in scope, maps findings against EU GMP Annex 22, the GAMP AI Guide and EU AI Act requirements simultaneously, and delivers a prioritised remediation plan. Contact us for a tailored quote.
What is the difference between data drift and concept drift?
Data drift occurs when the statistical characteristics of your input data change — for example, a new raw material supplier changes the spectral profile your AI was trained on. Concept drift occurs when the underlying relationship between inputs and outputs changes — the pattern the model learned is no longer valid in the current operational environment. Both require monitoring; both can invalidate a previously validated model without any code changes.
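Data drift of the kind described above can be flagged statistically before it shows up in outcomes. A minimal pure-Python sketch using a two-sample Kolmogorov-Smirnov statistic to compare live inputs against the validation-time reference distribution; the threshold and the example values are hypothetical, and the real limit would be set and justified in the monitoring plan:

```python
import bisect

def ks_statistic(reference, current):
    """Two-sample KS statistic: the largest gap between the two empirical CDFs."""
    ref, cur = sorted(reference), sorted(current)
    def ecdf(sample, x):
        # Fraction of the sample that is <= x.
        return bisect.bisect_right(sample, x) / len(sample)
    return max(abs(ecdf(ref, v) - ecdf(cur, v)) for v in ref + cur)

DRIFT_THRESHOLD = 0.2  # hypothetical; fixed during validation, not tuned afterwards

reference = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1]  # e.g. training-era spectral peaks
current   = [11.0, 11.2, 10.9, 11.1, 11.3, 10.8, 11.0, 11.2]  # live data after a supplier change

stat = ks_statistic(reference, current)
if stat > DRIFT_THRESHOLD:
    print(f"Data drift flagged: KS = {stat:.2f}")  # triggers investigation per the monitoring plan
```

Concept drift, by contrast, is usually caught by tracking a performance metric over time (for example, rolling agreement with a reference method) rather than input statistics, since the inputs may look unchanged while the learned relationship no longer holds.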
What is ISO 42001 and why is it relevant to pharma?
ISO 42001:2023 is the AI Management System standard — structurally similar to ISO 9001 or ISO 27001. It establishes the governance framework for an organisation's AI programme: policy, risk management, oversight structure, supplier qualification, and post-market surveillance. For pharma, ISO 42001 is relevant because it supports EU AI Act conformity assessment for High Risk AI and aligns directly with EU GMP Annex 22 governance obligations.
What is the six-document AI inspection package?
The six documents an FDA or EMA inspector expects to find for every AI system in a GxP environment:
1. User Requirements Specification — what the AI must do, written before development.
2. Validation Protocol — how it will be tested, written before testing.
3. Pre-defined Acceptance Criteria — pass/fail thresholds set before testing.
4. Validation Report — evidence that testing was completed and the criteria were met.
5. Monitoring Plan — how drift and performance degradation will be detected.
6. Change Control Log — a record of every model update, with hashing and impact assessment.
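The hashing expectation in the change control log can be illustrated with a short sketch: each model artifact is fingerprinted with SHA-256 so every logged update is tied to one exact, tamper-evident file version. The function name, log format, and fields here are illustrative, not prescribed by the GAMP AI Guide:

```python
import datetime
import hashlib
import json

def log_model_update(model_path, version, reason, logfile="change_control_log.jsonl"):
    """Append one change-control entry: version, artifact SHA-256, reason, timestamp."""
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "version": version,
        "sha256": digest,    # fingerprint of the exact model file deployed
        "reason": reason,    # cross-references the impact assessment record
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(logfile, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```

An inspector (or your own periodic review) can then re-hash the deployed artifact and confirm it matches the logged entry, proving the model in production is the model that was validated.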