FDA Has Issued Four AI Warning Letters
Analysis of all four enforcement actions from August 2024 to August 2025.
In August 2024, the FDA issued its first warning letter citing an AI algorithm as the primary compliance failure. By August 2025, four enforcement actions had established clear expectations for AI validation in GxP environments.
Most Cited Finding: Post-Hoc Acceptance Criteria
Three of the four warning letters cited post-hoc acceptance criteria as a primary finding: selecting performance thresholds only after seeing test results. The GAMP AI Guide is unambiguous on this point: acceptance criteria must be defined prospectively, before model testing begins.
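One way to make criteria demonstrably prospective is to freeze them in a hashed, timestamped record before any test results exist, then evaluate results only against that frozen record. The sketch below is illustrative, not from any warning letter; the metric names and thresholds are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical acceptance criteria, defined and frozen BEFORE model testing.
criteria = {
    "sensitivity_min": 0.95,
    "specificity_min": 0.90,
}

# Hash and timestamp the frozen criteria so any later change is detectable.
frozen_bytes = json.dumps(criteria, sort_keys=True).encode()
criteria_record = {
    "criteria": criteria,
    "sha256": hashlib.sha256(frozen_bytes).hexdigest(),
    "frozen_at": datetime.now(timezone.utc).isoformat(),
}

def evaluate(results: dict, criteria: dict) -> bool:
    """Pass/fail judged only against the prospectively defined thresholds."""
    return (results["sensitivity"] >= criteria["sensitivity_min"]
            and results["specificity"] >= criteria["specificity_min"])

# Test results arrive later and are compared against the frozen criteria.
print(evaluate({"sensitivity": 0.97, "specificity": 0.92}, criteria))
```

The point of the hash is auditability: a QA reviewer can re-hash the stored criteria and confirm the thresholds were not quietly adjusted after results came in.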
What Your QA Team Must Have
Based on all four enforcement actions, every AI system in a GxP environment needs:
- a URS written before development
- acceptance criteria defined before testing
- a validation protocol and report
- a model version control log with SHA-256 hashing
- an ongoing performance monitoring programme
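The version control log in the checklist above can be kept as a simple append-only record pairing each model artefact with its SHA-256 digest. This is a minimal sketch, assuming a file-based model artefact and a JSON-lines log; the function names and log layout are illustrative, not a prescribed format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large model artefacts are not read into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def log_model_version(model_path: Path, log_path: Path, version: str) -> dict:
    """Append one timestamped entry per model version to a JSON-lines log."""
    entry = {
        "version": version,
        "file": model_path.name,
        "sha256": sha256_of(model_path),
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with log_path.open("a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```

Re-hashing the deployed artefact and comparing it to the logged digest gives a quick integrity check that the model in production is the model that was validated.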