AjaCertX
ONE ECOSYSTEM. INFINITE STANDARDS.



FDA Issued Four AI Warning Letters

Analysis of all four enforcement actions from August 2024 to August 2025.

January 2026 · 7 min read

In August 2024, the FDA issued its first warning letter citing an AI algorithm as the primary compliance failure. By August 2025, four enforcement actions had established clear expectations for AI validation in GxP environments.

Most Cited Finding: Post-Hoc Acceptance Criteria

Three of the four warning letters cited post-hoc acceptance criteria as a primary finding — selecting performance thresholds after seeing test results. The GAMP AI Guide is unambiguous: acceptance criteria must be defined prospectively, before model testing begins.
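One practical way to make acceptance criteria demonstrably prospective is to freeze them in a serialized, hashed, timestamped artifact before any test run, so a later change to the thresholds is detectable. The sketch below is illustrative only; the field names and thresholds are assumptions, not taken from the GAMP AI Guide or the warning letters.

```python
import hashlib
import json
from datetime import datetime, timezone

# Acceptance criteria are frozen BEFORE model testing begins.
criteria = {
    "sensitivity_min": 0.95,
    "specificity_min": 0.90,
    "max_false_negative_rate": 0.05,
}

# Serialize deterministically, then hash and timestamp the frozen record
# so any post-hoc edit to the thresholds changes the digest.
frozen = json.dumps(criteria, sort_keys=True).encode()
criteria_hash = hashlib.sha256(frozen).hexdigest()
frozen_at = datetime.now(timezone.utc).isoformat()

def evaluate(results: dict, criteria: dict) -> bool:
    """Compare test results against the pre-frozen thresholds."""
    return (
        results["sensitivity"] >= criteria["sensitivity_min"]
        and results["specificity"] >= criteria["specificity_min"]
        and results["false_negative_rate"] <= criteria["max_false_negative_rate"]
    )

# Later, after testing, results are judged only against the frozen record:
results = {"sensitivity": 0.97, "specificity": 0.92, "false_negative_rate": 0.03}
print(evaluate(results, criteria))  # True only if every threshold is met
```

Storing `criteria_hash` and `frozen_at` in the validation protocol before testing gives an auditor a simple way to confirm the thresholds were not selected after the results were known.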

What Your QA Team Must Have

Based on all four enforcement actions, every AI system in a GxP environment needs:

- a user requirements specification (URS) written before development
- acceptance criteria defined before testing begins
- a validation protocol and report
- a model version control log with SHA-256 hashing
- an ongoing performance monitoring programme
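The model version control log mentioned above can be kept as a simple append-only record, with each model artifact hashed via SHA-256 at registration time. This is a minimal sketch assuming a tab-separated log file; the function names and log layout are illustrative, not a prescribed format.

```python
import hashlib
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Hash the model artifact in chunks so large files are not read into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_model_version(log_path: str, model_path: str, version: str) -> str:
    """Append a timestamped entry (UTC time, version, path, digest) to the log."""
    digest = sha256_of_file(model_path)
    timestamp = datetime.now(timezone.utc).isoformat()
    with open(log_path, "a") as log:
        log.write(f"{timestamp}\t{version}\t{model_path}\t{digest}\n")
    return digest
```

Re-hashing a deployed artifact and comparing the digest against the logged entry is then enough to show that the model in production is the one that was validated.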

