AjaCertX
ONE ECOSYSTEM. INFINITE STANDARDS.


AI Validation · Best Practice

The Most Common AI Validation Failure

Why post-hoc acceptance criteria appear in most FDA AI warning letters.

November 2025 · 4 min read

The most common finding across FDA AI warning letters to date: acceptance criteria defined after testing was complete. This practice is the AI equivalent of changing your test protocol after seeing the results, and the FDA considers it a fundamental validation failure.

Why This Happens

AI model development is inherently iterative. Teams build a model, evaluate performance, and adjust. The natural instinct is to set the acceptance threshold at whatever the model happened to achieve. The GAMP AI Guide, however, requires acceptance criteria to be defined before testing, based on the clinical or process requirement rather than the model's observed output.

How to Get It Right

Start with the user requirement: what must this AI system achieve? For a visual inspection AI, for example: sensitivity ≥ 99.5%, specificity ≥ 98%. These numbers come from the quality requirement; write them into the URS and validation protocol before model development begins.
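The discipline above can be sketched in code: the acceptance criteria are fixed as constants before any evaluation runs, and the test simply reports pass/fail against them. This is a minimal illustration, not a validated protocol; the threshold values and function names are assumptions based on the example figures in this article.

```python
# Acceptance criteria, fixed in the URS / validation protocol
# BEFORE model testing begins. Values here mirror the article's
# visual-inspection example and are illustrative only.
ACCEPTANCE_CRITERIA = {"sensitivity": 0.995, "specificity": 0.98}

def evaluate(y_true, y_pred):
    """Compute sensitivity and specificity from binary labels (1 = defect)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

def meets_criteria(results, criteria=ACCEPTANCE_CRITERIA):
    """Pass only if every pre-defined criterion is met.

    The criteria dict is never derived from `results` -- that would be
    the post-hoc failure mode the article warns against.
    """
    return all(results[metric] >= threshold
               for metric, threshold in criteria.items())
```

The key design point: `ACCEPTANCE_CRITERIA` is an input to the validation run, never an output of it. Adjusting those numbers after seeing `evaluate()` results is exactly the post-hoc practice the warning letters cite.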

