Practical Guide · 15 pages · Free

EU AI Act: What Technology Companies Must Do Before August 2026

August 2026 is when high-risk AI system obligations under the EU AI Act become fully enforceable. Most technology companies have not yet completed their Annex III risk classification — the prerequisite for everything else. This guide walks through every obligation in sequence.

Published May 2026 · Technology & AI · Tags: EU AI Act, Technology, AI Governance, Compliance

The EU AI Act Timeline That Technology Companies Must Know

The EU AI Act entered into force on 1 August 2024. Its obligations apply in phases: prohibited AI practices (unacceptable risk) applied from 2 February 2025. GPAI model obligations applied from 2 August 2025. High-risk AI system obligations — the most extensive compliance requirements — apply from 2 August 2026. This is the deadline that most technology companies are working toward, and for many, the preparation has not yet begun at the depth required.

The error most technology companies make is treating the EU AI Act as a future regulatory concern while they wait for enforcement clarity. The technical documentation requirements, the conformity assessment processes, and the registration procedures for high-risk AI systems take months to complete correctly. Companies beginning compliance work in June 2026 will not be compliant by August 2026.

Access the complete guide
All 15 pages — practical implementation guidance, checklists and templates. Free, instant access.
No spam. No sales calls. AjaCertX will email you a copy for reference.
Step 01
Determine your role — provider, deployer or both
Technology companies developing AI and placing it on the market are providers — subject to the most extensive EU AI Act obligations. Technology companies using AI built by others in their own operations are deployers — subject to a narrower set of obligations. Many technology companies are both simultaneously: they develop AI products (provider) and use third-party AI tools (deployer). Assess your role for every AI system in your portfolio.
Step 02
Classify every AI system against Annex III
Work through the Annex III high-risk list for every AI system you develop or deploy. The categories are: biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes. Separately, AI used as a safety component of products covered by EU safety legislation listed in Annex I (medical devices, machinery, vehicles) is also high-risk, under Article 6(1) rather than Annex III, and on a later timeline of 2 August 2027. This classification must be done by someone with specific knowledge of the categories and their legal boundaries.
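As a sketch of how a compliance team might track the outputs of Steps 01 and 02 together, the structure below records each system's role(s) and classification. The class, field and category names are illustrative assumptions, not terminology defined in the Act.

```python
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    PROVIDER = auto()   # develops AI and places it on the EU market
    DEPLOYER = auto()   # uses AI developed by others in its own operations

class AnnexIIICategory(Enum):
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION_BORDER = auto()
    JUSTICE_DEMOCRACY = auto()

@dataclass
class AISystem:
    name: str
    roles: set[Role]                              # a company can hold both roles
    annex_iii_category: AnnexIIICategory | None   # None = no Annex III match
    embedded_in_regulated_product: bool           # Article 6(1) / Annex I route

    @property
    def high_risk(self) -> bool:
        # High-risk via Article 6(2)/Annex III or Article 6(1)/Annex I
        return self.annex_iii_category is not None or self.embedded_in_regulated_product

# Example entry: a CV-screening tool the company both builds and uses internally
cv_screener = AISystem(
    name="cv-screener",
    roles={Role.PROVIDER, Role.DEPLOYER},
    annex_iii_category=AnnexIIICategory.EMPLOYMENT,
    embedded_in_regulated_product=False,
)
assert cv_screener.high_risk
```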
Step 03
Implement high-risk AI system requirements
For each Annex III high-risk system: implement the Article 9 risk management system throughout the AI lifecycle, apply Article 10 data governance to training and testing datasets, produce Annex IV technical documentation (Article 11), implement Article 12 record-keeping through automatic event logging, ensure Article 13 transparency (instructions for use), implement Article 14 human oversight measures, and meet the Article 15 accuracy, robustness and cybersecurity requirements.
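One way to keep these obligations auditable is an explicit article-to-evidence map per system. The sketch below is illustrative: the evidence descriptions are assumptions about what a review would typically expect, not wording from the Act.

```python
# Illustrative mapping of high-risk obligations (Articles 9-15) to the
# evidence a compliance review might expect; descriptions are assumptions.
HIGH_RISK_OBLIGATIONS = {
    "Art. 9 risk management": "risk register covering the full AI lifecycle",
    "Art. 10 data governance": "training/testing dataset documentation",
    "Art. 11 / Annex IV": "technical documentation pack",
    "Art. 12 record-keeping": "automatic event logging in place",
    "Art. 13 transparency": "instructions for use supplied to deployers",
    "Art. 14 human oversight": "documented oversight measures",
    "Art. 15 accuracy/robustness/cybersecurity": "test and assessment reports",
}

def missing_evidence(collected: dict[str, str]) -> list[str]:
    """Return the obligations for which no evidence has been recorded yet."""
    return [ob for ob in HIGH_RISK_OBLIGATIONS if ob not in collected]

# Example: only risk management evidence gathered so far
print(missing_evidence({"Art. 9 risk management": "risk-register-v3.xlsx"}))
```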
Step 04
Determine the conformity assessment route
For most high-risk AI systems not embedded in regulated products, the conformity assessment route is internal control: self-assessment against Annex VI with internal documentation review. The main exception is biometric systems under Annex III point 1, which require a notified body assessment unless harmonised standards or common specifications are applied in full (Article 43(1)). For AI embedded in products covered by existing EU safety legislation where a notified body assessment was already required for the product, the notified body assessing the product also assesses the AI component. Document your conformity assessment determination and the evidence that supports it.
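The routing described above reduces to a small decision procedure. This sketch encodes the general rule plus the biometrics caveat; the function and parameter names are assumptions made for illustration.

```python
def conformity_route(
    embedded_in_regulated_product: bool,
    notified_body_required_for_product: bool,
    is_biometric_annex_iii: bool,
    harmonised_standards_fully_applied: bool,
) -> str:
    """Sketch of the conformity assessment routing described in Step 04."""
    if embedded_in_regulated_product and notified_body_required_for_product:
        # The notified body assessing the product also assesses the AI component
        return "notified body (via product legislation)"
    if is_biometric_annex_iii and not harmonised_standards_fully_applied:
        # Annex III point 1 systems need a notified body unless harmonised
        # standards or common specifications are applied in full (Art. 43(1))
        return "notified body (Annex VII)"
    return "internal control / self-assessment (Annex VI)"

print(conformity_route(False, False, False, True))
# -> internal control / self-assessment (Annex VI)
```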
Step 05
Register in the EU AI Act database
Before placing a high-risk AI system on the market, register it in the EU database for high-risk AI systems, the central registry established and maintained by the European Commission. Registration is a mandatory pre-market step for providers. Prepare the required registration information: system name, provider details, intended purpose, risk management summary, and post-market monitoring plan.
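A pre-submission completeness check can catch missing registration fields early. The field list below only restates the items named in this step; the actual database form asks for more, so treat this as a minimum sketch, not the real schema.

```python
# Fields named in Step 05; the real EU database form asks for more,
# so treat this tuple as a minimum, not the full schema.
REQUIRED_REGISTRATION_FIELDS = (
    "system_name",
    "provider_details",
    "intended_purpose",
    "risk_management_summary",
    "post_market_monitoring_plan",
)

def registration_gaps(record: dict[str, str]) -> list[str]:
    """Return required fields that are absent or left blank."""
    return [f for f in REQUIRED_REGISTRATION_FIELDS if not record.get(f)]

draft = {"system_name": "cv-screener", "intended_purpose": "CV triage"}
print(registration_gaps(draft))
# -> ['provider_details', 'risk_management_summary', 'post_market_monitoring_plan']
```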
Step 06
GPAI model obligations if applicable
If you develop or provide a general-purpose AI model (a large language model, foundation model or similar), you face GPAI obligations regardless of scale: technical documentation, transparency information for downstream providers, and a copyright compliance policy. Models trained using more than 10^25 FLOPs of compute are presumed to present systemic risk and face additional obligations, including model evaluation, adversarial testing, serious incident reporting and cybersecurity protections.
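For a first pass at the 10^25 FLOPs threshold, a common back-of-envelope estimate of dense-transformer training compute is 6 x parameters x training tokens. This heuristic is an assumption borrowed from scaling-law practice, not a calculation method defined in the Act.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens

# Example: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}")                         # ~6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False: below the presumption
```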
EU AI Act August 2026 Readiness Checklist
Provider vs deployer role determined for every AI system in the portfolio
Annex III risk classification completed by someone with specific regulatory knowledge
Article 9 risk management system implemented for all high-risk AI systems
Annex IV technical documentation produced and maintained for all high-risk systems
Conformity assessment route determined and documented for all high-risk systems
High-risk AI systems registered in EU AI Act database before market placement
GPAI model obligations assessed if foundation or large language models are developed or provided
Preparing for EU AI Act August 2026 obligations?

AI Governance and EU AI Act specialists. Compliance programme proposal within 48 hours.

About AjaCertX
AjaCertX is a specialist compliance, certification and assurance partner serving technology organisations globally. Our AI Governance practice delivers EU AI Act compliance programmes, ISO 42001 implementation, and integrated AI governance frameworks.