EU AI Act Omnibus Deal Closed: What Actually Changed for Developers on 7 May 2026
Post #897 in the sota.io EU Cyber Compliance Series
On 7 May 2026, EU co-legislators reached a provisional political agreement on the EU AI Act Omnibus package — the first significant amendment to the AI Act since its entry into force in August 2024. The deal was struck under the Cypriot Council Presidency during Trilogue #3.
The Omnibus is not a rewrite. The fundamental structure of the AI Act — the risk-based framework, the prohibited practices, GPAI obligations, and the supervisory architecture — remains intact. What changed is a set of targeted adjustments to timelines, new specific prohibitions, and a grace period for AI-generated content labelling.
For developers and SaaS companies, the question is simple: does this change your compliance roadmap? In some areas, yes. In the most urgent areas — the ones hitting in August 2026 — no.
This guide breaks down every material change in the Omnibus agreement and maps the implications for your August 2026 deadline.
The August 2026 Deadline Is Still Coming
Before getting to the changes, start with the single most important thing to understand about the EU AI Act Omnibus: what it did not change.
Article 50 (Transparency for AI-generated content) and the full GPAI chapter (Chapter V, Articles 51–56) still apply from 2 August 2026. These provisions were not modified in the Omnibus agreement.
This means:
- AI systems that generate synthetic images, audio, video, or text must label that content as AI-generated (Article 50(2))
- General-purpose AI model providers must publish technical documentation, implement EU copyright law compliance policies, and publish training data summaries (Article 53)
- Systemic-risk GPAI models (above 10^25 FLOPs) face additional obligations including adversarial testing and incident reporting (Article 55)
- Every organisation deploying AI systems must maintain AI literacy among staff (Article 4)
If you have been planning around the 2 August 2026 date for these obligations, your plan does not change.
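The GPAI tiering described above can be sketched as a simple classifier: standard Article 53 obligations for every provider, plus the Article 55 obligations once cumulative training compute crosses the 10^25 FLOPs systemic-risk threshold. This is a simplified illustration of the rule, not a complete obligation inventory; the function and field names are our own.

```python
# Sketch: classifying a GPAI model against the systemic-risk compute
# threshold (cumulative training compute > 10^25 FLOPs). The obligation
# strings are a simplified summary, not exhaustive legal requirements.

SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold for systemic-risk GPAI

def gpai_obligations(training_flops: float) -> list[str]:
    """Return a (simplified) obligation set for a GPAI model provider."""
    obligations = [
        "technical documentation (Article 53)",
        "EU copyright compliance policy (Article 53)",
        "training data summary (Article 53)",
    ]
    if training_flops > SYSTEMIC_RISK_FLOPS:
        obligations += [
            "adversarial testing (Article 55)",
            "serious incident reporting (Article 55)",
        ]
    return obligations

# A model trained with ~3 x 10^25 FLOPs crosses the threshold: 5 items.
print(gpai_obligations(3e25))
# A model at 10^24 FLOPs stays in the standard tier: 3 items.
print(gpai_obligations(1e24))
```

The point of sketching it this way: the systemic-risk tier is additive, so every provider carries the Article 53 baseline from August 2026 regardless of model size.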
What the Omnibus Changed: Five Concrete Modifications
1. Annex III High-Risk AI — Timeline Extended to 2 December 2027
The most significant Omnibus change for product companies is the timeline extension for Annex III high-risk AI systems.
Under the original AI Act, Annex III high-risk AI systems were subject to full compliance requirements — conformity assessment, technical documentation, human oversight obligations, and registration in the EU database — from 2 August 2026 (for new systems placed on the market after that date).
The Omnibus agreement extends this deadline to 2 December 2027.
What Annex III covers: Annex III lists specific high-risk AI application areas. The current categories include:
- Biometric identification and categorisation (Article 6(2) and Annex III point 1)
- Critical infrastructure management (point 2)
- Education and vocational training (point 3)
- Employment and worker management — CV screening, promotion decisions (point 4)
- Access to essential private and public services — credit scoring, insurance underwriting (point 5)
- Law enforcement (point 6)
- Migration, asylum, border control (point 7)
- Administration of justice and democratic processes (point 8)
The extension applies specifically to new systems placed on the market or put into service after the Omnibus takes effect. Existing systems already in the market as of the Omnibus effective date may benefit from transition provisions — the exact cut-off depends on the final legislative text.
What this means for developers: If your product falls into one of these Annex III categories (particularly employment AI such as CV screening and performance scoring, credit assessment, or educational assessment), you have until December 2027 to implement the full high-risk compliance framework. This gives you roughly 16 additional months compared to the original August 2026 deadline.
However, be careful about what the extension does not cover. The Annex III extension does not affect:
- Article 5 prohibited practices (already in force since February 2025)
- Article 50 transparency and AI content labelling (still August 2026)
- Article 4 AI literacy obligations (still August 2026)
- GPAI obligations (still August 2026)
2. Annex I Safety Component AI — Extended to 2 August 2028
The Omnibus also extends the compliance deadline for AI systems that are safety components of products regulated under Annex I sectoral legislation — covering machinery, medical devices, civil aviation, automotive, and similar product safety frameworks.
These systems now have until 2 August 2028 to comply with the AI Act's high-risk requirements (previously August 2026).
Why this category is separate: Annex I AI systems are doubly regulated — they must comply with both the product safety regulation (e.g., the Machinery Regulation) and the AI Act. The extension acknowledges that conformity assessment under both frameworks simultaneously is operationally complex, particularly for hardware-embedded AI systems with long certification timelines.
If you are building AI functionality for medical devices, industrial machinery, or automotive systems, the 2028 timeline is relevant. Note that your primary product safety obligations under the sectoral regulation remain unchanged.
3. New Prohibition: AI-Generated Non-Consensual Intimate Images (Nudifier Ban)
The Omnibus adds a specific prohibition on AI systems that generate non-consensual intimate images of identifiable individuals — commonly called "nudifier" or "deepfake porn" applications.
This prohibition is inserted as a new category within Article 5 (prohibited AI practices), which has been in force since February 2025. The new prohibition applies immediately upon the Omnibus entering into force.
Scope: The prohibition covers AI systems specifically designed or marketed to generate realistic synthetic intimate images of real, identifiable individuals without their consent. It does not apply to:
- Legitimate artistic or medical applications
- General-purpose image generation models that could theoretically be misused (the prohibition targets systems specifically designed for this purpose)
What this means for developers: If you operate or have operated any AI image generation service, review whether your system has specific features, configurations, or marketed use cases that fall within this definition. General-purpose text-to-image models with content filters are not targeted. Services marketed specifically for generating intimate images of specific real individuals are prohibited.
4. New Prohibition: AI-Generated CSAM
Similarly, the Omnibus adds an explicit prohibition on AI systems used to generate child sexual abuse material (CSAM). This was already prohibited under existing criminal law across EU member states, but the Omnibus inserts an explicit AI Act prohibition.
This applies immediately upon Omnibus entry into force.
5. Watermarking Grace Period — Extended to 2 December 2026
Article 50(2) requires AI systems generating synthetic audio, images, video, or text to label that content as AI-generated. The technical implementation of this requirement — particularly the machine-readable watermarking or metadata standards — was always going to require industry alignment.
The Omnibus introduces a grace period for watermarking technical standards until 2 December 2026. The underlying human-readable disclosure obligation (content must be labelled as AI-generated in a way that is clear to the viewer or listener) remains applicable from 2 August 2026.
Practical implication: From 2 August 2026, your AI-generated content must carry a human-readable disclosure. The machine-readable technical watermarking standard — which will be developed by CEN/CENELEC and the European AI Office — has until December 2026 to be finalised and implemented.
If you are building AI video generation, AI voice synthesis, or AI image generation features, you need two things:
- A human-readable disclosure label from August 2026 (no change from original requirement)
- Technical watermarking conforming to the EU standard from December 2026 (grace period from Omnibus)
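The two-layer requirement above can be modelled directly in your content pipeline: attach the human-readable disclosure now, and keep a placeholder slot for the machine-readable layer. A minimal sketch follows; the machine-readable schema is entirely an assumption, since the CEN/CENELEC standard is not yet final, and all field names here are illustrative.

```python
# Sketch: attaching both Article 50 disclosure layers to generated content.
# The machine-readable dict is a PLACEHOLDER -- the EU technical standard
# is not final, so these field names are assumptions, not the spec.
from dataclasses import dataclass, field
import datetime

@dataclass
class LabelledContent:
    payload: bytes                # the generated image/audio/video/text
    human_readable_label: str     # required from 2 August 2026
    machine_readable: dict = field(default_factory=dict)  # standard due Dec 2026

def label_ai_content(payload: bytes, model_name: str) -> LabelledContent:
    return LabelledContent(
        payload=payload,
        human_readable_label="This content was generated by AI.",
        machine_readable={
            # Illustrative metadata until the CEN/CENELEC standard lands.
            "generator": model_name,
            "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "ai_generated": True,
        },
    )
```

Structuring it this way lets you ship the August 2026 obligation now and swap the placeholder metadata for the finalised standard before December 2026 without touching the disclosure path.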
The Machinery Regulation Carve-Out
The Omnibus also introduces a carve-out for AI systems embedded in machinery regulated under the new Machinery Regulation (EU) 2023/1230. These systems, where the AI functionality is integral to the machinery product and already subject to Machinery Regulation conformity assessment, are treated as Annex I systems and benefit from the 2028 timeline.
This is primarily relevant for industrial automation, robotics, and connected machinery sectors. Consumer-facing AI features in products not primarily classified as machinery are not affected.
Revised Compliance Timeline at a Glance
| Obligation | Original Deadline | Post-Omnibus Deadline |
|---|---|---|
| Article 5 prohibited practices | 2 Feb 2025 | Unchanged (+ new nudifier/CSAM prohibitions) |
| Article 4 AI literacy | 2 Aug 2026 | Unchanged |
| Article 50 AI content transparency (human-readable) | 2 Aug 2026 | Unchanged |
| Article 50 watermarking (machine-readable technical standard) | 2 Aug 2026 | 2 Dec 2026 (grace period) |
| GPAI model obligations (Articles 51–56) | 2 Aug 2026 | Unchanged |
| Annex III high-risk AI (new systems) | 2 Aug 2026 | 2 Dec 2027 |
| Annex I safety component AI | 2 Aug 2026 | 2 Aug 2028 |
What the Omnibus Does Not Change
Beyond the specific modifications above, it is worth being explicit about what the Omnibus agreement leaves intact:
The risk-based framework is unchanged. Prohibited practices, high-risk classification criteria, GPAI thresholds, and the supervisory authority structure all remain as enacted in the original AI Act.
Article 4 AI literacy obligations are unchanged. Every organisation deploying AI must ensure staff AI literacy by 2 August 2026. There is no grace period.
GPAI model provider obligations are unchanged. If you provide a general-purpose AI model — including open-weight models distributed within the EU — Articles 51–56 apply from August 2026.
Article 50 human-readable disclosure is unchanged. The watermarking grace period applies to the machine-readable technical standard, not the fundamental transparency obligation. AI-generated content must be labelled from August 2026.
Existing systems already placed on the market before the Omnibus may have different transitional provisions depending on the final legislative text. Monitor the Official Journal publication for exact cut-off dates.
Practical Steps for August 2026
Given the Omnibus changes, here is what your compliance roadmap should look like, starting with the next 90 days:
Immediate (before 2 August 2026):
1. AI content disclosure. Audit every feature that generates text, images, audio, or video using AI. Implement human-readable labelling ("This content was generated by AI"). Document your labelling policy.
2. AI literacy programme. Identify all staff and contractors who operate or manage AI systems. Implement a structured AI literacy training programme. Document training completion.
3. GPAI model assessment. If you provide a general-purpose AI model (not just call an external API), assess your obligations under Articles 51–56. For most SaaS companies using third-party foundation models via API, this does not apply, but verify the distinction.
By December 2026: 4. Watermarking technical standard. Monitor CEN/CENELEC and EU AI Office publications for the machine-readable watermarking specification. Implement when published.
By December 2027 (if Annex III applies to you): 5. High-risk AI compliance. If your product falls into an Annex III category (employment AI, credit assessment, educational assessment, etc.), implement the full high-risk compliance framework: conformity assessment, technical documentation, transparency obligations (Article 13), human oversight requirements (Article 14), and EU database registration (Article 49).
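The roadmap above reduces to a small deadline table you can track programmatically. The dates below mirror the post-Omnibus timeline table in this article; the obligation names and the 90-day horizon helper are illustrative, not an official taxonomy.

```python
# Sketch: a minimal deadline tracker for the post-Omnibus timeline.
# Dates mirror the timeline table above; obligation labels are our own.
from datetime import date

DEADLINES = {
    "AI content disclosure (Article 50, human-readable)": date(2026, 8, 2),
    "AI literacy programme (Article 4)": date(2026, 8, 2),
    "GPAI obligations (Articles 51-56)": date(2026, 8, 2),
    "Machine-readable watermarking standard": date(2026, 12, 2),
    "Annex III high-risk compliance (new systems)": date(2027, 12, 2),
    "Annex I safety component compliance": date(2028, 8, 2),
}

def upcoming(today: date, horizon_days: int = 90) -> list[str]:
    """Obligations whose deadline falls within the next `horizon_days`."""
    return sorted(
        name for name, due in DEADLINES.items()
        if 0 <= (due - today).days <= horizon_days
    )

# Seen from 7 May 2026, all three August 2026 obligations fall
# inside the 90-day window; the later deadlines do not.
print(upcoming(date(2026, 5, 7)))
```

Running the helper against today's date on a schedule (e.g. in CI) is a cheap way to keep the August 2026 items from slipping while the longer 2027/2028 deadlines sit further out.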
What This Means for EU-Hosted AI Products
The Omnibus does not change the fundamental data sovereignty argument for hosting AI products on EU infrastructure. If anything, the extended Annex III timeline gives product companies additional time to restructure their AI pipelines to process personal data on EU infrastructure — which is a prerequisite for compliance in high-risk categories where AI processes data about EU residents.
The CLOUD Act exposure argument remains unchanged: AI systems that process EU residents' personal data through US-hosted APIs or infrastructure face inherent data transfer risk under GDPR regardless of AI Act compliance status. The Omnibus did not touch GDPR, the CLOUD Act analysis, or the Schrems II adequacy framework.
The August 2026 GPAI transparency obligations create a specific incentive for EU-based AI companies to build on EU-hosted AI infrastructure: if you are a GPAI model provider, your technical documentation and training data disclosures must cover your infrastructure. Using EU-based infrastructure simplifies this documentation considerably compared to distributed global infrastructure subject to multiple jurisdictions' legal processes.
Monitoring What Comes Next
The provisional political agreement reached on 7 May 2026 is a political deal — the legal text will be finalised over the coming weeks and published in the EU Official Journal. Key dates to monitor:
- Official Journal publication: Expected within 4–6 weeks of the political agreement
- Annex III effective date: 2 December 2027 (confirmed in political agreement)
- Annex I effective date: 2 August 2028 (confirmed in political agreement)
- Watermarking standard: CEN/CENELEC mandate expected Q3 2026, final standard by December 2026
Subscribe to the EU AI Office newsletter (ai-office.ec.europa.eu) and follow the EDPB for updates as the final text is published.
sota.io is a European PaaS built for developers who need EU data sovereignty for AI products, GPAI model infrastructure, and compliance-sensitive workloads. Deploy your AI stack on infrastructure that stays in the EU — no CLOUD Act exposure, no transatlantic data transfers.
Questions about EU AI Act compliance for your infrastructure? Try sota.io free.
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.