2026-05-03 · 16 min read

EU AI Act Omnibus Before Trilogue #3: What Developers Must Plan for Right Now (Both Scenarios)

Post #803 in the sota.io EU Compliance Series

On May 13, 2026 — ten days from today — EU institutions will sit down for Trilogue #3 on the Digital Omnibus package that proposes significant amendments to the EU AI Act. Trilogue #2 ended on April 28 after twelve hours of negotiations without agreement. The Cypriot Council Presidency has set June 30, 2026 as its hard deadline. If Trilogue #3 fails, or if a deal cannot be finalized and formally adopted by late June, the EU AI Act applies in its original form with its original timeline.

August 2, 2026 is 91 days away. Art.50 transparency obligations and GPAI Code of Practice enforcement activate on that date regardless of what happens at Trilogue #3. The Omnibus only affects the high-risk AI system obligations under Annex III and the embedded software provisions under Annex I — not the foundational transparency and GPAI enforcement that lands in August.

For development teams building EU-facing AI products, the Omnibus uncertainty creates a planning problem: do you implement full August 2026 compliance for high-risk AI systems now, or do you wait to see whether Trilogue #3 produces the Annex III delay that would push those obligations to December 2027? This guide answers that question by mapping both scenarios and telling you which compliance investments are certain regardless, and which depend on the Trilogue outcome.


What the Digital Omnibus Actually Proposes

The Digital Omnibus is a Commission proposal published in early 2026 that bundles amendments to multiple digital regulations into a single legislative package. The AI Act component of the Omnibus proposes three principal changes:

Annex III High-Risk AI Postponement: The Omnibus would delay the full application of Annex III (high-risk AI systems across employment, education, essential private and public services, law enforcement, migration, and administration of justice) from August 2, 2026 to December 2, 2027 — an 18-month delay. High-risk AI system providers would gain an additional compliance window to implement the full conformity assessment requirements, technical documentation, human oversight provisions, accuracy and robustness testing, and registration obligations under Art.49.

Annex I Embedded AI Scope Clarification: Annex I defines AI components embedded in regulated products — medical devices, machinery, vehicles, toys, and other CE-marked products already subject to sector-specific EU legislation. The Omnibus proposes to clarify when the AI Act's high-risk classification applies to these embedded systems versus when the underlying sector legislation provides sufficient coverage. This is the provision that broke Trilogue #2: Parliament wants broader AI Act coverage of embedded AI, Council wants sector legislation to take precedence to avoid regulatory duplication.

SME Regulatory Sandbox Expansion: The Omnibus proposes extended regulatory sandbox access for small and medium enterprises, with simplified conformity assessment options and reduced technical documentation requirements for SMEs below certain thresholds.

What the Omnibus does not propose to change: Art.50 transparency obligations (disclosure of AI-generated content, chatbot identification), GPAI model obligations and Code of Practice enforcement, Art.5 prohibited AI practices (already in effect since February 2, 2025), and the general framework obligations for all AI systems under Arts.9-15.


Why Trilogue #2 Failed on April 28

The April 28 negotiation session collapsed on the Annex I embedded AI provision. The core dispute runs as follows:

Parliament's position: AI systems embedded in regulated products — AI-assisted diagnostic tools in medical devices, autonomous navigation in vehicles, AI-based hazard detection in machinery — should be subject to the EU AI Act's high-risk AI obligations even when the host product is already governed by sector-specific legislation. Parliament argues that the AI Act established the first comprehensive AI-specific regulatory framework precisely to address gaps in sector legislation that was not designed with AI in mind.

Council's position: Requiring AI Act conformity assessments on top of existing sectoral conformity assessments — MDR for medical devices, Machinery Regulation, General Vehicle Safety Regulation — creates duplicative compliance burdens on manufacturers who already undergo rigorous third-party assessments under sector law. Council wants the AI Act to defer to sector legislation for these products, treating sector compliance as presumption of AI Act compliance.

Why this matters for developers: If Council's position prevails, AI components in CE-marked products face simpler compliance paths through sector legislation. If Parliament's position prevails, embedded AI developers face both their sector framework and the full AI Act Annex III obligations. The difference in compliance investment can reach €200,000-500,000 per product for third-party conformity assessment, technical documentation, and ongoing monitoring.

Trilogue #3 will try again on this exact provision. Both sides have reportedly moved incrementally, but remain apart on which AI components are "sufficiently covered" by sector legislation.


Scenario A: Omnibus Passes at Trilogue #3

If Trilogue #3 produces an agreement and the final text is formally adopted before late June, the developer implications are:

August 2, 2026 obligations (unchanged):

- Art.50 transparency obligations — chatbot identification and AI-generated content disclosure — apply in full.
- GPAI Code of Practice enforcement by the AI Office begins.
- Art.5 prohibited-practice rules remain in effect (active since February 2, 2025).

December 2, 2027 obligations (Annex III systems, if Omnibus passes):

- Full conformity assessment for Annex III high-risk systems.
- Technical documentation (Art.11), human oversight provisions, and accuracy and robustness testing.
- Registration in the EU database under Art.49.

For developers building Annex III systems (employment screening, credit scoring, biometric categorization, emotion recognition in workplaces, AI in essential public services, law enforcement AI, migration AI):

- The 18-month postponement is a preparation window, not a reprieve: conformity assessment preparation typically takes 6-18 months, so begin technical documentation now and use the extra time to complete it properly.

For SMEs:

- Extended regulatory sandbox access, simplified conformity assessment options, and reduced technical documentation requirements below the Omnibus thresholds.


Scenario B: Omnibus Fails or Deadline Missed

If Trilogue #3 fails to reach agreement, or if a political agreement is reached but cannot be formally adopted and published before the Cypriot Presidency deadline of June 30, the original EU AI Act timeline applies:

August 2, 2026 (91 days away):

- Full application of Annex III high-risk obligations: conformity assessment, technical documentation, registration.
- Art.50 transparency obligations and GPAI Code of Practice enforcement, exactly as in Scenario A.

What "full application of Annex III" means for developers in August 2026 (Scenario B):

If you are providing an AI system that qualifies as high-risk under Annex III, you must by August 2, 2026 have completed:

- A conformity assessment, engaging a notified body where required.
- Technical documentation under Art.11.
- A systematic risk management process, with accuracy and robustness testing evidence.
- Human oversight provisions.
- Registration in the EU database under Art.49.

The critical timing implication: If you are building an Annex III system and Omnibus Scenario B materializes, the work to complete full conformity assessment cannot begin in August. The typical timeline for conformity assessment preparation for a complex Annex III system — assembling technical documentation, engaging a notified body if required, completing systematic risk management — is 6-18 months. Organizations that have not started this work and face Scenario B in August will be non-compliant at activation.


What Is Certain Regardless: August 2, 2026 Art.50 and GPAI

Before examining the planning matrix, it is essential to establish what August 2, 2026 activates regardless of the Omnibus outcome. These obligations are not modified by the Omnibus:

Art.50 Transparency Obligations — Every AI System Builder

Art.50 establishes disclosure requirements for AI systems that interact with natural persons and for systems generating synthetic content:

Art.50(1) — Chatbot identification: Providers of AI systems that interact directly with natural persons must ensure those systems inform users they are interacting with an AI, unless the context makes this obvious. This applies to every chatbot, virtual assistant, and conversational AI interface deployed to EU users — including embedded support bots, sales assistants, onboarding flows, and customer service automation.

Art.50(2) — Deep fake disclosure: Natural persons who use AI to produce synthetic audio, image, video, or text content representing real persons must disclose that the content is AI-generated. For SaaS products that include AI-generated content features — personalized marketing copy, AI-generated reports, synthetic media tools — this requires disclosure mechanisms built into the product.

Art.50(3) — AI-generated content labeling: Providers of AI systems that generate text, image, audio, or video content must use machine-readable formats to mark that content as AI-generated where technically feasible. The AI Office is developing technical standards for this marking.
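
Because the AI Office's technical standard for machine-readable marking is still in development, any implementation today is provisional. A minimal sketch, assuming a simple HTML data-attribute convention of our own invention (not a published format):

```python
# Sketch: wrap AI-generated HTML fragments with a machine-readable marker.
# The attribute names ("data-ai-generated", "data-generator") are
# illustrative assumptions, not the AI Office's final marking standard.

def mark_ai_generated(html_fragment: str, model_id: str) -> str:
    """Wrap a generated fragment so downstream tooling can detect it."""
    return (
        f'<div data-ai-generated="true" data-generator="{model_id}">'
        f"{html_fragment}"
        "</div>"
    )

marked = mark_ai_generated("<p>Quarterly summary</p>", "example-model-v1")
```

Whatever convention you adopt now, isolate it behind a single function like this so you can swap in the official format once the AI Office publishes it.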

Art.50(4) — Deployers: Deployers (those using AI systems developed by others) who deploy chatbots must ensure the Art.50(1) disclosure is made to users. Even if you are using a third-party AI SDK rather than building your own model, your chatbot must disclose its AI nature.

For SaaS developers: Art.50 compliance is concrete, binary, and immediate. If your product includes a chatbot that talks to EU users without identifying itself as AI, you are non-compliant from August 2, 2026. This requires no Omnibus guidance, no sector-specific interpretation — it is unambiguous and applies universally.
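
In code, the disclosure can be enforced at the session layer so that no conversational surface ships without it. A minimal sketch — the function name and disclosure wording below are ours, not mandated text:

```python
# Sketch: guarantee the Art.50(1) disclosure on every chat session.
# The exact wording is illustrative; the Act requires that users are
# informed they are interacting with an AI, not this specific sentence.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def ensure_disclosure(messages: list) -> list:
    """Prepend the disclosure unless the transcript already contains it."""
    if any(AI_DISCLOSURE in m for m in messages):
        return messages
    return [AI_DISCLOSURE] + messages

# Every new session passes through the guard; applying it twice is a no-op:
session = ensure_disclosure(["Hi! How can I help you today?"])
```

Putting the guard in one choke point (rather than in each UI) makes the compliance property testable: assert the first message of every transcript contains the disclosure.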

GPAI Code of Practice — Teams Using Foundation Models

The GPAI Code of Practice — developed by the AI Office with model providers through 2025 — establishes compliance standards for GPAI model providers. From August 2, 2026, the AI Office can begin enforcement actions against GPAI providers (companies providing general-purpose AI models, including API-accessible foundation models).

For developers who are GPAI providers (providing a model accessible via API to other developers):

- You are directly subject to the Code of Practice: technical documentation of the model, a copyright compliance policy, and a public summary of training content, with additional evaluation and incident-reporting duties for models above the systemic-risk threshold.

For developers who are deployers (using foundation models from Claude, GPT, Gemini, etc. to build your product):

- You carry no GPAI Code of Practice obligations directly, but you remain responsible for your own Art.50 disclosures and for due diligence on your provider's compliance posture.

The provider/deployer distinction is the most common point of confusion in EU AI Act GPAI compliance. If you use the Claude API to build a customer-facing product, Anthropic is the GPAI provider and you are the deployer. You do not bear GPAI Code of Practice obligations directly — but you inherit obligations through your deployment contract with Anthropic and through your own Art.50 disclosure duties.


The Developer Compliance Planning Matrix

| Obligation | Omnibus A (Passes) | Omnibus B (Fails) | Certainty |
|---|---|---|---|
| Art.50 chatbot identification | Aug 2, 2026 | Aug 2, 2026 | CERTAIN |
| Art.50 AI-generated content disclosure | Aug 2, 2026 | Aug 2, 2026 | CERTAIN |
| GPAI Code of Practice enforcement | Aug 2, 2026 | Aug 2, 2026 | CERTAIN |
| Art.5 prohibited AI practices | Feb 2, 2025 (already active) | Feb 2, 2025 (already active) | ALREADY IN FORCE |
| Annex III high-risk conformity assessment | Dec 2, 2027 | Aug 2, 2026 | OMNIBUS-DEPENDENT |
| Annex III EU database registration | Dec 2, 2027 | Aug 2, 2026 | OMNIBUS-DEPENDENT |
| Annex III technical documentation (Art.11) | Dec 2, 2027 | Aug 2, 2026 | OMNIBUS-DEPENDENT |
| Annex I embedded AI (medical/machinery) | Sector law primary (Council position) or Aug 2, 2026 (Parliament position) | Original Annex I timeline | TRILOGUE-DISPUTED |
| GPAI systemic risk evaluation (>10^25 FLOP) | Aug 2, 2026 | Aug 2, 2026 | CERTAIN |

Reading the matrix: The top four rows require action now regardless of Trilogue outcome. The middle three rows are Omnibus-dependent — start preparation now (documentation can begin before deadline), but the activation date depends on whether Omnibus passes. The Annex I embedded AI row is still disputed within the Omnibus itself.


Your 7-Step Pre-Trilogue Action Checklist

The following actions are worthwhile regardless of which scenario materializes. Execute these before May 13.

Step 1 — Identify every AI touchpoint in your product that interacts with EU users. Map every chatbot, AI assistant, AI-generated content feature, and AI-powered decision in your product stack. For each touchpoint: Does this involve direct interaction with a natural person? Does it generate text, image, audio, or video? Who controls it — you or a third-party provider? This mapping is required for both Art.50 compliance and for Annex III risk classification.
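
One way to make Step 1 actionable is to keep the inventory as structured data, so the same mapping feeds both the Art.50 review and the Annex III screening in Step 2. A sketch with an illustrative schema of our own (field names are not a regulatory format):

```python
# Sketch: the Step 1 touchpoint inventory as structured data.
# Each record answers the three questions from the checklist:
# natural-person interaction, content generation, and control.

touchpoints = [
    {
        "name": "support_chatbot",
        "interacts_with_natural_person": True,
        "generates_content": "text",
        "controlled_by": "third_party_api",
    },
    {
        "name": "report_generator",
        "interacts_with_natural_person": False,
        "generates_content": "text",
        "controlled_by": "in_house",
    },
]

# Derive the Art.50(1) work list directly from the inventory:
needs_chat_disclosure = [
    t["name"] for t in touchpoints if t["interacts_with_natural_person"]
]
```

Keeping the inventory in version control alongside the product gives you a dated artifact to show a supervisory authority that the mapping exercise happened.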

Step 2 — Classify your AI systems against Annex III. For each AI system in your product, determine whether it falls within any Annex III category: biometric categorization, critical infrastructure management, educational or vocational training access decisions, employment decisions, access to essential services (credit, insurance, social benefits), law enforcement, migration and asylum, administration of justice. If even one of your systems may qualify, begin technical documentation preparation now. An 18-month window from August 2026 is not unlimited — conformity assessment preparation takes time.
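
A first pass over Step 2 can be automated against the category list above. This is a triage helper only — a match flags a system for legal review; it is not a legal determination:

```python
# Sketch: first-pass Annex III screening. The area list mirrors the
# categories named in this guide; real classification needs legal advice.

ANNEX_III_AREAS = {
    "biometric_categorisation",
    "critical_infrastructure",
    "education_access",
    "employment_decisions",
    "essential_services",        # credit, insurance, social benefits
    "law_enforcement",
    "migration_asylum",
    "administration_of_justice",
}

def screen_system(declared_areas: set) -> dict:
    """Flag a system for legal follow-up if it touches any Annex III area."""
    hits = declared_areas & ANNEX_III_AREAS
    return {"potentially_high_risk": bool(hits), "matched_areas": sorted(hits)}

result = screen_system({"employment_decisions", "marketing_copy"})
```

A system that screens positive here belongs in the Omnibus-dependent rows of the planning matrix; one that screens negative still carries Art.50 duties if it interacts with users or generates content.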

Step 3 — Implement Art.50 disclosure mechanisms. This is the highest-certainty obligation and requires no legislative outcome. If your product includes AI interfaces that interact with EU users: add "This is an AI" identification to every conversational interface. Add disclosure to AI-generated content where it could be mistaken for human-created content. Review your terms of service and UI copy to ensure disclosure is clear and not buried.

Step 4 — Document your GPAI usage. For every foundation model API you call — Claude, GPT-4, Gemini, Mistral, Llama — document: which model and version, what you use it for, what your deployment contract says about AI Act compliance, and whether the provider has published its AI Act compliance statement. You need this documentation for deployer due diligence and for any future audit.
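
Step 4 lends itself to one structured record per model dependency. The schema below is our suggestion, not a mandated format, and the example values are placeholders:

```python
# Sketch: one audit-trail record per foundation-model dependency.
# Field names are illustrative; the model identifier is a placeholder.

from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class GpaiDependency:
    provider: str                      # foundation model vendor
    model_version: str                 # exact model identifier you call
    purpose: str                       # what your product uses it for
    contract_covers_ai_act: bool       # deployment contract addresses AI Act duties
    compliance_statement_url: Optional[str]  # provider's published statement, if any

entry = GpaiDependency(
    provider="Anthropic",
    model_version="claude-example-2026",  # placeholder: pin your real version
    purpose="customer support drafting",
    contract_covers_ai_act=True,
    compliance_statement_url=None,
)
record = asdict(entry)  # serialize for your due-diligence file
```

A `None` in the last field is itself useful documentation: it records that you checked for a provider compliance statement and did not find one on that date.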

Step 5 — Establish your Art.50 monitoring baseline. Before August 2, 2026, run a compliance audit against Art.50(1)-(4) across your product. Test every chatbot from an EU IP address. Check every AI-generated content output. Document the audit with timestamps. If the audit finds gaps, the window to August 2 is your remediation budget — 91 days if you start today, and shrinking.
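
Parts of the Step 5 audit can be automated: pull the opening exchange from each conversational surface and assert the disclosure is present. A sketch — the marker strings and transcripts are illustrative, and a string check is a floor for the audit, not proof of compliance:

```python
# Sketch: automated disclosure check for the Art.50 audit baseline.
# The transcripts below stand in for whatever mechanism your product
# uses to expose a session's opening messages; the marker strings are
# our own heuristics, not official criteria.

AI_DISCLOSURE_MARKERS = ("AI assistant", "artificial intelligence", "automated")

def has_disclosure(opening_messages: list) -> bool:
    """True if any opening message plausibly discloses the bot's AI nature."""
    return any(
        marker.lower() in msg.lower()
        for msg in opening_messages
        for marker in AI_DISCLOSURE_MARKERS
    )

# Audit two hypothetical surfaces; the second fails and needs remediation:
results = {
    "support_bot": has_disclosure(["Hi! I'm an AI assistant for Acme."]),
    "sales_bot": has_disclosure(["Hello! How can I help today?"]),
}
```

Run a check like this in CI with a timestamped report artifact, and the audit trail the checklist asks for builds itself.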

Step 6 — Prepare both Omnibus scenarios in your compliance budget. For Annex III systems: budget for Scenario B (August 2026 deadline) as your base case. If Omnibus passes, you have until December 2027 — treat that as a bonus window, not the plan. Organizations that budget for December 2027 and face Scenario B will be scrambling in July 2026 with no budget and no time.

Step 7 — Set a Trilogue #3 watch for May 13. Subscribe to EU Parliament press releases, AI Office publications, and European Data Protection Board communications for Trilogue #3 outcome announcements. If Omnibus passes, your December 2027 compliance planning for Annex III can proceed at measured pace. If it fails again, activate your August 2026 Annex III compliance sprint immediately — you will have approximately 80 days from May 13 to August 2.


EU-Sovereign Infrastructure and the AI Act

One dimension of EU AI Act compliance that receives insufficient attention is infrastructure: where your AI inference, training data, and AI system logs are processed and stored.

The CLOUD Act intersection: If you are building an Annex III high-risk AI system and your infrastructure runs on AWS, Azure, or GCP in EU regions, your Art.11 technical documentation — the system's training data documentation, risk management evidence, accuracy testing records — is stored on US-controlled infrastructure subject to CLOUD Act compelled disclosure. A US law enforcement request could compel disclosure of your conformity assessment documentation without your knowledge or ability to challenge it in EU courts.

Art.10 training data sovereignty: High-risk AI systems under Art.10 must maintain documentation of training data characteristics, collection methods, and data governance practices. Storing this documentation under US jurisdiction creates the same documentation-sovereignty problem that arises for GDPR Art.30 records of processing (RoPA).

The practical implication for Annex III compliance: If you are building high-risk AI systems for EU deployment, the infrastructure on which you store your Art.11 technical documentation and Art.12 event logs should be EU-sovereign — not merely EU-region on a US cloud provider. This is not an explicit requirement in the current AI Act text, but it is the defensible position when a supervisory authority requests access to your conformity documentation.

EU-native cloud providers — Hetzner, Scaleway, OVH Cloud — provide the infrastructure baseline for AI Act compliance documentation that is structurally isolated from CLOUD Act reach. For sota.io, the combination of EU-sovereign hosting infrastructure and the AI Act compliance documentation it hosts represents exactly this use case: your deployment platform and your compliance documentation remain within EU jurisdiction.


The June 30 Endpoint

Even in the most optimistic scenario — Trilogue #3 reaches agreement on May 13 — the formal legislative process from political agreement to published final text takes weeks. The Cypriot Presidency deadline of June 30 is a political hard stop: if a deal cannot be reached and processed before the Irish Presidency takes over on July 1, the legislative process restarts under new presidency priorities.

June 30 means: if you do not have a Trilogue agreement that is formally signed and on track for OJ publication by late June, plan for the original August 2, 2026 timeline in full.

This creates a practical planning window: the period between May 13 (Trilogue #3) and June 1 (approximately the last point at which you could realistically course-correct compliance plans) is the decision window. Organizations with active Annex III compliance programs should have contingency budgets and acceleration plans ready to activate within two weeks of a failed Trilogue #3.


What Developers Should Do This Week

Regardless of whether you are a GPAI deployer, Annex III provider, or building AI systems outside these categories, three actions are immediately actionable:

Implement Art.50 disclosure now. There is no legislative uncertainty about this obligation. Review your product's AI interfaces, add chatbot identification where missing, add AI-generated content disclosure where applicable. If you complete this before August 2, you are compliant on the highest-certainty obligation regardless of Omnibus outcome.

Get your GPAI contracts in order. Review your contracts with foundation model providers. Confirm that your contract terms reflect AI Act compliance commitments from the provider side. Request provider AI Act compliance statements if not already available. Document this review.

Start your Annex III classification analysis. Determine whether your systems qualify as high-risk under Annex III. If there is any uncertainty, seek legal advice before Trilogue #3. Knowing whether you are in the Omnibus-dependent risk zone before May 13 lets you make a faster, better-informed decision once the Trilogue outcome is known.


Conclusion

Trilogue #3 on May 13, 2026 is not a clean binary: pass means compliance pressure eases, fail means everything stays. It is a partial modifier — the Omnibus affects Annex III timelines and embedded AI scope, but the foundational August 2026 obligations are not contingent on it. Art.50 lands in 91 days. GPAI enforcement lands in 91 days. Build your compliance program for those certainties now, and keep your Annex III plan ready to accelerate if May 13 fails.

The organizations that will be compliant on August 2, 2026 are not the ones who waited for Trilogue clarity. They are the ones who identified their Art.50 obligations, mapped their Annex III exposure, and started technical documentation in March or April — before the Omnibus outcome was known.


sota.io provides EU-sovereign hosting infrastructure for teams that need to keep their AI systems, compliance documentation, and user data within EU jurisdiction. EU AI Act Art.11 technical documentation and Art.12 event logs stored on sota.io infrastructure are not subject to CLOUD Act compelled disclosure. Start free.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.