2026-05-04 · 13 min read · sota.io team

On April 28, 2026, trilogue negotiators emerged from twelve hours of talks without an agreement. The EU AI Act Omnibus — which would amend High-Risk AI classification timelines, soften GPAI fine structures, and add SME flexibility provisions — failed to clear Trilogue #2. Trilogue #3 is scheduled for May 13, 2026.

The developer forums and LinkedIn posts that followed mostly asked the wrong question: "Will the Omnibus pass?"

The right question is: "What applies to my product on August 2, 2026, regardless of what happens on May 13?"

The answer is more specific than most compliance guides acknowledge. Several EU AI Act obligations are structurally immune to the Omnibus negotiations — they exist in articles that the Omnibus text does not touch. Your August 2, 2026 compliance calendar should be built around those, not around a political guess.


The Two-Layer Problem

The EU AI Act creates two distinct enforcement timelines that developers confuse constantly:

Layer 1: Articles that apply on August 2, 2026

These include the Prohibited Practices (Art.5), GPAI Model Obligations (Art.51–55), and Transparency Obligations (Art.50). These provisions entered into force on August 1, 2024 and become fully enforceable 24 months later, on August 2, 2026.

Layer 2: Articles that apply on August 2, 2027

These cover High-Risk AI systems under Annex III. This is the layer the Omnibus primarily targets — specifically, whether to delay or narrow the Annex III scope for embedded AI in regulated products.

The Omnibus fight is almost entirely a Layer 2 fight. The Layer 1 obligations — the ones that hit in August 2026 — are largely untouched.


What the Omnibus Actually Proposes (and Doesn't)

To understand what's immune, you first need to understand what's in play.

The Omnibus amendments under negotiation target:

| Omnibus Proposal | Current Status |
| --- | --- |
| Delay Annex III High-Risk classification for embedded AI (medical devices, machinery, vehicles) from Aug 2027 → Dec 2027 | In dispute — Annex I scope is the core trilogue sticking point |
| Reduce GPAI systemic-risk fine cap from 3% → 1.5% of global turnover | Likely included in any deal |
| Add SME flexibility: micro-enterprises exempt from some documentation requirements | Likely included in any deal |
| Narrow Annex III scope: remove some sub-categories from High-Risk | In dispute |
| Postpone AI-liability alignment provisions | Likely included |

What the Omnibus does not touch:

- Prohibited Practices (Art.5)
- Transparency Obligations (Art.50)
- GPAI Model Obligations under Art.51–55 (the fine cap may change; the obligations do not)

Every one of these applies on August 2, 2026, regardless of whether Trilogue #3 succeeds or fails.


What Definitely Applies on August 2, 2026

Art.50 — Transparency Obligations (Every Developer Building AI Products)

Art.50 is the provision most developers underestimate because it doesn't say "high risk" anywhere. It applies to any AI system that interacts with natural persons or generates synthetic content.

Sub-obligations that activate August 2:

Art.50(1) — AI interaction disclosure

If your product uses an AI system to interact with a human — a chatbot, a voice assistant, an AI-powered support tool — you must inform that person that they are interacting with an AI. This applies unless the context makes it obvious. The burden of proving "obvious" is on you.
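As a concrete illustration, one way to wire such a disclosure into a chat product is to show it on the first reply of each session and keep a record that it was shown. This is a minimal sketch; the names (`with_disclosure`, the `ai_disclosed` session flag, the disclosure wording) are our own illustrations, not anything prescribed by the Act.

```python
# Minimal sketch of an Art.50(1)-style disclosure wrapper.
# All names here are illustrative, not from the AI Act or any SDK.

DISCLOSURE = "You are chatting with an AI assistant."

def with_disclosure(reply: str, session: dict) -> str:
    """Prepend the AI disclosure to the first reply of a session,
    and record in the session that it was shown."""
    if not session.get("ai_disclosed"):
        session["ai_disclosed"] = True
        return f"{DISCLOSURE}\n\n{reply}"
    return reply

session = {}
first = with_disclosure("How can I help?", session)
second = with_disclosure("Here is the answer.", session)
```

Keeping the `ai_disclosed` flag (ideally with a timestamp, in a persistent store) matters because the burden of demonstrating disclosure sits with you.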

Art.50(2) — Emotion recognition and biometric categorisation

If your system infers emotion or categorises individuals by biometric characteristics (age estimation, gender inference, mood detection), you must inform users at the time of operation. Silence is non-compliance, regardless of how you describe the feature in your marketing.

Art.50(3) — Deepfake and synthetic media marking

If your system generates or manipulates image, audio, or video content resembling real persons, places, or events — including AI-generated avatars, voice clones, or synthesised video — the output must be marked as artificially generated in a machine-readable way. The Transparency Code of Practice (finalised June 2026) will specify the technical formats.
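Until the Code of Practice fixes the exact format (C2PA-style provenance manifests are one widely discussed candidate), the general shape of a machine-readable marker can be sketched as a provenance record bound to the asset by content hash. This is an illustration of the concept only, under our own assumed field names, not a compliant format.

```python
# Illustrative sketch only: the authoritative machine-readable format
# will be defined by the Transparency Code of Practice. This shows the
# kind of provenance record involved, with assumed field names.
import hashlib
import json
from datetime import datetime, timezone

def provenance_manifest(asset_bytes: bytes, generator: str) -> str:
    """Build a JSON provenance record binding an AI-generated asset
    (via its content hash) to an 'artificially generated' flag."""
    record = {
        "artificially_generated": True,
        "generator": generator,                       # hypothetical generator ID
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

manifest = provenance_manifest(b"\x89PNG fake bytes", "acme-avatar-model")
```

In practice the marker would be embedded in the asset itself (e.g. in metadata chunks) rather than shipped as a sidecar, but the binding-by-hash idea is the same.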

Art.50(4) — AI-generated text on matters of public interest

If your system generates text on electoral issues, public health, or similar public-interest topics, outputs must be labelled as AI-generated. This applies even if the text is accurate and beneficial.

Practical scope check:

Ask yourself: Does your product do any of the following?

- Interact with users through a chatbot, voice assistant, or other conversational AI?
- Infer emotion or categorise people by biometric characteristics (age, gender, mood)?
- Generate or manipulate image, audio, or video content resembling real persons, places, or events?
- Generate text on electoral, public-health, or other public-interest topics?

If yes to any of these, Art.50 applies on August 2, 2026. The Omnibus does not change this.
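The scope check above can be kept as a living self-assessment in code. This is a hypothetical helper of our own design (the feature-flag names are illustrative); it simply maps product features to the Art.50 paragraph each one triggers.

```python
# Hypothetical self-assessment helper; feature names are our own,
# chosen to mirror the four Art.50 sub-obligations described above.
ART50_TRIGGERS = {
    "chat_or_voice_interaction":       "Art.50(1)",
    "emotion_or_biometric_inference":  "Art.50(2)",
    "synthetic_image_audio_video":     "Art.50(3)",
    "public_interest_text_generation": "Art.50(4)",
}

def applicable_art50(features: set[str]) -> list[str]:
    """Return the Art.50 sub-obligations triggered by a product's features."""
    return sorted(ART50_TRIGGERS[f] for f in features if f in ART50_TRIGGERS)

hits = applicable_art50({"chat_or_voice_interaction", "synthetic_image_audio_video"})
```

Running this per product feature during the weeks 1–2 assessment gives you an auditable record of why you concluded each obligation does or doesn't apply.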


Art.51–55 — GPAI Model Obligations (If You Fine-Tune, Distill, or Serve Models)

Are you a GPAI Provider?

The definition matters enormously, and most developers get it wrong. A GPAI Provider is:

Any provider that places on the market or puts into service in the Union a general purpose AI model.

A GPAI model is one trained on a broad corpus with "significant generality" — meaning it can perform a wide range of tasks. The threshold is not perfection or ubiquity. Claude, GPT-4, Llama 3, Mistral 7B, and even smaller fine-tuned models that retain broad task coverage qualify.

You are a GPAI Provider if you:

- train a general-purpose model and make it available in the Union, commercially or open source;
- fine-tune or distill an existing general-purpose model and distribute or serve the result under your own name;
- host a general-purpose model and offer it as a service (for example, via API) to EU customers.

You are NOT a GPAI Provider if you:

- merely call a third-party model through its provider's API from inside your product (you are then a downstream provider or deployer, with different obligations);
- build a narrow, single-task system that does not retain the broad task coverage of a general-purpose model.

Obligations that apply August 2 for all GPAI Providers (Art.53):

  1. Technical documentation — you must maintain records of training data sources, model architecture, intended capabilities, and known limitations. This is analogous to a GDPR Record of Processing Activities (RoPA) — if your team doesn't own a GPAI Model Registry, August 2 is the deadline to create one.

  2. Copyright policy and training data summary — you must publish a "sufficiently detailed summary" of training data used, and make available a policy for copyright compliance (opt-out respect under Art.53(1)(c)). If you used web-scraped data, this requires documentation of your opt-out signal handling.

  3. Downstream transparency — you must provide downstream providers (developers who build on your model) with information adequate for them to comply with their own obligations. If you offer a fine-tuned model as a service, this includes model cards, known failure modes, and recommended safety configurations.
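A GPAI Model Registry entry can be as simple as a structured record per model. This is a sketch under our own assumed schema, analogous to a GDPR RoPA record; the field names are not prescribed by the Act, and the model and URLs below are hypothetical.

```python
# Sketch of one possible GPAI Model Registry entry. Field names are
# our own; the model, corpus names, and URLs are hypothetical examples.
from dataclasses import dataclass, asdict

@dataclass
class ModelRegistryEntry:
    model_name: str
    architecture: str
    training_data_sources: list[str]
    intended_capabilities: list[str]
    known_limitations: list[str]
    copyright_optout_policy_url: str   # opt-out handling, cf. Art.53(1)(c)
    downstream_docs_url: str           # model card, failure modes, safety configs

entry = ModelRegistryEntry(
    model_name="support-llm-v2",
    architecture="decoder-only transformer, 7B parameters",
    training_data_sources=["licensed corpus X", "web crawl (opt-outs respected)"],
    intended_capabilities=["customer support dialogue"],
    known_limitations=["unreliable on legal questions"],
    copyright_optout_policy_url="https://example.com/copyright-policy",
    downstream_docs_url="https://example.com/model-card",
)
record = asdict(entry)  # ready to serialise into your registry store
```

One record per model (and per significant fine-tune) covers the technical documentation, copyright policy pointer, and downstream transparency items in one place.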

Additional obligations for Advanced GPAI (Systemic Risk Models — Art.55):

These apply to models trained with more than 10²⁵ FLOPs — roughly at or above the scale of GPT-4-class models. If you operate at this scale, additional obligations include:

- model evaluations, including adversarial testing, to identify and mitigate systemic risk;
- tracking, documenting, and reporting serious incidents to the AI Office;
- an adequate level of cybersecurity protection for the model and its infrastructure.
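For a rough sense of where you stand against the 10²⁵ FLOPs threshold, the widely used back-of-envelope estimate for dense transformer training is about 6 × parameters × tokens. This heuristic is not the Act's official compute-counting methodology; treat it as a screening check only.

```python
# Back-of-envelope training-compute check against the 1e25 FLOPs
# systemic-risk threshold, using the common ~6 * params * tokens
# approximation for dense transformer training. Not the official
# counting methodology under the Act.
SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

# e.g. a 7B-parameter model trained on 2T tokens:
flops = training_flops(7e9, 2e12)        # 8.4e22 FLOPs
crosses = flops > SYSTEMIC_RISK_FLOPS    # False: orders of magnitude below
```

Most fine-tuned or mid-sized models land far below the line; the threshold is aimed at frontier-scale training runs.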
The Omnibus fine reduction (3% → 1.5%) affects what you pay if you violate these obligations. It does not affect whether the obligations exist.


What Is Actually in Flux (Omnibus-Dependent)

For completeness, here is what genuinely depends on the Omnibus outcome:

Annex III High-Risk Classification (Deployer Obligations from Aug 2027)

The big question: Is your AI system "high risk" under Annex III?

Annex III lists eight categories of use case that trigger high-risk classification — biometric identification, education, employment, essential private services, law enforcement, migration, administration of justice, and critical infrastructure. The Omnibus proposes narrowing this list and adding a "significant risk" filter.

If the Omnibus passes: The Annex III scope narrows. Some systems currently classified as high-risk may no longer be. The effective date for Annex III compliance shifts (likely to December 2027 for embedded AI in regulated sectors).

If the Omnibus fails: Annex III applies as written, with the August 2027 compliance date. This is the status quo, not a new development.

What this means in practice: If your system doesn't touch education, employment decisions, biometric identification, critical infrastructure control, or similar Annex III domains, the Omnibus outcome has minimal impact on you. If it does, you should plan for August 2027 regardless — Omnibus is at best a delay, not an exemption.

GPAI Fine Caps

The proposed reduction from 3% to 1.5% of global turnover for GPAI violations is an Omnibus provision. It doesn't change what applies August 2, 2026 — it changes what the penalty is if you fail to comply.

SME Flexibility Provisions

Micro-enterprises and SMEs (fewer than 50 employees, under €10M revenue) would benefit from simplified documentation requirements under the proposed Omnibus text. If you qualify, this reduces compliance burden but does not eliminate it.
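The quoted thresholds reduce to a simple check. This is an illustration of the criteria as stated above (fewer than 50 employees, under €10M revenue); the Omnibus text, if adopted, would define the authoritative criteria.

```python
# Illustrative check against the SME thresholds quoted above.
# The adopted Omnibus text, if any, is authoritative, not this sketch.
def qualifies_for_sme_flexibility(employees: int, annual_revenue_eur: float) -> bool:
    return employees < 50 and annual_revenue_eur < 10_000_000

small = qualifies_for_sme_flexibility(12, 1_500_000)    # True
large = qualifies_for_sme_flexibility(200, 50_000_000)  # False
```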


The Practical 60-Day Checklist for August 2, 2026

Given the above, here is what your team should be doing right now, independent of the Trilogue #3 outcome:

Weeks 1–2: Assess and classify

- Inventory every AI feature in your product and map it against the four Art.50 sub-obligations (interaction, emotion/biometric inference, synthetic media, public-interest text).
- Determine whether you are a GPAI Provider: do you train, fine-tune, or serve models, or only consume third-party APIs?
- Flag anything that could fall under an Annex III domain for the separate 2027 planning track.

Weeks 3–4: Implement Art.50 transparency

- Add AI-interaction disclosures to chatbots, voice assistants, and AI-powered support tools.
- Add at-time-of-operation notices for any emotion recognition or biometric categorisation features.
- Implement machine-readable marking for generated or manipulated image, audio, and video output, and labelling for AI-generated public-interest text.

Weeks 5–7: GPAI documentation (if applicable)

- Create or update your GPAI Model Registry: training data sources, architecture, intended capabilities, known limitations.
- Draft the training data summary and your copyright/opt-out policy.
- Prepare downstream documentation: model cards, known failure modes, recommended safety configurations.

Week 8: Legal review and record-keeping

- Have counsel review disclosures, labels, and GPAI documentation against the final Transparency Code of Practice.
- Establish versioned record-keeping so you can demonstrate what was in place on August 2, 2026.


Infrastructure as a Compliance Input

One factor that runs through Art.50, GPAI obligations, and future Annex III requirements is data jurisdiction. The AI Act is a regulation of the European Union — but its enforcement assumes that competent authorities can access the system's data, logs, and documentation.

Running your AI product on US-controlled infrastructure creates a structural tension: a US cloud provider is subject to CLOUD Act obligations that can compel data disclosure outside EU legal procedures. For AI systems that process personal data (which most do, given that user interactions are personal data under GDPR), this creates an Art.46 third-country transfer problem alongside the AI Act obligations.

EU-native infrastructure — where the cloud provider is incorporated in an EU member state, with no US parent company — eliminates this problem structurally rather than contractually. Contractual safeguards (SCCs, BCRs) are required when you use US-controlled infrastructure; they become unnecessary when your provider is jurisdictionally inside the EU.

sota.io is a managed deployment platform for EU-based teams — no US parent company, no CLOUD Act reach. If your AI product's August 2026 compliance plan includes the infrastructure layer, the simplest approach is to build on a platform where data jurisdiction is clear from the start.


Key Dates for Your Calendar

| Date | Event | What it means for you |
| --- | --- | --- |
| May 13, 2026 | Trilogue #3 | Watch for Omnibus deal/no-deal. 48h reaction window for compliance calendar updates |
| June 2026 | AI Transparency CoP final | Technical format requirements for synthetic content marking confirmed |
| June 30, 2026 | Cypriot Presidency deadline | If no Omnibus deal by this date, status quo (current AI Act text) applies for all timelines |
| August 2, 2026 | Art.50 + GPAI enforcement | Disclosure obligations, GPAI documentation requirements, AI Office can begin enforcement |
| September 11, 2026 | CRA vulnerability reporting | Separate from AI Act — 24h reporting to ENISA/CSIRT for actively exploited vulnerabilities |
| August 2, 2027 | Annex III High-Risk (under current law) | High-Risk AI deployer obligations. Omnibus could delay this for embedded AI categories |

What Happens If Trilogue #3 Succeeds

If negotiators reach agreement on May 13, expect:

- a formal adoption process through Parliament and Council before the amended text takes effect;
- the Annex III compliance date for embedded AI likely shifting toward December 2027;
- the GPAI systemic-risk fine cap likely dropping from 3% to 1.5% of global turnover, plus SME documentation relief;
- no change to the August 2, 2026 obligations under Art.5, Art.50, and Art.51–55.

What Happens If Trilogue #3 Fails

If May 13 ends without a deal:

- the current AI Act text stands: Annex III applies as written, with the August 2027 compliance date;
- the June 30, 2026 Cypriot Presidency deadline becomes the next decision point; if no deal lands by then, the status quo applies for all timelines;
- again, no change to the August 2, 2026 obligations.


The Developer's Simple Decision Rule

Stop waiting for Trilogue #3 to decide your August 2026 compliance plan.

Do this now, regardless of outcome:

- Implement Art.50 disclosures and synthetic-content marking.
- Build your GPAI Model Registry, training data summary, and copyright policy (if you are a GPAI Provider).
- Work through the 60-day checklist above.

Do this after the Omnibus outcome is clear:

- Re-run your Annex III classification against the final scope and compliance dates.
- Adjust your documentation scope if the SME flexibility provisions are adopted and you qualify.

The ratio of August-2026-immune obligations to Omnibus-dependent obligations is roughly 70/30. Don't let the 30% uncertain part paralyse planning for the 70% that is fixed law.


EU AI Act enforcement schedule current as of May 2026. Trilogue #3 outcome pending May 13, 2026 — this post will be updated within 48 hours of the outcome. Follow sota.io/blog for real-time EU AI Act developer guidance.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.