On April 28, 2026, trilogue negotiators emerged from twelve hours of talks without an agreement. The EU AI Act Omnibus — which would amend High-Risk AI classification timelines, soften GPAI fine structures, and add SME flexibility provisions — failed to clear Trilogue #2. Trilogue #3 is scheduled for May 13, 2026.
The developer forums and LinkedIn posts that followed mostly asked the wrong question: "Will the Omnibus pass?"
The right question is: "What applies to my product on August 2, 2026, regardless of what happens on May 13?"
The answer is more specific than most compliance guides acknowledge. Several EU AI Act obligations are structurally immune to the Omnibus negotiations — they exist in articles that the Omnibus text does not touch. Your August 2, 2026 compliance calendar should be built around those, not around a political guess.
The Two-Layer Problem
The EU AI Act creates two distinct enforcement timelines that developers confuse constantly:
Layer 1: Articles that apply on August 2, 2026
These include the Prohibited Practices (Art.5), GPAI Model Obligations (Art.51–55), and Transparency Obligations (Art.50). These provisions entered into force on August 1, 2024 and become fully enforceable 24 months later, on August 2, 2026.
Layer 2: Articles that apply on August 2, 2027
These cover High-Risk AI systems (the Annex III use cases and embedded AI in regulated products). This is the layer the Omnibus primarily targets — specifically, whether to delay or narrow the high-risk scope for embedded AI in regulated products.
The Omnibus fight is almost entirely a Layer 2 fight. The Layer 1 obligations — the ones that hit in August 2026 — are largely untouched.
What the Omnibus Actually Proposes (and Doesn't)
To understand what's immune, you first need to understand what's in play.
The Omnibus amendments under negotiation target:
| Omnibus Proposal | Current Status |
|---|---|
| Delay High-Risk classification for embedded AI in regulated products (medical devices, machinery, vehicles) from Aug 2027 → Dec 2027 | In dispute — Annex I scope is the core trilogue sticking point |
| Reduce GPAI systemic-risk fine cap from 3% → 1.5% global turnover | Likely included in any deal |
| Add SME flexibility: micro-enterprises exempt from some documentation requirements | Likely included in any deal |
| Narrow Annex III scope: remove some sub-categories from High-Risk | In dispute |
| Postpone AI-liability alignment provisions | Likely included |
What the Omnibus does not touch:
- Art.5 Prohibited Practices (subliminal manipulation, social scoring, certain biometric categorisation)
- Art.50 Transparency Obligations (synthetic content marking, AI disclosure)
- Art.51 GPAI Model transparency duties
- Art.52 General Purpose AI training data summaries and copyright policy
- Art.53 Advanced GPAI (systemic risk) model obligations
- Art.55 GPAI incident monitoring and reporting
Every one of these is August 2, 2026 regardless of whether Trilogue #3 succeeds or fails.
What Definitely Applies on August 2, 2026
Art.50 — Transparency Obligations (Every Developer Building AI Products)
Art.50 is the provision most developers underestimate because it doesn't say "high risk" anywhere. It applies to any AI system that interacts with natural persons or generates synthetic content.
Sub-obligations that activate August 2:
Art.50(1) — AI interaction disclosure
If your product uses an AI system to interact with a human — a chatbot, a voice assistant, an AI-powered support tool — you must inform that person they are interacting with an AI. This applies unless the context makes it obvious. The burden of proving "obvious" is on you.
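A minimal sketch, in Python, of how an Art.50(1) disclosure could be attached at the API layer. The field names and disclosure wording are illustrative assumptions; the Act requires that the user be informed, not any particular payload format.

```python
# Illustrative sketch only: attach an explicit AI disclosure to each
# chatbot response. Field names and wording are assumptions, not a
# prescribed format.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def build_chat_response(model_output: str, first_turn: bool) -> dict:
    """Wrap model output with a disclosure for user-facing display."""
    response = {
        "role": "assistant",
        "content": model_output,
        "ai_generated": True,  # machine-readable flag for the UI layer
    }
    if first_turn:
        # Conservative reading: show the notice up front and keep a
        # persistent "AI" label in the UI thereafter.
        response["disclosure"] = AI_DISCLOSURE
    return response

print(build_chat_response("Here is your invoice summary.", first_turn=True))
```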
Art.50(2) — Emotion recognition and biometric categorisation
If your system infers emotion or categorises individuals by biometric characteristics (age estimation, gender inference, mood detection), you must inform users at the time of operation. Silence is non-compliance, regardless of how you describe the feature in your marketing.
Art.50(3) — Deepfake and synthetic media marking
If your system generates or manipulates image, audio, or video content resembling real persons, places, or events — including AI-generated avatars, voice clones, or synthesised video — the output must carry a machine-readable marking identifying it as artificially generated. The Transparency Code of Practice (due to be finalised June 2026) will specify the technical formats.
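Until those formats land, one interim approach is embedding a marker in the file's own metadata. The sketch below uses Pillow's PNG text chunks; the key names are illustrative assumptions, and a production pipeline would more likely attach a signed C2PA manifest instead.

```python
# Illustrative interim marking: write an "AI-generated" flag into PNG
# metadata with Pillow (pip install Pillow). Key names are assumptions;
# the Transparency Code of Practice may mandate a different format.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, model_id: str) -> None:
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # machine-readable marker
    meta.add_text("generator", model_id)    # provenance hint
    img.save(dst_path, pnginfo=meta)

# Usage: mark_as_ai_generated("avatar.png", "avatar_marked.png", "acme-image-v2")
```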
Art.50(4) — AI-generated text on matters of public interest
If your system generates text on electoral issues, public health, or similar public-interest topics, outputs must be labelled as AI-generated. This applies even if the text is accurate and beneficial.
Practical scope check:
Ask yourself: Does your product do any of the following?
- Present users with AI-generated responses without clearly identifying the AI source?
- Generate images, audio, or video with AI?
- Use AI to infer user mood, age, gender, or emotional state?
- Produce AI-written content on public-interest topics at scale?
If yes to any of these, Art.50 applies on August 2, 2026. The Omnibus does not change this.
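That checklist can double as a planning artifact. Below is a sketch that maps capability flags to the Art.50 sub-obligations above; the flag names are assumptions, not an official taxonomy.

```python
# Illustrative capability-to-obligation map for Art.50 scoping.

ART50_TRIGGERS = {
    "user_facing_ai_responses":     "Art.50(1): disclose the AI interaction",
    "infers_emotion_or_biometrics": "Art.50(2): inform users at time of operation",
    "generates_synthetic_media":    "Art.50(3): machine-readable marking",
    "public_interest_text":         "Art.50(4): label text as AI-generated",
}

def art50_obligations(capabilities: set[str]) -> list[str]:
    """Return the Art.50 duties triggered by a product's capabilities."""
    return [duty for cap, duty in ART50_TRIGGERS.items() if cap in capabilities]

# Example: a support chatbot that also generates marketing images
print(art50_obligations({"user_facing_ai_responses", "generates_synthetic_media"}))
```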
Art.51–55 — GPAI Model Obligations (If You Fine-Tune, Distill, or Serve Models)
Are you a GPAI Provider?
The definition matters enormously, and most developers get it wrong. A GPAI Provider is:
Any provider that places on the market or puts into service in the Union a general purpose AI model.
A GPAI model is one trained on a broad corpus with "significant generality" — meaning it can perform a wide range of tasks. The threshold is not perfection or ubiquity. Claude, GPT-4, Llama 3, Mistral 7B, and even smaller fine-tuned models that retain broad task coverage qualify.
You are a GPAI Provider if you:
- Fine-tune an open-weight model (even lightly) and make it available to others via API or deployment
- Distill a model from another model and offer it as a service
- Build on a base model and offer your resulting model as a component to third-party developers
You are NOT a GPAI Provider if you:
- Use Claude or GPT-4 via API and build your product on top — you are a downstream provider or deployer
- Use a GPAI model for purely internal tooling not offered to others
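The same test as a rough triage function. The booleans compress a legal analysis, so treat a True as a prompt for counsel, not a determination.

```python
# Illustrative first-pass triage of the GPAI Provider test above.

def is_gpai_provider(
    fine_tunes_and_distributes: bool,  # fine-tuned model offered via API or weights
    distills_and_serves: bool,         # distilled model offered as a service
    offers_model_as_component: bool,   # derivative model supplied to third parties
) -> bool:
    return any([fine_tunes_and_distributes,
                distills_and_serves,
                offers_model_as_component])

# A team that lightly fine-tunes an open-weight model and sells API access:
print(is_gpai_provider(True, False, False))  # True -> Art.51-52 duties on Aug 2, 2026
```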
Obligations that apply August 2 for all GPAI Providers (Art.51–52):
- Technical documentation — you must maintain records of training data sources, model architecture, intended capabilities, and known limitations. This is analogous to a GDPR Record of Processing Activities (RoPA) — if your team doesn't own a GPAI Model Registry, August 2 is the deadline to create one (a minimal registry sketch follows this list).
- Copyright policy and training data summary — you must publish a "sufficiently detailed summary" of training data used, and make available a policy for copyright compliance (opt-out respect under Art.53(1)(c)). If you used web-scraped data, this requires documentation of your opt-out signal handling.
- Downstream transparency — you must provide downstream providers (developers who build on your model) with information adequate for them to comply with their own obligations. If you offer a fine-tuned model as a service, this includes model cards, known failure modes, and recommended safety configurations.
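What could a registry entry look like? The schema below is an illustrative assumption; no official template exists yet, so adapt the fields to your documentation needs.

```python
# Illustrative GPAI Model Registry record. Fields are assumptions
# mirroring the documentation duties described above.

from dataclasses import dataclass

@dataclass
class ModelRegistryEntry:
    model_id: str
    base_model: str                  # the open-weight model you built on
    training_data_sources: list[str]
    intended_capabilities: list[str]
    known_limitations: list[str]
    copyright_policy_url: str        # published opt-out handling policy
    downstream_docs_url: str         # model card and safety configuration notes
    last_reviewed: str               # ISO date of the last documentation review

entry = ModelRegistryEntry(
    model_id="acme-support-7b-v3",
    base_model="mistral-7b",
    training_data_sources=["licensed support tickets", "public docs (opt-out filtered)"],
    intended_capabilities=["customer support question answering"],
    known_limitations=["hallucinates on billing edge cases"],
    copyright_policy_url="https://example.com/copyright-policy",
    downstream_docs_url="https://example.com/model-card",
    last_reviewed="2026-06-01",
)
```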
Additional obligations for Advanced GPAI (Systemic Risk Models — Art.53):
These apply to models trained with more than 10²⁵ FLOPs — roughly at or above the scale of GPT-4-class models. If you operate at this scale, additional obligations include:
- Adversarial testing (red-teaming) before deployment
- Incident reporting to the AI Office
- Cybersecurity measures appropriate to the risk level
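For a rough self-check against that threshold, the common scaling-law approximation (training compute ≈ 6 × parameters × training tokens) is enough for a first screen. This heuristic is an assumption on our part; the official counting methodology may differ.

```python
# Back-of-envelope screen against the 10^25 FLOP systemic-risk threshold,
# using the common 6 * N * D approximation for dense transformer training.
# A rough screen only; the regulatory counting method may differ.

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

# Example: a 70B-parameter model trained on 2T tokens
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> systemic-risk scale: {flops >= THRESHOLD_FLOPS}")
# ~8.4e23 FLOPs, roughly an order of magnitude under the threshold
```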
The Omnibus fine reduction (3% → 1.5%) affects what you pay if you violate these obligations. It does not affect whether the obligations exist.
What Is Actually in Flux (Omnibus-Dependent)
For completeness, here is what genuinely depends on the Omnibus outcome:
Annex III High-Risk Classification (Deployer Obligations from Aug 2027)
The big question: Is your AI system "high risk" under Annex III?
Annex III lists eight categories of use cases that trigger high-risk classification: biometrics, education, employment, essential private services, law enforcement, migration, administration of justice, and critical infrastructure. The Omnibus proposes narrowing this list and adding a "significant risk" filter.
If the Omnibus passes: The Annex III scope narrows. Some systems currently classified as high-risk may no longer be. The effective date for Annex III compliance shifts (likely to December 2027 for embedded AI in regulated sectors).
If the Omnibus fails: Annex III applies as written, with the August 2027 compliance date. This is the status quo, not a new development.
What this means in practice: If your system doesn't touch education, employment decisions, biometric identification, critical infrastructure control, or similar Annex III domains, the Omnibus outcome has minimal impact on you. If it does, you should plan for August 2027 regardless — Omnibus is at best a delay, not an exemption.
GPAI Fine Caps
The proposed reduction from 3% to 1.5% of global turnover for GPAI violations is an Omnibus provision. It doesn't change what applies August 2, 2026 — it changes what the penalty is if you fail to comply.
SME Flexibility Provisions
Micro-enterprises and SMEs (fewer than 50 employees, under €10M revenue) would benefit from simplified documentation requirements under the proposed Omnibus text. If you qualify, this reduces compliance burden but does not eliminate it.
The Practical 60-Day Checklist for August 2, 2026
Given the above, here is what your team should be doing right now, independent of the Trilogue #3 outcome:
Weeks 1–2: Assess and classify
- Map every AI component in your product stack — model, vendor, capabilities, user-facing or internal
- Determine if you are a GPAI Provider (serve a model to others) or Downstream Provider/Deployer (use someone else's model)
- For each AI component, check against Art.50 scope: Does it interact with users? Generate synthetic content? Infer emotional or biometric attributes?
- Identify any Annex III use-case domains that your product touches (employment decisions, education assessment, health-related outputs)
Weeks 3–4: Implement Art.50 transparency
- Add explicit AI disclosure to every user-facing AI interaction point (chat, voice, AI-generated summaries)
- Implement machine-readable watermarking on AI-generated images, audio, or video. Refer to C2PA or the draft ETSI TS 103 702 standard for technical formats
- Add "generated by AI" labels to AI-written public-interest content
- Review chatbot and AI-assistant UX flows for implicit "human" framing that would violate Art.50(1)
Weeks 5–7: GPAI documentation (if applicable)
- Create a Model Registry: model ID, training data summary, known limitations, intended use cases
- Draft and publish your copyright compliance policy (opt-out signal handling, robots.txt respect); a minimal opt-out check sketch follows this list
- Prepare downstream transparency documentation for any API customers who build on your model
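For the opt-out handling item above, here is a minimal sketch using Python's standard-library robots.txt parser. Real pipelines also need the other opt-out signals (ai.txt, noai meta tags, TDM reservation protocols), which this does not cover, and the crawler user-agent name is a placeholder.

```python
# Illustrative opt-out check for web-scraped training data using the
# stdlib robots.txt parser. Only one of several opt-out signals.

from urllib import robotparser

def may_crawl(site: str, path: str, user_agent: str = "AcmeTrainingBot") -> bool:
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{site}/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    return rp.can_fetch(user_agent, f"{site}{path}")

# Log the decision so your training data summary can cite it later
print(may_crawl("https://example.com", "/articles/some-page"))
```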
Week 8: Legal review and record-keeping
- Legal review of Art.50 implementation — does your disclosure language meet the "clear and distinguishable" standard?
- Set up incident logging for GPAI systems (required under Art.55 for advanced GPAI providers); a minimal log sketch follows this list
- Document your Annex III analysis — even a negative determination ("our system does not qualify as high-risk under Annex III for the following reasons...") is a compliance record
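For the incident-logging item, a minimal append-only JSON Lines sketch; the format keeps records auditable and diff-friendly. Field names are illustrative assumptions.

```python
# Illustrative append-only incident log for GPAI monitoring duties.

import json
from datetime import datetime, timezone

def log_incident(path: str, model_id: str, category: str, description: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "category": category,            # e.g. "serious-malfunction", "misuse"
        "description": description,
        "reported_to_ai_office": False,  # flip once the formal report is filed
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_incident("incidents.jsonl", "acme-support-7b-v3",
             "serious-malfunction", "Unsafe output served to a production user")
```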
Infrastructure as a Compliance Input
One factor that runs through Art.50, GPAI obligations, and future Annex III requirements is data jurisdiction. The AI Act is a regulation of the European Union — but its enforcement assumes that competent authorities can access the system's data, logs, and documentation.
Running your AI product on US-controlled infrastructure creates a structural tension: a US cloud provider is subject to CLOUD Act obligations that can compel data disclosure outside EU legal procedures. For AI systems that process personal data (which most do, given that user interactions are personal data under GDPR), this creates a GDPR Art.46 third-country transfer problem alongside the AI Act obligations.
EU-native infrastructure, where the cloud provider is incorporated in an EU member state and has no US parent company, eliminates this problem structurally rather than contractually. Contractual safeguards (SCCs, BCRs) are required when you use US-controlled infrastructure; they become unnecessary when your provider is jurisdictionally inside the EU.
sota.io is a managed deployment platform for EU-based teams — no US parent company, no CLOUD Act reach. If your AI product's August 2026 compliance plan includes the infrastructure layer, the simplest approach is to build on a platform where data jurisdiction is clear from the start.
Key Dates for Your Calendar
| Date | Event | What it means for you |
|---|---|---|
| May 13, 2026 | Trilogue #3 | Watch for Omnibus deal/no-deal. 48h reaction window for compliance calendar updates |
| June 2026 | AI Transparency CoP Final | Technical format requirements for synthetic content marking confirmed |
| June 30, 2026 | Cypriot Presidency deadline | If no Omnibus deal by this date, status quo (current AI Act text) applies for all timelines |
| August 2, 2026 | Art.50 + GPAI Enforcement | Disclosure obligations, GPAI documentation requirements, AI Office can begin enforcement |
| September 11, 2026 | CRA vulnerability reporting | Separate from AI Act — 24h reporting to ENISA/CSIRT for actively exploited vulnerabilities |
| August 2, 2027 | Annex III High-Risk (under current law) | High-Risk AI deployer obligations. Omnibus could delay this for embedded AI categories |
What Happens If Trilogue #3 Succeeds
If negotiators reach agreement on May 13, expect:
- Immediate political announcement but no immediate legal effect — the EU legislative process requires several additional steps (COREPER approval, a European Parliament vote, Council adoption, publication in the Official Journal)
- Amended Annex III scope and extended timelines would take effect upon entry into force of the Omnibus Regulation — realistically Q3 or Q4 2026
- August 2, 2026 obligations are not affected even by a successful Omnibus deal — they are in the enacted AI Act, not subject to amendment by the Omnibus timeline
What Happens If Trilogue #3 Fails
If May 13 ends without a deal:
- The Cypriot Presidency's June 30 deadline becomes critical. The incoming Irish Presidency (or a later one) could attempt to resurrect the Omnibus, but this is speculative
- Current AI Act text applies in full — Annex III High-Risk scope as written, August 2027 deadline as written
- August 2, 2026 obligations are unchanged either way
The Developer's Simple Decision Rule
Stop waiting for Trilogue #3 to decide your August 2026 compliance plan.
Do this now, regardless of outcome:
- Implement Art.50 transparency disclosures
- If you serve a model to others: create your GPAI technical documentation and copyright policy
- Map your Annex III exposure so you're ready for August 2027 regardless of whether the scope narrows
Do this after the Omnibus outcome is clear:
- Adjust your Annex III compliance roadmap based on whether scope narrowed or stayed the same
- Recalculate your GPAI fine exposure if the 3% cap is confirmed reduced
The ratio of August-2026-immune obligations to Omnibus-dependent obligations is roughly 70/30. Don't let the 30% uncertain part paralyse planning for the 70% that is fixed law.
EU AI Act enforcement schedule current as of May 2026. Trilogue #3 outcome pending May 13, 2026 — this post will be updated within 48 hours of the outcome. Follow sota.io/blog for real-time EU AI Act developer guidance.
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.