2026-05-04

EU AI Act Omnibus Trilogue #3: What Developers Need to Know Before May 13

May 13, 2026 — 9 days from now — is the scheduled date for the third Trilogue session of the EU AI Act Omnibus revision. This session could resolve the most developer-relevant open disputes in the EU AI Act, accelerate implementation timelines, or — if it fails — push final text into late 2026.

If you are building AI-powered applications, deploying general-purpose AI models, or operating SaaS platforms that integrate AI components, the outcome of Trilogue #3 directly determines your compliance obligations, deadlines, and costs.

This guide covers what is still being negotiated, why each disputed point matters for developers, and what to prepare regardless of the Trilogue outcome.


What Is the AI Act Omnibus?

The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. The phased application schedule began with prohibited practices on February 2, 2025, with GPAI (General-Purpose AI) obligations following on August 2, 2025, and most remaining obligations applying from August 2, 2026.

The AI Act Omnibus is a legislative revision package initiated by the European Commission in early 2026 to address implementation problems identified since the Act entered into force.

The Commission published an Omnibus proposal in March 2026. Trilogue negotiations began in April 2026 with sessions on April 3 (Trilogue #1) and April 24 (Trilogue #2). Trilogue #3 on May 13 is intended to close remaining open points.


What Is Still Being Negotiated in Trilogue #3?

Based on Trilogue #2 outcomes and Parliament/Council position papers, four provisions remain contested as of early May 2026:

1. GPAI Threshold Revision (Article 51)

Current text: Models trained with more than 10^25 FLOPs are classified as GPAI models with systemic risk, triggering the most stringent requirements (adversarial testing, incident reporting, model evaluations).

Parliament position: Lower the threshold to 10^24 FLOPs to capture current frontier models (including GPT-4 class and equivalent open-weight models), arguing that the existing 10^25 threshold excludes most of the models that actually pose systemic risk.

Council position: Keep the 10^25 threshold but add a qualitative "significant systemic risk" test that the AI Office can apply to models below the compute threshold.

Developer impact:

If Parliament's 10^24 threshold prevails, a substantially larger set of models, including many current open-weight releases, would fall under the systemic-risk regime (adversarial testing, incident reporting, model evaluations). Under the Council's approach the 10^25 bright line stays, but the AI Office could designate below-threshold models case by case, which trades predictability for regulatory discretion.
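To see where a given model lands in this dispute, a rough back-of-the-envelope estimate is the common 6 × parameters × training tokens approximation for training compute. This is a heuristic, not the AI Act's own counting methodology, and the model sizes below are hypothetical:

```python
# Rough training-compute estimate using the common 6*N*D approximation
# (6 FLOPs per parameter per training token). Heuristic only, not the
# AI Act's official methodology for counting training compute.

GPAI_THRESHOLD_CURRENT = 1e25     # Article 51 threshold as it stands
GPAI_THRESHOLD_PARLIAMENT = 1e24  # Parliament's proposed lower threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def threshold_status(params: float, tokens: float) -> str:
    """Classify a model against both thresholds under negotiation."""
    flops = estimated_training_flops(params, tokens)
    if flops > GPAI_THRESHOLD_CURRENT:
        return "systemic risk under current 10^25 threshold"
    if flops > GPAI_THRESHOLD_PARLIAMENT:
        return "captured only if Parliament's 10^24 threshold prevails"
    return "below both proposed thresholds"

# Example: a hypothetical 70B-parameter model trained on 15T tokens
# lands at ~6.3e24 FLOPs, squarely inside the disputed band.
print(threshold_status(70e9, 15e12))
```

A model in the 10^24 to 10^25 band is exactly the population whose obligations Trilogue #3 will decide.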

2. SME Exemption Scope (Article 2(9))

Current text: Micro and small enterprises (fewer than 50 employees, turnover ≤ €10M) are exempt from certain third-party conformity assessment requirements but not from classification or high-risk obligations themselves.

Parliament proposal: Extend the exemption to include medium enterprises (fewer than 250 employees, turnover ≤ €50M) for a transitional 24-month period.

Council position: Reject medium-enterprise extension. Instead, provide compliance support through the AI regulatory sandboxes (Article 57) and extend sandbox access deadlines.

Developer impact:

If you are a medium enterprise (50-249 employees), Parliament's proposal would defer certain third-party conformity assessment requirements for 24 months. Under the Council's position you face the full obligations on schedule, with regulatory sandbox access as the main relief. Micro and small enterprises keep their existing exemption either way.

3. Prohibited Practices Clarification (Article 5)

Key disputed sub-provisions:

Article 5(1)(a) — Subliminal manipulation: The Parliament wants to narrow the prohibition to techniques that "impair rational decision-making," excluding persuasion-as-a-service features like recommendation engines and dynamic pricing unless harm is demonstrable.

Article 5(1)(d) — Social scoring by private operators: The original prohibition targeted public authority social scoring. The Council wants to explicitly include private-sector social scoring (credit bureaus, insurance risk scoring, employer monitoring) in the prohibition. Parliament resists, arguing this creates uncertainty for legitimate credit and fraud scoring systems.

Article 5(1)(f) — Emotion recognition in workplaces: Parliament wants an absolute ban on biometric emotion recognition in workplace and educational settings. Council allows exceptions for safety-critical environments (transportation, nuclear).

Developer impact:

How these sub-provisions land determines whether common product features sit near a prohibition: recommendation engines and dynamic pricing under a broad reading of Article 5(1)(a), credit and fraud scoring if private-sector social scoring is added to Article 5(1)(d), and workplace monitoring or proctoring tools that infer emotional state under Article 5(1)(f).

4. Notified Body Accreditation Timeline (Articles 43–44)

Issue: The EU has only a limited number of Notified Bodies accredited to conduct AI Act conformity assessments. As of May 2026, fewer than 20 Notified Bodies across the EU hold that accreditation.

Parliament: Extend the grace period for third-party conformity assessments by 12 months (to August 2027) given Notified Body scarcity.

Council: Reject blanket extension; instead, allow companies to begin assessments under a "prospective compliance" mechanism with existing Notified Bodies.

Developer impact:

If Parliament's 12-month extension passes, third-party conformity assessments can slip to August 2027. If the Council's "prospective compliance" mechanism prevails instead, the August 2026 date effectively stands, and given the shortage of accredited bodies, assessment slots should be booked as early as possible.


Timeline Implications: What Trilogue #3 Can and Cannot Change

| If Trilogue #3 Reaches Agreement | Timeline |
|---|---|
| Final Omnibus text agreed May 13 | European Parliament vote: June–July 2026; entry into force: August 2026 |
| Amended provisions take effect simultaneously with the current August 2, 2026 deadline | High-risk AI deployment deadline unchanged (August 2026) |
| Notified Body extension granted | Grace period to August 2027 for conformity assessment only |

| If Trilogue #3 Fails (No Agreement) | Timeline |
|---|---|
| Further negotiations into June–July 2026 | Final text delayed to Q3/Q4 2026 |
| Current AI Act text applies unchanged | August 2, 2026 high-risk deadline stands |
| No SME extension | All size categories face full obligations |

The critical insight for developers: The August 2, 2026 high-risk deadline applies to the original AI Act text regardless of the Omnibus. The Omnibus can only relieve certain obligations — it cannot extend the underlying Act's application date. Do not wait for Trilogue outcome to start your compliance work.


What August 2, 2026 Requires Regardless of Trilogue Outcome

These obligations apply under the existing AI Act text, unaffected by Omnibus negotiations:

For AI System Providers (building and placing AI on the market)

  1. Annex III Classification — Have you systematically checked whether your AI systems fall into any of the eight high-risk categories? This includes: biometric identification, critical infrastructure management, educational/vocational training, employment and worker management, access to essential private and public services, law enforcement, migration/border control, administration of justice.

  2. Technical Documentation — Article 11 requires technical documentation before market placement. This must include training data governance, intended purpose description, accuracy/robustness/cybersecurity metrics, and post-market monitoring plan.

  3. Conformity Assessment — Article 43: Third-party assessment for most Annex III systems. Self-assessment is permitted for some categories (general-purpose AI systems not falling under Annex III high-risk categories).

  4. EU Declaration of Conformity — Article 47: Must be issued before placing on the market and updated when material changes occur.

  5. Registration in EU Database — Article 49: High-risk AI systems must be registered in the EU database for high-risk AI systems before deployment.
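The Annex III screening in step 1 can be sketched as a simple tag intersection; the category identifiers below are our own shorthand for the eight categories, not official AI Act vocabulary:

```python
# Minimal Annex III screening sketch: map each AI feature in your product
# to the eight high-risk categories. The identifiers are illustrative
# shorthand, not official AI Act terminology.

ANNEX_III_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational_training",
    "employment_worker_management",
    "essential_services_access",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def screen_system(feature_tags: set[str]) -> set[str]:
    """Return the Annex III categories a system's feature tags touch."""
    return feature_tags & ANNEX_III_CATEGORIES

# Example: an HR SaaS feature that ranks job applicants
hits = screen_system({"employment_worker_management", "chat_ui"})
if hits:
    print(f"Potentially high-risk, review Annex III: {sorted(hits)}")
```

Any non-empty result means the feature needs a proper legal classification review, not just this coarse filter.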

For AI System Deployers (using AI built by others in your SaaS)

  1. Transparency to Users — Article 50: Users interacting with an AI system, or exposed to AI-generated synthetic content, must be informed. Chatbots, AI assistants, deepfake-capable features: disclosure is mandatory.

  2. Fundamental Rights Impact Assessment — Article 27: Required for high-risk AI systems deployed by public bodies or in regulated sectors.

  3. Human Oversight — Article 26 (implementing Article 14): Deployers of high-risk AI must assign oversight to people with the necessary competence, training, and authority. If you are using a GPAI model API for a high-risk use case, you are responsible for the oversight layer.

  4. Incident Reporting — Article 73: Serious incidents (harm to health, fundamental rights, safety) must be reported to national market surveillance authorities within 15 days (immediate risk) or 3 months.
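The reporting windows above can be wired directly into incident tooling. This sketch uses the windows as this article states them (15 days where there is immediate risk, otherwise roughly three months); verify the exact Article 73 deadlines against the final text for your case:

```python
# Sketch of a serious-incident reporting deadline calculator, using the
# windows as described above (15 days for immediate risk, ~3 months
# otherwise). Verify the exact Article 73 windows for your case.

from datetime import date, timedelta

def reporting_deadline(awareness_date: date, immediate_risk: bool) -> date:
    """Deadline for notifying the national market surveillance authority."""
    window = timedelta(days=15) if immediate_risk else timedelta(days=90)
    return awareness_date + window

print(reporting_deadline(date(2026, 8, 10), immediate_risk=True))  # 2026-08-25
```

In practice the clock starts when you become aware of the incident, so awareness detection (monitoring, user reports) matters as much as the calculation itself.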

For GPAI Model Providers (>10^25 FLOPs or designated)

  1. Technical Documentation — Annex XI: Training methodology, energy consumption, testing results, intended capabilities and limitations.

  2. Copyright Policy and Training-Content Summary — Article 53(1)(c)–(d): A policy to comply with Union copyright law, plus a publicly available, sufficiently detailed summary of the content used for training.

  3. Downstream Provider Information — Article 53(1)(b): Provide information and documentation enabling downstream providers to understand the model's capabilities and limitations and meet their own obligations.

  4. Systemic Risk Obligations (if applicable) — Articles 55-56: Adversarial testing, incident reporting to AI Office, cybersecurity measures, energy efficiency reporting.


Your Pre-Trilogue Compliance Checklist

Use this before May 13, 2026 regardless of Trilogue outcome:

Classification (1-2 hours)

- Map every AI feature in your product against the eight Annex III high-risk categories.
- Check whether any feature could fall under an Article 5 prohibited practice.
- Record the reasoning behind each "not high-risk" conclusion for your documentation.

If High-Risk AI Provider (weeks of work, start now)

- Start Article 11 technical documentation: training data governance, intended purpose, accuracy/robustness/cybersecurity metrics, post-market monitoring plan.
- Contact a Notified Body now; with fewer than 20 accredited across the EU, assessment queues are long.
- Prepare the EU Declaration of Conformity and the EU database registration.

If GPAI Provider (check compute threshold)

- Estimate training compute against the current 10^25 FLOPs threshold and Parliament's proposed 10^24.
- Draft Annex XI technical documentation, the copyright policy, and the training-content summary.
- Prepare the Article 53 documentation for downstream providers.

If AI Deployer Using Third-Party Models

- Add the required user-facing disclosure to every AI interaction (chatbots, assistants, synthetic content).
- Document human oversight measures for any high-risk use case.
- Establish an incident-reporting path to your national market surveillance authority.


How sota.io Supports AI Act Compliance

Operating AI workloads under EU jurisdiction — with no US-parent cloud provider, no CLOUD Act exposure — is one concrete compliance lever for AI Act Article 5(1)(g) (which restricts biometric categorization using personal data) and for GPAI deployers managing data governance obligations.

sota.io provides:

- Hosting entirely under EU jurisdiction, with no US-parent cloud provider and no CLOUD Act exposure
- German data residency with GDPR compliance by design
- A PaaS deployment model for SaaS teams building AI-augmented applications

For SaaS developers building AI-augmented applications under the AI Act, infrastructure sovereignty is not a marketing claim — it is a technical prerequisite for demonstrable compliance.


What to Watch on May 13

Three outcomes are possible from Trilogue #3:

Full agreement: Final Omnibus text sent to Parliament for plenary vote. Developers get amended provisions with clear text by August 2026.

Partial agreement: Most provisions settled, one or two (likely GPAI threshold or SME scope) referred to further technical working groups. Timeline extends to July 2026 for final text.

Failure: Trilogues pause. Current AI Act text applies unchanged from August 2, 2026. Omnibus is unlikely to change August 2 deadline even if eventually agreed.

Subscribe to sota.io to be notified when Blog #823 covers the Trilogue #3 outcome — we will publish within 48 hours of the session result.


Sources: EU AI Act (Regulation 2024/1689), European Parliament AI Act Omnibus Position Paper (April 2026), Council of the EU Omnibus Working Party documents (April–May 2026), EU AI Office GPAI Code of Practice (April 2026 draft), European Commission DG CNECT AI Act Implementation FAQ (Q1 2026).

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.