EU AI Act Omnibus Trilogue #3: What Developers Need to Know Before May 13
May 13, 2026 — 9 days from now — is the scheduled date for the third Trilogue session of the EU AI Act Omnibus revision. This session could resolve the most developer-relevant open disputes in the EU AI Act, accelerate implementation timelines, or — if it fails — push final text into late 2026.
If you are building AI-powered applications, deploying general-purpose AI models, or operating SaaS platforms that integrate AI components, the outcome of Trilogue #3 directly determines your compliance obligations, deadlines, and costs.
This guide covers what is still being negotiated, why each disputed point matters for developers, and what to prepare regardless of the Trilogue outcome.
What Is the AI Act Omnibus?
The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. The phased application schedule began with prohibited practices on February 2, 2025, with GPAI (General-Purpose AI) obligations following on August 2, 2025, and most remaining obligations applying from August 2, 2026.
The AI Act Omnibus is a legislative revision package initiated by the European Commission in early 2026 to address implementation problems identified since the Act entered into force. Key drivers:
- GPAI definition ambiguity: The original Act's definition of General-Purpose AI models created uncertainty for developers of fine-tuned models and AI pipelines.
- SME compliance burden: Startups and scale-ups flagged that the high-risk classification triggers were disproportionate for small operators.
- Conformity assessment overlap: Requirements overlapped with existing CE marking under the Machinery Regulation and Radio Equipment Directive.
- Prohibited practices clarity: Article 5 prohibited practices (including the subliminal manipulation ban and social scoring prohibitions) required further interpretation guidance.
The Commission published an Omnibus proposal in March 2026. Trilogue negotiations began in April 2026 with sessions on April 3 (Trilogue #1) and April 24 (Trilogue #2). Trilogue #3 on May 13 is intended to close remaining open points.
What Is Still Being Negotiated in Trilogue #3?
Based on Trilogue #2 outcomes and Parliament/Council position papers, four provisions remain contested as of early May 2026:
1. GPAI Threshold Revision (Article 51)
Current text: Models trained with more than 10^25 FLOPs are classified as GPAI models with systemic risk, triggering the most stringent requirements (adversarial testing, incident reporting, model evaluations).
Parliament position: Lower the threshold to 10^24 FLOPs to capture current frontier models (including GPT-4-class and equivalent open-weight models), arguing that the 10^25 threshold excludes many models that pose real systemic risk.
Council position: Keep the 10^25 threshold but add a qualitative "significant systemic risk" test that the AI Office can apply to models below the compute threshold.
Developer impact:
- If Parliament wins: More models fall into the systemic risk category. Fine-tuned versions of open-weight models may inherit obligations.
- If Council wins: A model below 10^25 FLOPs can still be designated systemic-risk by the AI Office — less predictability, but narrower automatic scope.
- What to prepare now: Document your compute usage for any GPAI component. If you fine-tune or integrate frontier models, clarify whether you are a "provider" or "deployer" under Article 3(3) and 3(4).
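For that compute documentation, a rough order-of-magnitude figure is often enough to know which side of the threshold you are on. A minimal sketch using the common approximation of ~6 FLOPs per parameter per training token (an assumption for dense transformer training; the Act's compute accounting may count differently, so treat this as a first pass, not a legal determination):

```python
def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens

def gpai_band(flops: float) -> str:
    """Classify against the two thresholds disputed in Trilogue #3:
    10^25 is the current Article 51 trigger, 10^24 the Parliament proposal."""
    if flops >= 1e25:
        return "systemic-risk (current 10^25 threshold)"
    if flops >= 1e24:
        return "systemic-risk only if Parliament's 10^24 threshold is adopted"
    return "below both proposed thresholds"

# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24
print(gpai_band(flops))
```

A model in the 10^24–10^25 band is exactly the population whose status Trilogue #3 will decide, which is why documenting the number now is cheap insurance.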
2. SME Exemption Scope (Article 2(9))
Current text: Micro and small enterprises (fewer than 50 employees, turnover ≤ €10M) are exempt from certain third-party conformity assessment requirements but not from classification or high-risk obligations themselves.
Parliament proposal: Extend the exemption to include medium enterprises (fewer than 250 employees, turnover ≤ €50M) for a transitional 24-month period.
Council position: Reject medium-enterprise extension. Instead, provide compliance support through the AI regulatory sandboxes (Article 57) and extend sandbox access deadlines.
Developer impact:
- If Parliament wins: 24-month window for sub-250-employee companies to defer third-party conformity assessments. Note: this does NOT defer the obligation to classify your AI systems or implement prohibited-practice restrictions.
- If Council wins: Only micro/small enterprises get the assessment exemption. Medium-sized SaaS companies building high-risk AI (hiring tools, credit scoring, medical devices) face the full conformity assessment by August 2026.
- What to prepare now: Know your employee count and revenue. If you are borderline medium/large, assume no exemption. Start the high-risk classification checklist in Annex III.
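Using the size bands as described in the positions above, a quick self-check might look like the following. Note this is a simplification: the official EU SME definition (Recommendation 2003/361/EC) also considers balance-sheet totals and ownership links, so a borderline result needs proper legal review.

```python
def enterprise_category(employees: int, turnover_eur_m: float) -> str:
    """Size bands as described in the Trilogue positions above (first-pass only:
    the official SME definition also checks balance sheets and linked enterprises)."""
    if employees < 50 and turnover_eur_m <= 10:
        return "micro/small"   # assessment exemption under current Article 2(9)
    if employees < 250 and turnover_eur_m <= 50:
        return "medium"        # exempt only if Parliament's extension passes
    return "large"             # assume full obligations either way

print(enterprise_category(120, 30))  # medium: outcome depends on Trilogue #3
```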
3. Prohibited Practices Clarification (Article 5)
Key disputed sub-provisions:
Article 5(1)(a) — Subliminal manipulation: The Parliament wants to narrow the prohibition to techniques that "impair rational decision-making," excluding persuasion-as-a-service features like recommendation engines and dynamic pricing unless harm is demonstrable.
Article 5(1)(d) — Social scoring by private operators: The original prohibition targeted public authority social scoring. The Council wants to explicitly include private-sector social scoring (credit bureaus, insurance risk scoring, employer monitoring) in the prohibition. Parliament resists, arguing this creates uncertainty for legitimate credit and fraud scoring systems.
Article 5(1)(f) — Emotion recognition in workplaces: Parliament wants an absolute ban on biometric emotion recognition in workplace and educational settings. Council allows exceptions for safety-critical environments (transportation, nuclear).
Developer impact:
- Recommendation engines, engagement optimization, and behavioral targeting tools may gain clarity on where they stand relative to Article 5(1)(a).
- If Council wins on social scoring: Private-sector scoring systems need legal analysis before August 2, 2026.
- Emotion recognition at work: If your product includes this, assume prohibition unless you are in a safety-critical sector.
- What to prepare now: Map every AI feature against Article 5(1)(a)–(h). The February 2025 enforcement deadline has already passed — if you have prohibited practices deployed, exposure is current, not future.
4. Notified Body Accreditation Timeline (Articles 43–44)
Issue: The EU has only a limited number of Notified Bodies accredited to conduct AI Act conformity assessments: fewer than 20 across the EU as of May 2026.
Parliament: Extend the grace period for third-party conformity assessments by 12 months (to August 2027) given Notified Body scarcity.
Council: Reject blanket extension; instead, allow companies to begin assessments under a "prospective compliance" mechanism with existing Notified Bodies.
Developer impact:
- High-risk AI system providers (Annex III categories: biometric, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) need to start Notified Body queues now regardless of Trilogue outcome.
- The queue problem is real: if you are building high-risk AI and have not yet engaged a Notified Body, the grace period extension only helps if Trilogue #3 agrees to it, and that is not guaranteed.
- What to prepare now: Identify the relevant Notified Body for your sector. File an intent-to-assess inquiry before May 13 regardless of Trilogue outcome.
Timeline Implications: What Trilogue #3 Can and Cannot Change
| If Trilogue #3 Reaches Agreement | Timeline |
|---|---|
| Final Omnibus text agreed May 13 | European Parliament vote: June–July 2026. Entry into force: August 2026 |
| Amended provisions take effect simultaneously with current August 2, 2026 deadline | High-risk AI deployment deadline unchanged (August 2026) |
| Notified Body extension granted | Grace period to August 2027 for conformity assessment only |
| If Trilogue #3 Fails (No Agreement) | Timeline |
|---|---|
| Further negotiations into June–July 2026 | Final text delayed to Q3/Q4 2026 |
| Current AI Act text applies unchanged | August 2, 2026 high-risk deadline stands |
| No SME extension | All size categories face full obligations |
The critical insight for developers: The August 2, 2026 high-risk deadline applies to the original AI Act text regardless of the Omnibus. The Omnibus can only relieve certain obligations — it cannot extend the underlying Act's application date. Do not wait for Trilogue outcome to start your compliance work.
What August 2, 2026 Requires Regardless of Trilogue Outcome
These obligations apply under the existing AI Act text, unaffected by Omnibus negotiations:
For AI System Providers (building and placing AI on the market)
- Annex III Classification — Have you systematically checked whether your AI systems fall into any of the eight high-risk categories? These are: biometric identification, critical infrastructure management, educational/vocational training, employment and worker management, access to essential private and public services, law enforcement, migration/border control, and administration of justice.
- Technical Documentation — Article 11 requires technical documentation before market placement, including training data governance, a description of the intended purpose, accuracy/robustness/cybersecurity metrics, and a post-market monitoring plan.
- Conformity Assessment — Article 43: third-party assessment by a Notified Body for certain Annex III systems; self-assessment based on internal control is permitted for other categories.
- EU Declaration of Conformity — Article 47: must be issued before placing the system on the market and updated whenever material changes occur.
- Registration in EU Database — Article 49: high-risk AI systems must be registered in the EU database before deployment.
For AI System Deployers (using AI built by others in your SaaS)
- Transparency to Users — Article 50: users must be informed when they interact with an AI system or are exposed to AI-generated synthetic content. Chatbots, AI assistants, and deepfake-capable features all require disclosure.
- Fundamental Rights Impact Assessment — Article 27: required for high-risk AI systems deployed by public bodies or in certain regulated sectors.
- Human Oversight — Article 26: deployers of high-risk AI must assign and implement appropriate human oversight measures, using the oversight capabilities providers build in under Article 14. If you are using a GPAI model API for a high-risk use case, you are responsible for the oversight layer.
- Incident Reporting — Article 73: serious incidents (harm to health, safety, or fundamental rights) must be reported to national market surveillance authorities within 15 days of awareness, with shorter deadlines for the most severe cases.
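For the incident-reporting obligation, one pragmatic step is to compute the reporting deadline the moment an incident is logged, so it cannot be missed. A sketch, assuming the 15-day window runs from the date of awareness (check the final Article 73 text and national guidance for the exact trigger and the shorter special-case deadlines):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SeriousIncident:
    description: str
    aware_on: date  # date the operator became aware of the incident

    def report_due(self, window_days: int = 15) -> date:
        """Default 15-day window; the most severe incidents have shorter deadlines."""
        return self.aware_on + timedelta(days=window_days)

# Hypothetical incident logged on August 10, 2026
incident = SeriousIncident("chatbot exposed health data", date(2026, 8, 10))
print(incident.report_due())  # 2026-08-25
```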
For GPAI Model Providers (>10^25 FLOPs or designated)
- Technical Documentation — Annex XI: training methodology, energy consumption, testing results, and intended capabilities and limitations.
- Copyright Policy — Article 53(1)(c): a policy to comply with EU copyright law, plus a publicly available summary of training content under Article 53(1)(d). Models with systemic risk face fuller obligations.
- Downstream Provider Information — Article 53(1)(b): provide the information and documentation that enable downstream providers to comply with their own obligations.
- Systemic Risk Obligations (if applicable) — Article 55: model evaluations, adversarial testing, serious incident reporting to the AI Office, and adequate cybersecurity protection.
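A lightweight way to track whether the documentation items above are actually covered is a checklist keyed to the categories this post lists. The field names below are our own paraphrase of the Annex XI topics, not the Annex's official headings:

```python
# Paraphrased Annex XI documentation checklist (our own field names, not official)
ANNEX_XI_ITEMS = {
    "training_methodology": None,
    "energy_consumption": None,
    "testing_results": None,
    "capabilities_and_limitations": None,
}

def missing_items(docs: dict) -> list[str]:
    """Return checklist entries with no documentation attached yet."""
    return [key for key, value in docs.items() if not value]

docs = dict(ANNEX_XI_ITEMS, training_methodology="link-to-internal-doc")
print(missing_items(docs))  # the three items still to be documented
```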
Your Pre-Trilogue Compliance Checklist
Use this before May 13, 2026 regardless of Trilogue outcome:
Classification (1-2 hours)
- List all AI components in your product (own-built + third-party APIs)
- Check each against Article 5 prohibited practices — enforcement has been live since February 2025
- Check each against Annex III high-risk categories
- Determine for each: are you provider, deployer, or importer?
If High-Risk AI Provider (weeks of work, start now)
- Begin technical documentation per Article 11 and Annex IV
- Contact an accredited Notified Body (if required) — queues are forming
- Set up quality management system per Article 17
- Draft EU Declaration of Conformity (Article 47)
If GPAI Provider (check compute threshold)
- Calculate training compute (FLOPs) for your models
- Prepare Annex XI technical documentation
- Publish copyright/training data policy (minimum: summary)
- Register in AI Office GPAI model registry
If AI Deployer Using Third-Party Models
- Add user-facing disclosure for AI-generated content (Article 50)
- Confirm human oversight mechanisms for any high-risk use cases
- Establish incident logging process (Article 73)
- Request and review your GPAI provider's Annex XI documentation
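For the user-facing disclosure in the checklist above, a robust pattern is to attach the notice at the API boundary rather than in individual UI components, so no AI-generated response can ship without it. A sketch; the label text and function names are illustrative, not prescribed by the Act:

```python
AI_DISCLOSURE = "This response was generated by an AI system."

def with_disclosure(response_text: str) -> dict:
    """Wrap every model response with machine- and human-readable AI notices."""
    return {
        "content": response_text,
        "ai_generated": True,     # machine-readable flag for downstream UIs
        "notice": AI_DISCLOSURE,  # human-readable notice to surface to the user
    }

msg = with_disclosure("Your invoice is attached.")
print(msg["notice"])
```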
How sota.io Supports AI Act Compliance
Operating AI workloads under EU jurisdiction — with no US-parent cloud provider and no CLOUD Act exposure — is one concrete compliance lever for GPAI deployers managing their data governance obligations, particularly where systems process the kinds of biometric and personal data at issue in Article 5(1)(g) (the prohibition on biometric categorization that infers sensitive attributes).
sota.io provides:
- EU-only infrastructure: All compute, storage, and data processing stays within EU borders under EU law, with no transfer to US parent companies.
- Full control plane: No shared-tenancy secrets, isolated container environments, EU-resident support team.
- Deploy from Git: The same workflow as Vercel or Railway, running entirely within the EU.
For SaaS developers building AI-augmented applications under the AI Act, infrastructure sovereignty is not a marketing claim — it is a technical prerequisite for demonstrable compliance.
What to Watch on May 13
Three outcomes are possible from Trilogue #3:
Full agreement: Final Omnibus text sent to Parliament for plenary vote. Developers get amended provisions with clear text by August 2026.
Partial agreement: Most provisions settled, one or two (likely GPAI threshold or SME scope) referred to further technical working groups. Timeline extends to July 2026 for final text.
Failure: Trilogues pause. Current AI Act text applies unchanged from August 2, 2026. Omnibus is unlikely to change August 2 deadline even if eventually agreed.
Subscribe to sota.io to be notified when Blog #823 covers the Trilogue #3 outcome — we will publish within 48 hours of the session result.
Sources: EU AI Act (Regulation 2024/1689), European Parliament AI Act Omnibus Position Paper (April 2026), Council of the EU Omnibus Working Party documents (April–May 2026), EU AI Office GPAI Code of Practice (April 2026 draft), European Commission DG CNECT AI Act Implementation FAQ (Q1 2026).
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.