---
title: "EU AI Act Omnibus Trilogue #3 (May 13): What Changes for SaaS Developers Building AI Features"
date: "2026-05-07"
description: "The third EU AI Act Omnibus trilogue on May 13, 2026 will negotiate GPAI thresholds, SME exemptions, and high-risk classification changes. Here's what SaaS developers building AI features need to know before the deadline."
tags: ["eu-ai-act", "omnibus", "trilogue", "gpai", "saas-compliance", "ai-regulation", "high-risk-ai", "developer-guide", "2026"]
---
EU AI Act Omnibus Trilogue #3 (May 13): What Changes for SaaS Developers Building AI Features
On May 13, 2026 — six days from today — the European Parliament, the Council of the EU, and the European Commission will sit down for the third round of trilogue negotiations on the AI Act Omnibus simplification package. For most developers, this sounds like Brussels procedural noise. It isn't.
The Omnibus proposes to rewrite the obligations that apply to General-Purpose AI (GPAI) model integrators, change the compute thresholds that trigger "systemic risk" status, and introduce SME exemptions that could substantially change your compliance posture — depending on how the trilogue lands.
The stakes are concrete: the EU AI Act's full application date is August 2, 2026. That's 87 days from now. If you're building AI features into a SaaS product used by customers in the EU, you are already in scope. Trilogue #3 will determine whether the obligations you need to meet by August are the original AI Act text or the Omnibus revision.
This guide explains what's on the table at Trilogue #3, which outcomes matter for developers, and how to prepare regardless of how it resolves.
Background: What Is the AI Act Omnibus?
The EU AI Act itself was published in the Official Journal on July 12, 2024 and entered into force on August 1, 2024. Its obligations phase in on a risk-based, tiered schedule:
- February 2, 2025: Prohibited AI practices apply (e.g., social scoring, certain biometric categorisation)
- August 2, 2025: Governance provisions and GPAI obligations apply
- August 2, 2026: High-risk AI system obligations apply (the main compliance burden for most developers)
- August 2, 2027: Obligations for high-risk AI embedded in products covered by existing EU product safety legislation (Annex I)
The EU AI Act Omnibus is part of a broader Commission initiative launched in early 2025 to reduce regulatory compliance costs — the same package that proposed simplifying the CSRD (Corporate Sustainability Reporting Directive) and CSDDD (Corporate Sustainability Due Diligence Directive). The AI Act Omnibus specifically targets:
- GPAI model thresholds — the compute threshold that triggers "systemic risk" status
- SME exemptions — creating lighter obligations for small and medium-sized enterprises
- High-risk classification changes — narrowing or clarifying which AI systems fall into high-risk categories
- Deployment obligations — adjusting the split between provider and deployer responsibilities
The first two trilogue sessions established broad negotiating positions. Trilogue #3 is where the specific numbers and carve-outs are expected to be finalised or narrowed.
What's on the Table at Trilogue #3
1. GPAI Systemic Risk Thresholds
The original AI Act set 10<sup>25</sup> FLOPs (floating-point operations) of cumulative training compute as the threshold above which a GPAI model is presumed to present systemic risk. Above this threshold, providers face additional transparency obligations, adversarial testing requirements, and incident reporting duties.
The Commission proposed raising this threshold significantly — to 10<sup>26</sup> FLOPs — arguing that the original threshold was set too conservatively and would capture models that don't actually present systemic risk.
What this means for SaaS developers: If you're using a GPAI model API (GPT-4o, Gemini 1.5 Pro, Claude 3.x) in your product, you're a deployer of a GPAI model, not a provider. Your obligations don't change based on the compute threshold. But your vendor's obligations change — which affects the technical documentation, model cards, and safety information they're required to give you. If the threshold rises, your vendors have fewer disclosure obligations, which means you may receive less information for your own AI impact assessments.
Developer action: Request your GPAI vendor's compliance documentation now. Under Article 53(1)(b), GPAI providers must give downstream integrators the information and documentation necessary to understand the model's capabilities and limitations and to meet their own obligations. That obligation applies regardless of the compute threshold change.
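To make the threshold debate concrete, here is a back-of-the-envelope sketch of where a model lands relative to the two figures under discussion. It uses the common approximation of roughly 6 FLOPs per parameter per training token for dense transformers; the model size and token count below are hypothetical, not figures from any vendor.

```python
# Illustrative estimate of whether a model's cumulative training compute
# crosses the AI Act systemic-risk threshold. The ~6 * params * tokens
# rule of thumb and the example figures are assumptions for illustration.

CURRENT_THRESHOLD_FLOPS = 1e25   # original AI Act threshold
PROPOSED_THRESHOLD_FLOPS = 1e26  # Commission's proposed Omnibus threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per param per token."""
    return 6.0 * params * tokens

def systemic_risk(flops: float, threshold: float) -> bool:
    return flops >= threshold

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(systemic_risk(flops, CURRENT_THRESHOLD_FLOPS))   # False (below 1e25)
print(systemic_risk(flops, PROPOSED_THRESHOLD_FLOPS))  # False (well below 1e26)
```

The point of the exercise: a model of this (hypothetical) scale sits just under the current 10<sup>25</sup> line, which is exactly the kind of borderline case the threshold debate is about.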
2. SME Exemptions and Proportionality
The Omnibus proposes to introduce explicit SME exemptions that weren't in the original AI Act text. The Commission's draft included:
- Full exemption from some high-risk AI obligations for enterprises with fewer than 10 employees and <€2M annual turnover (micro-enterprises)
- Simplified compliance pathways for SMEs (generally <250 employees, <€50M revenue, or <€43M balance sheet)
- Reduced technical documentation requirements for SME deployers of third-party AI systems
The Parliament's position in Trilogue #1 pushed back on the micro-enterprise full exemption, arguing it creates compliance gaps. The Council's position was closer to the Commission's. Trilogue #3 is expected to negotiate a compromise.
What this means for SaaS developers: Most early-stage SaaS startups will qualify as SMEs. If the Omnibus SME provisions survive trilogue substantially intact, your compliance obligations for high-risk AI features will be significantly lighter than under the original text.
Developer action (regardless of Omnibus outcome): Don't plan your compliance strategy around a best-case Omnibus outcome. The Parliament's position is more restrictive, and you may end up with a compromise that provides limited SME relief. Build your compliance baseline against the original AI Act text, then revise down if the Omnibus changes are favourable.
3. High-Risk AI Classification
Annex III of the AI Act lists the categories of high-risk AI systems. The Omnibus proposed to remove some sub-categories from Annex III, arguing they were captured by other EU legislation (e.g., the Medical Devices Regulation) and didn't need AI Act duplication.
The key Annex III sub-categories being debated for removal or modification:
- Education (Annex III, point 3): AI for student assessment, learning analytics — being proposed for removal in some positions
- Employment (Annex III, point 4): AI for recruiting, promoting, contract termination — surviving trilogue so far but with narrowed definitions
- Credit scoring (Annex III, point 5b): AI systems used by financial institutions for creditworthiness assessment — some proposals to narrow to "natural persons only" and exclude B2B credit scoring
What this means for SaaS developers: If you're building HR tech, edtech, or fintech AI features, the final Annex III text after Trilogue #3 will directly determine whether you're in the "high-risk" category. The current status is:
- HR/recruiting AI → still high-risk in all positions
- EdTech assessment AI → potentially removed from high-risk (Commission/Council position)
- B2B credit scoring → potentially narrowed out of high-risk
4. Deployer vs. Provider Responsibility Split
The original AI Act placed substantial obligations on both providers (who develop or put AI systems on the market) and deployers (who use AI systems in a professional context to serve end users). The Omnibus proposes to clarify and in some cases shift obligations.
The key change: If you integrate a third-party GPAI model API to build an AI feature, are you a "provider" or "deployer" under the AI Act?
The original text drew the line based on whether you "substantially modify" the upstream AI system. The Omnibus proposes clearer criteria: if you build a downstream application using a provider's API without modifying the underlying model, you are a deployer with lighter obligations. Only if you fine-tune, retrain, or substantially modify the model's architecture do you become a provider with full provider obligations.
What this means for SaaS developers: This is the most practically important change for most SaaS builders. Most developers building AI features on top of OpenAI/Anthropic/Google APIs are deployers, not providers. The Omnibus clarification would solidify that status and significantly reduce your compliance burden.
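The decision rule described above can be sketched as a small classifier. The enum values and function below are illustrative labels for this post's summary of the proposed Omnibus criteria, not legal categories defined in the Act itself.

```python
# Sketch of the deployer-vs-provider decision rule as summarised above:
# you stay a deployer unless you modify the underlying model itself.
# Category names are hypothetical, for illustration only.
from enum import Enum

class Integration(Enum):
    API_ONLY = "api_only"          # call a third-party model API as-is
    PROMPTING = "prompting"        # system prompts / retrieval, no weight changes
    FINE_TUNED = "fine_tuned"      # retrained or fine-tuned model weights
    ARCHITECTURE = "architecture"  # substantially modified model architecture

def role_under_omnibus(integration: Integration) -> str:
    """Deployer unless the underlying model was fine-tuned or restructured."""
    if integration in (Integration.FINE_TUNED, Integration.ARCHITECTURE):
        return "provider"
    return "deployer"

print(role_under_omnibus(Integration.API_ONLY))    # deployer
print(role_under_omnibus(Integration.FINE_TUNED))  # provider
```

Note where the line falls in this reading: prompt engineering and retrieval layers leave you a deployer, while touching the model weights makes you a provider with the full obligation set.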
The August 2026 Deadline Is Fixed Regardless of the Omnibus
Here's the critical point many developers miss: the August 2, 2026 application date for high-risk AI obligations does not move based on the Omnibus timeline.
The Omnibus is a separate legislative process. Even in the best case — Trilogue #3 reaches agreement on May 13, the texts are approved by Parliament and Council over summer, published in the Official Journal by July — the new rules would apply from the date of their own entry into force, not backdated to August 2. There will likely be a transition period.
The practical implication: you need to be ready for August 2026 compliance under the original AI Act text. If the Omnibus passes with favourable changes, you can revise down. If it stalls or if Parliament and Council can't agree on Trilogue #3, the original text applies in full.
What does "ready for August 2026" actually mean for a SaaS developer?
Your August 2026 AI Act Compliance Checklist
Step 1: Determine Whether You Have High-Risk AI
The high-risk categories under the current Annex III are:
- Critical infrastructure management
- Education and vocational training (student assessment, exam proctoring)
- Employment (recruiting, hiring, performance management, contract termination)
- Access to essential private and public services (credit scoring, social benefits)
- Law enforcement (biometric identification, predictive policing)
- Migration and border management
- Justice and democratic processes
Most SaaS products don't touch these categories. A project management tool with an AI assistant is not high-risk. A CRM with predictive lead scoring is probably not high-risk. However:
- If your product does automated hiring decisions (resume screening that auto-rejects candidates) → high-risk
- If your product does automated credit or financial decisions for consumers → likely high-risk
- If your product does student assessment with automated grading → possibly high-risk (depends on Omnibus outcome)
- If your product uses biometric identification → likely high-risk
If you're in a grey zone, the safe approach is to treat the system as high-risk until you have legal clarity from the Omnibus outcome.
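The screening step above can be turned into a first-pass triage script. The category names follow the Annex III summary in this section, but the trigger phrases are illustrative; a keyword match is a prompt for legal review, never a legal determination.

```python
# Minimal triage sketch for Step 1: flag product features whose descriptions
# match the Annex III high-risk categories listed above. The trigger phrases
# are assumptions for illustration, not an official taxonomy.

ANNEX_III_CATEGORIES = {
    "employment": ["resume screening", "hiring", "promotion", "termination"],
    "education": ["student assessment", "automated grading", "exam proctoring"],
    "essential services": ["credit scoring", "creditworthiness", "social benefits"],
    "biometrics": ["biometric identification", "face recognition"],
}

def screen_feature(description: str) -> list[str]:
    """Return Annex III categories whose trigger phrases appear in the description."""
    desc = description.lower()
    return [
        category
        for category, triggers in ANNEX_III_CATEGORIES.items()
        if any(t in desc for t in triggers)
    ]

print(screen_feature("AI resume screening that auto-rejects candidates"))
# ['employment'] -> treat as high-risk pending legal review
print(screen_feature("AI assistant that summarises project notes"))
# [] -> likely not high-risk, but document the reasoning anyway
```

Run this over every AI feature in your product inventory; anything that matches a category goes into the grey-zone bucket for the conservative treatment described above.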
Step 2: For High-Risk AI — Required Compliance Elements
If you determine you have a high-risk AI system, the Article 9-17 requirements that apply from August 2, 2026 include:
Risk management system (Article 9): A documented risk management process for identifying, analysing, and evaluating risks throughout the AI system's lifecycle. This needs to exist before the system is placed on the market.
Data governance (Article 10): Training, validation, and testing datasets need to meet quality criteria. You need documentation of data collection practices, preprocessing steps, and bias identification.
Technical documentation (Article 11 + Annex IV): Detailed documentation including: a description of the system and its intended purpose, design specifications, development process description, system capabilities and limitations, accuracy metrics, and risk mitigation measures.
Automatic event logging (Article 12): High-risk AI systems must be capable of automatically recording events over their lifetime. These logs must be retained for a period appropriate to the system's intended purpose, and for at least six months (Articles 19 and 26), unless sector-specific law requires longer.
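The logging duty above can be sketched as an append-only decision log with a retention gate. The record fields and schema below are assumptions for illustration; the Act mandates the logging capability and minimum retention, not any particular format.

```python
# Sketch of Article 12-style event logging for a high-risk AI feature.
# The six-month minimum retention comes from the text above; the record
# schema and field names are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # at least six months

@dataclass
class DecisionEvent:
    timestamp: str
    system_id: str
    input_ref: str        # reference to the input data, not the data itself
    output_summary: str
    model_version: str
    human_override: bool

def log_event(event: DecisionEvent, sink) -> None:
    """Append one decision event as a JSON line to an append-only sink."""
    sink.write(json.dumps(asdict(event)) + "\n")

def eligible_for_deletion(event_time: datetime, now: datetime) -> bool:
    """A log entry may only be purged once the minimum retention has passed."""
    return now - event_time >= MIN_RETENTION
```

Keeping references to inputs rather than raw inputs in the log keeps the GDPR footprint of the log itself smaller, which matters for the hosting discussion later in this post.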
Transparency and instructions for use (Article 13): Deployers must be provided with instructions for use that include: the system's intended purpose and limitations, performance metrics, human oversight measures, and the circumstances in which the system may fail or produce inaccurate outputs.
Human oversight (Article 14): High-risk AI systems must be designed to allow effective human oversight. A human must be able to understand the system's outputs, override or interrupt it, and recognise when not to rely on it.
Accuracy, robustness, cybersecurity (Article 15): High-risk AI systems must achieve appropriate levels of accuracy for their intended purpose, be resilient to attempts to alter their outputs, and be designed with appropriate cybersecurity measures.
Step 3: For GPAI Features — Required Compliance Elements
If your product integrates a GPAI model (regardless of high-risk status), and you are classified as a deployer:
Transparency to end users (Article 50): You must inform users when they're interacting with an AI system (chatbot disclosure) and label AI-generated content. These obligations apply from August 2, 2026, with the Act's general application date.
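The deployer-side transparency duty above is straightforward to wire into a response pipeline. A minimal sketch, assuming a JSON response payload; the field names and disclosure string are illustrative, not prescribed by the Act.

```python
# Sketch of the chatbot-disclosure duty: every AI-generated response carries
# a machine-readable flag and a human-readable notice. Field names are
# illustrative assumptions.

def wrap_ai_response(text: str, model_name: str) -> dict:
    """Attach an AI-interaction disclosure to a chatbot response payload."""
    return {
        "content": text,
        "ai_generated": True,  # machine-readable flag for downstream consumers
        "disclosure": "You are interacting with an AI system.",  # shown in UI
        "model": model_name,
    }

resp = wrap_ai_response("Here is your summary.", "example-gpai-model")
print(resp["disclosure"])  # You are interacting with an AI system.
```

Putting the disclosure in the payload rather than only in the UI means every surface that renders the response (web, mobile, email digests) inherits it.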
Prohibited practices (Article 5): Deployers, like providers, cannot use AI systems for the practices prohibited under Chapter II (social scoring, subliminal manipulation, real-time remote biometric identification in publicly accessible spaces, subject to narrow exceptions). These prohibitions have applied since February 2, 2025.
Fundamental rights impact assessment (Article 27): If you're a public body or operating in specific sectors deploying high-risk AI, you need a fundamental rights impact assessment before deployment.
Step 4: Register in the EU AI Act Database (if applicable)
Article 71 creates a public EU database for high-risk AI systems. Under Article 49, providers of Annex III high-risk AI systems must register before placing the system on the market, and deployers that are public authorities (or bodies acting on their behalf) must register as well.
If you're building a SaaS product used in EU employment, education, or financial services contexts, verify whether your registration obligations apply before August 2.
What to Watch on May 13
The Trilogue #3 session will negotiate specific text, and the outcomes that matter most for SaaS developers are:
1. Does the micro-enterprise full exemption survive? If Parliament agrees to exempt micro-enterprises (<10 employees, <€2M revenue), many early-stage startups building AI features will have substantially lighter obligations.
2. Is EdTech removed from Annex III? If the Commission/Council position prevails on education AI, assessment and learning analytics products drop out of high-risk entirely.
3. Is the deployer/provider line clarified? If the Omnibus text explicitly states that API integrators without model modification are deployers, the compliance burden for most AI-feature builders reduces significantly.
4. Does the GPAI threshold move to 10<sup>26</sup> FLOPs? Less directly relevant for most SaaS developers, but affects what transparency documentation your GPAI vendors are required to give you.
Trilogue #3 may not reach final agreement; trilogues often require multiple rounds. If May 13 is another working session rather than a political agreement, the most likely outcome is that the original AI Act text applies on August 2, 2026, with the Omnibus following later (creating a transition adjustment period).
Where EU Hosting Fits In
The EU AI Act's obligations are not about where your infrastructure runs; they're about the risk profile of your AI system and how you govern it. But hosting location is still relevant for AI Act compliance in three ways:
1. Data governance (Article 10): Your training and evaluation data needs to meet quality criteria. If your training data contains personal data of EU residents, that data needs to comply with GDPR throughout its lifecycle — including during model training. Running training jobs on EU-based infrastructure reduces the CLOUD Act exposure risk for that personal data.
2. Log retention (Article 12): High-risk AI systems need to retain event logs. If those logs contain personal data (which AI decision logs typically do), GDPR storage and data subject rights obligations apply to those logs. Hosting logs on EU infrastructure with proper data processing agreements simplifies your compliance posture.
3. Audits and market surveillance (Articles 74-77): National market surveillance authorities have the right to access your AI system documentation and request information. If you're processing data for EU customers and your infrastructure is in the EU, audit cooperation is substantially simpler than if you're coordinating data access across jurisdictions.
Practical Timeline for SaaS Builders
Given the 87-day countdown to August 2, 2026:
Now (May 2026):
- Audit your product's AI features against Annex III high-risk categories
- Identify which GPAI providers you use and request their Article 53 compliance documentation
- Determine whether you're classified as a deployer or provider for each AI system
June 2026 (after Omnibus Trilogue #3 outcome is clearer):
- Revise your compliance scope based on Omnibus changes if any were agreed in May
- Begin technical documentation drafting for high-risk systems (Article 11)
- Implement or verify automatic event logging capability (Article 12)
July 2026:
- Finalise risk management system documentation (Article 9)
- Complete internal testing and accuracy assessments (Article 15)
- Register high-risk AI systems in EU database if applicable (Article 71)
- Verify GPAI deployer transparency notices are in your product UI (Article 52)
August 2, 2026:
- Full compliance required for high-risk AI systems placed on the market or put into service after this date
The Bottom Line
Trilogue #3 on May 13 matters because it will determine whether the compliance obligations you're building toward in August are the original AI Act or the simplified Omnibus version. The most developer-favourable outcomes — clearer deployer/provider line, SME exemptions, EdTech removed from high-risk — are possible but not guaranteed.
The safest strategy: build to the original AI Act baseline, treat any Omnibus relief as a bonus. The companies that will struggle in August 2026 are those that were waiting for the Omnibus to pass before starting their compliance work.
If you're running EU-hosted infrastructure for your AI features, you already have the data sovereignty piece handled. The regulatory piece — documentation, risk management, oversight mechanisms — is the part that takes calendar time to build. Start now, watch Trilogue #3, and revise if favourable.
This post was published May 7, 2026, six days before EU AI Act Omnibus Trilogue #3 (May 13, 2026). We will update it with the Trilogue #3 outcome once available. Nothing in this post constitutes legal advice — consult a qualified EU regulatory lawyer for advice on your specific AI systems and compliance obligations.