2026-05-06 · 12 min read · sota.io team

EU AI Act Omnibus 2026: What Trilogue #3 Means for SaaS Developers Building AI Features

The EU AI Act entered into force on 1 August 2024. For most SaaS developers, the compliance calendar looked manageable: prohibited practices banned from 2 February 2025, general-purpose AI (GPAI) model obligations from 2 August 2025, high-risk application obligations from 2 August 2026. Enough runway to plan.

Then the European Commission introduced the EU Competitiveness Omnibus package in early 2025 — a broad regulatory simplification initiative that includes proposed amendments to the AI Act itself. If the Omnibus text is finalized as proposed, the compliance burden shifts considerably: fewer products would qualify as high-risk, conformity assessment procedures would be lighter, and SME thresholds would be adjusted upward. If the negotiations stall or the Parliament's amendments prevail, you might end up with different obligations than either the current AI Act text or the Commission's simplification proposal.

Trilogue #3, the third round of negotiations between the European Commission, the European Parliament, and the Council of the EU, is a key milestone in determining which version SaaS developers will actually have to comply with.

This guide explains what the Omnibus proposes to change, what Trilogue #3 might decide, what you should be doing right now regardless of outcome, and — critically — what the Omnibus does not touch at all.


What the EU AI Act Omnibus Proposes

The Omnibus amendments to the AI Act focus on three areas: high-risk classification, conformity assessment, and GPAI model thresholds.

1. Narrowing the High-Risk Classification

Under the current AI Act text (Articles 6 and 7), an AI system falls into the high-risk category if it:

- is a safety component of a product (or is itself a product) covered by the Union harmonisation legislation listed in Annex I and required to undergo third-party conformity assessment; or
- falls into one of the use cases listed in Annex III, which covers areas such as employment, education, credit and essential services, and law enforcement.

The Commission's Omnibus proposal adds a materiality gate: a system listed in Annex III only qualifies as high-risk if it poses a significant risk of harm to the health, safety, or fundamental rights of natural persons, considering factors like the reversibility of the AI system's output, the degree of human oversight, and the extent to which the system is actually used as a decision-making tool versus a supporting tool.

In practice, this means a product that technically falls into an Annex III category might not be high-risk if there is meaningful human review before consequential decisions. For SaaS developers, this could affect, for example:

- recruitment and CV-screening features (the Annex III employment category);
- credit scoring or creditworthiness components in fintech products;
- educational assessment and proctoring tools;
- features that gate access to essential private services such as insurance.

The change is not a blanket exemption. Products used in fully automated pipelines without meaningful human review would still qualify as high-risk. But the "significant risk" materiality gate creates interpretive space that did not exist before.
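To make the materiality-gate factors concrete, here is a hypothetical internal triage sketch. The factor names, the all-or-nothing screening logic, and the `MaterialityFactors`/`likely_high_risk` names are illustrative assumptions for this post, not criteria from the legal text, which remains under negotiation.

```python
# Hypothetical triage sketch of the proposed "significant risk" factors.
# Factor names and screening logic are illustrative, not the legal test.
from dataclasses import dataclass

@dataclass
class MaterialityFactors:
    annex_iii_category: str           # e.g. "employment", "credit"
    output_reversible: bool           # Can the effect on the person be undone?
    human_review_before_effect: bool  # Systematic review before acting?
    decision_support_only: bool       # Supports a human decision vs. makes it

def likely_high_risk(f: MaterialityFactors) -> bool:
    """Conservative screen: treat as high-risk unless every
    mitigating factor in the proposed materiality gate applies."""
    mitigated = (f.output_reversible
                 and f.human_review_before_effect
                 and f.decision_support_only)
    return not mitigated

# A CV-screening feature with systematic recruiter review:
screen = MaterialityFactors("employment", True, True, True)
print(likely_high_risk(screen))  # → False, but document the analysis anyway
```

A deliberately conservative screen like this errs toward high-risk: one missing mitigating factor flips the result, which mirrors the point that the gate is not a blanket exemption.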

2. Simplified Conformity Assessment for Mid-Risk Products

Currently, providers of high-risk AI systems must either conduct an internal conformity assessment (for most Annex III categories) or obtain a third-party assessment from a notified body (for safety-component products under Annex I legislation). The process involves:

- establishing a risk management system (Article 9);
- compiling the Annex IV technical documentation;
- operating a quality management system (Article 17);
- drawing up an EU declaration of conformity and affixing CE marking;
- registering the system in the EU database before placing it on the market.

The Omnibus proposes:

- a simplified assessment pathway for SMEs and startups below defined size thresholds;
- a lighter quality management system for providers using that pathway;
- documentation requirements scaled to the system's risk profile rather than a single full Annex IV file.

The practical implication for developers: if you have been delaying AI feature launches because the conformity assessment process looked overwhelming, the Omnibus simplification path — if it survives Trilogue — would materially reduce that friction.

3. GPAI Model Threshold Adjustments

The current AI Act defines a general-purpose AI (GPAI) model by its generality and the breadth of its training data rather than by a hard compute number, but under Article 51 training compute above 10^25 FLOPs creates a presumption of "systemic risk," which carries additional obligations: adversarial testing, serious-incident reporting, and cybersecurity measures.

The Omnibus proposes:

- raising the systemic-risk compute threshold from 10^25 to 10^26 FLOPs;
- correspondingly narrowing the set of models subject to the additional systemic-risk obligations.

For most SaaS developers, the direct impact is limited — you are more likely to be a downstream deployer of a GPAI model than a foundation model provider. But the threshold adjustment affects your AI vendor's compliance posture, which flows upstream into your own risk assessment and due diligence obligations under Article 28.
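For a sense of scale, training compute is commonly approximated as 6 × parameters × training tokens (a rule of thumb from the scaling-law literature, not a method prescribed by the Act). The model sizes below are illustrative, not any vendor's real figures.

```python
# Back-of-envelope training-compute check against the GPAI thresholds.
# Uses the common 6 * params * tokens FLOPs approximation; example
# figures are illustrative, not any real vendor's numbers.
CURRENT_THRESHOLD = 1e25   # FLOPs: systemic-risk presumption today
PROPOSED_THRESHOLD = 1e26  # FLOPs: the Commission's proposed raise

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# A 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)  # ≈ 6.3e24 FLOPs
print(flops < CURRENT_THRESHOLD)     # below even today's threshold
```

The arithmetic shows why the threshold change matters mainly at the frontier: a 10× raise moves the line well past most of today's deployed models, leaving only the largest training runs in systemic-risk scope.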


What Trilogue #3 Will Decide

The trilogue process involves the three EU institutions negotiating to reach a common text. The European Parliament typically advocates for stronger protections; the Council typically favors industry flexibility; the Commission defends its original proposal. Trilogue #3 in May 2026 is not the final vote — it is a negotiation round — but it often resolves the most contested provisions.

The key contested areas likely to be discussed in Trilogue #3:

| Provision | Commission Proposal | Parliament Likely Position | Council Likely Position |
|---|---|---|---|
| High-risk materiality gate | "Significant risk" narrowing | Preserve current scope or narrow only with strong criteria | Broader exemptions, flexible interpretation |
| Startup simplified assessment | Available for SMEs below thresholds | Conditional on additional transparency obligations | Broad startup exemption, minimal conditions |
| GPAI systemic risk threshold | Raise to 10^26 FLOPs | Maintain 10^25 FLOPs or lower | Accept Commission's raise |
| Fundamental rights impact assessment | Clarify as required for all high-risk | Extend scope and strengthen requirements | Limit to deployment contexts only |

The most significant risk for SaaS developers is Parliament inserting conditions on the startup pathway that make it more burdensome than the standard conformity assessment it was designed to replace. A simplified procedure with heavy documentation requirements attached is not a simplification.


What You Should Do Right Now, Before Trilogue Outcome

The smart move is not to wait for Trilogue #3 to finalize your compliance strategy. Two reasons:

First, the gap between current obligations and Omnibus simplifications is mostly about paperwork, not infrastructure. The technical requirements — logging, auditability, data quality, monitoring — remain in some form in every version of the proposal. If you build these into your system now, you are compliant under current law and well-positioned for any simplified version.

Second, the Omnibus does nothing about the CLOUD Act problem (see below). Infrastructure decisions made now will determine your compliance posture on data sovereignty for years, regardless of whether the high-risk classification changes.

Practical steps before Trilogue outcome:

Step 1: Determine Your Current High-Risk Status

Map your AI features against current Annex III categories. Be specific about use context: the same AI component used in an internal productivity tool versus a customer-facing access control system may have different classifications.

If you are in Annex III scope today, assess whether the Omnibus "significant risk" materiality gate would realistically change your classification. Document this analysis — if the materiality gate survives Trilogue, this document becomes your primary evidence for a non-high-risk determination.

Step 2: Implement the Logging and Monitoring Baseline

Articles 12, 13, and 17 require high-risk systems to maintain logs of operations, be transparent to users, and have ongoing monitoring in place. These requirements exist in reduced form even under Omnibus simplification proposals.

The logging baseline for any AI-assisted decision system in scope:

```python
# Minimum audit log entry for AI-assisted decisions,
# expressed as a TypedDict so the schema is enforceable in code
from datetime import datetime
from typing import TypedDict

class AuditLogEntry(TypedDict):
    decision_id: str         # Unique identifier per AI output
    timestamp_utc: datetime  # When the AI system generated the output
    model_version: str       # Which model version produced this output
    input_hash: str          # Hash of inputs (not PII directly)
    output_category: str     # Classification bucket, not raw probability
    confidence_tier: str     # "high" / "medium" / "low", not a raw float
    human_review_flag: bool  # Was this reviewed before acting?
    data_subject_id: str     # For GDPR Art. 22 automated-decision records
    jurisdiction: str        # Where the decision was acted upon
```

Notice that this schema avoids storing raw input data (GDPR minimization) while preserving auditability. The human_review_flag is specifically relevant to the Omnibus materiality gate: if you can demonstrate that human review is systematically applied, that supports a non-high-risk determination.
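A minimal sketch of a log writer that enforces the minimization pattern described above: inputs are hashed before storage, and the raw confidence float is bucketed into a tier. The `audit_entry` helper and its tier cut-offs (0.9 / 0.6) are assumptions for illustration, not values from the Act.

```python
# Sketch of a log writer enforcing the minimization pattern:
# inputs hashed before storage, only a confidence tier persisted.
# Helper name and tier thresholds are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(decision_id, model_version, raw_input: dict,
                output_category, confidence: float,
                human_reviewed: bool, data_subject_id, jurisdiction):
    # Canonical JSON so identical inputs always yield the same hash
    canonical = json.dumps(raw_input, sort_keys=True).encode()
    return {
        "decision_id": decision_id,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(canonical).hexdigest(),
        "output_category": output_category,
        # Bucket the raw float so no probability score is persisted
        "confidence_tier": ("high" if confidence >= 0.9
                            else "medium" if confidence >= 0.6 else "low"),
        "human_review_flag": human_reviewed,
        "data_subject_id": data_subject_id,
        "jurisdiction": jurisdiction,
    }
```

Canonicalizing the JSON before hashing matters: two dicts with the same content but different key order would otherwise produce different hashes and break audit reconciliation.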

Step 3: Document the Human Oversight Chain

The Omnibus materiality gate leans heavily on whether humans meaningfully oversee AI outputs. "Meaningful" has specific content: a human who rubber-stamps AI outputs because they lack the information or authority to override them is not meaningful oversight under EU case law interpretations of similar standards in GDPR Article 22.

Meaningful oversight documentation should cover:

- who reviews AI outputs, and at which point in the workflow;
- what information the reviewer sees alongside the AI output;
- whether the reviewer has the authority, time, and competence to override the output;
- how overrides are recorded, and how often they actually occur.

If your product cannot demonstrate this chain, the Omnibus materiality gate likely does not move you out of high-risk scope, regardless of what Trilogue #3 decides.
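One measurable signal of rubber-stamping is the override rate: if reviewers almost never overturn the AI, the review step may not be meaningful oversight. The metric and any threshold you attach to it are illustrative assumptions, not legal tests.

```python
# Illustrative oversight-quality metric: a near-zero override rate
# suggests reviews may be rubber-stamping rather than meaningful
# oversight. The metric is an assumption, not a legal standard.
def override_rate(reviews: list[dict]) -> float:
    """reviews: [{"reviewed": bool, "overridden": bool}, ...]"""
    reviewed = [r for r in reviews if r["reviewed"]]
    if not reviewed:
        return 0.0
    return sum(r["overridden"] for r in reviewed) / len(reviewed)

sample = ([{"reviewed": True, "overridden": False}] * 98
          + [{"reviewed": True, "overridden": True}] * 2)
print(f"{override_rate(sample):.2%}")  # → 2.00%: worth asking why
```

A low rate is not automatically disqualifying (the model may simply be accurate), but being able to produce the number, and explain it, is exactly the kind of documented, systematic evidence the oversight chain requires.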

Step 4: Conduct GPAI Vendor Due Diligence

Under Article 28 of the AI Act, deployers of GPAI models in high-risk applications inherit obligations based on their vendor's compliance posture. Before Trilogue finalizes GPAI threshold changes, you need to document:

- whether your vendor's model carries the systemic-risk designation;
- what technical documentation the vendor makes available to downstream deployers;
- the jurisdiction and data-transfer posture of the vendor's inference infrastructure;
- contractual commitments on incident notification and model-version changes.

The GPAI threshold change (10^25 to 10^26 FLOPs) affects primarily foundation model providers like the major US LLM vendors. But the systemic risk designation changes affect whether those vendors are required to share technical documentation with you — documentation you need to complete your own Annex IV technical file.
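Vendor due diligence is easier to keep current if it lives in a structured record rather than a one-off memo. The record shape below, including the `GPAIVendorRecord` and `gaps` names and the example vendor, is a hypothetical sketch, not a format prescribed by the Act.

```python
# Hypothetical vendor due-diligence record for the Article 28 file;
# field names, helper, and example vendor are illustrative only.
from dataclasses import dataclass

@dataclass
class GPAIVendorRecord:
    vendor: str
    model_id: str
    systemic_risk_designated: bool  # Over the FLOPs threshold?
    technical_docs_received: bool   # Needed for your own Annex IV file
    transfer_mechanism: str         # e.g. "SCCs", "EU-only processing"

def gaps(record: GPAIVendorRecord) -> list[str]:
    """Items still blocking completion of your own technical file."""
    missing = []
    if not record.technical_docs_received:
        missing.append("vendor technical documentation")
    if not record.transfer_mechanism:
        missing.append("data transfer mechanism")
    return missing

r = GPAIVendorRecord("ExampleAI", "example-model-v2", True, False, "SCCs")
print(gaps(r))  # → ['vendor technical documentation']
```

Running a gap check like this per vendor, per model version, surfaces exactly the dependency the paragraph above describes: you cannot close your own Annex IV file until the upstream documentation arrives.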


What the Omnibus Does Not Address: CLOUD Act Infrastructure Exposure

This is the part most AI Act compliance guides skip.

The EU AI Act imposes obligations about what AI systems do and how they are deployed. It does not address where the underlying compute and data infrastructure resides, other than via the existing GDPR framework's Chapter V restrictions on international data transfers.

The CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 2018) creates a structural conflict that the Omnibus does not resolve: US authorities can compel US-headquartered cloud providers to disclose data stored on their infrastructure regardless of physical server location, including data centers in the EU.

For AI systems processing personal data, this creates a problem that no simplification of the high-risk conformity assessment addresses:

Scenario A — Training data: If you fine-tune a model using EU personal data stored on a US-headquartered cloud provider's infrastructure, that training data is potentially accessible to US authorities under CLOUD Act process. GDPR Article 44 and AI Act Article 10 (training data quality and data governance) do not independently protect against this — they require adequate transfer mechanisms and data minimization, but cannot change the legal reachability of data subject to CLOUD Act jurisdiction.

Scenario B — Inference logs: If you store the audit logs required by Articles 12 and 17 on infrastructure under CLOUD Act jurisdiction, those logs — including records of AI decisions about EU individuals — are potentially accessible. This is especially acute for high-risk systems where logs contain sensitive categories of data (Article 10(5)) by necessity.

Scenario C — Model endpoints: If the GPAI model API you call is operated by a US provider, the input data you send per inference (which may include personal data for personalized systems) is transmitted to and processed by infrastructure under CLOUD Act jurisdiction.

None of the Omnibus amendments touch these channels. The conformity assessment simplification does not affect GDPR's data transfer requirements. The startup pathway does not affect Article 44. The GPAI threshold change does not affect where inference compute runs.

For SaaS developers building AI features for EU enterprise customers, this matters for commercial reasons even where individual compliance interpretation leaves ambiguity: enterprise procurement teams increasingly include "jurisdiction of AI processing infrastructure" as a vendor qualification criterion, particularly for systems in employment, financial services, healthcare, and legal contexts.


How Infrastructure Choice Intersects with Omnibus Compliance

If you are choosing infrastructure for AI workloads now, before the Omnibus outcome is finalized, the decision framework should account for both scenarios:

| Consideration | Omnibus Simplification Passes | Omnibus Stalls or Is Amended |
|---|---|---|
| High-risk status for your product | Potentially lower: materiality gate reduces scope | Current Annex III definitions remain; full conformity assessment applies |
| Conformity assessment burden | Startup pathway available, lighter QMS | Full Article 17 QMS, full Annex IV technical file |
| CLOUD Act infrastructure risk | Unchanged: Omnibus does not address it | Unchanged: the AI Act / GDPR intersection remains |
| Enterprise procurement requirements | Unchanged: GDPR Chapter V and customer requirements | Unchanged |

The asymmetry is the key point: infrastructure decisions are binary and long-term (changing cloud providers is a significant migration), while conformity assessment requirements may simplify depending on Trilogue outcome. The rational move is to optimize infrastructure choice for the stable risk (CLOUD Act/GDPR) while building compliance documentation that works under both versions of the AI Act.

EU-native PaaS infrastructure — operators incorporated and operating entirely within EU jurisdiction without US parent companies — eliminates the CLOUD Act channel. Inference compute running on such infrastructure is not reachable via US legal process. Audit logs stored there fall outside US jurisdiction. Training pipelines using EU-native storage satisfy GDPR Chapter V without requiring Standard Contractual Clauses for data in transit to US infrastructure.


Timeline: What to Expect After Trilogue #3

Assuming Trilogue #3 in May 2026 reaches agreement on the contested provisions:

- the compromise text goes to the Parliament plenary and the Council for formal adoption, typically over the following months;
- the adopted amending act is published in the Official Journal and enters into force 20 days after publication;
- the amended obligations apply from the dates set in the amending act itself, which may be later still.

The current AI Act's high-risk application obligation date is 2 August 2026 — which means that, if the parliamentary process extends into Q4 2026 or later, high-risk compliance will be required before any Omnibus simplification takes legal effect. Developers who planned to wait for Omnibus simplification before addressing high-risk obligations may find themselves in a gap period: technically required to comply with current law while waiting for simplifications that are not yet in force.

The practical implication: if your product falls into the current high-risk classification and you anticipated relief from the Omnibus, do not plan around that relief materializing before August 2026. The conformity assessment timeline requires 3–6 months minimum for a thorough Annex IV technical file. Starting that process now, under current law, means you meet the August 2026 deadline and benefit from any subsequent simplification when it arrives.


Summary

The EU AI Act Omnibus amendments are genuinely significant if they pass as proposed: the materiality gate for high-risk classification, the startup simplified assessment pathway, and the GPAI threshold adjustment all reduce compliance friction for SaaS developers. Whether these provisions survive Trilogue intact is uncertain — Parliament has shown consistent preference for stronger protections, and the final text may include conditions that offset the simplification.

Three things remain stable regardless of Trilogue outcome:

  1. The August 2026 obligation date for high-risk systems is not being moved by the Omnibus. Compliance documentation started now is usable under current law; any subsequent simplification reduces the burden rather than changing the direction.

  2. The CLOUD Act infrastructure gap is not addressed by the Omnibus. If your AI system processes EU personal data and the underlying infrastructure is under US-parent jurisdiction, you have a GDPR Chapter V exposure that no AI Act simplification removes.

  3. The human oversight chain determines whether the materiality gate helps you. Building meaningful human oversight now — documented, systematic, auditable — positions you for the materiality gate if it survives and demonstrates responsible AI deployment regardless.

Trilogue #3 is a milestone to track. It is not a reason to delay building compliant AI systems.


sota.io is EU-native PaaS infrastructure for developers who need GDPR-clean compute. Inference workloads, audit log storage, and training pipelines run on infrastructure incorporated and operated entirely within EU jurisdiction — no US-parent jurisdiction, no CLOUD Act channel. Deploy your AI workload on sota.io →