2026-05-04·11 min read·

EU AI Act GPAI Provider vs. Deployer Obligations: Developer Guide to August 2026 Enforcement

The EU AI Act creates two fundamentally different compliance tracks for anyone involved with General-Purpose AI (GPAI) models: one for GPAI model providers who develop and release foundation models, and another for deployers who integrate those models into products and services. Most SaaS developers fall into the deployer category — but the obligations are distinct, and confusing the two is a compliance risk.

August 2, 2026 marks the date when the full EU AI Act enforcement machinery becomes operational: AI Office inspection powers, national market surveillance authority coordination, and the complete technical documentation audit regime. GPAI providers who are not already compliant with Arts. 51–56 have fewer than 90 days to close gaps.

This guide covers the complete GPAI compliance picture: the provider/deployer distinction, obligations for each category, the systemic risk regime, what deployers must verify from their GPAI APIs, and how the ongoing AI Act Omnibus negotiations could shift these requirements before August.


The Core Distinction: GPAI Provider vs. Deployer

EU AI Act Art. 3(63) defines a GPAI model provider as:

"a provider of an AI system that places a general-purpose AI model on the Union market, irrespective of whether it does so under its own name or brand, for a fee or free of charge, including through the provision of AI capabilities via API access."

GPAI model itself is defined in Art. 3(63) as a model trained on large amounts of data using self-supervision at scale, that displays significant generality, and is capable of competently performing a wide range of distinct tasks, regardless of the way it is placed on the market.

The provider/deployer split works like this:

RoleDefinitionExample
GPAI model providerTrains/develops the foundation model and makes it availableOpenAI (GPT-4o), Anthropic (Claude), Meta (Llama), Mistral AI
Downstream providerBuilds a further AI system on top of GPAI capabilitiesSaaS startup integrating the OpenAI API to build a document analysis tool
DeployerUses an AI system in a professional context, not for placing it on the marketEnterprise using an off-the-shelf GPAI-powered tool internally

Key rule for SaaS developers: If you are calling a GPAI API (OpenAI, Anthropic, Gemini, Mistral, Llama via hosted API) to build your product, you are a downstream provider, not a GPAI model provider. The GPAI obligations (Arts. 53–55) fall on the API vendor — not on you.

You are a GPAI model provider if you:

You are not a GPAI model provider if you:


GPAI Provider Obligations: Arts. 53–54

If you qualify as a GPAI model provider, the following obligations apply from August 2, 2025 (12 months after EU AI Act entry into force):

Technical Documentation (Art. 53(1)(a), Annex XI)

Providers must maintain technical documentation covering:

This documentation must be kept current and made available to the AI Office on request.

Information for Downstream Providers (Art. 53(1)(b))

GPAI providers must give downstream providers and deployers the information necessary to enable their compliance. At minimum this includes:

This is why GPAI providers publish model cards, system cards, and usage policies — they are partly discharging Art. 53(1)(b) obligations.

Providers must implement and publish a policy for respecting rights holders' DSM Directive Art. 4 TDM opt-out reservations during training data collection. This includes:

Training Data Summary (Art. 53(1)(d))

Providers must publish a sufficiently detailed summary of training data, including:

The GPAI Code of Practice (CoP) Chapters 2–3 operationalise how this summary must be structured.

Machine-Readable Identifier for AI-Generated Content (Art. 53(1)(e))

GPAI providers must ensure their model's outputs can be marked with machine-readable metadata indicating the content was AI-generated, to the extent technically feasible. This supports downstream transparency obligations under Art. 50.


GPAI with Systemic Risk: Additional Obligations Under Art. 55

GPAI models that meet the systemic risk threshold face significantly heavier obligations. The threshold under Art. 51(1) is:

Training compute exceeding 10^25 floating-point operations (FLOPs)

As of 2026, only a small number of models cross this threshold (GPT-4 class and above, Gemini Ultra class). The Commission can also designate models below this threshold as systemic risk if they present systemic risks based on a case-by-case assessment.

Providers of GPAI models with systemic risk must additionally:

ObligationArticleRequirement
Model evaluationArt. 55(1)(a)Perform standardised model evaluation against state-of-the-art methodologies, including adversarial testing
Systemic risk assessmentArt. 55(1)(b)Track and document possible systemic risks at EU or global level
Incident reportingArt. 55(1)(c)Report serious incidents and corrective actions to the AI Office without undue delay
Cybersecurity protectionArt. 55(1)(d)Ensure adequate protection against cybersecurity attacks, including model weights and training infrastructure

Systemic risk providers must also cooperate with AI Office-initiated evaluations, including by providing access to model weights where requested (Art. 55(2)).


What Deployers and Downstream Providers Must Do

Even if you are not a GPAI model provider, EU AI Act transparency obligations apply to anyone deploying AI-generated content toward end users.

Art. 50: Transparency Obligations for Deployers

Art. 50(1): Deployers using AI systems that interact directly with persons must inform those persons that they are interacting with an AI system — unless the context makes it obvious.

Art. 50(2): Deployers using AI systems that generate synthetic audio, image, video, or text must ensure that the outputs are marked with a machine-readable disclosure that they are AI-generated.

Exceptions apply for content that clearly serves a legitimate purpose (parody, satire, artistic work where disclosure would undermine the purpose).

Art. 50(4) — Deepfake obligation: This is the hardest requirement. Deployers who generate or disseminate deepfakes (realistic AI-generated images/video of real persons) must label them as AI-generated, clearly and visibly. No parody/satire exception applies for deepfakes of real persons in contexts where harm is plausible.

What to Verify from Your GPAI API Provider

Downstream providers should verify the following from their GPAI vendor before August 2026:

Compliance checklist for GPAI API consumers:

If a GPAI provider cannot answer these questions, they may be in breach of Art. 53 — and downstream providers who rely on non-compliant GPAI APIs carry residual risk if regulators interpret Art. 53(1)(b) information duties as creating downstream due diligence obligations.


Open-Weight Models: A Special Case

The EU AI Act's treatment of open-weight GPAI models (LLaMA 3, Mistral, Qwen) is nuanced:

Art. 53(2): GPAI model providers who release model weights under an open-source licence are exempt from some Art. 53(1) obligations — specifically they do not need to provide downstream information under 53(1)(b) in the same structured way, because anyone can inspect the weights.

However, they are not exempt from:

Practical implication: If you self-host LLaMA or Mistral for a commercial product, you are the deployer of an open-weight GPAI model. You do not become the GPAI model provider (Meta/Mistral still bears that obligation). But if you fine-tune the weights and release the result externally, you may become a downstream GPAI provider for the modified model.


Fine-Tuning: When Downstream Providers Become GPAI Providers

The line between "integrating a GPAI API" and "being a GPAI provider" can blur in fine-tuning scenarios.

You remain a downstream provider/deployer if:

You may become a GPAI provider if:

The Commission's guidance on this boundary is expected in the AI Office's model provider technical standards (due 2026). Until then, the Art. 3(63) definition and recital 97 guidance provide the primary framework.


August 2, 2026: What Full Enforcement Means

The EU AI Act applied GPAI rules (Arts. 51–56) from August 2, 2025. But August 2, 2026 is the date when the full enforcement framework becomes operational:

From August 2, 2026Detail
AI Office full enforcement powersComplete inspection and investigation authority under Art. 88
National market surveillance authority coordinationMember states fully operational under Arts. 70–89
Technical documentation auditsAI Office can request and review Annex XI documentation on demand
Standardised evaluation protocolsGPAI evaluation standards referenced in CoP become audit-ready
Penalty regime fully activeArts. 99–101 penalties (up to €35M or 7% global turnover for systemic risk violations)

GPAI providers who have not yet produced Annex XI documentation, published training data summaries, or established copyright compliance policies have until August 2, 2026 to complete these before the full enforcement window opens.


AI Act Omnibus: Potential Changes Before August 2026

The EU AI Act Omnibus (Commission proposal from 2026) may modify GPAI obligations if Trilogue #3 (scheduled May 13, 2026) reaches a political agreement. Proposed changes relevant to GPAI include:

Systemic risk threshold: Discussion of raising from 10^25 to 10^26 FLOPs — which would remove some current systemic risk designations and shift some providers down to the standard Art. 53 track.

SME simplification: Proposed Art. 53 lighter regime for providers with fewer than 250 employees or below €50M turnover — simplified documentation requirements, self-assessment rather than third-party audit.

CoP status clarification: Omnibus may make CoP participation more formally optional for non-systemic-risk providers, with alternative compliance paths.

Important: If Trilogue #3 does not reach agreement by June 30, 2026 (Cypriot Presidency deadline), the current EU AI Act text applies unchanged. Given that Trilogue #2 (April 28, 2026) collapsed without agreement, GPAI providers should plan for compliance against the existing Art. 53–55 framework as a baseline.


Compliance Checklist by Role

If you are a GPAI model provider:

If you are a downstream provider/SaaS developer on GPAI APIs:


Infrastructure and GPAI Compliance

GPAI inference runs on data centre infrastructure. Where that infrastructure is located has compliance implications for the training data pipeline and inference logs:

For SaaS developers deploying GPAI-powered features, hosting your application on EU-native infrastructure (like sota.io) ensures that the data handling layer between your users and the GPAI API stays within EU jurisdiction — even if the GPAI provider's own infrastructure is US-hosted.


Summary

The EU AI Act GPAI framework creates two distinct compliance tracks. Most SaaS developers are downstream providers or deployers — they benefit from GPAI provider compliance (especially Art. 53(1)(b) information duties) but their own primary obligation is the Art. 50 transparency regime.

GPAI model providers (foundation model developers) face the heavier Arts. 53–55 obligations, with additional systemic risk requirements for the largest models. Full enforcement begins August 2, 2026. The AI Act Omnibus may modify thresholds and simplify SME obligations — but given Trilogue uncertainty, the current text is the safe planning baseline.

For any team integrating GPAI APIs into a commercial product, the immediate action is verifying your GPAI vendor's Art. 53 compliance status and ensuring your own Art. 50 disclosure obligations are implemented before the August 2026 enforcement window opens.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.