EU AI Act GPAI Provider vs. Deployer Obligations: Developer Guide to August 2026 Enforcement
The EU AI Act creates two fundamentally different compliance tracks for anyone involved with General-Purpose AI (GPAI) models: one for GPAI model providers who develop and release foundation models, and another for the downstream providers and deployers who integrate those models into products and services. Most SaaS developers fall into the downstream category, but the obligations are distinct, and confusing the two is a compliance risk.
August 2, 2026 marks the date when the full EU AI Act enforcement machinery becomes operational: AI Office inspection powers, national market surveillance authority coordination, and the complete technical documentation audit regime. GPAI providers who are not already compliant with Arts. 51–56 have fewer than 90 days to close gaps.
This guide covers the complete GPAI compliance picture: the provider/deployer distinction, obligations for each category, the systemic risk regime, what deployers must verify from their GPAI APIs, and how the ongoing AI Act Omnibus negotiations could shift these requirements before August.
The Core Distinction: GPAI Provider vs. Deployer
EU AI Act Art. 3(3) defines a provider as the natural or legal person that develops an AI system or a general-purpose AI model (or has one developed) and places it on the Union market under its own name or trademark, whether for payment or free of charge. Recital 97 makes clear that placing a GPAI model on the market includes making it available through libraries, APIs, direct download, or physical copy.
The GPAI model itself is defined in Art. 3(63) as a model trained on a large amount of data using self-supervision at scale, that displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of how it is placed on the market, and can be integrated into a variety of downstream systems or applications.
The provider/deployer split works like this:
| Role | Definition | Example |
|---|---|---|
| GPAI model provider | Trains/develops the foundation model and makes it available | OpenAI (GPT-4o), Anthropic (Claude), Meta (Llama), Mistral AI |
| Downstream provider | Builds a further AI system on top of GPAI capabilities | SaaS startup integrating the OpenAI API to build a document analysis tool |
| Deployer | Uses an AI system in a professional context, not for placing it on the market | Enterprise using an off-the-shelf GPAI-powered tool internally |
Key rule for SaaS developers: If you are calling a GPAI API (OpenAI, Anthropic, Gemini, Mistral, Llama via hosted API) to build your product, you are a downstream provider, not a GPAI model provider. The GPAI obligations (Arts. 53–55) fall on the API vendor — not on you.
You are a GPAI model provider if you:
- Train your own foundation model and release it (via API, open weights, or hosted product)
- Fine-tune a base model so significantly that the resulting model has different general capabilities and you place that on the market
- Aggregate multiple GPAI models into a new general-purpose product with its own trained components
You are not a GPAI model provider if you:
- Call an external GPAI API to power your application
- Use RAG, prompt engineering, or system prompting without training new model weights
- Fine-tune a model for a narrow domain and use it only internally
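The criteria above can be sketched as a rough self-assessment helper. This is an illustrative sketch only, not legal advice; the field names and the mapping logic are assumptions, not terms taken from the Act.

```python
from dataclasses import dataclass

@dataclass
class AIActivity:
    trains_own_foundation_model: bool = False
    releases_model_externally: bool = False        # API, open weights, or hosted product
    fine_tune_changes_general_capabilities: bool = False
    calls_external_gpai_api: bool = False

def classify_role(a: AIActivity) -> str:
    """Rough mapping of the criteria above to a role label (illustrative only)."""
    if a.trains_own_foundation_model and a.releases_model_externally:
        return "GPAI model provider"
    if a.fine_tune_changes_general_capabilities and a.releases_model_externally:
        # Significantly modified model placed on the market
        return "GPAI model provider"
    if a.calls_external_gpai_api:
        return "downstream provider"
    return "deployer"
```

For example, `classify_role(AIActivity(calls_external_gpai_api=True))` lands in the downstream provider track, which is where most SaaS teams sit.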
GPAI Provider Obligations: Arts. 53–54
If you qualify as a GPAI model provider, the following obligations apply from August 2, 2025 (12 months after EU AI Act entry into force):
Technical Documentation (Art. 53(1)(a), Annex XI)
Providers must maintain technical documentation covering:
- The model's architecture, training methodology, and data pipeline
- Training compute (FLOPs), training dataset composition and sources
- Model capabilities and known limitations
- Performance benchmarks across relevant task categories
- Evaluation results, including safety and red-team findings
- Measures to prevent foreseeable misuse
This documentation must be kept current and made available to the AI Office on request.
Information for Downstream Providers (Art. 53(1)(b))
GPAI providers must give downstream providers and deployers the information necessary to enable their compliance. At minimum this includes:
- Intended uses and known prohibited use cases
- Interaction restrictions (content policy, rate limits, safety mitigations)
- Technical details relevant to downstream transparency obligations (Art. 50)
- Any limitations that could affect downstream high-risk AI system compliance
This is why GPAI providers publish model cards, system cards, and usage policies — they are partly discharging Art. 53(1)(b) obligations.
Copyright Compliance Policy (Art. 53(1)(c))
Providers must implement and publish a policy for respecting rights holders' DSM Directive Art. 4 TDM opt-out reservations during training data collection. This includes:
- A process for identifying and honouring `robots.txt` disallow directives and structured TDM opt-out headers
- Documentation of how web crawls respect machine-readable reservations
- A policy for ToS-based opt-outs on third-party platforms
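The robots.txt portion of such a policy can be checked mechanically. A minimal sketch using Python's standard `urllib.robotparser`, with a hypothetical crawler name `ExampleAIBot`; a real pipeline would also honour TDM-specific signals such as the W3C TDM Reservation Protocol's `tdm-reservation` header.

```python
from urllib import robotparser

def crawl_allowed(robots_lines: list, user_agent: str, path: str) -> bool:
    """Return True if robots.txt permits `user_agent` to fetch `path`.

    robots.txt disallow rules are one machine-readable opt-out signal a
    TDM policy must honour; this check alone is not a complete policy.
    """
    rp = robotparser.RobotFileParser()
    rp.parse(robots_lines)
    return rp.can_fetch(user_agent, path)

rules = [
    "User-agent: ExampleAIBot",   # hypothetical training crawler
    "Disallow: /articles/",
]
```

Here `crawl_allowed(rules, "ExampleAIBot", "/articles/post-1")` is false, so the crawler must skip that path, while paths outside `/articles/` remain fetchable.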
Training Data Summary (Art. 53(1)(d))
Providers must publish a sufficiently detailed summary of training data, including:
- Data sources (web crawl, licensed datasets, synthetic data percentages)
- Languages and modalities covered
- Any data filtering, deduplication, or curation steps
- Known limitations or biases in the training corpus
The Commission's published training data summary template sets out how this summary must be structured; the GPAI Code of Practice (CoP) provides further guidance for signatories.
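A machine-readable rendering of such a summary might look like the sketch below. The field names are illustrative assumptions for internal tracking, not the official template.

```python
import json

# Illustrative structure only: field names are assumptions, not the
# official training data summary template.
training_data_summary = {
    "model": "example-model-v1",            # hypothetical model name
    "sources": {"web_crawl_pct": 60, "licensed_pct": 25, "synthetic_pct": 15},
    "languages": ["en", "de", "fr"],
    "modalities": ["text"],
    "curation_steps": ["deduplication", "quality filtering", "PII scrubbing"],
    "known_limitations": ["under-representation of low-resource languages"],
}

print(json.dumps(training_data_summary, indent=2))
```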
Machine-Readable Marking of AI-Generated Content (Art. 50(2))
Where a GPAI provider exposes its model through an API or hosted product, it is also the provider of a generative AI system under Art. 50(2): outputs must be marked in a machine-readable format as artificially generated, to the extent technically feasible. This underpins the downstream transparency obligations across the rest of Art. 50.
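One way such a marker can travel with API output is as provenance metadata on the response. A minimal sketch; real deployments use standards such as C2PA manifests, and the field names below are assumptions, not a defined standard.

```python
from datetime import datetime, timezone

def mark_ai_generated(payload: dict, model_id: str) -> dict:
    """Attach a machine-readable AI-generation marker to an API response.

    Illustrative sketch only; production systems would use a recognised
    provenance standard (e.g. C2PA) rather than these ad-hoc fields.
    """
    payload["provenance"] = {
        "ai_generated": True,
        "generator": model_id,
        "marked_at": datetime.now(timezone.utc).isoformat(),
    }
    return payload

response = mark_ai_generated({"text": "generated summary"}, "example-model-v1")
```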
GPAI with Systemic Risk: Additional Obligations Under Art. 55
GPAI models that meet the systemic risk threshold face significantly heavier obligations. The threshold under Art. 51(1) is:
Training compute exceeding 10^25 floating-point operations (FLOPs)
As of 2026, only a small number of models cross this threshold (GPT-4 class and above, Gemini Ultra class). The Commission can also designate models below this threshold as systemic risk if they present systemic risks based on a case-by-case assessment.
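Whether a training run crosses the threshold can be ball-parked with the common rule of thumb of roughly 6 FLOPs per parameter per training token. The model sizes below are hypothetical examples, not measurements of any named model.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

THRESHOLD = 1e25  # Art. 51(1) systemic risk presumption

# A hypothetical 70B-parameter model trained on 15T tokens:
flops_small = training_flops(70e9, 15e12)    # ~6.3e24, below the threshold
# A hypothetical 500B-parameter model trained on 20T tokens:
flops_large = training_flops(500e9, 20e12)   # ~6.0e25, above the threshold
```

Both figures are back-of-the-envelope; actual designation depends on the cumulative compute of the full training run, including any continued pre-training.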
Providers of GPAI models with systemic risk must additionally:
| Obligation | Article | Requirement |
|---|---|---|
| Model evaluation | Art. 55(1)(a) | Perform standardised model evaluation against state-of-the-art methodologies, including adversarial testing |
| Systemic risk assessment | Art. 55(1)(b) | Track and document possible systemic risks at EU or global level |
| Incident reporting | Art. 55(1)(c) | Report serious incidents and corrective actions to the AI Office without undue delay |
| Cybersecurity protection | Art. 55(1)(d) | Ensure adequate protection against cybersecurity attacks, including model weights and training infrastructure |
Systemic risk providers must also cooperate with AI Office-initiated evaluations, including by providing access to model weights where requested (Art. 55(2)).
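For the Art. 55(1)(c) incident-reporting duty, it helps to keep incident records in a structured, reportable form from the start. The AI Office has not published a mandatory schema, so every field name below is an assumption.

```python
import json

# Hypothetical incident record for Art. 55(1)(c) reporting; all field
# names are assumptions, not an official AI Office schema.
incident = {
    "model_id": "example-model-v1",
    "occurred_at": "2026-03-14T09:30:00+00:00",
    "classification": "serious incident",
    "description": "Safety mitigation bypass enabling a disallowed output category",
    "corrective_actions": ["mitigation patched", "affected customers notified"],
    "recipient": "EU AI Office",
}

def incident_report(record: dict) -> str:
    """Serialise a complete incident record for submission without undue delay."""
    required = {"model_id", "occurred_at", "description", "corrective_actions"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"incomplete incident record: {sorted(missing)}")
    return json.dumps(record, indent=2)
```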
What Deployers and Downstream Providers Must Do
Even if you are not a GPAI model provider, EU AI Act transparency obligations apply to anyone deploying AI-generated content toward end users.
Art. 50: Transparency Obligations for Downstream Providers and Deployers
Art. 50(1): Providers of AI systems that interact directly with natural persons must ensure those persons are informed that they are interacting with an AI system, unless this is obvious from the context. If you build a chatbot or assistant on a GPAI API, you are the provider of that system, and this duty is yours.
Art. 50(2): Providers of AI systems that generate synthetic audio, image, video, or text must ensure the outputs are marked in a machine-readable format as artificially generated or manipulated.
Art. 50(2) carries its own exceptions, for example where the AI system performs an assistive or standard editing function and does not substantially alter the input provided.
Art. 50(4), the deepfake obligation: this is the hardest requirement. Deployers who generate or disseminate deepfakes (realistic AI-generated image, audio, or video depicting real persons) must disclose that the content has been artificially generated or manipulated, clearly and visibly. For evidently artistic, creative, satirical, or fictional works the obligation is narrowed rather than removed: disclosure is still required, but in a manner that does not hamper the display or enjoyment of the work.
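On the product side, these disclosure duties can be wired into the response path once. A minimal sketch, assuming a JSON-returning chat endpoint; the field names are illustrative, since the Act does not prescribe a format.

```python
def with_transparency(reply: str, synthetic: bool = True) -> dict:
    """Wrap a chatbot reply with Art. 50-style disclosures (illustrative).

    - a human-readable notice supporting the interaction disclosure
    - a machine-readable flag supporting synthetic-content marking
    The Act prescribes neither these field names nor this format.
    """
    return {
        "reply": reply,
        "disclosure": "You are interacting with an AI system.",
        "ai_generated": synthetic,
    }

payload = with_transparency("Here is your contract summary.")
```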
What to Verify from Your GPAI API Provider
Downstream providers should verify the following from their GPAI vendor before August 2026:
Compliance checklist for GPAI API consumers:
- Does the provider publish Annex XI technical documentation (or equivalent)?
- Has the provider published a training data summary under Art. 53(1)(d)?
- Does the provider have a documented copyright compliance policy (TDM opt-out)?
- Does the API return machine-readable AI-generation markers in responses?
- For systemic risk models: has the provider published their model evaluation results?
- Does the provider's acceptable use policy align with the intended use case?
- Has the provider disclosed known limitations relevant to the deployment context?
If a GPAI provider cannot answer these questions, they may be in breach of Art. 53 — and downstream providers who rely on non-compliant GPAI APIs carry residual risk if regulators interpret Art. 53(1)(b) information duties as creating downstream due diligence obligations.
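The checklist above can live as a small due-diligence record per vendor. A sketch, assuming the answers are gathered manually from the vendor's published documentation; the keys mirror this article's checklist, not any official form.

```python
# Hypothetical vendor due-diligence record; the answers would come from
# the GPAI vendor's published documentation, not from this script.
VENDOR_CHECKS = {
    "annex_xi_documentation": True,
    "training_data_summary": True,
    "copyright_tdm_policy": True,
    "machine_readable_markers": False,
    "aup_reviewed": True,
    "limitations_disclosed": True,
}

def open_gaps(checks: dict) -> list:
    """Return the unmet checks to raise with the vendor before August 2026."""
    return sorted(key for key, ok in checks.items() if not ok)
```

In this hypothetical record, `open_gaps(VENDOR_CHECKS)` flags the missing machine-readable markers as the item to escalate.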
Open-Weight Models: A Special Case
The EU AI Act's treatment of open-weight GPAI models (Llama 3, Mistral, Qwen) is nuanced:
Art. 53(2): GPAI model providers who release a model under a free and open-source licence, with the parameters (including weights), architecture, and usage information made publicly available, are exempt from the technical documentation and downstream information obligations in Art. 53(1)(a) and (b), on the rationale that anyone can inspect the model directly.
However, they are not exempt from:
- The copyright compliance policy (Art. 53(1)(c))
- The training data summary (Art. 53(1)(d))
- All systemic risk obligations under Art. 55 (if threshold met)
Practical implication: If you self-host Llama or Mistral for a commercial product, you are the deployer of an open-weight GPAI model. You do not become the GPAI model provider (Meta or Mistral still bears that obligation). But if you fine-tune the weights and release the result externally, you may become the provider of the modified model.
Fine-Tuning: When Downstream Providers Become GPAI Providers
The line between "integrating a GPAI API" and "being a GPAI provider" can blur in fine-tuning scenarios.
You remain a downstream provider/deployer if:
- You apply LoRA or RLHF fine-tuning to a base model for a narrow task (e.g., medical document summarisation for your internal tool)
- The resulting model retains the GPAI character of the original but is only used internally
- You use instruction-tuning without modifying the base weights significantly
You may become a GPAI provider if:
- You train significant new capabilities into a model and release it via API for others to build on
- You release fine-tuned weights publicly under an open-source licence where the model has general-purpose character
- You aggregate multiple models into a new general-purpose product that you place on the market
The Commission's guidance on this boundary is expected in the AI Office's model provider technical standards (due 2026). Until then, the Art. 3(63) definition and the guidance in recitals 97 and 109 provide the primary framework.
August 2, 2026: What Full Enforcement Means
The EU AI Act applied GPAI rules (Arts. 51–56) from August 2, 2025. But August 2, 2026 is the date when the full enforcement framework becomes operational:
| From August 2, 2026 | Detail |
|---|---|
| AI Office full enforcement powers | Complete inspection and investigation authority under Art. 88 |
| National market surveillance authority coordination | Member states fully operational under Arts. 70–89 |
| Technical documentation audits | AI Office can request and review Annex XI documentation on demand |
| Standardised evaluation protocols | GPAI evaluation standards referenced in CoP become audit-ready |
| Penalty regime fully active | Arts. 99–101 penalties; for GPAI providers, Art. 101 fines of up to €15M or 3% of total worldwide annual turnover, whichever is higher |
GPAI providers who have not yet produced Annex XI documentation, published training data summaries, or established copyright compliance policies have until August 2, 2026 to complete these before the full enforcement window opens.
AI Act Omnibus: Potential Changes Before August 2026
The EU AI Act Omnibus (Commission proposal from 2026) may modify GPAI obligations if Trilogue #3 (scheduled May 13, 2026) reaches a political agreement. Proposed changes relevant to GPAI include:
Systemic risk threshold: Discussion of raising from 10^25 to 10^26 FLOPs — which would remove some current systemic risk designations and shift some providers down to the standard Art. 53 track.
SME simplification: Proposed Art. 53 lighter regime for providers with fewer than 250 employees or below €50M turnover — simplified documentation requirements, self-assessment rather than third-party audit.
CoP status clarification: Omnibus may make CoP participation more formally optional for non-systemic-risk providers, with alternative compliance paths.
Important: If Trilogue #3 does not reach agreement by June 30, 2026 (Cypriot Presidency deadline), the current EU AI Act text applies unchanged. Given that Trilogue #2 (April 28, 2026) collapsed without agreement, GPAI providers should plan for compliance against the existing Art. 53–55 framework as a baseline.
Compliance Checklist by Role
If you are a GPAI model provider:
- Annex XI technical documentation complete and maintained
- Training data summary published and current
- Copyright/TDM opt-out policy documented and implemented
- Downstream provider information package prepared (Art. 53(1)(b))
- Machine-readable AI content markers implemented in API responses
- If systemic risk: Model evaluation completed, incident reporting channel established, cybersecurity assessment done
If you are a downstream provider/SaaS developer on GPAI APIs:
- Confirmed GPAI API vendor's Art. 53 compliance status
- Art. 50(1) user disclosure in product UI (AI interaction disclosure)
- Art. 50(2) AI-generation marking for synthetic content outputs
- Art. 50(4) deepfake labelling implemented if applicable
- Acceptable use policy reviewed for compliance with GPAI vendor's terms
- GPAI provider's limitations documented for your risk assessment
Infrastructure and GPAI Compliance
GPAI inference runs on data centre infrastructure. Where that infrastructure is located has compliance implications for the training data pipeline and inference logs:
- GPAI providers logging inference calls for safety monitoring may create personal data under GDPR if prompts contain user data
- EU-hosted inference infrastructure keeps inference logs within EU jurisdiction — no cross-border CLOUD Act exposure for safety logs
- Training runs on EU infrastructure avoid the jurisdiction risk for training data intermediary storage
For SaaS developers deploying GPAI-powered features, hosting your application on EU-native infrastructure (like sota.io) ensures that the data handling layer between your users and the GPAI API stays within EU jurisdiction — even if the GPAI provider's own infrastructure is US-hosted.
Summary
The EU AI Act GPAI framework creates two distinct compliance tracks. Most SaaS developers are downstream providers or deployers — they benefit from GPAI provider compliance (especially Art. 53(1)(b) information duties) but their own primary obligation is the Art. 50 transparency regime.
GPAI model providers (foundation model developers) face the heavier Arts. 53–55 obligations, with additional systemic risk requirements for the largest models. Full enforcement begins August 2, 2026. The AI Act Omnibus may modify thresholds and simplify SME obligations — but given Trilogue uncertainty, the current text is the safe planning baseline.
For any team integrating GPAI APIs into a commercial product, the immediate action is verifying your GPAI vendor's Art. 53 compliance status and ensuring your own Art. 50 disclosure obligations are implemented before the August 2026 enforcement window opens.
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.