EU AI Act Art.28 & Art.29: What SaaS Developers Owe When They Use Someone Else's AI Model
You didn't train the model. You didn't write the safety documentation. You signed up for an API key and started shipping features.
But under EU AI Act Articles 28 and 29, you may have inherited a stack of compliance obligations that your model provider never warned you about.
This guide covers exactly what transfers from AI provider to deployer, when a SaaS developer crosses the line from "just using an API" to downstream provider with full regulatory accountability, and what your compliance calendar looks like heading into the August 2, 2026 enforcement window.
The Two Roles: Provider vs Deployer
The EU AI Act defines two primary actors in any AI system deployment chain:
Provider (Recital 80, Art.3(3)): The entity that develops an AI system or general-purpose AI model and places it on the market, or puts it into service under its own name or trademark, whether for payment or free of charge. Providers bear the heaviest documentation and conformity obligations.
Deployer (Art.3(4)): A natural or legal person using an AI system under their own authority for a professional activity. Deployers accept a narrower but real obligation set.
If you're a SaaS developer calling the OpenAI, Anthropic, or Google AI APIs to power features in your product, you are a deployer by default.
Here's the problem: deployer status does not mean zero compliance obligations. It means a different set — and Art.28 governs exactly when those obligations escalate toward provider-level accountability.
Article 28: When You Inherit Provider Obligations
Art.28 is the critical rule. It defines the conditions under which a deployer becomes subject to provider obligations.
Art.28(1): The Three Triggers
A deployer becomes subject to provider obligations when they:
- Place an AI system on the market under their own name or trademark — even if the underlying model is third-party
- Make a substantial modification to a high-risk AI system already placed on the market
- Modify the intended purpose of a system in a way that makes it high-risk (or more high-risk) when it was not designed for that purpose
What Each Trigger Means in Practice
Trigger 1 — Own name/trademark:
If you're building a product (call it "TalentScreen.io") that uses an LLM to rank job applicants — and you're selling TalentScreen.io as your product — you have placed a high-risk AI system (Annex III, Point 4: employment decisions) on the market under your own brand.
The fact that the LLM underneath is GPT-4 or Claude doesn't insulate you: the provider of TalentScreen.io is you, not OpenAI or Anthropic.
Trigger 2 — Substantial modification:
If you take a foundation model and fine-tune it on your own dataset, extend it with a retrieval-augmented generation (RAG) system that changes how it processes personal data, or wrap it in an agentic loop that makes autonomous decisions — you may have made a "substantial modification" under Art.28(2).
The AI Act does not define "substantial modification" with mathematical precision, but Art.28(2) ties it to a modification that affects the AI system's compliance status. Forthcoming Commission guidance is expected to sharpen the boundary, but the working interpretation is: if your modification changes the risk profile of the system, you may have crossed the line.
Trigger 3 — Intended purpose change:
Suppose you take an image classification model cleared for retail product identification and deploy it to flag "suspicious" faces at building entrances. That's a purpose change that may trigger high-risk status (Annex III, Point 1: biometric categorisation). Under Art.28(1)(c), you'd carry provider obligations even though you didn't modify a line of model code.
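To make the three triggers operational in a release process, it can help to encode them as an explicit gate check before an AI feature ships. The sketch below is illustrative, not a legal test: the class, field names, and trigger descriptions are hypothetical shorthand for the conditions described above.

```python
# Hypothetical release-gate check for the three Art.28(1) triggers.
# Illustrative only: "substantial modification" and "purpose change"
# ultimately require legal judgment, not a boolean flag.
from dataclasses import dataclass


@dataclass
class AIFeature:
    name: str
    sold_under_own_brand: bool       # trigger 1: own name or trademark
    substantially_modified: bool     # trigger 2: fine-tuning, RAG, agentic loops
    repurposed_into_high_risk: bool  # trigger 3: new high-risk intended purpose


def art28_triggers(feature: AIFeature) -> list[str]:
    """Return which Art.28(1) triggers apply, if any."""
    triggers = []
    if feature.sold_under_own_brand:
        triggers.append("own name/trademark: provider of the branded system")
    if feature.substantially_modified:
        triggers.append("substantial modification of a high-risk system")
    if feature.repurposed_into_high_risk:
        triggers.append("intended-purpose change into high-risk territory")
    return triggers  # a non-empty list means provider obligations apply
```

A non-empty result is the cue to treat the feature as provider-scope and budget for conformity work accordingly.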
Art.28(3): The Information Right
Even when a deployer doesn't cross the Art.28(1) threshold, Art.28(3) gives deployers the right to request technical documentation from providers. If a provider refuses or can't supply it within 30 days, the deployer is entitled to report this to the national market surveillance authority.
This is practically useful: before deploying a model in a high-risk use case, request the provider's EU Declaration of Conformity and technical file. The answer (or refusal) tells you a great deal about your compliance exposure.
Article 29: The Baseline Deployer Obligations
For deployers who haven't triggered Art.28(1), i.e., standard API usage with no rebranding, substantial modification, or repurposing, Art.29 sets the floor. Its obligations bite when the system you deploy is high-risk.
Art.29(1): Follow the Instructions
Deployers must use high-risk AI systems "in accordance with the instructions for use accompanying the system." This sounds trivial, but it has real implications:
- If the provider's documentation says the model should not be used for automated creditworthiness decisions without human review, and you remove the human review step, you've violated Art.29(1).
- If the model card specifies maximum context length or input validation requirements, deploying outside those parameters may shift liability toward you.
Document which version of the model you're using, what the provider's instructions specify for your use case, and how your implementation adheres to those instructions.
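One concrete way to enforce this is to validate every call against the provider's documented constraints before it leaves your service. A minimal sketch; the constraint values below are hypothetical placeholders for whatever your provider's instructions actually specify.

```python
# Hypothetical guard enforcing the provider's documented usage
# constraints (Art.29(1)). Replace the values with the limits from
# your provider's actual instructions for use.
PROVIDER_CONSTRAINTS = {
    "model_version": "example-model-v2",  # pin the exact deployed version
    "max_input_chars": 32_000,            # stand-in for a documented input limit
    "human_review_required": True,        # e.g. for consequential decisions
}


def validate_call(prompt: str, human_review_scheduled: bool) -> None:
    """Refuse calls that would fall outside the documented instructions for use."""
    if len(prompt) > PROVIDER_CONSTRAINTS["max_input_chars"]:
        raise ValueError("Input exceeds the provider's documented limit")
    if PROVIDER_CONSTRAINTS["human_review_required"] and not human_review_scheduled:
        raise ValueError("Provider instructions require human review for this use case")
```

Failing loudly at the boundary gives you an audit trail showing that out-of-instructions usage was blocked, not silently tolerated.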
Art.29(2): Human Oversight
Deployers of high-risk AI systems must ensure genuine human oversight, not theater. Art.29(2) requires:
- Assigning the oversight function to persons with the necessary competence, authority, and resources
- Ensuring those persons understand the system's capacities and limitations
- Ensuring they can intervene or halt the system if needed
The key word is genuine: a checkbox in a UI that users click past without reading doesn't satisfy Art.29(2). The emerging interpretive standard is "meaningful human control": the reviewing human must have both the capability and the actual authority to override the AI.
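One way to make oversight structural rather than cosmetic is to gate consequential actions behind an explicit, attributable reviewer decision. A minimal sketch with hypothetical names throughout; the point is that there is no default-approve path.

```python
# Hypothetical human-oversight gate for a high-risk decision path (Art.29(2)).
# The reviewer must take an explicit action; there is no default "approve".
from dataclasses import dataclass


@dataclass
class ReviewOutcome:
    reviewer_id: str           # who reviewed (competence + authority, per Art.29(2))
    approved: bool
    override_reason: str = ""  # required when the reviewer rejects the AI output


def gated_decision(ai_output: dict, review: ReviewOutcome) -> dict:
    """Act on AI output only after an explicit, attributable human decision."""
    if not review.reviewer_id:
        raise ValueError("Art.29(2): decision requires an identified reviewer")
    if not review.approved:
        # The human overrides the system: record why and halt the automated path
        return {"action": "halted", "reason": review.override_reason}
    return {"action": "proceed", "output": ai_output, "reviewed_by": review.reviewer_id}
```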
Art.29(3): Input Data Quality
Deployers who control the input data fed to a high-risk AI system bear responsibility for ensuring that data's relevance and representativeness. If you're feeding personal data from your user base into a model that makes consequential decisions, you need:
- A data quality assessment process
- Documentation that the training-distribution assumptions match your actual deployment population
- A process to detect and respond to distribution drift
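For the drift-detection piece, a simple starting point is the population stability index (PSI) between the population your provider documented and the population you actually serve. The sketch below assumes you have already bucketed a feature identically in both distributions; the 0.2 alert threshold is a common industry rule of thumb, not an AI Act requirement.

```python
# Minimal distribution-drift check (Art.29(3)) using the population
# stability index (PSI) over identically bucketed feature counts.
import math


def psi(expected_counts: list[int], actual_counts: list[int]) -> float:
    """PSI between a reference distribution and the live deployment distribution."""
    exp_total, act_total = sum(expected_counts), sum(actual_counts)
    score = 0.0
    for exp, act in zip(expected_counts, actual_counts):
        # Smooth empty buckets to avoid log(0)
        e = max(exp / exp_total, 1e-6)
        a = max(act / act_total, 1e-6)
        score += (a - e) * math.log(a / e)
    return score


# Rule of thumb: < 0.1 stable, 0.1-0.2 monitor, > 0.2 investigate before
# relying on the system's outputs for consequential decisions.
if psi([400, 350, 250], [380, 150, 470]) > 0.2:
    print("Input distribution drift detected: trigger a data quality review")
```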
Art.29(4): Fundamental Rights Impact Assessment (High-Risk, Public Bodies + Private Deployers)
Art.29(4) applies specifically to deployers of high-risk AI systems that are:
- Public authorities, OR
- Private entities providing public services, or deploying high-risk systems for creditworthiness assessment or life and health insurance pricing (notably banks and insurers)
These deployers must conduct a Fundamental Rights Impact Assessment (FRIA) before deployment. The FRIA covers:
- Which fundamental rights may be affected (dignity, non-discrimination, data protection, freedom of expression, etc.)
- The specific population likely to be impacted
- Measures to mitigate identified risks
- A plan for monitoring rights impacts post-deployment
The FRIA must be documented and submitted to the relevant national competent authority before the system goes live.
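The assessment is easier to keep current if those four elements live in a structured record rather than a one-off PDF. A minimal sketch with hypothetical field names:

```python
# Hypothetical FRIA record mirroring the four Art.29(4) elements above.
from dataclasses import dataclass, field


@dataclass
class FundamentalRightsImpactAssessment:
    system_name: str
    affected_rights: list[str] = field(default_factory=list)     # e.g. non-discrimination
    impacted_population: str = ""                                # who the system affects
    mitigation_measures: list[str] = field(default_factory=list)
    monitoring_plan: str = ""                                    # post-deployment rights monitoring
    notified_authority: str = ""                                 # where the results were submitted
    notified_on: str = ""                                        # ISO date of submission
```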
Art.29(5): Logging and Monitoring
Deployers must retain logs generated by high-risk AI systems for the period specified by applicable law: at least six months under the AI Act, longer where sector-specific regulation requires more.
```python
# Minimal Art.29(5) logging implementation
import json
import hashlib
from datetime import datetime, timedelta, timezone
from pathlib import Path


class AIActDeployerLogger:
    """Append-only JSONL decision log with a configurable retention window."""

    def __init__(self, log_dir: str, retention_days: int = 180):
        self.log_dir = Path(log_dir)
        self.log_dir.mkdir(parents=True, exist_ok=True)
        # Art.29(5) floor is six months; raise this where sector rules demand more
        self.retention_days = retention_days

    def log_decision(
        self,
        session_id: str,
        model_id: str,
        model_version: str,
        input_hash: str,  # hash, not raw input -- avoid PII in logs
        output_summary: str,
        human_reviewed: bool,
        decision_made: str,
        risk_level: str,
    ) -> str:
        timestamp = datetime.now(timezone.utc).isoformat()
        # Stable identifier so individual entries can be referenced in audits
        log_id = hashlib.sha256(f"{session_id}{timestamp}".encode()).hexdigest()[:16]
        entry = {
            "log_id": log_id,
            "timestamp": timestamp,
            "session_id": session_id,
            "model_id": model_id,
            "model_version": model_version,
            "input_hash": input_hash,
            "output_summary": output_summary,
            "human_reviewed": human_reviewed,
            "decision_made": decision_made,
            "risk_level": risk_level,
            "schema_version": "eu-ai-act-art29-v1",
        }
        # One append-only file per UTC day keeps retention cleanup trivial
        log_file = self.log_dir / f"{datetime.now(timezone.utc).date()}.jsonl"
        with open(log_file, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return log_id

    def hash_input(self, raw_input: str) -> str:
        """Hash PII-containing inputs before logging."""
        return hashlib.sha256(raw_input.encode()).hexdigest()

    def purge_expired(self) -> None:
        """Delete daily log files older than the retention window."""
        cutoff = datetime.now(timezone.utc).date() - timedelta(days=self.retention_days)
        for log_file in self.log_dir.glob("*.jsonl"):
            try:
                if datetime.strptime(log_file.stem, "%Y-%m-%d").date() < cutoff:
                    log_file.unlink()
            except ValueError:
                continue  # ignore files that aren't daily logs
```
Art.29(6): Reporting Serious Incidents
If a high-risk AI system causes (or contributes to) a serious incident, the deployer must report it to the provider and the relevant national authority without undue delay.
"Serious incident" means an incident that has, or may reasonably have, a direct or indirect effect on health, safety, fundamental rights, or significant property damage.
The Provider-to-Deployer Information Chain
One of the AI Act's structural assumptions is that providers supply deployers with the information needed to comply. Art.13 (transparency to deployers) and Art.16(d) (technical documentation obligations) create this pipeline.
In practice, this means:
| Provider must supply | Deployer uses it for |
|---|---|
| Instructions for use | Art.29(1) adherence |
| Performance metrics and limitations | Art.29(2) oversight design |
| Training data characteristics | Art.29(3) input data assessment |
| Incident reporting process | Art.29(6) incident escalation |
| EU Declaration of Conformity | Audit and market surveillance evidence |
What to do today: Pull your provider's current documentation for each model you deploy in a high-risk context. If the provider hasn't published an EU Declaration of Conformity or technical summary yet, use Art.28(3) to formally request it. The clock for this starts now — the August 2026 enforcement date applies equally to deployers.
When You're Both Provider and Deployer
Many SaaS products sit in both roles simultaneously:
- You're a deployer relative to OpenAI or Anthropic (you call their API)
- You're a provider relative to your own customers who integrate your API
If your product exposes AI functionality through an API that other companies build on, you carry provider obligations toward those downstream integrators — even though you're a deployer in the upstream direction.
This creates a compliance chain:
```
Anthropic (GPAI Provider)
        ↓
Your SaaS Product (Deployer + Downstream Provider)
        ↓
Your Customer's Application (Deployer)
        ↓
End User
```
Each transition point has documentation obligations. The AI Act's enforcement model assumes this chain is legible at every step — which means your technical documentation must be transmittable to downstream deployers who request it.
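If you sit in the middle of the chain, a practical step is a machine-readable manifest recording what you received from upstream and what you publish downstream, so a documentation request can be answered in minutes rather than weeks. A minimal sketch; every name and path here is a hypothetical placeholder.

```python
# Hypothetical compliance-chain manifest for a deployer + downstream provider.
import json

manifest = {
    "upstream": {
        "provider": "example-gpai-provider",
        "model_id": "example-model-v2",
        "docs_received": ["instructions_for_use.pdf", "eu_declaration_of_conformity.pdf"],
        "requested_via_art28_3": False,  # flip to True if you had to formally request docs
    },
    "downstream": {
        "exposed_api": "/v1/screening",
        "docs_published": ["integration_guide.md", "intended_purpose.md", "known_limitations.md"],
        "deployer_contact": "compliance@example.com",
    },
}

# Serialize so the manifest can be handed to a downstream deployer on request
print(json.dumps(manifest, indent=2))
```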
Compliance Calendar: August 2, 2026
Art.50 transparency obligations and the high-risk chapter, including the Art.28 and Art.29 duties covered above, apply from August 2, 2026 for systems placed on the market from that date. (The GPAI chapter has applied since August 2, 2025; GPAI models already on the market before then must be brought into compliance by August 2, 2027.) High-risk systems already deployed before August 2, 2026 stay outside the new requirements only for as long as they don't receive a substantial modification.
| Now (May 2026) | June 2026 | July 2026 | August 2, 2026 |
|---|---|---|---|
| Audit your AI feature inventory against Annex III | Request provider technical documentation | Finalize Art.29(4) FRIA if applicable | Enforcement begins |
| Classify each feature: deployer only vs downstream provider | Implement Art.29(5) logging | Test human oversight flows | Incident reporting active |
| Map your provider information chain | Draft your instructions compliance documentation | Complete deployer obligations register | |
The 12-Item Art.28/29 Deployer Checklist
Inventory and classification:
- List every AI feature in your product
- Map each to an Annex III category (or confirm it's not high-risk)
- Determine: are you a deployer, downstream provider, or both?
Documentation:
- Pull provider's instructions for use and EU Declaration of Conformity
- If not available: send formal Art.28(3) request and document the response
- Maintain a deployer obligations register mapping each feature to its obligations
High-risk specific:
- Design genuine human oversight (Art.29(2)) — not checkbox theater
- Document input data quality controls (Art.29(3))
- If public body or regulated sector: complete FRIA (Art.29(4))
Operations:
- Implement structured logging for Art.29(5) — minimum 6 months retention
- Set up serious incident detection and escalation path (Art.29(6))
- If you expose an API: prepare downstream deployer documentation package
Infrastructure: Where You Host Matters
EU AI Act compliance is a legal obligation, but it lives on top of a technical stack. If your AI feature processes personal data (which most do, since user inputs routinely contain personal data within the meaning of GDPR Art.4), you're simultaneously subject to:
- EU AI Act (deployer obligations)
- GDPR (lawful basis, data minimization, erasure rights)
- CLOUD Act exposure if your infrastructure has a US parent company
The CLOUD Act (18 U.S.C. § 2713) allows US authorities to compel disclosure from providers of electronic communication services under US jurisdiction, including those with US parent companies, regardless of where the data is stored. If your AI logging infrastructure (Art.29(5) data) is on AWS, Azure, or GCP, that log data is theoretically accessible under a CLOUD Act order.
For EU-regulated industries (banking, healthcare, critical infrastructure), where your AI compliance logs live is a risk factor that regulators are beginning to scrutinize.
sota.io is incorporated in the EU with no US parent company, meaning CLOUD Act compelled disclosure doesn't apply to data hosted there. For developers building AI features in regulated EU sectors, that's one fewer compliance surface to defend.
Summary
EU AI Act Art.28 and Art.29 impose real obligations on SaaS developers using third-party AI models:
- Art.28 defines when you become a provider — own brand, substantial modification, or purpose change
- Art.29 sets the deployer floor — instruction adherence, human oversight, input data quality, logging, incident reporting, and (for regulated entities) a Fundamental Rights Impact Assessment
- The information chain between providers and deployers is legally structured — use Art.28(3) to request documentation if providers haven't supplied it
- If you expose an AI API: you're simultaneously a deployer (to your model provider) and a provider (to your customers)
The August 2, 2026 enforcement date is 90 days away. The compliance work for Art.28/29 is not a six-month project — but it's also not something you can start in late July.
The 12-item checklist above is the starting point. Start with the inventory.
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.