2026-05-04 · 14 min read · sota.io team

EU AI Act Art.28 & Art.29: What SaaS Developers Owe When They Use Someone Else's AI Model

You didn't train the model. You didn't write the safety documentation. You signed up for an API key and started shipping features.

But under EU AI Act Articles 28 and 29, you may have inherited a stack of compliance obligations that your model provider never warned you about.

This guide covers exactly what transfers from AI provider to deployer, when a SaaS developer crosses the line from "just using an API" to becoming a downstream provider with full regulatory accountability — and what your compliance calendar looks like heading into the August 2, 2026 enforcement window.


The Two Roles: Provider vs Deployer

The EU AI Act defines two primary actors in any AI system deployment chain:

Provider (Recital 80, Art.3(3)): The entity that develops an AI system or general-purpose AI model and places it on the market — either for sale or for own use. Providers bear the heaviest documentation and conformity obligations.

Deployer (Art.3(4)): A natural or legal person using an AI system under their own authority for a professional activity. Deployers accept a narrower but real obligation set.

If you're a SaaS developer calling the OpenAI, Anthropic, or Google AI APIs to power features in your product, you are a deployer by default.

Here's the problem: deployer status does not mean zero compliance obligations. It means a different set — and Art.28 governs exactly when those obligations escalate toward provider-level accountability.


Article 28: When You Inherit Provider Obligations

Art.28 is the critical rule. It defines the conditions under which a deployer becomes subject to provider obligations.

Art.28(1): The Three Triggers

A deployer becomes subject to provider obligations when they:

  1. Place an AI system on the market under their own name or trademark — even if the underlying model is third-party
  2. Make a substantial modification to a high-risk AI system already placed on the market
  3. Modify the intended purpose of a system in a way that makes it high-risk (or more high-risk) when it was not designed for that purpose

What Each Trigger Means in Practice

Trigger 1 — Own name/trademark:

If you're building a product (call it "TalentScreen.io") that uses an LLM to rank job applicants — and you're selling TalentScreen.io as your product — you have placed a high-risk AI system (Annex III, Point 4: employment decisions) on the market under your own brand.

The fact that the LLM underneath is GPT-4 or Claude doesn't insulate you. You, not OpenAI or Anthropic, are the provider of TalentScreen.io.

Trigger 2 — Substantial modification:

If you take a foundation model and fine-tune it on your own dataset, extend it with a retrieval-augmented generation (RAG) system that changes how it processes personal data, or wrap it in an agentic loop that makes autonomous decisions — you may have made a "substantial modification" under Art.28(2).

The AI Act does not define "substantial modification" with mathematical precision, but Art.28(2) refers to a modification that affects the AI system's compliance status. The GPAI Code of Practice (expected August 2026) will offer more guidance, but the working interpretation is: if your modification changes the risk profile of the system, you may have crossed the line.

Trigger 3 — Intended purpose change:

Suppose you take an image classification model cleared for retail product identification and deploy it to flag "suspicious" faces at building entrances. That's a purpose change that may trigger high-risk status (Annex III, Point 6: biometric categorisation). Under Art.28(1)(c), you'd carry provider obligations — even though you didn't modify a line of model code.
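Taken together, the three triggers lend themselves to a simple self-assessment. The sketch below is illustrative, not legal advice; the class and field names are our own, and any single trigger is sufficient under Art.28(1):

```python
from dataclasses import dataclass

@dataclass
class Art28Assessment:
    """Illustrative self-assessment of the Art.28(1) triggers (not legal advice)."""
    own_name_or_trademark: bool       # Trigger 1: sold under your own brand
    substantial_modification: bool    # Trigger 2: fine-tuning, RAG, agentic wrapping
    high_risk_purpose_change: bool    # Trigger 3: repurposed into an Annex III use case

    def provider_obligations_apply(self) -> bool:
        # Any single trigger is enough to inherit provider obligations
        return (
            self.own_name_or_trademark
            or self.substantial_modification
            or self.high_risk_purpose_change
        )

# A TalentScreen.io-style product: third-party LLM sold under your own brand
talent_screen = Art28Assessment(True, False, False)
print(talent_screen.provider_obligations_apply())  # True
```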

Art.28(3): The Information Right

Even when a deployer doesn't cross the Art.28(1) threshold, Art.28(3) gives deployers the right to request technical documentation from providers. If a provider refuses or can't supply it within 30 days, the deployer is entitled to report this to the national market surveillance authority.

This is practically useful: before deploying a model in a high-risk use case, request the provider's EU Declaration of Conformity and technical file. The answer (or refusal) tells you a great deal about your compliance exposure.
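If you do send such a request, track the 30-day window explicitly so the escalation right doesn't quietly lapse. A minimal sketch; the function name and status strings are our own:

```python
from datetime import date, timedelta

def art28_3_request_status(sent_on: date, received: bool, today: date) -> str:
    """Track an Art.28(3) documentation request against the 30-day window
    described above (illustrative statuses, not official terminology)."""
    if received:
        return "received"
    deadline = sent_on + timedelta(days=30)
    if today > deadline:
        # Art.28(3): entitled to report to the market surveillance authority
        return "escalate"
    return "pending"

print(art28_3_request_status(date(2026, 5, 4), False, date(2026, 6, 10)))  # escalate
```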


Article 29: The Baseline Deployer Obligations

For deployers who haven't triggered Art.28(1) — i.e., standard API usage for non-high-risk use cases — Art.29 sets the floor.

Art.29(1): Follow the Instructions

Deployers must use high-risk AI systems "in accordance with the instructions for use accompanying the system." This sounds trivial, but it has real implications:

Document which version of the model you're using, what the provider's instructions specify for your use case, and how your implementation adheres to those instructions.
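One lightweight way to do this is a per-model usage record kept in version control next to the feature it documents. The field names below are our own suggestion, not a prescribed schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelUsageRecord:
    """Documents Art.29(1) adherence for one deployed model (illustrative schema)."""
    model_id: str
    model_version: str
    instructions_source: str   # where the provider's instructions for use live
    our_use_case: str
    adherence_notes: str       # how the implementation follows those instructions

record = ModelUsageRecord(
    model_id="example-llm",                 # hypothetical values throughout
    model_version="2026-04-01",
    instructions_source="provider docs, 'Usage policies' page",
    our_use_case="summarising support tickets (non-high-risk)",
    adherence_notes="no automated decisions; outputs reviewed before sending",
)
print(json.dumps(asdict(record), indent=2))
```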

Art.29(2): Human Oversight

Deployers of high-risk AI systems must ensure genuine human oversight, not theater. Art.29(2) requires:

- Oversight assigned to natural persons with the necessary competence, training, and authority
- The ability to correctly interpret the system's output, and to decide not to use it
- The ability to intervene in or interrupt the system's operation

The key word is genuine: a checkbox in a UI that users click past without reading doesn't satisfy Art.29(2). The GPAI Code of Practice drafts describe "meaningful human control" as requiring that the reviewing human has both the capability and the actual authority to override the AI.
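In code, "capability and actual authority to override" means the human verdict, not the model output, is what your system acts on. A minimal sketch; the enum and function names are our own:

```python
from enum import Enum
from typing import Optional

class ReviewVerdict(Enum):
    APPROVE = "approve"    # human accepts the AI recommendation
    OVERRIDE = "override"  # human substitutes their own decision
    REJECT = "reject"      # discard the output; no decision is made

def finalize_decision(ai_recommendation: str, verdict: ReviewVerdict,
                      human_decision: Optional[str] = None) -> Optional[str]:
    """The AI output never becomes a decision without an explicit human verdict."""
    if verdict is ReviewVerdict.APPROVE:
        return ai_recommendation
    if verdict is ReviewVerdict.OVERRIDE:
        if human_decision is None:
            raise ValueError("override requires an explicit human decision")
        return human_decision  # the human's call wins, unconditionally
    return None

print(finalize_decision("shortlist", ReviewVerdict.OVERRIDE, "do not shortlist"))
```

The design point: there is no code path from AI recommendation to final decision that bypasses the verdict parameter.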

Art.29(3): Input Data Quality

Deployers who control the input data fed to a high-risk AI system bear responsibility for ensuring that data's relevance and representativeness. If you're feeding personal data from your user base into a model that makes consequential decisions, you need:

- A documented description of what input data the system expects and what your pipeline actually supplies
- Checks that the input data is relevant to the intended purpose and sufficiently representative of the people it affects
- A process for detecting and correcting gaps or drift in that data over time

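A concrete starting point is a periodic check of input batches against a reference distribution. The sketch below flags categories whose share drifts beyond a tolerance; the threshold and function name are our own illustrative choices, not a legal test:

```python
from collections import Counter

def representativeness_gaps(reference_shares: dict, batch: list,
                            tolerance: float = 0.10) -> dict:
    """Flag categories whose share in a batch deviates from the reference
    distribution by more than `tolerance` (illustrative check only)."""
    counts = Counter(batch)
    total = len(batch)
    gaps = {}
    for category, expected in reference_shares.items():
        actual = counts.get(category, 0) / total if total else 0.0
        if abs(actual - expected) > tolerance:
            gaps[category] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Reference: applicant pool is roughly 50/50; this batch is heavily skewed
print(representativeness_gaps({"group_a": 0.5, "group_b": 0.5},
                              ["group_a"] * 9 + ["group_b"]))
```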
Art.29(4): Fundamental Rights Impact Assessment (High-Risk, Public Bodies + Private Deployers)

Art.29(4) applies specifically to deployers of high-risk AI systems that are:

- Bodies governed by public law, or private entities providing public services
- Private deployers using high-risk systems for creditworthiness assessment or for risk assessment and pricing in life and health insurance

These deployers must conduct a Fundamental Rights Impact Assessment (FRIA) before deployment. The FRIA covers:

- The deployer's processes in which the system will be used, and its intended purpose
- The period and frequency of use
- The categories of natural persons and groups likely to be affected
- The specific risks of harm to those persons or groups
- The human oversight measures, and the steps planned if the risks materialise

The FRIA must be documented and submitted to the relevant national competent authority before the system goes live.

Art.29(5): Logging and Monitoring

Deployers must retain logs generated by high-risk AI systems for the period specified by applicable law: at least six months under the AI Act, longer where sector-specific regulation requires more.

```python
# Minimal Art.29(5) logging implementation
import json
import hashlib
from datetime import datetime, timedelta, timezone
from pathlib import Path

class AIActDeployerLogger:
    def __init__(self, log_dir: str, retention_days: int = 180):
        self.log_dir = Path(log_dir)
        self.log_dir.mkdir(parents=True, exist_ok=True)
        self.retention_days = retention_days

    def log_decision(
        self,
        session_id: str,
        model_id: str,
        model_version: str,
        input_hash: str,  # hash, not raw input — avoid PII in logs
        output_summary: str,
        human_reviewed: bool,
        decision_made: str,
        risk_level: str,
    ) -> str:
        timestamp = datetime.now(timezone.utc)
        log_id = hashlib.sha256(
            f"{session_id}{timestamp.isoformat()}".encode()
        ).hexdigest()[:16]
        entry = {
            "log_id": log_id,
            "timestamp": timestamp.isoformat(),
            "session_id": session_id,
            "model_id": model_id,
            "model_version": model_version,
            "input_hash": input_hash,
            "output_summary": output_summary,
            "human_reviewed": human_reviewed,
            "decision_made": decision_made,
            "risk_level": risk_level,
            "schema_version": "eu-ai-act-art29-v1",
        }
        log_file = self.log_dir / f"{timestamp.date()}.jsonl"
        with open(log_file, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return log_id

    def hash_input(self, raw_input: str) -> str:
        """Hash PII-containing inputs before logging."""
        return hashlib.sha256(raw_input.encode()).hexdigest()

    def purge_expired(self) -> int:
        """Delete daily log files older than the retention window.
        Check sector-specific rules before purging: they may require longer."""
        cutoff = datetime.now(timezone.utc).date() - timedelta(days=self.retention_days)
        removed = 0
        for log_file in self.log_dir.glob("*.jsonl"):
            try:
                file_date = datetime.strptime(log_file.stem, "%Y-%m-%d").date()
            except ValueError:
                continue  # skip files that aren't daily logs
            if file_date < cutoff:
                log_file.unlink()
                removed += 1
        return removed
```

Art.29(6): Reporting Serious Incidents

If a high-risk AI system causes (or contributes to) a serious incident, the deployer must report it to the provider and the relevant national authority without undue delay.

"Serious incident" means an incident that has, or may reasonably have, a direct or indirect effect on health, safety, fundamental rights, or significant property damage.


The Provider-to-Deployer Information Chain

One of the AI Act's structural assumptions is that providers supply deployers with the information needed to comply. Art.13 (transparency to deployers) and Art.16(d) (technical documentation obligations) create this pipeline.

In practice, this means:

| Provider must supply | Deployer uses it for |
| --- | --- |
| Instructions for use | Art.29(1) adherence |
| Performance metrics and limitations | Art.29(2) oversight design |
| Training data characteristics | Art.29(3) input data assessment |
| Incident reporting process | Art.29(6) incident escalation |
| EU Declaration of Conformity | Audit and market surveillance evidence |

What to do today: Pull your provider's current documentation for each model you deploy in a high-risk context. If the provider hasn't published an EU Declaration of Conformity or technical summary yet, use Art.28(3) to formally request it. The clock for this starts now — the August 2026 enforcement date applies equally to deployers.


When You're Both Provider and Deployer

Many SaaS products sit in both roles simultaneously:

- Deployer upstream: you consume a foundation model through a provider's API
- Provider downstream: you place your own AI-powered product on the market for your customers

If your product exposes AI functionality through an API that other companies build on, you carry provider obligations toward those downstream integrators — even though you're a deployer in the upstream direction.

This creates a compliance chain:

Anthropic (GPAI Provider)
    ↓
Your SaaS Product (Deployer + Downstream Provider)
    ↓
Your Customer's Application (Deployer)
    ↓
End User

Each transition point has documentation obligations. The AI Act's enforcement model assumes this chain is legible at every step — which means your technical documentation must be transmittable to downstream deployers who request it.
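In practice, "transmittable" means you can assemble, on request, a single artifact describing the upstream model, your modifications, and the intended purpose. An illustrative sketch; the structure and field names are our own:

```python
import json

def downstream_documentation(upstream_model: dict, modifications: list,
                             intended_purpose: str) -> str:
    """Assemble a chain-legible documentation bundle for downstream deployers
    (illustrative structure, not a prescribed format)."""
    bundle = {
        "upstream_model": upstream_model,        # provider, model id, version
        "modifications": modifications,          # fine-tuning, RAG, agent wrappers
        "intended_purpose": intended_purpose,
        "schema": "downstream-doc-v1",           # our own label
    }
    return json.dumps(bundle, indent=2)

doc = downstream_documentation(
    {"provider": "Anthropic", "model": "claude", "version": "unknown"},
    ["RAG over customer knowledge base"],
    "customer-support answer drafting (non-high-risk)",
)
print(doc)
```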


Compliance Calendar: August 2, 2026

The GPAI chapter (Art.51–Art.63) and Art.50 transparency obligations apply from August 2, 2026. Art.28 and Art.29 obligations for high-risk AI systems apply on the same date for systems placed on the market after August 2026. Systems already deployed before that date get a 12-month grandfathering period until August 2, 2027 — but only if they don't receive a substantial modification in the interim.

| Now (May 2026) | June 2026 | July 2026 | August 2, 2026 |
| --- | --- | --- | --- |
| Audit your AI feature inventory against Annex III | Request provider technical documentation | Finalize Art.29(4) FRIA if applicable | Enforcement begins |
| Classify each feature: deployer only vs downstream provider | Implement Art.29(5) logging | Test human oversight flows | Incident reporting active |
| Map your provider information chain | Draft your instructions compliance documentation | Complete deployer obligations register | |

The 12-Item Art.28/29 Deployer Checklist

Inventory and classification:

- [ ] List every AI-powered feature in your product and the model behind it
- [ ] Check each feature against the Annex III high-risk categories
- [ ] Classify your role per feature: deployer only, or downstream provider under Art.28(1)

Documentation:

- [ ] Collect each provider's instructions for use and document how your implementation follows them (Art.29(1))
- [ ] Request the EU Declaration of Conformity and technical file for high-risk deployments (Art.28(3))
- [ ] Record the model ID and version behind every deployed feature

High-risk specific:

- [ ] Complete a Fundamental Rights Impact Assessment where Art.29(4) applies
- [ ] Design and test a genuine human oversight flow (Art.29(2))
- [ ] Put input data relevance and representativeness checks in place (Art.29(3))

Operations:

- [ ] Implement Art.29(5) logging with at least six months' retention
- [ ] Stand up a serious-incident reporting process to the provider and national authority (Art.29(6))
- [ ] Schedule recurring reviews of provider documentation and model version changes


Infrastructure: Where You Host Matters

EU AI Act compliance is a legal obligation — but it lives on top of a technical stack. If your AI feature processes personal data (which most do, since user inputs routinely contain personal data under GDPR Art.4), you're simultaneously subject to:

- GDPR, as controller or processor of that data
- The AI Act deployer obligations described above
- The jurisdiction of whoever can legally compel your hosting provider

CLOUD Act Section 2(a) allows US authorities to compel disclosure from "electronic communications services" with US parent companies — regardless of where the data is stored. If your AI logging infrastructure (Art.29(5) data) is on AWS, Azure, or GCP, that log data is theoretically accessible under a CLOUD Act order.

For EU-regulated industries (banking, healthcare, critical infrastructure), where your AI compliance logs live is a risk factor that regulators are beginning to scrutinize.

sota.io is incorporated in the EU with no US parent company, meaning CLOUD Act compelled disclosure doesn't apply to data hosted there. For developers building AI features in regulated EU sectors, that's one fewer compliance surface to defend.


Summary

EU AI Act Art.28 and Art.29 impose real obligations on SaaS developers using third-party AI models:

- Deployer status is the default, but it is not zero compliance: Art.29 sets a real baseline
- Branding, substantially modifying, or repurposing a third-party model can make you the provider under Art.28(1)
- High-risk deployments add human oversight, input data, FRIA, logging, and incident reporting duties
- Art.28(3) lets you request the technical documentation you need from your provider

The August 2, 2026 enforcement date is 90 days away. The compliance work for Art.28/29 is not a six-month project — but it's also not something you can start in late July.

The 12-item checklist above is the starting point. Start with the inventory.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.