2026-04-08·8 min read·sota.io team

EU AI Act Hosting Compliance: Why Where You Run Your AI Is an Article 9 Problem (2026)

On August 2, 2026, the high-risk AI provisions of the EU AI Act (Title III, Chapter 2, Articles 8–15) begin applying to new high-risk AI systems. Across Europe, developers are auditing their model architectures, documentation practices, and risk management processes. Almost nobody is auditing their cloud provider.

That is a compliance gap with a deadline attached.

The Connection Nobody Is Making

Search any EU AI Act compliance guide written in the last eighteen months and you will find detailed coverage of Article 9's risk management requirements, Article 13's transparency obligations, and Article 15's robustness requirements. You will find tool recommendations: TLA+ for protocol verification, SPARK Ada for embedded systems, Frama-C for C code. You will rarely find the following observation:

Running a high-risk AI system on US-incorporated cloud infrastructure creates a foreseeable legal risk that directly engages Article 9(4) of the EU AI Act.

This is not a GDPR argument (though GDPR applies). It is an EU AI Act argument. The mechanism is the CLOUD Act.

How CLOUD Act Exposure Becomes a Foreseeable Risk

The US Clarifying Lawful Overseas Use of Data Act (CLOUD Act, 2018) authorises US law enforcement to compel US-incorporated companies to produce data stored anywhere in the world — including EU data centres. A US provider operating a Frankfurt region cannot refuse a lawful CLOUD Act request by pointing to the data's physical location. The legal obligation travels with the US incorporation, not with the server.

For high-risk AI systems, this creates three categories of foreseeable legal risk:

1. Inference data exposure. When an EU citizen interacts with a high-risk AI system — a medical diagnostic tool, a credit scoring service, an employment screening system — the inference inputs and outputs flow through your cloud provider's infrastructure. If that provider is US-incorporated, those records are legally accessible to US authorities without a European judicial order.

2. Training data exposure. If you trained your model on EU-regulated data (patient records, financial data, biometric data) and your training pipeline ran on US infrastructure, that data was exposed to potential compelled disclosure during training. Depending on your sector, that exposure may itself constitute a violation — but even where it does not, it is a risk that was foreseeable and that a reasonable risk management system should have identified.

3. Model weight exposure. Your trained model reflects your training data. The weights are not the data, but they are derived from it. A US provider can be compelled to produce model weights under the CLOUD Act. For high-risk systems trained on sensitive EU data, this is a foreseeable risk to the system's integrity and to the confidentiality of the data used to build it.
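One way to make these three categories concrete in your Article 9 documentation is to record them as structured risk register entries. A minimal sketch in Python; the field names and the likelihood/impact scales are illustrative assumptions, not a prescribed EU AI Act format:

# risk_register.py — illustrative Article 9 risk register entries for CLOUD Act
# exposure; field names and rating scales are assumptions, not a mandated schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class Risk:
    risk_id: str
    description: str
    affected_rights: list
    likelihood: str  # e.g. "possible", "likely"
    impact: str      # e.g. "medium", "high"
    mitigation: str

CLOUD_ACT_RISKS = [
    Risk("R-001",
         "Inference inputs and outputs accessible to US authorities via CLOUD Act request",
         ["EU Charter Art. 7 (privacy)", "EU Charter Art. 8 (data protection)"],
         "possible", "high",
         "Run inference on EU-incorporated infrastructure"),
    Risk("R-002",
         "Training data exposed to compelled disclosure while the training pipeline runs",
         ["EU Charter Art. 8 (data protection)"],
         "possible", "high",
         "Run training pipeline on EU-incorporated infrastructure"),
    Risk("R-003",
         "Model weights derived from sensitive EU data producible under CLOUD Act",
         ["EU Charter Art. 8 (data protection)"],
         "possible", "medium",
         "Store weights with an EU-incorporated provider; restrict weight exports"),
]

if __name__ == "__main__":
    # Emit the register as JSON so it can be attached to technical documentation
    print(json.dumps([asdict(r) for r in CLOUD_ACT_RISKS], indent=2))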

Article 9(4): Foreseeable Risks Include Legal Risks

Article 9(4) of the EU AI Act is explicit:

"The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and the possible interaction resulting from the combined application of the requirements set out in this Chapter 2, with a view to minimising risks while maximising accuracy, robustness and cybersecurity."

But the operative paragraph is 9(2)(b):

"the identification and analysis of the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse."

"Fundamental rights" is the key phrase. The right to data protection (EU Charter, Article 8) and the right to privacy (Article 7) are fundamental rights. A high-risk AI system that processes EU citizens' data on infrastructure subject to foreign government access orders creates a reasonably foreseeable risk to those fundamental rights.

This is not a novel legal theory. The Court of Justice of the EU applied exactly this reasoning in Schrems II (Case C-311/18), striking down the EU-US Privacy Shield on the grounds that US surveillance law created structural risks to EU citizens' fundamental rights that could not be adequately mitigated by contractual safeguards. The CJEU did not require proof of an actual surveillance event — the foreseeable possibility was sufficient.

Article 9 does not require you to prevent every conceivable risk. It requires you to identify foreseeable risks and adopt appropriate measures to address them. Choosing EU-incorporated infrastructure is one of those appropriate measures.

Three Articles That Make the Hosting Question Explicit

Article 9 — Risk Management System

The risk management system required by Article 9 must be "continuous" and "iterative" across the system's lifecycle. Vendor selection is a lifecycle decision. Selecting a US cloud provider for a high-risk AI system, knowing that CLOUD Act exposure is a foreseeable legal risk, and not documenting that decision and its mitigations, creates an audit gap.

If your conformity assessment documentation does not address the legal jurisdiction of your cloud provider, a notified body reviewing your technical documentation will notice.

Article 13 — Transparency and Provision of Information

Article 13 requires high-risk AI systems to be designed to operate "transparently." Article 13(3)(b) requires that the instructions for use include "the characteristics, capabilities and limitations of performance of the high-risk AI system, including... its level of accuracy."

For AI systems processing personal data, transparency includes transparency about where that processing occurs and under what legal framework. An instruction manual that omits the fact that inference data transits US-jurisdiction infrastructure is not complete.
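One practical way to make the processing location part of the system's documented behaviour is to expose it from the service itself, so the instructions for use can reference a single authoritative source. A minimal standalone sketch; the endpoint path and field names are illustrative, not mandated by Article 13:

# transparency.py — illustrative endpoint exposing hosting-jurisdiction facts
# that the instructions for use can point to; path and fields are assumptions.
from fastapi import FastAPI

app = FastAPI()

@app.get("/.well-known/ai-system-info")
async def ai_system_info():
    # Static disclosure of where inference processing occurs and under which law
    return {
        "system_type": "high-risk-medical-diagnostic",
        "provider_incorporation": "DE",
        "data_processing_location": "EU (Frankfurt)",
        "cloud_act_exposure": False,
        "inference_log_retention": "90d",
    }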

Article 17 — Quality Management System

Article 17 requires providers of high-risk AI systems to implement a quality management system covering, among other things, "the examination, test and validation procedures to be carried out before, during and at appropriate intervals after the development of the high-risk AI system, and the frequency thereof." It also explicitly includes "the supplier and sub-contractor management strategy."

Your cloud provider is a sub-contractor. Your quality management system must address their legal jurisdiction as part of supplier management.
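In practice this can be a field in your supplier register that a quality management check enforces. A minimal sketch, assuming a simple in-code register; the entity names and the register format are examples only:

# supplier_check.py — illustrative Article 17 supplier-management check: flag
# sub-contractors whose ultimate parent is incorporated outside the EU.
SUPPLIERS = [
    {"name": "sota.io", "role": "cloud hosting", "ultimate_parent_jurisdiction": "DE"},
    {"name": "Example US CDN Inc.", "role": "content delivery", "ultimate_parent_jurisdiction": "US"},
]

EU_JURISDICTIONS = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR", "HU",
    "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK", "SI", "ES", "SE",
}

def flag_non_eu_suppliers(suppliers):
    """Return suppliers whose ultimate parent sits in a non-EU jurisdiction."""
    return [s for s in suppliers if s["ultimate_parent_jurisdiction"] not in EU_JURISDICTIONS]

if __name__ == "__main__":
    for supplier in flag_non_eu_suppliers(SUPPLIERS):
        print(f"Document foreign-jurisdiction risk for: {supplier['name']} ({supplier['role']})")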

What "EU-Native" Actually Means for Compliance

Not all "EU data centres" are equal for EU AI Act purposes. The relevant distinction is not where the data is stored — it is where the company is incorporated.

Provider | Incorporation | CLOUD Act Exposure | EU AI Act Risk
AWS EU (Frankfurt) | US (Delaware) | Yes | Foreseeable
Azure EU (Amsterdam) | US (Washington) | Yes | Foreseeable
Google Cloud EU | US (Delaware) | Yes | Foreseeable
OVHcloud | France | No | Mitigated
Clever Cloud | France | No | Mitigated
Scalingo | France | No | Mitigated
Hetzner | Germany | No | Mitigated
sota.io | Germany | No | Mitigated

"EU-native" for Article 9 purposes means EU-incorporated, operating under EU law, with no structural obligation to comply with US surveillance orders. A German company operating German servers has no CLOUD Act obligation. It does have GDPR obligations and EU law obligations — which is exactly what EU AI Act compliance requires you to be able to demonstrate.

The AWS Sovereign Cloud Misconception

AWS announced its "AWS European Sovereign Cloud" initiative in 2023, targeting regulated industries and the public sector in Europe. This is a separate AWS infrastructure operated specifically for EU regulatory requirements, with EU-resident operators and EU-controlled access.

However, AWS European Sovereign Cloud does not change AWS's US incorporation. The structural CLOUD Act obligation remains with the parent company. Whether a specific CLOUD Act request would in practice reach AWS European Sovereign Cloud data is a complex legal question — but for EU AI Act Article 9 purposes, "complex legal question" is not the same as "no foreseeable risk."

We covered the AWS European Sovereign Cloud architecture in detail in our post on AWS Sovereign Cloud and CLOUD Act exposure. The short version: it reduces some operational risks but does not eliminate the jurisdictional question.

What This Means If You Are Deploying High-Risk AI Before August 2026

If you are building a system that falls under EU AI Act Annex III (the high-risk AI list includes biometric identification, critical infrastructure management, educational assessment, employment screening, credit scoring, law enforcement risk assessment, migration and border control, and administration of justice), you have approximately four months to achieve compliance.

For your hosting decision specifically, the practical steps are:

1. Identify your data categories. What types of data does your AI system process during inference? If EU citizens' personal data — especially special category data under GDPR Article 9 (health, biometrics, political opinions, religious beliefs) — flows through your inference pipeline, the legal risk category is highest; a minimal classification sketch follows this list.

2. Map your provider's incorporation. Not their data centre location. Their legal entity. AWS GmbH Frankfurt is a subsidiary of Amazon.com, Inc., Delaware. Find the entity you have a contract with and trace its ultimate parent's jurisdiction.

3. Document the foreseeable risk and your mitigation. Even if you decide to accept some level of risk, Article 9 requires documentation. "We chose AWS because of existing contracts and have accepted CLOUD Act exposure as a residual risk" is a documented decision. An undocumented choice is an audit finding.

4. Consider migration before August 2026. For new systems, selecting EU-incorporated infrastructure from the start is significantly simpler than migrating an existing deployment. For existing systems that will continue operating post-August 2026 and fall under high-risk categories, assess whether the technical cost of migration is comparable to the compliance risk of remaining on US infrastructure.
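For step 1, even a rough automated pass over your inference schema helps keep the risk register current. A minimal sketch, assuming your input fields are named and that you maintain your own mapping to GDPR Article 9 special categories; the field names below are hypothetical:

# data_categories.py — illustrative step 1 check: flag inference input fields
# that look like GDPR Article 9 special-category data. The field-to-category
# mapping is an assumption and must be maintained for your own schema.
SPECIAL_CATEGORY_FIELDS = {
    "diagnosis": "health data",
    "blood_type": "health data",
    "fingerprint_template": "biometric data",
    "political_party": "political opinions",
    "religion": "religious beliefs",
}

def classify_payload_fields(field_names):
    """Map inference input field names to GDPR Article 9 special categories."""
    return {f: SPECIAL_CATEGORY_FIELDS[f] for f in field_names if f in SPECIAL_CATEGORY_FIELDS}

if __name__ == "__main__":
    flagged = classify_payload_fields(["age", "diagnosis", "postal_code", "religion"])
    if flagged:
        print("Special-category data in inference pipeline:", flagged)
        print("Highest legal-risk category: document hosting jurisdiction in the Article 9 register.")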

Deploying a High-Risk AI Service on EU-Native Infrastructure

On sota.io — an EU-native PaaS incorporated in Germany — deploying an inference service looks like this:

# sota.yml
name: high-risk-ai-inference
runtime: python3.12
region: eu-central-1

env:
  MODEL_PATH: /app/models/production
  INFERENCE_LOG_RETENTION: 90d
  GDPR_DATA_RESIDENCY: eu

resources:
  cpu: 4
  memory: 16gi
  gpu: optional

services:
  web:
    start: uvicorn app:app --host 0.0.0.0 --port 8080
    health_check: /health
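The application code can then surface the same jurisdictional facts in its responses and write the audit trail the quality management system needs: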
# app.py — inference endpoint with Article 13 transparency headers
import logging
import os

from fastapi import FastAPI, Request

logging.basicConfig(level=logging.INFO)

app = FastAPI()

# Stand-in model so the example is self-contained; in production this would
# load the trained model from MODEL_PATH (set in sota.yml), e.g. via joblib
# or torch.
MODEL_PATH = os.environ.get("MODEL_PATH", "/app/models/production")

class _PlaceholderModel:
    def predict(self, inputs):
        return inputs

model = _PlaceholderModel()

@app.middleware("http")
async def add_ai_act_transparency_headers(request: Request, call_next):
    response = await call_next(request)
    # Article 13: make the AI system identifiable and document data location
    response.headers["X-AI-System-Type"] = "high-risk-medical-diagnostic"
    response.headers["X-Data-Jurisdiction"] = "EU-DE"  # Germany, GDPR applies
    response.headers["X-Inference-Log-Retained"] = "90d"
    return response

@app.get("/health")
async def health():
    # Health-check target declared in sota.yml
    return {"status": "ok"}

@app.post("/inference")
async def run_inference(payload: dict):
    # Inference data stays in EU jurisdiction
    result = model.predict(payload["inputs"])

    # Article 17: log for quality management system audit trail
    logging.info({
        "event": "inference",
        "jurisdiction": "EU-DE",
        "provider": "sota.io",
        "incorporation": "DE",
        "cloud_act_exposure": False,
    })

    return {"result": result, "jurisdiction": "EU-DE"}
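A small check can keep those transparency headers from silently disappearing in a refactor. A minimal sketch using FastAPI's TestClient (which requires the httpx package), run against the app.py above:

# test_transparency.py — illustrative regression test for the Article 13 headers
from fastapi.testclient import TestClient

from app import app

client = TestClient(app)

def test_transparency_headers_present():
    response = client.get("/health")
    assert response.status_code == 200
    # Headers added by the transparency middleware on every response
    assert response.headers["X-Data-Jurisdiction"] == "EU-DE"
    assert response.headers["X-AI-System-Type"] == "high-risk-medical-diagnostic"
    assert response.headers["X-Inference-Log-Retained"] == "90d"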

The infrastructure you choose determines the legal context your Article 9 risk management documentation operates in. Deploying on EU-incorporated infrastructure does not guarantee compliance — it removes a foreseeable risk so you can focus documentation effort on the model properties that remain.

Complementary Reading

This post focuses on the hosting and infrastructure layer. Two companion posts cover the model verification layer, and a further post covers the GDPR angle on infrastructure design specifically.

The Timeline

Date | Requirement
Now | Document foreseeable risks, including hosting jurisdiction, in your Article 9 risk register
Q2 2026 | Finalise technical documentation, including supplier management (Article 17)
August 2, 2026 | EU AI Act Title III Chapter 2 applies to new high-risk AI systems
September 2026 | EU Cyber Resilience Act vulnerability reporting obligations begin
December 2027 | Full CRA Annex I compliance required

EU AI Act compliance is not a model audit. It is a system audit — and that system includes the cloud infrastructure your model runs on. The foreseeable risk of CLOUD Act exposure on US-hosted infrastructure is something Article 9 requires you to address before you deploy, not after a notified body asks about it.

EU-native infrastructure is one of the clearest, most direct mitigations available. It is available today.