2026-04-29 · 15 min read
Amazon Bedrock is the fastest way to add generative AI to your application — access Claude, Llama, Mistral, and Titan through a single unified API without managing GPU infrastructure. For EU developers, that simplicity hides a dual compliance trap: every user prompt, every inference response, and every model invocation log passes through US-jurisdiction AWS infrastructure, triggering both GDPR Art.28 obligations and — from August 2026 — EU AI Act Art.50 transparency requirements. This guide maps exactly where your data goes inside Bedrock, why inference logs are personal data under GDPR, and what the EU AI Act requires of you as a Bedrock-powered deployer.

## The Bedrock Data Flow: Where Your EU User Prompts Actually Go

When a user in Berlin sends a message to your Bedrock-powered chatbot, the inference request follows this path:

1. **Your application server** (hopefully in the EU) receives the user message
2. **Bedrock API call** — the prompt is transmitted to `bedrock-runtime.[region].amazonaws.com`
3. **AWS foundation model inference** — the model processes the prompt on AWS hardware
4. **Inference response** — streamed back to your application
5. **Bedrock logging** (if enabled) — full prompt + response stored in S3, CloudWatch, or CloudTrail

Each of these steps creates a data record under US jurisdiction.

**The region question:** AWS offers `eu-west-1` (Ireland), `eu-central-1` (Frankfurt), and `eu-north-1` (Stockholm) Bedrock endpoints. This matters for latency. It does **not** make the data EU-only. AWS is a US company. The CLOUD Act grants US law enforcement the ability to compel AWS to produce data stored on its infrastructure **regardless of where the physical servers are located**. EU server location ≠ EU jurisdiction.

## Why Inference Logs Are Personal Data Under GDPR

GDPR Art.4(1) defines personal data as "any information relating to an identified or identifiable natural person."
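To see how quickly ordinary chat prompts meet that definition, here is a minimal sketch (hypothetical regex patterns, nowhere near a production PII detector) that flags direct identifiers before a prompt is sent to, or logged from, an inference API:

```python
import re

# Hypothetical patterns for a few common direct identifiers.
# A real deployment would use a dedicated PII-detection library,
# not three regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "order_number": re.compile(r"#\d{4,}"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def flag_identifiers(prompt: str) -> list[str]:
    """Return the identifier categories found in a chat prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

# A prompt like the order-status example below carries both an order
# number and an email address, making it personal data under Art.4(1).
hits = flag_identifiers("My order #12345 hasn't arrived, mail me at anna@example.de")
```

A check like this belongs on the controller side, before the prompt leaves your infrastructure: once the text is in a Bedrock invocation log, redaction after the fact is far harder.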
User prompts to your chatbot almost certainly qualify:

- **Direct identifiers in prompts:** Users often include their name, email, account ID, order number, or address in natural language ("My order #12345 hasn't arrived")
- **Indirect identifiers:** The combination of device fingerprint + IP address + session ID that accompanies your API call is sufficient for identification
- **Sensitive category data (Art.9):** Users frequently disclose health information, political views, or religious beliefs in conversational AI interactions — triggering stricter processing obligations
- **Behavioral patterns:** A user's query history with your AI assistant reveals preferences, concerns, and routines that courts have consistently treated as personal data

If Bedrock processes personal data on your behalf — and it does — then AWS is your **Data Processor** under GDPR Art.28. You are the **Data Controller**. The legal obligations that follow are significant.

## GDPR Art.28: The Data Processor Contract Requirements

Under Art.28, you must have a written contract (Data Processing Agreement, DPA) with every processor handling personal data on your behalf. AWS provides a standard DPA. Before signing, verify it covers:

**What the AWS Bedrock DPA covers:**

- AWS commits to processing data only on your documented instructions
- Technical and organizational measures (TOMs) for data security
- Subprocessor notification obligations
- Data subject rights assistance

**What the AWS Bedrock DPA cannot fix:**

- **CLOUD Act jurisdiction:** No contractual clause overrides US law. A National Security Letter or FISA order does not require notice to you or your users. AWS cannot notify you of secret government data requests. This is not AWS's fault — it is US law.
- **Subprocessor chain:** AWS uses third-party components in its inference stack. The DPA lists subprocessors, but the chain includes hardware manufacturers, firmware, and data center operators — not all of which are EU entities.
- **Inference model provenance:** Foundation models available on Bedrock were trained on data of mixed jurisdictional origin. Training data deletion requests under GDPR Art.17 are structurally impossible for weights already embedded in a model.

**Art.28(3)(h) — the auditing problem:** Your DPA must give you audit rights over AWS's processing. In practice, AWS satisfies this with SOC 2 Type II and ISO 27001 reports. These certifications tell you about information security. They do **not** audit whether AWS has received or responded to government data requests.

## GDPR Art.32: Technical Security Measures for AI Inference

Art.32 requires "appropriate technical and organisational measures" to ensure security appropriate to the risk. For AI inference, the key risks are:

**Prompt injection attacks:** A malicious user crafts a prompt that causes the model to output other users' data (through RAG poisoning or context manipulation). This is a data breach under Art.33 requiring 72-hour notification to your supervisory authority.

**Inference log exposure:** If Bedrock model invocation logging is enabled and logs are stored in S3, a misconfigured bucket policy exposes all user prompts. Unlike a database breach, AI inference logs often contain unstructured PII that is hard to audit post-breach.

**Cross-session contamination:** Foundation models do not have memory across inference calls by default. However, RAG architectures that store user context in shared vector databases can leak information between users if access controls are misconfigured.

**What Bedrock provides:** Encryption at rest (AES-256), encryption in transit (TLS 1.2+), VPC endpoints for private connectivity, and IAM access controls. These are necessary but not sufficient for GDPR Art.32 compliance — they address confidentiality but not the jurisdictional exposure.
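The cross-session contamination risk above is an access-control problem, not a model problem. Here is a toy in-memory illustration (hypothetical store and field names; a real RAG stack would use a vector database's metadata filters and similarity search) of enforcing a per-user scope on every retrieval:

```python
from dataclasses import dataclass

@dataclass
class ContextChunk:
    user_id: str  # owner of this piece of conversational context
    text: str

# Shared store, as in a multi-tenant RAG architecture.
STORE: list[ContextChunk] = [
    ContextChunk("user-a", "Order #12345 delayed, customer lives in Berlin"),
    ContextChunk("user-b", "Asked about insulin dosage reminders"),
]

def retrieve_context(user_id: str, query: str) -> list[str]:
    """Retrieve context for a prompt, scoped to the requesting user.

    The user_id filter is mandatory: without it, user-b's health-related
    context (Art.9 data) could be injected into user-a's prompt. Keyword
    overlap stands in here for real embedding similarity.
    """
    return [c.text for c in STORE
            if c.user_id == user_id and any(
                word in c.text.lower() for word in query.lower().split())]
```

The design point is that tenant isolation lives in the retrieval layer you control, not in the foundation model: whatever you place in the prompt context, the model will happily repeat to the wrong user.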
## EU AI Act Art.50: Transparency Obligations from 2 August 2026

**The activation date:** The EU AI Act's transparency obligations under Art.50 apply from 2 August 2026 — 95 days from now. This is not delayed by the Omnibus Trilogue (which covers Art.6 high-risk systems). Art.50 applies to all AI systems that interact with natural persons, regardless of risk tier.

**What Art.50 requires for Bedrock-powered chatbots:**

**Art.50(1) — Disclosure to users:** If you deploy an AI system that interacts with natural persons, you must inform users that they are interacting with an AI — unless this is "obvious from the context." The "obvious from context" exception is narrow. A chatbot on your e-commerce site is **not** obviously an AI to all users. You need visible, persistent disclosure.

**Art.50(2) — Synthetic content labeling:** If you use Bedrock to generate images, video, audio, or text that "could falsely appear to a person to be authentic," the output must carry a machine-readable label marking it as AI-generated. This applies to marketing copy, product descriptions, and synthetic imagery generated via Bedrock.

**Art.50(3) — Emotion recognition:** If your application uses Bedrock or any AI system for emotion recognition (sentiment analysis that classifies emotional states), you must inform users before exposure. Pipelines built on AWS Comprehend or Claude-based sentiment classification may trigger this.

**Art.50(4) — Deep fakes:** AI-generated video and audio content must be labeled. Marketing video, AI-generated spokesperson videos, and synthetic voice assistants all qualify.

**Who is the obligated party?** For Art.50 purposes, you are the **deployer** — the entity that puts the AI system into use. AWS (as the model infrastructure provider) and the model providers (Anthropic for Claude, Meta for Llama) are **providers**. The deployer carries the transparency obligation toward end users. Your DPA with AWS does not transfer this obligation to AWS.
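A minimal sketch of what Art.50(1)/(2) plumbing might look like (hypothetical envelope and field names; the Act prescribes outcomes, not a wire format — for images, video, and audio, embedded provenance metadata such as C2PA is the emerging convention):

```python
import json
from datetime import datetime, timezone

# Art.50(1): a persistent, visible disclosure string for the chat UI.
AI_DISCLOSURE = "You are chatting with an AI assistant."

def label_generated_text(text: str, model_id: str) -> str:
    """Wrap generated content in a machine-readable AI-generation label.

    Illustrative Art.50(2) envelope: the flag, the model identifier, and
    a generation timestamp travel with the content instead of living
    only in application logs.
    """
    return json.dumps({
        "content": text,
        "ai_generated": True,          # machine-readable flag
        "model_id": model_id,          # e.g. a Bedrock model identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

labeled = label_generated_text("Spring sale: 20% off!", "anthropic.claude-3-5-sonnet")
```

Keeping the label attached to the content (rather than derivable from logs you may later delete) also helps with the record-keeping point in Scenario 2 below: you can show which disclosure satisfied which obligation.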
## The Dual-Compliance Trap: GDPR + EU AI Act Interaction

This is the issue most compliance guides miss: GDPR and the EU AI Act create **overlapping obligations** that don't simply add up — they interact in complex ways.

**Scenario 1 — Deletion requests:** A user exercises GDPR Art.17 (right to erasure) and asks you to delete all their data. You can delete their records from your database. You cannot delete their prompts from Bedrock inference logs if you have logging enabled — and while API inference does not update model weights, RAG context stored on the user's behalf may persist and must be erased too.

**Scenario 2 — Art.50 disclosure + GDPR Art.13 information:** Your AI disclosure under Art.50 must be provided at first interaction. Your GDPR privacy notice under Art.13 must list all processors. Both must name Bedrock and describe the inference data flow. These can be combined, but the legal basis for each is different (EU AI Act compliance vs. GDPR transparency) — maintain separate records of what disclosure satisfies which obligation.

**Scenario 3 — Data subject access requests:** A user requests all data you hold about them (GDPR Art.15). This includes Bedrock inference logs if you retain them. If you use Bedrock with logging disabled but your application logs the request/response for debugging, those logs are in scope. If you use Knowledge Base / RAG with persistent user embeddings, the embedding vectors may need to be included in the response.

**Scenario 4 — Cross-border transfer legality:** Using Bedrock for EU user data constitutes a transfer to the US under GDPR Chapter V. The AWS EU Standard Contractual Clauses (SCCs) cover this. However, after the Privacy Shield invalidation history and ongoing Schrems III litigation risk, relying solely on SCCs exposes you to regulatory uncertainty.
The EDPB's 2023 additional safeguards recommendations suggest supplementary technical measures (encryption + key control outside the US) — which Bedrock's architecture does not support for inference data.

## AWS Bedrock Pricing for EU Regions (April 2026)

For context on the commercial cost of Bedrock in EU regions:

| Model | Input (per 1M tokens) | Output (per 1M tokens) | EU Region Available |
|-------|----------------------|------------------------|---------------------|
| Claude 3.5 Sonnet (Anthropic) | $3.00 | $15.00 | eu-west-1, eu-central-1 |
| Claude 3 Haiku (Anthropic) | $0.25 | $1.25 | eu-west-1, eu-central-1 |
| Llama 3.3 70B (Meta) | $0.72 | $0.72 | eu-west-1 |
| Mistral Large 2 | $2.00 | $6.00 | eu-west-1 |
| Titan Text Express | $0.20 | $0.60 | eu-west-1, eu-central-1 |

These prices are competitive. The compliance cost is separate and harder to quantify — legal review, DPA negotiation, Art.50 disclosure implementation, DPIA documentation.

## EU-Native AI Inference Alternatives for 2026

If Bedrock's jurisdictional exposure is unacceptable for your use case, these alternatives process inference data under EU jurisdiction:

**Scaleway Generative APIs (France):** Offers Mistral Large, Llama 3.x, and Qwen inference APIs from Paris (FR-PAR-1). Scaleway is a French company, subject to EU law only. Pricing comparable to Bedrock for open models. Best for: replacing Bedrock Mistral/Llama inference with zero jurisdictional exposure.

**OVHcloud AI Endpoints (France/Germany):** OpenAI-compatible API serving open models from OVH data centers. OVH is French, EU-jurisdiction. SOC 2 + ISO 27001 + HDS (French health data hosting certification). Best for: healthcare applications where French CNIL guidance applies.

**Mistral AI Platform (France):** Direct inference from the model creator. Mistral is a French company, training and serving from EU infrastructure. Mistral Large 2 is available directly. Pricing: comparable to AWS.
Best for: cutting out the middleman for Mistral-based applications.

**Aleph Alpha (Germany):** German AI company, models trained on EU data with EU-jurisdiction inference. Specialized in German/EU regulatory compliance. Luminous model family. Best for: German government applications or cases requiring DSGVO (German GDPR) alignment documentation.

**Self-hosted open models via sota.io:** Deploy Ollama, vLLM, or Hugging Face TGI on EU infrastructure. Full control of inference data with no third-party processing. Models: Llama 3.3 70B, Mistral 7B, Mixtral 8x7B, Phi-3. Trade-off: GPU cost + operational overhead vs. zero jurisdictional exposure.

## Practical Compliance Checklist for Bedrock Users

If you are currently using Bedrock for EU user data, here is the minimum viable compliance posture:

**Immediate (before next deployment):**

- [ ] Verify your AWS DPA covers Bedrock inference (not just S3/EC2 — ensure "AI services" are in scope)
- [ ] Document the legal basis for transferring EU personal data to Bedrock (SCCs + EDPB supplementary measures TIA)
- [ ] Disable Bedrock model invocation logging unless you have a retention justification and Art.13 disclosure

**Before 2 August 2026 (Art.50 deadline):**

- [ ] Add AI interaction disclosure to all user-facing interfaces that use Bedrock
- [ ] If using Bedrock for synthetic content generation, implement machine-readable labeling
- [ ] Update your privacy policy to name Bedrock as a processor (Art.13(1)(e) recipients disclosure)
- [ ] Complete a DPIA under GDPR Art.35 if your AI processing is "high risk" (large-scale processing of sensitive data, systematic profiling)

**For new AI features:**

- [ ] Run a Data Protection Impact Assessment (DPIA) before deploying Bedrock-powered features that process EU personal data at scale
- [ ] Evaluate whether an EU-native alternative (Scaleway, OVH, Mistral) achieves equivalent functionality with lower regulatory risk
- [ ] Add Bedrock API response logging to your incident response plan — if a
breach occurs, inference logs are in scope

## The Architecture Decision

Using Bedrock is not inherently illegal under GDPR — it is a legal basis + DPA + supplementary measures question. What Bedrock cannot provide, by design, is:

1. **Immunity from CLOUD Act compelled disclosure** (no contractual fix exists)
2. **Art.17 erasure of inferred model knowledge** (model weights cannot be targeted for deletion)
3. **Certainty over US government access history** (NSLs are secret by law)

For most SaaS applications with moderate data sensitivity, Bedrock with proper DPA and Art.50 disclosure is defensible. For healthcare, legal, financial, and government applications processing sensitive personal data at scale, the jurisdictional exposure warrants a cost-benefit analysis against EU-native alternatives.

The EU AI Act adds a new dimension: from August 2026, the compliance cost of Bedrock includes not just the GDPR engineering burden but also the Art.50 transparency infrastructure. If you are building that infrastructure anyway, evaluate whether it is cheaper to build it once on an EU-native stack.

---

*This guide covers the state of AWS Bedrock, GDPR, and EU AI Act obligations as of April 2026. Legal requirements are subject to change — verify current EDPB guidance and AWS DPA terms before making compliance decisions. This is not legal advice.*

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.