2026-03-31·6 min read·sota.io team

Deploy an AI Agent to Europe — EU Infrastructure for Claude and LangChain Agents

AI agents are no longer experimental. They run in production: processing user requests, storing conversation history, calling external APIs, and handling personal data. For European teams, this raises an immediate question most tutorials skip: where is the compute running?

A Claude agent or LangGraph workflow that handles EU user data is subject to GDPR — regardless of how it was built. If your agent runs on AWS us-east-1 or a US-region PaaS, you are processing EU personal data outside European jurisdiction.

This guide shows how to deploy Python-based AI agents to EU infrastructure using sota.io, with managed PostgreSQL for agent memory and no DevOps configuration required.

Why AI Agents Specifically Need EU Infrastructure

Most web apps can rely on cookie consent and a data processing agreement (DPA) to cover US-based hosting. AI agents are different because of how they process data:

Persistent memory: Agents store conversation history, user preferences, and extracted entities. This is personal data under GDPR Article 4 — it requires storage in EU-compliant infrastructure.

Unstructured data processing: Agents read documents, emails, and free-text inputs. The moment this includes names, addresses, or behavioral data, GDPR applies.

LLM API calls: Sending EU user data to OpenAI or Anthropic APIs is a third-country transfer. Your EU infrastructure layer does not exempt the LLM call, but it keeps your compute, memory, and logs in Europe — minimizing exposure.

For teams building agents on Claude Code, LangChain, or LangGraph: the infrastructure you deploy to is as important as the agent architecture itself.
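One practical way to reduce what crosses the LLM-API boundary is to strip obvious identifiers before the call leaves your EU infrastructure. A minimal sketch — the regex patterns here are illustrative only, not a complete PII filter:

```python
import re

# Illustrative patterns only -- real PII detection needs a dedicated library
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious identifiers before the text leaves EU infrastructure."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

Run this on user input before building the LLM prompt; the redacted version is what gets transferred, while the original stays in your EU-hosted database and logs.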

Deploy a LangGraph Agent to sota.io

LangGraph agents are Python applications with a requirements.txt. sota.io deploys them identically to any other Python app.

Step 1: Structure your agent as a web API

Wrap your LangGraph agent in a FastAPI or Flask endpoint:

# main.py
from fastapi import FastAPI
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent

app = FastAPI()
# ChatAnthropic reads ANTHROPIC_API_KEY from the environment
model = ChatAnthropic(model="claude-3-5-sonnet-20241022")
agent = create_react_agent(model, tools=[])

@app.post("/run")
async def run_agent(payload: dict):
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": payload["input"]}]}
    )
    return {"output": result["messages"][-1].content}
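In production you may prefer to validate the request body rather than accept a raw dict. A sketch using a Pydantic model — the field name `input` matches the endpoint above; the length limits are arbitrary examples:

```python
from pydantic import BaseModel, Field

class RunRequest(BaseModel):
    # Reject empty or oversized inputs before they reach the agent
    input: str = Field(min_length=1, max_length=8000)

# In the endpoint signature this replaces the raw dict:
#   async def run_agent(payload: RunRequest): ...
```

FastAPI then returns a 422 automatically for malformed payloads instead of raising a KeyError inside the agent.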

Step 2: Add a Procfile

web: uvicorn main:app --host 0.0.0.0 --port $PORT

Step 3: Deploy to EU infrastructure

npm install -g sota-cli
sota auth login
sota deploy

sota.io detects Python, reads requirements.txt, and deploys to Frankfurt in under 60 seconds. Your agent is live at a *.sota.app domain with TLS.

Agent Memory with Managed PostgreSQL

Stateful agents need persistent memory. The most common pattern is storing conversation history and extracted context in PostgreSQL.

Provision a managed EU database in one command:

sota db create --type postgres

sota.io injects DATABASE_URL into your environment. Connect from your agent:

import os
from sqlalchemy import create_engine

engine = create_engine(os.environ["DATABASE_URL"])

The PostgreSQL instance runs in Frankfurt — the same datacenter as your agent. No cross-region latency, no separate database service to manage, and all personal data stays within EU jurisdiction.
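A minimal conversation-history schema, sketched with stdlib sqlite3 as a local stand-in — in production the same access pattern runs against the managed PostgreSQL via DATABASE_URL, and the table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # local stand-in for the managed PostgreSQL
conn.execute("""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        session_id TEXT NOT NULL,
        role TEXT NOT NULL,
        content TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def save_message(session_id: str, role: str, content: str) -> None:
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )

def load_history(session_id: str) -> list[tuple[str, str]]:
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE session_id = ? ORDER BY id",
        (session_id,),
    )
    return list(rows)
```

On PostgreSQL the DDL differs slightly (SERIAL, TIMESTAMPTZ, %s placeholders with psycopg), but the save/load pattern carries over unchanged.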

For LangGraph agents using the built-in checkpoint system:

from langgraph.checkpoint.postgres import PostgresSaver

# In current langgraph releases, from_conn_string returns a context manager;
# setup() creates the checkpoint tables on first run. For a long-running
# service, enter the context in a startup/lifespan hook instead.
with PostgresSaver.from_conn_string(os.environ["DATABASE_URL"]) as checkpointer:
    checkpointer.setup()
    agent = create_react_agent(model, tools=[], checkpointer=checkpointer)

This gives your agent persistent conversation memory, with all checkpoint data stored in EU infrastructure alongside the rest of your stack.

Deploy a Claude Code Agent to sota.io

Claude Code agents — autonomous coding and task agents built on Anthropic's Agent SDK — follow the same deployment pattern.

# agent.py
import os

import anthropic
from fastapi import FastAPI

app = FastAPI()
# AsyncAnthropic avoids blocking the event loop inside async endpoints
client = anthropic.AsyncAnthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

@app.post("/task")
async def run_task(payload: dict):
    response = await client.messages.create(
        model="claude-opus-4-6",
        max_tokens=4096,
        messages=[{"role": "user", "content": payload["task"]}],
    )
    return {"result": response.content[0].text}

Set your API key as an environment variable:

sota env set ANTHROPIC_API_KEY=sk-ant-...

Deploy:

sota deploy

Your Claude agent runs on EU infrastructure, with all logs and processing staying in Europe.

Environment Variables and Secrets

AI agents typically need multiple API keys. sota.io handles these as encrypted environment variables:

sota env set ANTHROPIC_API_KEY=sk-ant-...
sota env set OPENAI_API_KEY=sk-...
sota env set LANGSMITH_API_KEY=ls__...

Variables are injected at runtime and never stored in your code or container image.
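Because keys arrive only at runtime, a fail-fast check at startup catches a forgotten sota env set before the first request does. A small sketch — the key names match those used above:

```python
import os

REQUIRED_KEYS = ["ANTHROPIC_API_KEY"]

def check_env(required: list[str] = REQUIRED_KEYS) -> None:
    """Raise at startup if any required secret was not injected."""
    missing = [key for key in required if not os.environ.get(key)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```

Call check_env() at module import time so a misconfigured deploy fails immediately with a clear message instead of erroring on the first LLM call.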

Comparison: AI Agent Hosting in Europe

| Feature | sota.io | Railway | Render | AWS Lambda |
|---|---|---|---|---|
| Default EU region | Yes (Frankfurt) | No (opt-in) | No (opt-in) | No (opt-in) |
| Managed PostgreSQL (agent memory) | Included | Extra cost | Extra cost | Not included |
| Python / FastAPI support | Yes | Yes | Yes | Requires Docker |
| Environment variable encryption | Yes | Yes | Yes | Yes |
| GDPR DPA available | Yes | Yes (US entity) | No | Yes (complex) |
| No Dockerfile required | Yes | Yes | Yes | No |

For EU teams, the key difference is the default: sota.io runs in Frankfurt without configuration. Other platforms require deliberate opt-in to EU regions — a step that is easy to miss in agent deployment workflows.

What This Looks Like in Production

A typical EU-compliant AI agent stack on sota.io:

  1. Agent API — FastAPI + LangGraph, deployed as a sota.io service in Frankfurt
  2. Agent memory — sota.io managed PostgreSQL with LangGraph checkpointer
  3. LLM calls — Claude or GPT-4 API (third-country transfer, covered by Standard Contractual Clauses)
  4. Logs — stored in sota.io's EU infrastructure, accessible via sota logs

This architecture keeps all personal data processing and storage within the EU. The only third-country exposure is the LLM API call itself — the same situation regardless of where you host.

Getting Started

npm install -g sota-cli
sota auth login
sota deploy

Your AI agent is running in Frankfurt in under two minutes.



Start deploying on sota.io → free tier available, no credit card required.