AWS ECS EU Alternative 2026: Container Orchestration, Task Definitions, and GDPR Under US Jurisdiction
Post #691 in the sota.io EU Compliance Series
AWS Elastic Container Service (ECS) is the native AWS container orchestration layer, offering two launch types: EC2-backed clusters where you manage the underlying instances, and Fargate where AWS manages the compute. ECS integrates tightly with the rest of the AWS ecosystem — IAM for access control, CloudWatch for logs and metrics, Secrets Manager for sensitive configuration, Service Connect and Cloud Map for service discovery, and Application Load Balancer for traffic routing.
The compliance challenge with ECS is not the containers themselves — it is the orchestration layer that surrounds them. Task definitions encode your application architecture. Container logs flow through CloudWatch. Service configurations reveal your scaling behavior and operational patterns. ECS Anywhere extends AWS's control plane reach to your own data centers. In each case, the data generated by running your workloads accumulates in AWS infrastructure under US jurisdiction.
This article examines what AWS ECS stores and processes, the GDPR and regulatory implications of running containerized workloads on ECS, and the EU-native alternatives for organizations that need container orchestration without US-entity control over their operational data.
What AWS ECS Stores and Processes
ECS is a control-plane service — it schedules containers, maintains desired state, and coordinates the supporting services that make containers useful in production. Each of these functions generates data that resides in AWS infrastructure.
Task definitions as application architecture. A task definition specifies which container images to run, their CPU and memory allocations, environment variables, secrets references, networking mode, and IAM execution role. Task definitions are revision-controlled by ECS — every change creates a new revision. This means AWS stores not just your current configuration but your complete task definition history. That history reveals your application's evolution: which images were deployed at what time, which secrets they accessed, how resources were allocated. For regulated organizations, task definition history is operational data with potential audit significance, stored under AWS control.
Environment variables and secrets references. Task definitions can inline environment variables directly — a pattern that, while discouraged for sensitive values, remains in use. They reference secrets from AWS Secrets Manager or SSM Parameter Store, which means the task definition metadata records which secrets your application needs, even if it does not store the secret values themselves. The association between a task and its required secrets is itself sensitive configuration data: it reveals what credentials your application depends on and therefore what systems it accesses.
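To make this concrete, the exported JSON of a task definition can be scanned for both patterns — inline environment variables and secrets references. A minimal Python sketch, assuming a taskDefinition object as returned by aws ecs describe-task-definition (the names and ARN below are hypothetical):

```python
def audit_task_definition(td: dict) -> dict:
    """Summarize inline environment variables and Secrets Manager/SSM
    references in an ECS task definition export (the 'taskDefinition'
    object from describe-task-definition)."""
    report = {"inline_env": [], "secret_refs": []}
    for container in td.get("containerDefinitions", []):
        name = container["name"]
        for env in container.get("environment", []):
            report["inline_env"].append((name, env["name"]))
        for secret in container.get("secrets", []):
            report["secret_refs"].append((name, secret["valueFrom"]))
    return report

# Trimmed task definition export (hypothetical values)
td = {
    "family": "your-api",
    "containerDefinitions": [{
        "name": "api",
        "environment": [{"name": "LOG_LEVEL", "value": "info"}],
        "secrets": [{
            "name": "DATABASE_URL",
            "valueFrom": "arn:aws:secretsmanager:eu-central-1:123456789012:secret:db-url",
        }],
    }],
}
print(audit_task_definition(td))
```

Even this trimmed example shows the point: the metadata alone records which secrets the workload depends on.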
CloudWatch Logs for container output. The standard ECS logging driver sends container stdout/stderr to Amazon CloudWatch Logs. Application logs frequently contain user identifiers, request parameters, IP addresses, error messages with PII fragments, and other data that constitutes personal data under GDPR Art.4(1). Every container log line sent to CloudWatch is processed and stored in AWS infrastructure, with AWS as a GDPR processor. For Fargate tasks, the awslogs driver (CloudWatch) is the default; routing logs anywhere else requires a FireLens (Fluent Bit or Fluentd) sidecar, because there is no host-level access to configure syslog or other log drivers directly. Fargate's ephemeral task storage disappears with the task; persistent logging requires an external destination, and CloudWatch is the path of least resistance.
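Whether your container logs actually carry personal data is an empirical question you can check before migrating. A minimal sketch that flags log lines matching common identifier patterns (the patterns and sample lines are illustrative, not a complete PII detector):

```python
import re

# Hypothetical patterns; a real DPIA scan would cover many more identifier types.
PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_personal_data(lines):
    """Return (line_number, kind) pairs for log lines that look like
    they contain GDPR Art.4(1) personal data."""
    hits = []
    for i, line in enumerate(lines, 1):
        for kind, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((i, kind))
    return hits

logs = [
    "GET /profile 200 in 12ms",
    "login failed for alice@example.com from 203.0.113.7",
]
print(flag_personal_data(logs))
```

Running a scan like this over an exported log group is a quick way to scope the Art.30 entry for container logging.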
Container Insights metrics. Amazon CloudWatch Container Insights provides cluster-level and task-level metrics: CPU utilization per container, memory consumption per task, network throughput, storage read/write rates. It is enabled with a cluster-level setting and covers both EC2 and Fargate launch types. These metrics reveal the behavioral pattern of each container — its typical load, its spikes, its failure modes. At scale, this performance profile is operational intelligence about your application stored in AWS's telemetry infrastructure.
ECS Service Connect and Cloud Map. ECS Service Connect provides service-to-service discovery within a cluster using AWS Cloud Map as the underlying DNS registry. Cloud Map stores your service namespace, service names, and endpoint metadata. Service names often mirror internal microservice names — revealing your application's internal architecture to the cloud provider. Service Connect metrics (request count, error rate, latency per service pair) are emitted to CloudWatch, creating a service dependency map under AWS's observability infrastructure.
ECS Anywhere: on-premises workloads under AWS control plane. ECS Anywhere extends ECS to customer-managed infrastructure — on-premises servers, co-location facilities, or non-AWS clouds. The ECS agent running on ECS Anywhere instances communicates with the AWS ECS control plane over HTTPS: registering instances, receiving task placement decisions, reporting task status. Even if the containers run on your own hardware in your own data center, the orchestration instructions and status updates flow through AWS's US-jurisdiction control plane. Task placement decisions — which container runs on which host — are made by AWS's scheduler infrastructure, not your own.
CloudTrail ECS API events. All ECS control plane operations are logged in CloudTrail: ecs:RunTask, ecs:UpdateService, ecs:StopTask, ecs:RegisterTaskDefinition, ecs:DeleteCluster. These events record the IAM principal performing each action, the source IP, and the parameters. In organizations where developers manually trigger tasks — aws ecs run-task for one-off jobs or debugging — the CloudTrail record ties individual IAM identities to specific container executions at specific times. This is personal data: a log of what a named individual ran and when.
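Parsing such a record is straightforward; the sketch below pulls the identity-relevant fields out of a trimmed CloudTrail event (the field names follow CloudTrail's standard record format, the values are hypothetical):

```python
def summarize_ecs_event(event: dict) -> dict:
    """Extract the fields that tie an IAM identity to a container run:
    who acted, from where, what they ran, and when."""
    return {
        "who": event["userIdentity"].get("arn"),
        "source_ip": event.get("sourceIPAddress"),
        "action": event.get("eventName"),
        "when": event.get("eventTime"),
    }

# Trimmed CloudTrail record for a manual run-task (hypothetical values)
event = {
    "eventTime": "2026-01-15T09:30:00Z",
    "eventName": "RunTask",
    "sourceIPAddress": "198.51.100.24",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/jane.doe"},
}
print(summarize_ecs_event(event))
```

Four fields are enough to make the point: the record is a timestamped log of what a named individual ran, which is itself personal data.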
Fargate's abstracted jurisdiction risk. AWS Fargate removes the need to manage EC2 instances, but it does not remove the jurisdictional footprint. Fargate compute runs on AWS-managed infrastructure. Task network interfaces (ENIs) are provisioned in your VPC but managed by AWS. VPC flow logs for Fargate task ENIs record the network traffic pattern of your containers. For organizations that chose Fargate specifically to avoid infrastructure management complexity, the trade-off is deeper reliance on AWS-managed compute — no SSH access, no host-level visibility — combined with full US-jurisdiction control of the compute environment.
GDPR Analysis of AWS ECS
Container logs as primary personal data risk. The most direct GDPR exposure from ECS is the default logging configuration. Applications running in ECS that log user requests, session identifiers, error messages, or any user-correlated data are sending that personal data to CloudWatch Logs — a US-entity-controlled service. Under GDPR Art.28, AWS acts as a processor. The controller (your organization) must ensure the processing agreement covers this data flow. Third-country transfers to the US require Art.46 safeguards. The CLOUD Act means AWS can be compelled to produce CloudWatch logs for ECS tasks in response to US law enforcement requests — without notifying the data subject or the controller.
Task definition history as sensitive operational data. The historical record of task definitions in ECS reveals your deployment cadence, your secret dependencies, your resource allocation choices, and the trajectory of your application's configuration. For organizations in regulated sectors, this history may be relevant to audit requests. Its storage under AWS's control — outside the organization's direct custody — creates a gap in the chain of evidence for compliance purposes.
ECS Anywhere and the hybrid jurisdiction problem. ECS Anywhere creates a particularly complex GDPR situation. An organization may use ECS Anywhere precisely because they want containers running in their EU data center — maintaining data residency for the container workload. But the ECS control plane coordinating those containers resides in AWS's US-jurisdiction infrastructure. Task placement commands originate from AWS. Task status is reported to AWS. The container runtime is local, but the orchestration is US-based. A GDPR Data Protection Impact Assessment for ECS Anywhere must account for this architectural split: the data stays local, but the control signals traverse the AWS control plane.
IAM execution roles and credential exposure. ECS tasks assume IAM execution roles to pull secrets from Secrets Manager and to call other AWS services. The credentials are issued by AWS STS (Security Token Service). Short-lived credentials are fetched from the ECS agent, which communicates with AWS's internal metadata endpoint. The IAM permission boundary for each task — what AWS services and resources each container can access — is defined in AWS's IAM infrastructure. Changes to execution role permissions are logged in CloudTrail. The complete picture of what each container is authorized to do is stored and managed by a US-entity.
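For reference, tasks obtain these short-lived credentials from the agent's link-local endpoint, addressed via the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable that ECS injects into every task. A sketch of how the URL is formed and the shape of the response (all values are placeholders):

```python
import json
from typing import Optional

# Link-local address served by the ECS agent inside the task.
CREDS_HOST = "http://169.254.170.2"

def credentials_url(environ) -> Optional[str]:
    """Build the task credentials endpoint URL from the env var the
    ECS agent injects into every task (None outside ECS)."""
    relative = environ.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
    return CREDS_HOST + relative if relative else None

# Shape of the agent's JSON response (placeholder values):
sample = json.loads("""
{
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "placeholder",
  "Token": "placeholder",
  "Expiration": "2026-01-15T10:00:00Z"
}
""")

url = credentials_url({"AWS_CONTAINER_CREDENTIALS_RELATIVE_URI": "/v2/credentials/abc-123"})
print(url, sample["Expiration"])
```

Every credential your containers use is minted by, and expires under the control of, AWS STS — not your own identity infrastructure.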
NIS2, DORA, and CRA Implications
NIS2 Art.21(2)(j): Use of secure development and production environments. NIS2 requires essential and important entities to implement security measures covering the security of network and information systems, including supply chain and development environments. Container orchestration sits at the intersection of development and production: ECS runs the containers built by your CI/CD pipeline and deployed to your production environment. If NIS2 auditors examine your container orchestration infrastructure as part of a supply chain security review, the fact that task placement, service configuration, and operational logging are controlled by a US-entity cloud provider is a supply chain dependency that requires risk assessment and documentation.
DORA Art.28: ICT third-party risk management. For financial entities in scope of the Digital Operational Resilience Act (applicable since 17 January 2025), ECS represents an ICT third-party dependency at the infrastructure layer. DORA requires financial entities to maintain a register of ICT third-party service providers, assess concentration risk, and ensure exit strategies exist. ECS creates architectural lock-in: task definitions use ECS-specific JSON format, Service Connect uses ECS-native service discovery, Fargate does not have a direct equivalent in non-AWS container platforms. DORA's ICT concentration risk provisions (Art.29) specifically address the risk of over-reliance on a single cloud provider. Organizations running primary container workloads on ECS should document the exit path to self-hosted Kubernetes or EU-native orchestration in their DORA third-party risk register.
CRA Art.13: SBOM and software composition tracking. The Cyber Resilience Act requires manufacturers of products with digital elements to identify and document software components. Container orchestration on ECS integrates with the broader software supply chain: task definitions reference container images (from ECR or other registries), specify the version tags in use, and record when new versions were deployed via service update events. This deployment history, stored in ECS and CloudTrail, is part of the evidence chain for CRA compliance. Retaining that chain under EU-controlled infrastructure — rather than AWS — strengthens the audit defensibility of your SBOM documentation.
EU-Native Container Orchestration Alternatives
Option 1: Self-Hosted Kubernetes on EU Infrastructure
Kubernetes is the industry-standard container orchestration platform. Self-hosting Kubernetes on EU infrastructure — Hetzner (Germany), Scaleway (France), OVHcloud (France), or sota.io — gives you full control over the orchestration layer: the API server, etcd (cluster state store), scheduler, and controllers all run on your infrastructure.
The Kubernetes equivalents for ECS concepts:
- Task definitions → Kubernetes Deployments, StatefulSets, Jobs — stored in etcd on your cluster
- ECS Services → Kubernetes Deployments with HPA (Horizontal Pod Autoscaler)
- ECS Service Connect → Kubernetes Services + CoreDNS + optional service mesh (Linkerd, Cilium)
- ECS Fargate → managed Kubernetes node pools (Hetzner Managed Kubernetes, Scaleway Kapsule, OVHcloud Managed Kubernetes)
- CloudWatch Logs → Fluentd/Fluent Bit → self-hosted OpenSearch or Loki + Grafana
- Container Insights → Prometheus + Grafana (kube-state-metrics, node-exporter, cAdvisor)
- Cloud Map → Kubernetes DNS (CoreDNS) + Kubernetes Services
The operational overhead of self-managed Kubernetes is real — upgrades, etcd backups, control plane HA, certificate rotation. EU-based managed Kubernetes services (Hetzner Managed Kubernetes, Scaleway Kapsule, OVH Managed Kubernetes Service) provide managed control planes under EU-entity jurisdiction, reducing operational burden while maintaining jurisdictional control.
Jurisdictional status: Self-hosted on EU infrastructure of your choice. Zero US-entity involvement in orchestration.
Option 2: HashiCorp Nomad (Self-Hosted)
Nomad is HashiCorp's container orchestration platform, simpler than Kubernetes while supporting containers (Docker, containerd, Podman), VMs, and raw executables in a single scheduler. For organizations that find Kubernetes's operational complexity excessive for their workload, Nomad offers a lower-overhead alternative.
Key Nomad capabilities relevant to ECS migration:
- Job specifications (HCL format) replace ECS task definitions — stored in Nomad's own distributed state store (Raft)
- Nomad Service Discovery replaces Cloud Map/Service Connect
- Nomad's integration with HashiCorp Vault provides secrets management equivalent to ECS's Secrets Manager integration — with Vault self-hosted on EU infrastructure
- Nomad Variables store non-sensitive configuration (equivalent to SSM Parameter Store)
- Nomad ACL system provides IAM-equivalent access control without AWS IAM dependency
Nomad clusters can be deployed on Hetzner, Scaleway, or any EU-jurisdiction infrastructure. The entire orchestration layer — scheduler, state store, service discovery — runs on your machines. No external cloud control plane. Operational data (job definitions, allocation logs, metrics) stays within your infrastructure.
For organizations migrating from ECS, the conceptual mapping is closer than ECS-to-Kubernetes: Nomad jobs map to ECS task definitions more directly, and Nomad's operational model is closer to ECS's simplicity than Kubernetes's feature density.
Jurisdictional status: Self-hosted on EU infrastructure. Zero US-entity cloud control plane.
Option 3: EU-Native Managed PaaS (sota.io, Railway, Fly.io EU)
For development teams that adopted ECS primarily for its managed convenience — not for Kubernetes-level feature richness — EU-native PaaS platforms provide the abstraction without the jurisdictional footprint.
sota.io is an EU-native PaaS built on European infrastructure. Container deployments on sota.io run on EU-jurisdiction hardware. There is no AWS, Azure, or GCP infrastructure in the data path. Logs, metrics, and deployment metadata stay within EU infrastructure under EU-entity control. For GDPR compliance, the processor relationship is with a European entity operating under GDPR directly.
Railway allows EU-region deployments for container workloads, though Railway itself is incorporated in the US. For organizations where legal entity jurisdiction matters (CLOUD Act exposure), Railway's US incorporation creates the same category of risk as AWS even with EU data residency.
Fly.io has EU-region infrastructure and allows workloads to run in European regions, but Fly.io is a US-incorporated entity. The same legal jurisdiction analysis applies.
For organizations where CLOUD Act risk is the driver, the relevant criterion is not data residency but legal entity jurisdiction. Only EU-incorporated providers operating EU-only infrastructure eliminate the CLOUD Act risk category.
Jurisdictional status (sota.io): EU entity, EU infrastructure, GDPR-direct controller relationship.
Option 4: Docker Swarm (Self-Hosted)
Docker Swarm is Docker's native clustering and orchestration mode, significantly simpler than Kubernetes but sufficient for many production workloads. For organizations that run containerized applications without complex service mesh requirements, Swarm provides a lightweight self-hosted option.
Swarm's key characteristics for ECS migration:
- Service definitions (Docker Compose format with deploy stanzas) replace ECS task definitions
- Swarm services with replicas replace ECS service desired count
- Swarm's built-in DNS provides service discovery (equivalent to Cloud Map for simple cases)
- Docker logs stay local to the host — no CloudWatch dependency
- Swarm state is stored in the Raft consensus across manager nodes — all on your infrastructure
Swarm's limitations are real: no built-in horizontal pod autoscaling, limited secrets management (Docker secrets, not Vault-grade), and no equivalent to ECS Anywhere for hybrid deployments. For teams needing those capabilities, Nomad or Kubernetes is a better fit. For teams that want simple "run N replicas of this container on these machines," Swarm on Hetzner dedicated servers is a legitimate production architecture.
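As a sketch of that simple model, a Compose file with a deploy stanza covers the common case (the service name and registry below are hypothetical):

```yaml
# docker-compose.yml — deployed to a Swarm with:
#   docker stack deploy -c docker-compose.yml your-stack
version: "3.8"
services:
  api:
    image: registry.yourdomain.eu/your-api:v1.2.3
    ports:
      - "8080:8080"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1   # rolling update, one replica at a time
        delay: 10s
      restart_policy:
        condition: on-failure
```

Note that the deploy stanza is ignored by plain docker compose up; it takes effect only under docker stack deploy against a Swarm.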
Jurisdictional status: Self-hosted on EU infrastructure. Zero external cloud dependency.
Comparison Table
| Dimension | AWS ECS | Self-Hosted K8s | Nomad | sota.io | Docker Swarm |
|---|---|---|---|---|---|
| Jurisdiction | US (CLOUD Act) | Your EU infra | Your EU infra | EU entity | Your EU infra |
| Task definitions | AWS-controlled (history retained) | etcd on your cluster | Nomad state on your cluster | Platform-managed EU | Swarm Raft on your cluster |
| Container logs | CloudWatch (US-entity) | Self-hosted Loki/OpenSearch | Self-hosted | EU-hosted | Local Docker logs |
| Service discovery | Cloud Map (US-entity) | CoreDNS (self-hosted) | Nomad DNS (self-hosted) | Platform-managed | Docker Swarm DNS |
| Secrets | Secrets Manager (US-entity) | Vault / external-secrets | Vault / Nomad Variables | Platform-managed | Docker Secrets |
| Fargate equivalent | Yes (AWS Fargate) | Managed K8s node groups | No (you manage nodes) | Yes (managed compute) | No |
| Hybrid deployments | ECS Anywhere (US control plane) | Cluster API, k3s | Nomad multi-region | N/A | Multi-host Swarm |
| Operational complexity | Low (managed) | High | Medium | Low (managed) | Low |
| CLOUD Act exposure | Yes | No | No | No | No |
| Kubernetes compatible | No | Yes | No (own job format) | Yes (Kubernetes backend) | No |
Migration Strategy
Step 1: Audit Your ECS Workloads
Inventory your ECS clusters, services, and task definitions to understand the scope:
```shell
# List all ECS clusters
aws ecs list-clusters --region eu-central-1

# List services per cluster
aws ecs list-services --cluster your-cluster-name --region eu-central-1

# Export a task definition to understand the application configuration
aws ecs describe-task-definition \
  --task-definition your-task-definition-name \
  --region eu-central-1 \
  --query 'taskDefinition' > task-definition-export.json

# Check CloudWatch Log Groups for ECS tasks
aws logs describe-log-groups \
  --log-group-name-prefix /ecs/ \
  --region eu-central-1 \
  --query 'logGroups[*].logGroupName'
```
Categorize workloads by: data sensitivity (containers processing personal data vs. infrastructure services), operational complexity (simple stateless APIs vs. stateful workloads), and AWS integration depth (workloads that call other AWS services vs. self-contained containers).
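One way to make the third axis measurable is a rough coupling score over the exported task definitions. A sketch (the field names follow the ECS task definition schema; the sample exports are hypothetical):

```python
def aws_integration_depth(td: dict) -> int:
    """Rough score of AWS coupling in a task definition export:
    +1 for a task IAM role, +1 per secrets reference,
    +1 for the awslogs log driver."""
    score = 1 if td.get("taskRoleArn") else 0
    for c in td.get("containerDefinitions", []):
        score += len(c.get("secrets", []))
        if c.get("logConfiguration", {}).get("logDriver") == "awslogs":
            score += 1
    return score

# Hypothetical exports: a self-contained worker vs. an AWS-coupled API
worker = {"containerDefinitions": [{"name": "worker"}]}
api = {
    "taskRoleArn": "arn:aws:iam::123456789012:role/api-task",
    "containerDefinitions": [{
        "name": "api",
        "secrets": [{"name": "DB", "valueFrom": "arn:aws:secretsmanager:eu-central-1:123456789012:secret:db"}],
        "logConfiguration": {"logDriver": "awslogs"},
    }],
}
print(aws_integration_depth(worker), aws_integration_depth(api))
```

Low-scoring workloads are good first candidates for migration; high scores flag containers whose AWS service calls need replacement planning.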
Step 2: Convert Task Definitions to Kubernetes Manifests or Nomad Jobs
For Kubernetes migration, the Kompose tool converts Docker Compose files to Kubernetes manifests. ECS task definitions are not Docker Compose format, but the concepts map:
```yaml
# ECS task definition container → Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-api
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: your-api
  template:
    metadata:
      labels:
        app: your-api   # must match the selector above
    spec:
      containers:
        - name: api
          image: registry.yourdomain.eu/your-api:v1.2.3
          resources:
            requests:
              cpu: "250m"      # ECS: 256 CPU units (1024 units = 1 vCPU)
              memory: "512Mi"  # ECS: 512 MiB → K8s: 512Mi
            limits:
              cpu: "500m"      # ECS: 512 CPU units
              memory: "1Gi"
          envFrom:
            - secretRef:
                name: your-api-secrets  # replaces the Secrets Manager reference
          ports:
            - containerPort: 8080
```
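Note the resource conversion: ECS allocates CPU in units of 1/1024 vCPU, while Kubernetes millicores are 1/1000 vCPU, so 256 ECS units is strictly 250m (often rounded to 256m in practice). A small helper for bulk-converting task definitions:

```python
def ecs_cpu_to_k8s(cpu_units: int) -> str:
    """ECS CPU units (1024 = 1 vCPU) → Kubernetes millicores (1000m = 1 vCPU)."""
    return f"{round(cpu_units * 1000 / 1024)}m"

def ecs_memory_to_k8s(mib: int) -> str:
    """ECS memory is in MiB; Kubernetes accepts the Mi suffix directly."""
    return f"{mib}Mi"

print(ecs_cpu_to_k8s(256), ecs_memory_to_k8s(512))
```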
For Nomad migration, the job spec HCL format is more readable than Kubernetes YAML for teams familiar with ECS task definitions:
```hcl
# Nomad job spec equivalent to an ECS task definition
job "your-api" {
  datacenters = ["eu-west"]
  type        = "service"

  group "api" {
    count = 2

    network {
      port "http" {
        to = 8080   # container port; required for the ports reference below
      }
    }

    task "api" {
      driver = "docker"

      config {
        image = "registry.yourdomain.eu/your-api:v1.2.3"
        ports = ["http"]
      }

      resources {
        cpu    = 256  # MHz
        memory = 512  # MiB
      }

      # Render secrets from self-hosted Vault into the task environment
      template {
        data        = <<EOF
{{ with secret "secret/your-api" }}
DATABASE_URL={{ .Data.data.database_url }}
API_KEY={{ .Data.data.api_key }}
{{ end }}
EOF
        destination = "secrets/env"
        env         = true
      }
    }
  }
}
```
Step 3: Replace CloudWatch Logs with Self-Hosted Observability
The most impactful GDPR improvement is redirecting container logs away from CloudWatch. For Kubernetes, Fluent Bit runs as a DaemonSet:
```yaml
# Fluent Bit DaemonSet sends logs to self-hosted Loki
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      k8s-app: fluent-bit
  template:
    metadata:
      labels:
        k8s-app: fluent-bit   # must match the selector above
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:latest
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```
Configure Fluent Bit to output to Grafana Loki running on your EU infrastructure. For Nomad, configure the log driver in the task spec to ship to your logging endpoint directly.
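A sketch of the corresponding output stanza, assuming Fluent Bit's built-in loki output plugin and a hypothetical internal Loki hostname:

```ini
# Ship Kubernetes container logs to self-hosted Loki
[OUTPUT]
    Name    loki
    Match   kube.*
    Host    loki.internal.yourdomain.eu
    Port    3100
    Labels  job=fluent-bit, cluster=eu-prod
```

With this in place, no container log line leaves your EU infrastructure.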
Step 4: Replace Container Insights with Prometheus and Grafana
Deploy kube-prometheus-stack on your Kubernetes cluster:
```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

helm install kube-prometheus-stack \
  prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword=your-secure-password \
  --set prometheus.prometheusSpec.retention=30d
```
This provides Prometheus metrics for CPU, memory, network, and storage per container — equivalent to Container Insights — running entirely on your infrastructure. No metrics leave your cluster.
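The Container Insights views map onto standard cAdvisor metrics exposed by the kubelet; for example (queries are a starting point, not tuned dashboards):

```promql
# CPU usage per container (cores) — the Container Insights CPU utilization analogue
sum by (namespace, pod, container) (
  rate(container_cpu_usage_seconds_total{container!=""}[5m])
)

# Working-set memory per container — the memory consumption analogue
container_memory_working_set_bytes{container!=""}
```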
Step 5: Replace ECS Service Connect with Kubernetes Services and CoreDNS
Kubernetes services provide automatic DNS-based service discovery through CoreDNS:
```yaml
# Kubernetes Service replaces ECS Service Connect
apiVersion: v1
kind: Service
metadata:
  name: your-api
  namespace: production
spec:
  selector:
    app: your-api
  ports:
    - port: 80
      targetPort: 8080
```
Inside the cluster, containers reach your-api.production.svc.cluster.local — no Cloud Map dependency. For inter-service observability, Cilium or Linkerd provide service mesh capabilities equivalent to ECS Service Connect's latency metrics and circuit breaking.
What This Means for Your GDPR Article 30 Record
Migrating container orchestration from ECS to self-hosted Kubernetes or Nomad changes the Art.30 record of processing substantially:
Before (ECS with CloudWatch Logs):
- Controller: Your organization
- Personal data: Container logs containing user identifiers, request data, IP addresses
- Recipients: Amazon Web Services, Inc. (US processor, DPA + SCCs)
- Third-country transfers: USA under Standard Contractual Clauses
- Residual risk: CLOUD Act exposure for container logs, task definition history, operational metadata
After (Self-Hosted Kubernetes + Loki on EU infra):
- Controller: Your organization
- Personal data: Container logs — stored in self-hosted Loki on your EU infrastructure
- Recipients: EU-infrastructure provider (Hetzner/Scaleway/OVHcloud as infrastructure processor only)
- Third-country transfers: None
- Residual risk: Operational (cluster availability, backup) — not jurisdictional
For organizations preparing DPIAs for their container infrastructure, removing the ECS control plane and CloudWatch from the processing chain eliminates the most complex Art.46 transfer in a typical containerized application stack.
Conclusion
AWS ECS's primary GDPR risk is not the compute — it is the orchestration data layer that surrounds your containers: task definition history that encodes your application architecture, CloudWatch Logs that capture personal data from container output, Container Insights metrics that profile your workloads, and ECS Anywhere that keeps on-premises containers tethered to AWS's US-jurisdiction control plane.
Self-hosted Kubernetes on EU infrastructure is the complete feature-equivalent replacement: it matches ECS's capabilities for production workloads while eliminating the US-entity control plane. Nomad is the operationally simpler alternative for teams that find Kubernetes's complexity excessive. EU-native PaaS platforms like sota.io provide managed convenience without the CLOUD Act footprint.
The broader AWS compliance picture continues: containers orchestrated by ECS run images from AWS ECR, access credentials from AWS Secrets Manager, write logs to AWS CloudWatch, and operate under AWS IAM roles. Container orchestration is the runtime layer — the point where all these dependencies converge in production. Migrating it to EU-jurisdiction infrastructure addresses not just the orchestration layer's own GDPR exposure, but the entire application runtime's connection to US-entity infrastructure.
See Also
- AWS EKS EU Alternative 2026 — ECS's sibling orchestrator: managed Kubernetes with the same CLOUD Act exposure via AWS-controlled etcd and KMS key custody
- AWS ECR EU Alternative 2026 — container images that ECS pulls are stored in ECR under US jurisdiction; Harbor on EU infra is the replacement
- AWS Secrets Manager EU Alternative 2026 — credentials injected into ECS tasks at runtime via Secrets Manager remain under US-entity control
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.