2026-04-29

AWS EKS EU Alternative 2026: Managed Kubernetes, etcd Under US Jurisdiction, and the CLOUD Act Risk

Post #692 in the sota.io EU Compliance Series

Amazon Elastic Kubernetes Service (EKS) is Amazon's managed Kubernetes offering: AWS provisions and operates the Kubernetes control plane — the API server, etcd, the controller manager, and the scheduler — while customers manage worker nodes or delegate node management to AWS Fargate. EKS has become the dominant enterprise Kubernetes deployment model because it eliminates the most operationally demanding part of running Kubernetes: keeping the control plane healthy, patched, and highly available.

The compliance challenge with EKS is precisely what makes it operationally attractive. The managed control plane that AWS operates on your behalf stores the complete state of your Kubernetes cluster — every Pod definition, every Deployment spec, every Secret, every ConfigMap, every RBAC binding — in etcd running on AWS-controlled infrastructure under US-entity jurisdiction. EKS Anywhere extends this model to on-premises clusters, connecting your data-center Kubernetes to AWS-managed services. IRSA (IAM Roles for Service Accounts) issues AWS credentials directly to Kubernetes pods via AWS STS.

This article examines what AWS EKS stores and processes, the GDPR and regulatory implications of running managed Kubernetes under US-entity control, and the EU-native managed Kubernetes alternatives for organizations that need Kubernetes without US jurisdiction over their cluster state.

What AWS EKS Stores and Processes

EKS is fundamentally a managed etcd and managed Kubernetes control plane service. Every object stored in your Kubernetes cluster — every resource created with kubectl apply — is persisted in AWS-managed etcd.

Kubernetes etcd as the primary data store. etcd is the distributed key-value store that is Kubernetes's source of truth. In EKS, etcd runs on AWS-managed infrastructure. Every Kubernetes resource type is stored there: Pods, Deployments, Services, Secrets, ConfigMaps, ServiceAccounts, RBAC Roles and RoleBindings, CustomResourceDefinitions, and every custom resource your operators create.
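On a cluster where you control etcd (EKS does not expose etcd directly), this is easy to see: every object lives under the /registry key prefix. A sketch, assuming typical kubeadm certificate paths:

```shell
# List etcd keys to see that all cluster state lives under /registry.
# Endpoint and certificate paths below are standard kubeadm defaults —
# adjust them for your own control plane layout.
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head -20
# Typical keys: /registry/pods/<ns>/<name>, /registry/secrets/<ns>/<name>,
# /registry/deployments/<ns>/<name>, /registry/configmaps/<ns>/<name>
```

On EKS, exactly this data exists — but on AWS-managed nodes you cannot log into.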

EKS API server as the control plane for all cluster operations. Every kubectl command — every kubectl get, kubectl apply, kubectl exec, kubectl logs — passes through the EKS API server. The API server authenticates the request (via AWS IAM Authenticator or OIDC), authorizes it against RBAC policies stored in etcd, processes it, and returns the result. The EKS API server endpoint is hosted on AWS infrastructure. For organizations with strict data residency requirements, the API server is the point where all cluster management operations pass through AWS-controlled compute.
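The IAM coupling is visible in the kubeconfig that `aws eks update-kubeconfig` generates: kubectl invokes an exec credential plugin that mints a short-lived STS-backed token on every authentication. Cluster name and region below are placeholders:

```shell
# The kubeconfig entry for an EKS cluster uses an exec credential plugin:
#   command: aws
#   args: ["eks", "get-token", "--cluster-name", "production-cluster"]
# Running it by hand shows the token kubectl presents to the API server:
aws eks get-token --cluster-name production-cluster --region eu-central-1
# Returns an ExecCredential JSON whose token starts with "k8s-aws-v1."
```

Every kubectl invocation therefore depends on AWS STS being reachable and willing to issue a token.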

AWS CloudTrail and control plane audit logging. CloudTrail records EKS management API calls (CreateCluster, UpdateClusterConfig, DescribeCluster) with the IAM principal, the source IP, the request parameters, and the timestamp. Kubernetes-level operations — kubectl get, kubectl apply, kubectl exec, kubectl logs — are captured by the API server audit log, which EKS ships to CloudWatch Logs when control plane logging is enabled. kubectl exec events are particularly sensitive: they record which user opened an interactive shell in which container, and when. For organizations where developers use kubectl exec for debugging production issues, these audit records tie named IAM identities to specific container access events — personal data (employee identity correlated with production access patterns) stored on AWS infrastructure.

EKS worker node metadata. EKS worker nodes — EC2 instances in managed node groups — generate instance metadata, CloudWatch metrics, and VPC flow logs. Worker nodes running application containers send container stdout/stderr to CloudWatch Logs via the AWS CloudWatch agent or Fluent Bit configured to ship to CloudWatch. The same container logging risk that applies to ECS applies to EKS: application logs containing personal data (user identifiers, request IDs, session tokens in error messages) flow to CloudWatch under AWS jurisdiction.

IRSA: IAM Roles for Service Accounts. IRSA is EKS's mechanism for granting AWS credentials to Kubernetes workloads. A ServiceAccount is annotated with an IAM role ARN. When a Pod using that ServiceAccount starts, the EKS Pod Identity Webhook mutates the Pod spec to inject a projected OIDC token. The AWS SDK in the container exchanges this token with AWS STS for temporary IAM credentials.

For workloads that call AWS services from Kubernetes — S3, RDS, DynamoDB, Secrets Manager — IRSA creates a dependency on AWS STS for every pod startup. This is a US-entity choke point in the credential lifecycle of your Kubernetes workloads.
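The coupling is a single annotation on the ServiceAccount; the role ARN below is a placeholder:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: your-app
  namespace: production
  annotations:
    # Placeholder role ARN — Pods using this ServiceAccount get
    # AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE injected by the
    # EKS Pod Identity Webhook, and the AWS SDK exchanges the token
    # with STS at startup.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/your-app-role
```

Removing IRSA during a migration means finding every ServiceAccount carrying this annotation and replacing the credential source.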

EKS Anywhere: on-premises Kubernetes with AWS dependencies. EKS Anywhere provisions Kubernetes clusters on your own infrastructure (bare metal, VMware vSphere, Nutanix, AWS Snow devices) using Amazon's EKS Distro (EKS-D). EKS Anywhere clusters connect to AWS for two optional-but-common integrations: curated add-on packages pulled from Amazon ECR, and cluster visibility in the AWS console via the EKS Connector.

Unlike ECS Anywhere (where the control plane is definitively AWS-hosted), EKS Anywhere can be deployed fully disconnected — the control plane runs on your hardware. However, the curated package dependency on ECR and the EKS Connector feature create optional-but-common AWS data flows. Organizations that deploy EKS Anywhere and enable EKS Connector are making their on-premises cluster state visible to AWS's console infrastructure.

EKS Fargate profiles. EKS Fargate runs Kubernetes Pods on AWS-managed serverless compute. Fargate eliminates worker node management but deepens the jurisdictional dependency: Pod compute runs on AWS-managed infrastructure, Pod networking uses AWS-managed ENIs in your VPC, and Pod logs flow to CloudWatch (Fargate does not support DaemonSets, so log shipping must use sidecar containers or Fargate's built-in Fluent Bit log router). For Fargate-only EKS clusters there are no EC2 instances to manage — but there is also no alternative to AWS-managed compute and AWS-native logging.
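Fargate's built-in log routing is configured through a Fluent Bit ConfigMap that EKS looks for in the aws-observability namespace; a minimal sketch (the log group name is a placeholder):

```yaml
# Fargate log router configuration: EKS reads a ConfigMap named
# aws-logging in the aws-observability namespace and applies it to
# every Fargate Pod. Destination here is CloudWatch — i.e. AWS-hosted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region eu-central-1
        log_group_name fargate-logs
        auto_create_group true
```

Note that the available OUTPUT plugins route to AWS destinations; there is no supported path that keeps Fargate logs entirely off AWS-managed logging infrastructure.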

Amazon EKS add-ons. EKS add-ons install and manage Kubernetes components — CoreDNS, kube-proxy, VPC CNI, EBS CSI driver, EFS CSI driver, AWS Load Balancer Controller — as cluster add-ons managed by EKS. Add-on version updates are pushed by AWS and applied by the EKS cluster management API. The components that manage your cluster networking (VPC CNI), storage (EBS CSI), and traffic routing (ALB Controller) are under EKS lifecycle management — version selection and update timing are gated by AWS's add-on release schedule.

GDPR Analysis of AWS EKS

Cluster state as comprehensive operational intelligence. The full content of EKS's managed etcd constitutes a detailed map of your application architecture. Pod specs reveal which container images you run and which versions. Deployment histories reveal your release cadence. ConfigMaps reveal your application's configuration model. RBAC resources reveal your authorization design. For organizations in regulated sectors, this operational intelligence stored under AWS's control — outside the organization's direct custody — represents a category of data that was not traditionally considered a privacy risk but which, under GDPR's broad definition of personal data, may contain data subject-correlated information (user-facing configuration, feature flag states tied to user cohorts, A/B test configurations).

Kubernetes Secrets encryption and KMS key custody. EKS's envelope encryption for etcd uses AWS KMS: etcd contents are encrypted with a data encryption key (DEK), and the DEK is itself encrypted with a KMS key (historically called a customer master key, CMK) stored and managed by AWS's KMS infrastructure — a US-entity service. For GDPR Art.32 compliance, this means the technical measure protecting Kubernetes Secrets (encryption at rest) relies on a key managed by a US entity under CLOUD Act jurisdiction. A compelled disclosure of that KMS key under US law would expose every Kubernetes Secret in the cluster. Organizations that assess CLOUD Act risk as material to their GDPR compliance need to consider whether key custody in AWS KMS meets their residual risk threshold.
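Whether a cluster uses KMS envelope encryption, and under which key, is visible from the EKS API; cluster name and region below are placeholders:

```shell
# Show the envelope-encryption configuration for an EKS cluster.
# A configured cluster returns the KMS key ARN under provider.keyArn;
# an empty result means Secrets rely on EKS's default at-rest encryption.
aws eks describe-cluster \
  --name production-cluster --region eu-central-1 \
  --query 'cluster.encryptionConfig'
```

The key ARN this returns is the custody point: it identifies a key that lives in AWS KMS, not in your own HSM or key store.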

kubectl exec and interactive container access. When engineers use kubectl exec to open shell sessions in production containers for debugging — a common operational practice — each session is captured by the API server audit log, which EKS ships to CloudWatch Logs when control plane audit logging is enabled. The log entry records the IAM-mapped user (tied to the engineer's AWS identity), the Pod name, the namespace, and the timestamp. This creates a detailed record of who accessed which containers and when — effectively a session log of production access by named individuals. That log is personal data (employee data) stored on AWS's logging infrastructure.
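If control plane audit logging is enabled, exec sessions can be pulled from the cluster's CloudWatch log group with a Logs Insights query. A sketch — the log group name is a placeholder, and `date -d` assumes GNU date:

```shell
# Query the EKS audit log for kubectl exec events over the last hour.
# The log group follows the pattern /aws/eks/<cluster-name>/cluster.
aws logs start-query \
  --log-group-name /aws/eks/production-cluster/cluster \
  --start-time "$(date -d '1 hour ago' +%s)" \
  --end-time "$(date +%s)" \
  --query-string 'fields @timestamp, user.username, objectRef.namespace, objectRef.name
    | filter @logStream like /kube-apiserver-audit/
    | filter requestURI like /exec/'
# Fetch results with: aws logs get-query-results --query-id <returned-id>
```

The fact that this query is even possible is the compliance point: a per-engineer access history exists, and it exists on AWS infrastructure.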

Admission webhooks and API server data flows. Many Kubernetes security tools (OPA/Gatekeeper, Kyverno, Falco, Vault Agent Injector) operate as admission webhooks — the Kubernetes API server calls out to the webhook on every relevant API request to validate or mutate the resource. In EKS, these webhooks run in your cluster (on your worker nodes), but the admission webhook calls originate from the EKS API server (AWS-managed) and pass the full resource being created or modified. For sensitive resources (Secrets being created, Pods with environment variables), the API server passes that content to the webhook endpoint in your VPC — creating a data flow from AWS-managed API server infrastructure to your cluster.
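The shape of that data flow is visible in any webhook registration: the API server is configured to POST the full AdmissionReview, including the resource body, to a service inside your cluster. A hypothetical registration covering Secrets and Pods (all names are placeholders):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy-webhook   # hypothetical
webhooks:
  - name: validate.policy.example.eu
    clientConfig:
      service:                   # the webhook endpoint runs in YOUR cluster,
        namespace: policy-system #  but the caller is the AWS-managed API server
        name: policy-webhook
        path: /validate
      caBundle: <base64-ca-cert>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["secrets", "pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
```

Because the `resources` list includes secrets, every Secret body transits the AWS-managed API server before it reaches your in-cluster validator.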

NIS2, DORA, and CRA Implications

NIS2 Art.21(2)(d): Supply chain security. NIS2 requires essential and important entities to implement security measures covering supply chain security, including the security of relationships between entities and their ICT service providers. EKS add-ons represent a supply chain dependency: AWS controls the version and release timing of CoreDNS, kube-proxy, VPC CNI, EBS CSI, and AWS Load Balancer Controller in managed add-on form. An EKS add-on version that introduces a vulnerability enters your cluster on AWS's update schedule. The supply chain risk model for EKS includes AWS as a privileged supplier with the ability to push changes into the cluster's core networking, storage, and routing components.

DORA Art.28 and Art.29: ICT concentration risk for Kubernetes. For financial entities, EKS creates a significant ICT concentration risk. The managed control plane means cluster availability depends on AWS's EKS control plane availability — not just worker node availability. EKS outages (including the 2021 us-east-1 IAM/STS outage that cascaded to EKS clusters) demonstrate that control plane availability is a dependency distinct from data plane availability. DORA Art.29 requires financial entities to assess concentration risk and ensure exit strategies exist. EKS exit strategy complexity is high: Kubernetes manifests are portable, but EKS-specific integrations (IRSA, AWS Load Balancer Controller, EBS CSI with EBS volumes, EKS add-on versions) create migration friction. A DORA-compliant exit strategy for EKS requires documented migration procedures to self-managed Kubernetes or an EU-native managed Kubernetes service.

CRA Art.13 and Art.15: Software updates and vulnerability management. The Cyber Resilience Act requires manufacturers of products with digital elements to implement vulnerability handling processes and provide security updates. For organizations that ship software as container images on EKS, the container supply chain includes the EKS platform itself. CRA auditors may examine whether the Kubernetes version in use is within the supported window, whether add-ons are at patched versions, and whether the organization can demonstrate timely response to Kubernetes CVEs. EKS's managed control plane provides automatic control plane patching — a CRA operational advantage — but the extended support model (EKS charges for extended support for Kubernetes versions beyond the standard support window) creates a financial incentive that may conflict with timely migration off deprecated versions.

EU-Native Managed Kubernetes Alternatives

Option 1: Hetzner Managed Kubernetes (Hetzner Cloud)

Hetzner Cloud's managed Kubernetes service (Hetzner Cloud Kubernetes / HCloud Managed Kubernetes) provides a managed control plane on Hetzner's EU-jurisdiction infrastructure. Hetzner Cloud GmbH is incorporated in Germany. The control plane — API server, etcd, controller manager — runs on Hetzner's German infrastructure under German and EU legal jurisdiction.

For teams migrating from EKS, the primary operational difference is the absence of AWS-native integrations. IRSA has no direct equivalent — cloud-agnostic secret management (HashiCorp Vault, or External Secrets Operator with a non-AWS backend) is the replacement pattern. EBS volumes become Hetzner Volumes via the Hetzner CSI driver, and the AWS Load Balancer Controller is replaced by Hetzner's cloud controller manager for load balancers.
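In practice the storage change is usually a one-line edit per PersistentVolumeClaim: swap the EBS storage class for the class installed by the Hetzner CSI driver. A sketch (names and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: your-app-data
  namespace: production
spec:
  accessModes: ["ReadWriteOnce"]
  # On EKS this would be an EBS CSI class such as gp3;
  # hcloud-volumes is the class installed by the Hetzner CSI driver.
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 50Gi
```

Data inside existing EBS volumes still has to be copied across (for example with a restore job), since the volumes themselves cannot move between providers.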

Jurisdictional status: German entity (Hetzner Cloud GmbH), German and EU-jurisdiction infrastructure. Zero US-entity involvement in cluster control plane.

Option 2: Scaleway Kapsule (Scaleway)

Scaleway Kapsule is Scaleway's managed Kubernetes service, operated by Scaleway SAS, a French company incorporated in Paris. The managed control plane runs on Scaleway's French data centers (Paris region, Amsterdam region available). Scaleway is a wholly-owned subsidiary of Iliad Group — a French telecommunications company — with no US corporate ownership.

Kapsule's EU-native ecosystem provides a more complete AWS-alternative stack than Hetzner's: compute, container registry, managed databases, object storage, and load balancing all under EU-entity control. For organizations migrating from EKS that also used RDS, ECR, and S3, Scaleway offers a fuller ecosystem replacement.

Jurisdictional status: French entity (Scaleway SAS), French data centers. Zero US-entity involvement.

Option 3: OVHcloud Managed Kubernetes (OVH)

OVH Managed Kubernetes Service (OVH MKS) provides managed Kubernetes on OVHcloud's European infrastructure. OVH SAS is incorporated in France (Roubaix). OVH operates its own data centers across France, Germany, Poland, and other EU countries. OVH's Kubernetes service provides CNCF-certified Kubernetes with a managed control plane.

OVH's bare metal option is particularly relevant for GDPR and DORA compliance: financial entities and healthcare organizations that need physical isolation can run Kubernetes worker nodes on OVH-owned dedicated hardware in EU data centers, with OVH's managed Kubernetes control plane on top, combining physical isolation with managed-control-plane convenience.

Jurisdictional status: French entity (OVH SAS), EU data centers on OVH-owned infrastructure. Zero US-entity involvement.

Option 4: Self-Hosted Kubernetes (k3s / k0s / kubeadm) on EU Infrastructure

For organizations that need maximum control — including control over the Kubernetes distribution, version, and patch timing — self-hosted Kubernetes on EU infrastructure eliminates all external dependency on a managed control plane.

k3s (Rancher Labs / SUSE) is a lightweight Kubernetes distribution designed for resource-constrained environments. A k3s cluster can run its control plane on a single node or in HA mode with an embedded or external database. k3s replaces etcd with SQLite (single-node) or an external database (PostgreSQL, MySQL, or etcd) — giving you the choice of database for cluster state storage. On Hetzner, a 3-node k3s HA cluster (3x CX22 nodes for control plane, worker nodes as needed) provides a complete self-managed Kubernetes cluster for under €30/month.
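The standard k3s HA bootstrap, per the k3s documentation, looks like this; IP addresses and the node token are placeholders:

```shell
# First control plane node: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Remaining control plane nodes join using the first node's token
# (found at /var/lib/rancher/k3s/server/node-token on the first node)
curl -sfL https://get.k3s.io | K3S_TOKEN=<node-token> sh -s - server \
  --server https://<first-node-ip>:6443

# Worker (agent) nodes join with:
curl -sfL https://get.k3s.io | K3S_URL=https://<first-node-ip>:6443 \
  K3S_TOKEN=<node-token> sh -s - agent
```

Three server nodes give an etcd quorum that tolerates one node failure; kubeconfig for the cluster lands at /etc/rancher/k3s/k3s.yaml on each server.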

k0s (Mirantis) is another lightweight Kubernetes distribution with embedded etcd, designed for easy self-hosting. k0s's single binary deployment model simplifies control plane management.

kubeadm is the upstream Kubernetes tool for provisioning standard Kubernetes clusters. A kubeadm-provisioned cluster uses standard etcd and Kubernetes components, giving the closest match to EKS's feature set at the cost of higher operational overhead.

For all self-hosted options, the operational pattern is the same: provision compute on an EU provider (or your own data center), bootstrap the control plane with the chosen distribution, and take ownership of the tasks EKS handled for you, including etcd backups, Kubernetes version upgrades, certificate rotation, and control plane monitoring.

Jurisdictional status: Entirely under your control on EU infrastructure of your choice.

Option 5: EU-Native PaaS (sota.io)

For development teams that adopted EKS primarily for convenience — not for Kubernetes's feature richness per se — EU-native PaaS provides container deployment without the Kubernetes control plane operational burden and without the AWS jurisdictional footprint.

sota.io is an EU-native PaaS built on European infrastructure. Container deployments on sota.io run on EU-jurisdiction hardware. Cluster state (equivalent to etcd), deployment metadata, and operational logs stay within EU infrastructure under EU-entity control. For GDPR Art.28 compliance, the processor is a European entity operating under GDPR directly — not an entity subject to US CLOUD Act jurisdiction.

For teams that use EKS primarily as a "run my containers reliably" platform — not for Kubernetes-specific features like CRDs, operators, or multi-tenant clusters — sota.io eliminates both the EKS operational overhead and the jurisdictional risk.

Jurisdictional status: EU entity, EU infrastructure, GDPR-direct processor relationship.

Comparison Table

| Dimension | AWS EKS | Hetzner K8s | Scaleway Kapsule | OVH MKS | Self-Hosted k3s | sota.io |
|---|---|---|---|---|---|---|
| Jurisdiction | US (CLOUD Act) | DE entity | FR entity | FR entity | Your EU infra | EU entity |
| etcd custody | AWS-managed | Hetzner-managed | Scaleway-managed | OVH-managed | Self-managed | Platform-managed |
| Kubernetes Secrets | KMS (US entity) | Managed | Managed | Managed | Self-managed | Platform-managed |
| Container registry | ECR (US entity) | External (Harbor) | Scaleway Registry | OVH Registry | Self-hosted | Platform-managed |
| Fargate equivalent | EKS Fargate | No | No | No | No | Yes |
| IRSA equivalent | Native | None (Vault/ESO) | None (Vault/ESO) | None (Vault/ESO) | None (Vault/ESO) | N/A |
| kubectl exec log | CloudWatch audit log (US) | Control plane log | Control plane log | Control plane log | Your audit log | N/A |
| Bare metal option | No | Dedicated (Robot) | No | Yes | Yes (any infra) | No |
| GPU support | Yes | Yes (CCX) | Yes | Yes | Yes | No |
| CLOUD Act exposure | Yes | No | No | No | No | No |
| Managed control plane | Yes | Yes | Yes | Yes | No | Yes |

Migration Strategy

Step 1: Audit EKS Cluster State and IRSA Dependencies

Begin by inventorying what is in your EKS cluster's etcd and what workloads use IRSA:

# Export all non-system Kubernetes resources (cluster state inventory)
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -I{} kubectl get {} --all-namespaces -o yaml \
  > cluster-state-export.yaml

# List all ServiceAccounts with IRSA annotations
kubectl get serviceaccounts --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.metadata.annotations.eks\.amazonaws\.com/role-arn}{"\n"}{end}' \
  | grep -v ": $"

# List EKS managed add-ons
aws eks list-addons --cluster-name your-cluster-name --region eu-central-1

# List Kubernetes Secrets (names only — do not export values)
kubectl get secrets --all-namespaces \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,TYPE:.type'

Categorize workloads by IRSA dependency depth: workloads that call many AWS services require more migration effort than workloads that only need AWS credentials for ECR pulls.

Step 2: Replace ECR with an EU-Native Container Registry

The first migration step is moving container images off ECR:

# Install Harbor on your target EU infrastructure using Helm
helm repo add harbor https://helm.goharbor.io
helm install harbor harbor/harbor \
  --namespace harbor \
  --create-namespace \
  --set expose.type=loadBalancer \
  --set externalURL=https://registry.yourdomain.eu \
  --set harborAdminPassword=your-secure-password \
  --set persistence.persistentVolumeClaim.registry.size=200Gi

# Migrate images from ECR to Harbor
# For each image:
docker pull 123456789.dkr.ecr.eu-central-1.amazonaws.com/your-image:tag
docker tag 123456789.dkr.ecr.eu-central-1.amazonaws.com/your-image:tag \
  registry.yourdomain.eu/your-image:tag
docker push registry.yourdomain.eu/your-image:tag

Update image references in your Kubernetes manifests and Helm charts to point to your EU registry.
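For bulk updates, a sed pass over the manifest tree handles the registry host rewrite; the k8s/ path and target registry below are assumptions to adjust:

```shell
# Rewrite any ECR image host to the new EU registry across manifest files.
# Assumes manifests live under k8s/ and the target registry is
# registry.yourdomain.eu — adjust both to your layout.
grep -rl 'dkr\.ecr\.' k8s/ | xargs sed -i \
  's|[0-9]*\.dkr\.ecr\.[a-z0-9-]*\.amazonaws\.com|registry.yourdomain.eu|g'
```

Run a `grep -r 'dkr.ecr' k8s/` afterwards to confirm no ECR references remain before deploying.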

Step 3: Replace IRSA with Vault Agent Injector or External Secrets Operator

For workloads that use IRSA to call AWS services (S3, DynamoDB, SQS), the migration strategy depends on whether the AWS service itself is being migrated:

If migrating off AWS services entirely (replacing S3 with Scaleway Object Storage, RDS with OVH Managed Databases), IRSA credentials are no longer needed after the data migration. Deploy External Secrets Operator with a non-AWS backend (Vault, 1Password, Doppler, or GitLab CI variables) to manage application secrets.

If retaining some AWS services during the migration, use the AWS SDK's default credential provider chain with access keys stored in Kubernetes Secrets (or Vault) rather than IRSA. Either way, application secrets move to a non-AWS secret store:

# External Secrets Operator pulling from Vault
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: your-app-secrets
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: your-app-secrets
  data:
  - secretKey: database-url
    remoteRef:
      key: secret/production/your-app
      property: database_url
  - secretKey: api-key
    remoteRef:
      key: secret/production/your-app
      property: api_key
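The manifest above references a vault-backend ClusterSecretStore; a minimal sketch of that store, assuming Vault KV v2 with Kubernetes auth (server URL, mount path, and role are placeholders):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: https://vault.yourdomain.eu   # placeholder Vault address
      path: secret                          # KV v2 mount path
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes             # Vault's Kubernetes auth mount
          role: external-secrets            # Vault role bound to ESO's ServiceAccount
```

With this in place, the credential chain for application secrets runs entirely through your Vault instance — no STS, no KMS.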

Step 4: Replace CloudWatch Logs with Fluent Bit → Loki

Deploy the kube-prometheus-stack and Grafana Loki on your EU-infrastructure cluster:

# Install kube-prometheus-stack (Prometheus + Grafana + Alertmanager)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install kube-prometheus-stack \
  prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --set grafana.adminPassword=your-secure-password

# Install Loki (log aggregation)
helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack \
  --namespace monitoring \
  --set promtail.enabled=true \
  --set grafana.enabled=false  # Use existing Grafana

Promtail (or Fluent Bit as an alternative) runs as a DaemonSet and ships all container logs to Loki. No logs leave your cluster.

Step 5: Provision Target Cluster and Test Workloads

For Hetzner managed Kubernetes, provisioning with the hcloud CLI:

# Create a managed Kubernetes cluster with the hcloud CLI
# (subcommand names are illustrative — check `hcloud --help` and the
# Hetzner documentation for the exact syntax in your CLI version)
hcloud context create your-project
hcloud kubernetes-cluster create \
  --name production-cluster \
  --location nbg1 \
  --kubernetes-version 1.30 \
  --node-pool name=workers,server-type=cx32,min-nodes=2,max-nodes=10

# Get kubeconfig
hcloud kubernetes-cluster get-kubeconfig production-cluster > ~/.kube/config-hcloud
export KUBECONFIG=~/.kube/config-hcloud

# Verify cluster
kubectl get nodes
kubectl get pods --all-namespaces

Migrate workloads progressively: start with stateless services (API servers, background workers), then stateful workloads (with persistent volume migration), and finally platform components (ingress, cert-manager, monitoring).

What This Means for Your GDPR Article 30 Record

Before (EKS with CloudWatch Logs and AWS KMS):

- Processor: Amazon Web Services (US entity), via EKS, CloudWatch, and KMS
- Data categories: full cluster state in AWS-managed etcd, container and audit logs in CloudWatch, Kubernetes Secrets encrypted under an AWS KMS key
- Transfer mechanism: SCCs plus supplementary measures, with residual CLOUD Act disclosure risk documented

After (Hetzner/Scaleway/OVH Managed Kubernetes + Loki + Vault):

- Processor: EU entity (Hetzner Cloud GmbH, Scaleway SAS, or OVH SAS), subject to GDPR directly
- Data categories: cluster state in EU-managed etcd, container logs in self-hosted Loki, secrets in self-hosted Vault
- Transfer mechanism: none required — no third-country transfer for cluster state, logs, or secrets

The most significant Art.30 change from EKS migration is the elimination of the KMS key custody risk for Kubernetes Secrets. On self-hosted or EU-managed Kubernetes, Secrets are encrypted at rest using keys under EU-entity control (or fully self-managed on self-hosted clusters). The CLOUD Act risk category for cluster state is eliminated.
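On self-hosted clusters, that self-managed encryption is a kube-apiserver EncryptionConfiguration passed via the --encryption-provider-config flag; a minimal sketch, with an AES-CBC key you generate and custody yourself:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Secrets are encrypted with a key you generate and store —
      # no external KMS is involved. Placeholder: base64 of 32 random bytes,
      # e.g. from `head -c 32 /dev/urandom | base64`.
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      # identity allows reading Secrets written before encryption was enabled
      - identity: {}
```

Key rotation and the physical custody of this file are then your responsibility — the trade-off for removing the external key custodian.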

Conclusion

AWS EKS's GDPR risk is concentrated in three layers: the managed etcd that stores your complete cluster state on AWS-controlled infrastructure, the AWS KMS-based encryption whose key custody is subject to US law, and the CloudWatch integration that captures container logs and cluster audit events on US-entity infrastructure.

EU-native managed Kubernetes — Hetzner Managed Kubernetes, Scaleway Kapsule, OVH Managed Kubernetes — provides the operational convenience of a managed control plane with the jurisdictional profile of a European cloud provider. Self-hosted k3s or k0s on EU infrastructure provides maximum control for organizations with the operational capacity to manage Kubernetes themselves.

The AWS container platform picture is now complete: images stored in AWS ECR, orchestrated by EKS or ECS, accessing credentials from AWS Secrets Manager via IRSA, writing logs to AWS CloudWatch, secured by AWS IAM roles. Migrating EKS to an EU-native managed Kubernetes service addresses the orchestration control plane — but the full CLOUD Act risk elimination requires migrating the supporting AWS services as well.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.