2026-05-05 · 11 min read
AWS Elastic File System solves a genuine problem: applications that need a shared, scalable file system without managing NFS server infrastructure. Multiple EC2 instances, Lambda functions, or ECS containers can mount the same EFS file system simultaneously. It grows and shrinks automatically. It handles replication and availability. For teams running stateful workloads that require shared access to the same files, EFS removes significant operational burden.

It also inherits the same structural GDPR problem as every other AWS managed storage service: the encryption keys live in AWS Key Management Service, operated by a US corporation subject to the CLOUD Act.

This guide covers the four GDPR exposure points specific to EFS, why the shared-access model creates erasure complications that block storage does not, and which EU-native alternatives provide genuinely equivalent capability without the legal risk.

## What AWS EFS Actually Does

EFS implements the NFSv4 protocol on a fully managed, elastic backend. You create a file system, attach mount targets to your VPC subnets, and mount it on any compute resource that has network access to those targets. The file system grows and shrinks automatically as you add and remove files — you pay for what you use rather than provisioning a fixed size.

EFS offers two main storage classes: Standard (high throughput, low latency, multi-AZ durable) and Infrequent Access. Lifecycle management policies automatically move files not accessed within a configurable window into the Infrequent Access class, which costs roughly 90 percent less per GB; Intelligent-Tiering extends this by moving files back to Standard when they are accessed again.

EFS also supports replication to a second AWS region for disaster recovery. Replication is one-way and asynchronous, maintaining a replica file system in a target region that can be promoted to primary if the source region becomes unavailable.

Each of these features — lifecycle tiering, cross-region replication, shared concurrent access, and managed encryption — creates a distinct GDPR consideration.

## The Four GDPR Exposure Points

### 1. Art.28 — The CLOUD Act Applies to AWS as an Organization, Not to Regions

The foundational issue is the same for EFS as for every other AWS service: AWS is a US-incorporated company subject to the CLOUD Act (18 U.S.C. § 2713). This provision compels US cloud providers to produce data stored anywhere on their infrastructure in response to a valid government order, regardless of which physical data center the data sits in. Your EFS file system may be mounted in eu-central-1. Your data may never have crossed a national border in the ordinary sense. None of that changes the legal authority a CLOUD Act order confers over Amazon as a corporate entity.

Your Art.28 Data Processing Agreement with AWS is necessary. It defines responsibilities, establishes security obligations, and provides the contractual basis for the processor relationship. It does not protect you from lawfully compelled government disclosure. No DPA can do that — a DPA addresses voluntary data sharing, not legal compulsion.

European developers who select AWS EFS because their data stays in an EU region are solving a different problem from the one GDPR actually poses. The risk is not the physical location of bits. It is legal jurisdiction over the entity controlling those bits.

### 2. Art.32 — Encryption With Keys Under US Jurisdiction

EFS encrypts data at rest and in transit. At-rest encryption uses AES-256 with keys managed by AWS KMS; in-transit encryption uses TLS. At-rest encryption is enabled by default for new file systems created through the console, while in-transit TLS is requested at mount time and can be enforced through a file system policy. The encryption is real. The problem is not the algorithm — it is the key hierarchy.

EFS uses an AWS-managed key by default. You can instead configure a Customer Managed Key (CMK) — a key you create in KMS, to which you control access via key policy. Using a CMK is strongly recommended for GDPR compliance because it gives you the ability to delete the key and render encrypted data permanently inaccessible, which supports your Art.17 erasure obligations.
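What this looks like in practice is a single API call. The sketch below assumes boto3 and a pre-existing CMK; the key alias, creation token, and tag values are illustrative placeholders, not fixed names.

```python
# Minimal sketch: create an EFS file system encrypted with a customer
# managed KMS key (CMK). Alias, token, and tag values are placeholders.
import boto3

efs = boto3.client("efs", region_name="eu-central-1")

fs = efs.create_file_system(
    CreationToken="user-uploads-fs",   # idempotency token (placeholder)
    Encrypted=True,                    # AES-256 encryption at rest
    KmsKeyId="alias/efs-cmk-eu",       # your CMK instead of the AWS-managed key
    PerformanceMode="generalPurpose",
    ThroughputMode="elastic",
    Tags=[{"Key": "data-classification", "Value": "personal-data"}],
)
print("created", fs["FileSystemId"], "- encrypted:", fs["Encrypted"])
```

Scheduling the CMK for deletion later (KMS enforces a 7-to-30-day waiting period) renders everything on the file system permanently unreadable, which is the crypto-erasure capability described above.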
But here is what CMKs do not change: while the EFS file system is operational, AWS KMS must have access to the key material to perform the envelope encryption that protects your data. The data encryption keys that protect individual files are wrapped by your CMK. When a compute resource reads a file, AWS KMS unwraps the relevant data key, the file is decrypted, and the data is returned — transparently, with no action from your application.

That KMS operation is what a CLOUD Act order targets. The order compels Amazon — not an individual server — to use its KMS infrastructure to decrypt and produce data. Your CMK key policy controls which AWS services and IAM principals can use the key under normal operations. It does not control what AWS can be legally compelled to do under a valid government order.

Encryption via AWS KMS satisfies Art.32's requirement for technical measures. It does not eliminate CLOUD Act risk.

### 3. Art.17 — Erasure Is Structurally More Complex for Shared File Systems

EBS erasure is conceptually straightforward: delete the volume and all snapshots, delete the CMK, and the data is gone. The linear relationship between volume, snapshot, and key makes auditable erasure achievable. EFS erasure is more complex because the file system is designed for shared concurrent access by multiple applications, services, and compute instances. Several factors complicate complete erasure:

**Access Points:** EFS Access Points provide application-specific entry points into a shared file system, each with a defined root directory and POSIX identity. Multiple applications can share one EFS file system using separate Access Points. When a user requests erasure of personal data under Art.17, you must identify all files containing that user's data across all Access Points on the file system — a non-trivial discovery problem for applications that were not designed with per-user data mapping. (Enumerating the entry points themselves is the tractable half; see the sketch after this list.)

**Lifecycle Tiering:** Files moved to the Infrequent Access storage class are still encrypted with the same KMS key and remain on the file system. Lifecycle transitions happen automatically based on access recency, so a file containing personal data that has not been accessed in 90 days may have been silently tiered. Erasure therefore requires scanning both the Standard and IA tiers.

**Replication:** If EFS replication is enabled, every file is copied to a second region. When you delete a file from the primary file system, the deletion is replicated — but during the replication lag window, the replica still contains the file. Replication also moves personal data to a second AWS region; in most disaster recovery architectures this is another EU region, which does not itself create a GDPR Chapter V transfer issue, but it must be documented in your ROPA under Art.30 as a second processing location.
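A hedged sketch of that enumeration step, again using boto3 (the file system ID is a placeholder): list every Access Point and its root directory so the erasure audit starts from a complete map of entry points.

```python
# Sketch: enumerate all Access Points on an EFS file system as the first
# step of an Art. 17 erasure audit. The file system ID is a placeholder.
import boto3

efs = boto3.client("efs", region_name="eu-central-1")

access_points, token = [], None
while True:  # follow pagination in case there are many Access Points
    kwargs = {"FileSystemId": "fs-0123456789abcdef0"}
    if token:
        kwargs["NextToken"] = token
    resp = efs.describe_access_points(**kwargs)
    access_points.extend(resp["AccessPoints"])
    token = resp.get("NextToken")
    if not token:
        break

for ap in access_points:
    root = ap.get("RootDirectory", {}).get("Path", "/")
    print(ap["AccessPointId"], "-> root directory:", root)
```

Each root directory then needs an application-specific content scan for the data subject's records; that second half is exactly the discovery problem described above.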
### 4. Art.5(1)(e) — Storage Limitation and the Elastic Accumulation Problem

Art.5(1)(e) GDPR requires personal data to be kept in a form that permits identification for no longer than is necessary for the purposes for which it is processed. EFS is specifically designed to accumulate data elastically — you pay for what you store, the file system expands automatically, and there is no forced cleanup mechanism. Application logs, upload staging directories, exported reports, and temporary processing files all tend to accumulate on shared file systems. Each file containing personal data that has outlived its processing purpose is a storage limitation violation.

Automated lifecycle tiering moves old files to cheaper storage, which extends their retention rather than ending it. Compliance requires an active data retention policy enforced at the file level, not just at the file system level. EFS does not provide this — you must build it into your application or operational tooling.
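A minimal sketch of such file-level enforcement, assuming the file system is mounted at /mnt/efs/exports and a single flat retention window applies (both assumptions are illustrative; real policies are usually per-directory or per-data-category):

```python
# Sketch: delete files older than a retention window on a mounted EFS path.
# Mount point and retention period are placeholders; run it as a scheduled
# job (cron, systemd timer, or a Kubernetes CronJob).
import os
import time

MOUNT_POINT = "/mnt/efs/exports"    # placeholder mount path
RETENTION_SECONDS = 90 * 24 * 3600  # placeholder: 90-day retention

cutoff = time.time() - RETENTION_SECONDS
for dirpath, _dirnames, filenames in os.walk(MOUNT_POINT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                print("deleted", path)
        except FileNotFoundError:
            pass  # another writer removed it first; expected on shared storage
```

Because tiering is invisible at the NFS layer, a sweep like this deletes files in both the Standard and Infrequent Access classes without any extra handling.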
## EFS Replication as a GDPR Chapter V Consideration

EFS replication to another EU region does not create a GDPR Chapter V international transfer problem — the data stays in the EEA. But replication to any AWS region outside the EEA (for example, us-east-1 or ap-southeast-1 as a disaster recovery target) is a restricted transfer under Art.44-49 GDPR. The EU Standard Contractual Clauses in your AWS DPA cover this transfer, but CLOUD Act jurisdiction applies to the replica as much as to the primary. If your disaster recovery architecture requires an EFS replica outside the EEA, you have created a transfer that is lawful under SCCs yet simultaneously subject to CLOUD Act compelled disclosure.

## EU-Native Alternatives to AWS EFS

The alternatives fall into two categories: managed NFS services from EU-native providers, and self-hosted distributed file systems on EU infrastructure.

**Hetzner Storage Box:** Hetzner provides networked storage accessible via SMB, FTP, SFTP, and rsync — not NFS in the traditional sense, but suitable for many workloads that use EFS as a staging or backup target. Hetzner is a German company based in Bavaria with no US parent, so the CLOUD Act does not apply. Pricing is substantially cheaper than EFS for equivalent capacity.

**Hetzner Volume with NFS Server:** For workloads that require NFS specifically, a common EU-native pattern is to deploy a dedicated NFS server VM on Hetzner with attached Hetzner Volumes, then share the file system over NFS to your application nodes. This means managing one additional VM, but it removes the managed-service control plane entirely. Encryption at rest uses your own keys with LUKS or similar.

**CephFS on EU Infrastructure:** Ceph provides a distributed file system with POSIX semantics suitable for high-throughput workloads. Running CephFS on a cluster of Hetzner dedicated servers gives you genuine data sovereignty: your keys, your hardware, no third-party control plane with a CLOUD Act obligation. Rook-Ceph provides a Kubernetes-native operator for managing Ceph clusters within your own infrastructure.

**Longhorn for Kubernetes:** If your EFS workload primarily provides persistent volumes for Kubernetes pods, Longhorn is distributed block storage that runs entirely within your cluster. Longhorn volumes can be shared across pods using ReadWriteMany access modes, implemented via an internal NFS share. It runs on Hetzner, Scaleway, OVHcloud, or any EU-based Kubernetes cluster.

**Scaleway Block Storage with ReadWriteMany:** Scaleway's Kubernetes-native block storage supports ReadWriteMany when combined with an NFS provisioner. Scaleway is a French company with no US parent.

**OVHcloud Shared File System:** OVHcloud provides managed NFS storage as part of its Private Cloud and bare-metal offerings. OVHcloud is a French corporation with operations entirely in the EU.

**GlusterFS:** An alternative to CephFS for distributed file storage, GlusterFS is simpler to operate for smaller deployments and can be self-hosted on any EU infrastructure.

## Migration Considerations

Moving from EFS to an EU-native alternative involves three technical steps that are straightforward, and one operational step that requires planning.

**Data migration:** rsync from EFS to the target storage, with a final cutover sync during a maintenance window (see the sketch after this list). EFS supports AWS DataSync for managed migrations, but running rsync directly against a target on EU infrastructure keeps the migration entirely within your control.

**Mount point changes:** Applications that use EFS via NFSv4 mount points need configuration updates to point at the new NFS target. For most workloads this is a configuration change, not a code change. Kubernetes persistent volume claims using the EFS CSI driver will need new storage class definitions.

**Access Point replacement:** If you use EFS Access Points for multi-tenant isolation within a shared file system, you will need to replicate that isolation model in the target system. CephFS subvolumes and GlusterFS volumes-within-volumes provide equivalent capability.

**Encryption key handover:** Decide whether to move the data as plaintext during the transfer (acceptable if the migration happens within your controlled network) or to encrypt it in the target system before the source is decommissioned. Either approach requires documenting the handover in your ROPA.
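As a rough sketch of the data-migration step above, assuming the source EFS is mounted at /mnt/efs and the target NFS server is reachable over SSH (host, paths, and flag selection are all illustrative):

```python
# Sketch: drive the bulk copy and the final cutover sync with rsync.
# Source path and target host are placeholders; rsync must exist on both ends.
import subprocess

SRC = "/mnt/efs/"                             # mounted EFS (trailing slash matters)
DST = "migrate@nfs1.example.eu:/srv/shared/"  # placeholder EU target

def sync(extra_args=()):
    subprocess.run(
        [
            "rsync",
            "-aHAX",          # archive mode, hard links, ACLs, extended attributes
            "--numeric-ids",  # preserve raw UID/GID values across systems
            "--delete",       # make the target mirror the source exactly
            *extra_args,
            SRC,
            DST,
        ],
        check=True,
    )

sync()                 # bulk copy while the application is still live
# ...stop writers during the maintenance window, then run the cutover pass:
sync(["--checksum"])   # re-verify every file by content before switching over
```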
## The sota.io Approach

sota.io is an EU-native managed platform that provides persistent storage for deployed applications without US-parent jurisdiction. Applications deployed on sota.io can mount persistent volumes that never touch AWS infrastructure. Encryption keys are managed within the EU. There is no CLOUD Act obligation. For teams that want the operational simplicity of a managed platform without running their own Ceph or NFS infrastructure, sota.io provides the equivalent of EFS's zero-ops model under European legal jurisdiction.

## Practical Migration Priority

Not every EFS workload has the same GDPR risk profile. Prioritize migration based on data sensitivity:

**Migrate first:** File systems that store personal data directly — user uploads, exported reports, application data with PII. These create the highest Art.32 and Art.17 exposure.

**Migrate second:** File systems used for application logs that may contain IP addresses, session identifiers, or user-generated content. Log data is frequently underestimated as personal data under GDPR's broad definition.

**Migrate last:** File systems used purely for infrastructure artifacts — container layer caches, build outputs, static assets without personal data. These still carry the structural CLOUD Act risk but have lower immediate GDPR impact.

## Summary

AWS EFS provides genuinely useful shared file system infrastructure. It also creates four specific GDPR exposure points: the CLOUD Act applies to AWS as a US-incorporated entity regardless of region; KMS encryption does not eliminate CLOUD Act risk; the shared-access model complicates Art.17 erasure audits; and elastic accumulation creates persistent Art.5(1)(e) violations for teams without active file-level retention enforcement. EU-native alternatives exist across the spectrum from fully managed (Hetzner Storage Box, OVHcloud NFS) to self-hosted distributed (CephFS, GlusterFS, Longhorn). The right choice depends on your throughput requirements, your team's operational capacity, and whether you need POSIX semantics or can adapt to object storage patterns. The common thread is that EU-native alternatives eliminate CLOUD Act structural risk entirely, rather than managing it through contract language that cannot override legal compulsion.
