2026-05-02 · 15 min read · sota.io team
# AWS Aurora EU Alternative 2026: PITR Erasure Conflicts, Global Database CLOUD Act Risk, and EU-Sovereign Postgres

AWS Aurora is the most widely deployed managed relational database on AWS — a MySQL- and PostgreSQL-compatible engine that promises up to five times the throughput of standard MySQL at a fraction of the operational overhead. For many AWS-native teams, Aurora is simply the default choice: it handles replication, failover, patching, and backups automatically.

That automation is precisely the problem for GDPR compliance. Every Aurora feature that reduces operational burden — point-in-time recovery, automated snapshots, Global Database cross-region replication, Backtrack change history, Aurora Serverless auto-scaling — creates a parallel data residency and retention obligation that your application layer cannot see, cannot modify, and in some cases cannot delete. And because Aurora is owned and operated by Amazon, a US company subject to the CLOUD Act, the entire cluster is accessible to US law enforcement through compelled disclosure — regardless of which AWS region you deploy in.

This article analyses six GDPR failure vectors in AWS Aurora and maps the best EU-sovereign alternatives for 2026.

---

## What AWS Aurora Does (and Why GDPR Compliance Is Non-Trivial)

Aurora separates compute from storage. Your database instances connect to a distributed storage layer spread across three Availability Zones within a region. This architecture enables sub-second failover, six-way replication of all writes, and continuous backup to S3 — all without any action on your part.

The continuous backup to S3 is the first compliance complication. Aurora maintains a continuous stream of database changes (the redo log) in S3 for the duration of your PITR window — between 1 and 35 days. Every INSERT, UPDATE, and DELETE your application performs is captured in this log.
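The PITR window is simply the cluster's backup retention period, and you can inspect it alongside the earliest restorable point with the AWS CLI — a sketch, where `my-aurora-cluster` is a placeholder identifier:

```shell
# Show the PITR window (backup retention, in days) and the earliest
# point in time the cluster can currently be restored to.
# "my-aurora-cluster" is a placeholder identifier.
aws rds describe-db-clusters \
  --db-cluster-identifier my-aurora-cluster \
  --query 'DBClusters[0].{RetentionDays:BackupRetentionPeriod,EarliestRestore:EarliestRestorableTime}'
```

Everything between `EarliestRestore` and now is recoverable history — including rows your application has since deleted.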
When a user exercises their Article 17 right to erasure, your application deletes the row from the live Aurora cluster. But the redo log retains a record of every version that row ever had, for the full PITR window. The data is not gone. It is in S3, outside your application's reach, managed by AWS.

---

## GDPR Failure Vector 1: Article 17 Right to Erasure vs. Point-in-Time Recovery

**Article 17(1) GDPR** requires personal data to be erased "without undue delay" when the data subject withdraws consent, the data is no longer necessary for the purpose it was collected, or the data subject objects and no overriding legitimate interest exists.

Aurora's PITR system is architecturally incompatible with this requirement. When you delete a row — a user account, a health record, a financial transaction — the deletion is recorded as a new event in the redo log. The previous versions of that row remain in the redo log for the full retention window. An operator with Aurora snapshot access can restore the database to any point before the deletion and retrieve the "erased" data in full.

**The enforcement consequence:** GDPR does not distinguish between live storage and backup storage. Personal data is personal data, regardless of whether it is in your active database or in a recovery mechanism. If an Aurora snapshot or PITR restore point contains a user's data that should have been erased, the retention of that recovery point is a violation of Article 17.

The only compliant approach within Aurora is to **shorten your PITR window to 1 day** — effectively eliminating meaningful disaster recovery — or to implement a separate erasure-tracking system that redacts data from restored databases before any human or system accesses them. Neither option is satisfactory.

---

## GDPR Failure Vector 2: Automated Snapshots and Storage Limitation

Aurora takes automated snapshots at configurable intervals. The default retention is 1 day; the maximum is 35 days.
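Retention is a single cluster-level setting. For a team taking the minimal-retention route described under Vector 1 — trading recovery depth for faster expiry of deleted data — the change is one CLI call (cluster identifier is a placeholder):

```shell
# Shrink the PITR/automated-backup window to the 1-day minimum.
# "my-aurora-cluster" is a placeholder identifier.
aws rds modify-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --backup-retention-period 1 \
  --apply-immediately
```

Note that this only shortens the window going forward; already-captured recovery points age out on their own schedule, and manual snapshots are unaffected.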
Unlike application-level backups you control, Aurora snapshots are taken at the cluster level. A snapshot is a point-in-time copy of the entire Aurora cluster — every table, every row, every personal data field.

**Article 5(1)(e)** requires storage limitation: personal data should be kept "no longer than is necessary for the purposes for which the personal data are processed." If your application-layer retention policy deletes user data after 180 days of inactivity, but your Aurora snapshot retention is 35 days, every deletion creates a 35-day gap where deleted data persists in snapshots. Multiply this across your user base and the snapshots become a systematic retention violation at scale.

**Manual snapshots make this worse.** Aurora allows manual snapshots with no automatic expiry. Snapshots created for a migration, a debugging session, or a load test will persist indefinitely unless explicitly deleted. These manual snapshots capture the full personal data footprint of your Aurora cluster at a point in time. Without active governance, they accumulate silently.

---

## GDPR Failure Vector 3: Backtrack — A Change History Aurora Keeps for You

Aurora MySQL offers **Backtrack** — the ability to rewind an Aurora cluster to a previous point in time *without restoring from a snapshot*. Backtrack works by retaining the change record log for a configurable window (up to 72 hours).

Backtrack is not backup. It is a live change history stored in the Aurora storage layer. Your application has no visibility into it and no API to inspect or delete individual rows from it.

**The GDPR implication is identical to PITR but more immediate:** a user's data deleted from the live cluster at 14:00 is recoverable via Backtrack until the 72-hour window expires — whether at 14:01, 24 hours later, or 71 hours later. During this entire window, the "deleted" data exists in a form that a database administrator can retrieve without any audit log entry in your application.
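The Backtrack window is a per-cluster setting (in seconds) that can be checked and reduced to zero from the CLI — a sketch, with a placeholder cluster identifier:

```shell
# Check the current Backtrack window in seconds (0 means disabled).
# "my-aurora-cluster" is a placeholder identifier.
aws rds describe-db-clusters \
  --db-cluster-identifier my-aurora-cluster \
  --query 'DBClusters[0].BacktrackWindow'

# Set the window to 0 to switch Backtrack off for the cluster.
aws rds modify-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --backtrack-window 0 \
  --apply-immediately
```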
For Article 17 compliance, any organisation using Aurora MySQL Backtrack must either disable it entirely or document in their Article 30 records that PITR/Backtrack windows constitute a justified retention period under a specific legal basis — a position that is difficult to defend for standard application data.

---

## GDPR Failure Vector 4: Aurora Global Database and Cross-Jurisdiction Replication

Aurora Global Database enables near-real-time replication to secondary clusters in up to five additional AWS regions, with typical replication lag under one second. Teams use it for disaster recovery, low-latency reads across geographies, and planned region failover.

The GDPR consequence is immediate: if your primary cluster is in eu-west-1 and your secondary is in us-east-1, every write your application makes is replicated to the United States within one second. This is a transfer of personal data to a third country under **Article 44 GDPR**.

**The Schrems II problem:** Transfers to the US require an appropriate transfer mechanism under Article 46 — typically Standard Contractual Clauses (SCCs). But the ECJ's Schrems II ruling (C-311/18) held that SCCs are insufficient where the receiving party is subject to US surveillance law that overrides contractual protections. AWS is subject to the CLOUD Act. An Aurora Global Database secondary in us-east-1 is covered by CLOUD Act compelled disclosure orders.

**The practical compliance path** — stopping Global Database replication to US regions and limiting it to EU regions only — negates most of the disaster recovery and latency benefits that justify Global Database's cost premium over standard Aurora Multi-AZ.

---

## GDPR Failure Vector 5: CLOUD Act and Aurora as a Whole

Even without Aurora Global Database, every Aurora cluster in every AWS EU region is operated by Amazon Web Services, Inc., a US company.
US authorities can issue a **CLOUD Act warrant** compelling Amazon to disclose data from any AWS service — including Aurora clusters in Frankfurt, Ireland, Stockholm, or Milan — without notifying the affected EU company or providing an opportunity to contest the order in EU courts.

**Article 48 GDPR** is explicit: judgments or decisions of third-country authorities (including US courts issuing CLOUD Act warrants) do not have direct legal effect in the EU and cannot compel data transfers without going through mutual legal assistance treaty procedures. In practice, CLOUD Act warrants bypass these procedures.

The result is a structural conflict: your Aurora cluster in eu-central-1 complies with GDPR data residency requirements by keeping data in Germany, while simultaneously being subject to extraterritorial US jurisdiction that can access that data without your knowledge or consent.

For any organisation processing data subject to GDPR — particularly health data under Article 9, financial data, or data belonging to EU public sector entities — this structural CLOUD Act exposure is not a theoretical risk. It is an inherent characteristic of any AWS-operated service.

---

## GDPR Failure Vector 6: Aurora Serverless v2 and Compute Placement Uncertainty

Aurora Serverless v2 automatically scales database capacity from 0.5 to 128 ACUs (Aurora Capacity Units) in response to load. Unlike provisioned Aurora, where you know exactly which EC2 instances run your database, Serverless v2 abstracts the compute layer entirely.

AWS guarantees data residency at the region level — your Aurora data stays in your selected region. But the underlying compute instances handling query execution scale dynamically across the region's AZ infrastructure. For organisations required to demonstrate data processing locality at the AZ or hardware level (common in financial services and healthcare under sector-specific regulations), Serverless v2's opacity creates documentation challenges.
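Capacity is the main knob Serverless v2 exposes: you configure an ACU range, and AWS decides where and how the compute behind it runs. A sketch of that configuration, with a placeholder cluster identifier:

```shell
# Configure the Serverless v2 ACU range; placement of the underlying
# compute within the region is managed by AWS, not by you.
# "my-aurora-cluster" is a placeholder identifier.
aws rds modify-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16
```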
**The Article 32 implication:** Appropriate technical and organisational measures must be implemented to ensure security of processing. Demonstrating that those measures are effective requires knowing where processing occurs. Aurora Serverless v2's dynamic scaling makes this audit trail incomplete.

---

## EU-Sovereign Aurora Alternatives for 2026

### Neon — Serverless Postgres with EU Regions

Neon is a serverless PostgreSQL service that separates storage and compute, similar to Aurora's architecture. It offers branches (copy-on-write database forks for dev/staging), point-in-time restore, and autoscaling. Neon is incorporated and operated in the EU with deployments in Frankfurt and Amsterdam. Unlike Aurora, Neon is not subject to CLOUD Act compelled disclosure through a US parent company.

**EU alternative fit:** High for teams that value Aurora Serverless-style auto-scaling and want serverless Postgres with EU data sovereignty.

### CloudNativePG — Kubernetes-Native PostgreSQL Operator

CloudNativePG is a CNCF-sandbox operator that runs production-grade PostgreSQL clusters on any Kubernetes infrastructure. You control the complete stack: compute, storage, network, backup. Deployed on EU-sovereign infrastructure (Hetzner, OVHcloud, IONOS, Scaleway), there is no US parent company in the data path. It supports streaming replication, logical replication, automated failover, point-in-time recovery via WAL archiving, and connection pooling via PgBouncer.

**EU alternative fit:** Highest for teams with Kubernetes infrastructure and operational capacity for PostgreSQL management.

### CrunchyData Postgres (Crunchy Bridge)

Crunchy Bridge is a managed PostgreSQL service from Crunchy Data, built on the open-source Patroni HA framework. It supports EU regions and offers direct, dedicated PostgreSQL without the Aurora compatibility layer.
Crunchy Data is US-incorporated, so US customers using US infrastructure do not benefit from CLOUD Act avoidance — but EU organisations deploying in EU regions via Crunchy Bridge have more contractual clarity than with AWS.

**EU alternative fit:** Good for teams wanting managed Postgres without full self-hosting.

### Aiven for PostgreSQL — EU-Resident Managed Postgres

Aiven is a Finnish managed open-source database company that offers PostgreSQL, MySQL, Redis, Kafka, and ClickHouse on infrastructure across AWS, GCP, Azure, and EU-only cloud providers. Aiven is EU-incorporated (Helsinki) and specifically designed for EU data residency requirements. Its PostgreSQL service includes automated backups, high availability, read replicas, and connection pooling.

**EU alternative fit:** Strong for teams wanting a familiar managed Postgres SLA without self-hosting, with EU legal jurisdiction.

### Patroni + HAProxy on EU Infrastructure

Patroni is the open-source HA solution for PostgreSQL used by Crunchy Data, Zalando, and hundreds of organisations running production databases. Deployed on EU-sovereign infrastructure with a custom backup solution (pgBackRest, Barman) and a load balancer (HAProxy, PgBouncer), Patroni delivers Aurora-equivalent HA, failover, and replication — with complete control over retention policies, erasure, and audit.

**EU alternative fit:** Maximum sovereignty for organisations with the operational maturity to manage PostgreSQL infrastructure directly.

### sota.io — Deploy Your EU Database Without AWS Lock-In

[sota.io](https://sota.io) is a European PaaS that deploys containerised workloads — including PostgreSQL via Docker or Kubernetes — exclusively on EU infrastructure without AWS, Google, or Microsoft in the supply chain. You bring your CloudNativePG, Patroni, or standard Postgres container; sota.io handles deployment, scaling, and EU-sovereign hosting.
Unlike Aurora, there is no US parent company with CLOUD Act exposure, no automated cross-region replication to US infrastructure, and full control over backup retention and erasure workflows. Developers migrating from Aurora get a familiar deployment model — without the GDPR liability.

---

## The Migration Path from AWS Aurora to EU-Sovereign Postgres

The practical migration from Aurora to an EU-sovereign alternative depends on your Aurora flavour:

**From Aurora PostgreSQL:**

1. Enable logical replication on the Aurora cluster (`rds.logical_replication = 1` in the cluster parameter group)
2. Use `pg_dump` for a one-off snapshot migration, or `pglogical`/native logical replication for a live migration with minimal downtime
3. Switch connection strings in the application (Aurora PostgreSQL is wire-compatible with standard PostgreSQL)
4. Decommission the Aurora cluster after verification; the DPA boundary is now EU-controlled

**From Aurora MySQL:**

1. Use `mysqldump` or AWS DMS for the initial data export
2. Import to a MySQL-compatible EU alternative (Percona Server, MariaDB)
3. Validate application compatibility; Aurora MySQL extends standard MySQL in minor ways

**Key compliance steps during migration:**

- Document the migration in your Article 30 records as a change of processor (Article 28)
- Ensure Aurora PITR windows and snapshots are expired or deleted after cutover
- Run a post-migration erasure test: delete a test user and verify the data is gone from all new backup systems within your defined SLA

---

## Summary

AWS Aurora's managed database capabilities come with six GDPR failure vectors that are structural, not configurable: PITR retention conflicts with Article 17 erasure, automated snapshot accumulation violates Article 5(1)(e) storage limitation, Backtrack creates an opaque change history outside application control, Global Database replicates personal data to US infrastructure by default, the CLOUD Act gives US authorities compelled access to all Aurora clusters regardless of region, and Aurora Serverless v2's dynamic compute placement makes Article 32 audit trails incomplete.

For EU organisations processing personal data under GDPR — particularly in sectors subject to additional sector-specific regulation such as healthcare (EHDS), financial services (DORA), or critical infrastructure (NIS2) — running AWS Aurora represents a structural compliance liability that cannot be resolved by configuration alone.

EU-sovereign alternatives — Neon, CloudNativePG, Aiven, Patroni-based deployments on EU infrastructure, or managed Postgres via [sota.io](https://sota.io) — eliminate the CLOUD Act exposure and give you full control over the retention, erasure, and audit workflows that GDPR requires.
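As a closing sketch, the Aurora PostgreSQL migration path described above can use PostgreSQL's native logical replication (one alternative to the `pglogical` extension named in the migration section). All hostnames, database names, and credentials below are hypothetical placeholders, and the target is assumed to be reachable and empty:

```shell
# 1. On the Aurora (source) side: publish all tables.
#    Requires rds.logical_replication = 1 in the cluster parameter group.
psql "host=my-aurora.cluster-example.eu-west-1.rds.amazonaws.com dbname=appdb user=admin" \
  -c "CREATE PUBLICATION aurora_out FOR ALL TABLES;"

# 2. Copy the schema only to the EU target; row data flows via the
#    subscription's initial table sync below.
pg_dump --schema-only \
  "host=my-aurora.cluster-example.eu-west-1.rds.amazonaws.com dbname=appdb user=admin" \
  | psql "host=pg.eu-host.example dbname=appdb user=admin"

# 3. On the EU target: subscribe. This performs an initial copy of each
#    table, then streams changes continuously.
psql "host=pg.eu-host.example dbname=appdb user=admin" -c "
  CREATE SUBSCRIPTION aurora_in
    CONNECTION 'host=my-aurora.cluster-example.eu-west-1.rds.amazonaws.com dbname=appdb user=admin'
    PUBLICATION aurora_out;"

# 4. Once replication lag is zero: switch application connection strings,
#    DROP SUBSCRIPTION aurora_in on the target, and decommission Aurora.
```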

---

**EU-Native Hosting**

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.