AWS S3 Glacier EU Alternatives 2026: Long-Term Archival Without CLOUD Act Exposure
AWS S3 Glacier and S3 Glacier Deep Archive are the go-to choices for long-term data retention in AWS environments: audit logs, financial records, backup archives, compliance artifacts, and historical databases that must be retained for years or decades. The combination of low cost (as low as €0.00099 per GB per month for Deep Archive), durability guarantees, and integration with existing S3 workflows makes Glacier the path of least resistance for teams who need to store data cheaply and rarely retrieve it.
There are two structural problems for EU organizations:
First, Glacier archives are stored under US jurisdiction — Amazon Web Services, Inc. is a Delaware corporation subject to the CLOUD Act, regardless of which AWS region hosts the vault. Archived personal data, no matter how cold, remains accessible to US law enforcement via compelled disclosure without notification to the data controller.
Second, Glacier Vault Lock — the WORM (Write Once, Read Many) feature that makes Glacier attractive for regulatory compliance — creates a direct conflict with GDPR Article 17. Once a Vault Lock policy is applied and locked, data cannot be deleted for the duration of the lock period. Data subjects exercising their right to erasure cannot have their personal data deleted from a locked Glacier vault. This is not a configuration choice you can work around — it is the fundamental property of Vault Lock by design.
This guide explains both problems and the EU-native alternatives that let you satisfy archival retention requirements without CLOUD Act exposure and without the GDPR erasure conflict.
What AWS S3 Glacier Provides
The Glacier family covers three storage classes with different cost/retrieval tradeoffs:
- S3 Glacier Instant Retrieval — millisecond access, the highest-priced Glacier tier (~€0.004/GB/month). Used for compliance data that needs to be accessible quickly during audits.
- S3 Glacier Flexible Retrieval — retrieval times from minutes to 12 hours (~€0.0036/GB/month storage, plus retrieval fees). The original Glacier offering.
- S3 Glacier Deep Archive — retrieval times of 12–48 hours (~€0.001/GB/month). For data that will rarely if ever be accessed.
Organizations use S3 Lifecycle policies to automatically transition objects from standard S3 to Glacier tiers after a defined period:
```json
{
  "Rules": [
    {
      "ID": "archive-logs-after-90-days",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "application-logs/"
      },
      "Transitions": [
        {
          "Days": 90,
          "StorageClass": "GLACIER_IR"
        },
        {
          "Days": 365,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ],
      "Expiration": {
        "Days": 2555
      }
    }
  ]
}
```
This lifecycle configuration transitions application logs to Glacier Instant Retrieval after 90 days, to Deep Archive after 365 days, and deletes them after 7 years. For application logs containing personal data — IP addresses, user IDs, session tokens — this means 7 years of personal data retained in US-jurisdiction storage.
Vault Lock: The GDPR Art. 17 Conflict
S3 Glacier Vault Lock policies provide WORM storage with legal force. Once a Vault Lock policy is "locked" (committed), it cannot be modified or deleted for the duration specified in the policy. AWS does not allow exceptions — even the root account cannot override a locked Vault Lock policy.
A typical Vault Lock configuration for financial records retention:
```python
import json

import boto3

glacier = boto3.client('glacier', region_name='eu-central-1')

# Step 1: Create the vault (accountId='-' refers to the calling account)
glacier.create_vault(
    accountId='-',
    vaultName='financial-records-2019-2026'
)

# Step 2: Initiate the policy lock
response = glacier.initiate_vault_lock(
    accountId='-',
    vaultName='financial-records-2019-2026',
    policy={
        'Policy': json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "deny-based-on-archive-age",
                    "Effect": "Deny",
                    "Principal": "*",
                    "Action": "glacier:DeleteArchive",
                    "Resource": "arn:aws:glacier:eu-central-1:123456789012:vaults/financial-records-2019-2026",
                    "Condition": {
                        "NumericLessThan": {
                            "glacier:ArchiveAgeInDays": "2555"
                        }
                    }
                }
            ]
        })
    }
)
lock_id = response['lockId']
print(f"Lock ID: {lock_id}")

# Step 3: Complete the lock. You have 24 hours after initiation to test
# and abort; once completed, the lock is IRREVERSIBLE.
glacier.complete_vault_lock(
    accountId='-',
    vaultName='financial-records-2019-2026',
    lockId=lock_id
)
```
After Step 3, the vault is locked. No DeleteArchive API call will succeed for any object younger than 7 years (2555 days). This applies to all objects in the vault without exception, including objects containing personal data about individuals who submit GDPR Art. 17 erasure requests.
Why This Conflicts With GDPR Art. 17
GDPR Article 17(1) grants data subjects the right to obtain erasure of their personal data without undue delay when:
- The data is no longer necessary for the purposes for which it was collected
- The data subject withdraws consent and there is no other legal basis
- The data subject objects and there is no overriding legitimate interest
- The personal data was unlawfully processed
The key legal conflict arises when personal data archived in a locked Glacier vault belongs to an individual who exercises their Art. 17 right. The data controller cannot fulfill the erasure request: the Vault Lock policy prevents deletion at the technical level.
The attempted justification: Art. 17(3)(b)
GDPR Art. 17(3)(b) provides an exception to the right to erasure for compliance with a legal obligation — for example, financial record retention requirements. If a financial institution is legally required to retain transaction records for 7 years (as required by EU anti-money laundering directives), it may argue that Vault Lock implements this legal obligation.
This argument is defensible for the specific records subject to the retention obligation. It does not extend to:
- Co-mingled personal data not covered by the retention obligation (e.g., behavioral analytics, marketing data, session logs archived alongside financial records)
- Metadata retained alongside the mandated records that is not itself required for the compliance purpose
- Data subjects who are not parties to the mandated retention obligation (e.g., employees, website visitors whose data ended up in the same archive)
In practice, Glacier vaults often contain mixed data: compliance-mandated records that justify the retention period plus other personal data that ended up in the same archive for operational convenience. The Art. 17(3)(b) exception covers the mandated portion; it does not provide a blanket exemption for the entire vault.
The Vault Lock Design Is Intentionally Inflexible
AWS designed Vault Lock to satisfy US regulatory requirements like SEC Rule 17a-4 (WORM storage for financial records) and CFTC Rule 1.31. These US regulations require that records be retained in a non-rewritable, non-erasable format.
US financial regulators and GDPR Art. 17 are in structural tension: one requires that data cannot be deleted, the other requires that it can be. Vault Lock, optimized for the former, is incompatible with the latter for co-mingled personal data.
The solution for GDPR-compliant WORM archival is architectural segregation, not a single locked vault: archive compliance-mandated records (which may use WORM) separately from personal data that is subject to erasure rights. This requires purpose-specific archive design from the beginning, not retrofitted after Vault Lock is applied.
The CLOUD Act Problem for Archival Data
The CLOUD Act concern for Glacier archives is straightforward: Amazon Web Services, Inc. is a US corporation. A CLOUD Act order can compel Amazon to produce archived objects, including objects in locked Glacier vaults, if the order meets the legal threshold.
Implications specific to archival data:
- Retention increases exposure duration. Data retained for 7 years in Glacier means 7 years of CLOUD Act exposure. Records archived in 2026 remain reachable by a disclosure order in 2033.
- Vault Lock does not prevent government access. Vault Lock prevents the data controller from deleting the data. It does not prevent Amazon from being compelled by law enforcement to provide access. The policy statement `"Effect": "Deny", "Action": "glacier:DeleteArchive"` restricts the data controller's DeleteArchive API calls; it does not restrict Amazon's own internal access capabilities used to fulfill law enforcement orders.
- Deep Archive retrieval times do not impede law enforcement. A CLOUD Act order gives Amazon time to comply. The 12–48 hour retrieval window for Deep Archive is not a meaningful obstacle to a legal order.
- Archives often contain the most sensitive historical data. Audit logs capture everything that happened. Financial records capture every transaction. Backup archives contain database snapshots with the most complete personal data profiles in your organization. Archiving to Glacier concentrates personal data sensitivity without reducing jurisdiction risk.
GDPR Art. 5(1)(e): Storage Limitation and Archival Scope
GDPR Article 5(1)(e) requires that personal data be "kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed" — the storage limitation principle.
Organizations often treat Glacier as a "store everything, worry later" solution. The low cost makes it easy to archive data indefinitely rather than making decisions about what to retain. This creates systematic violations of Art. 5(1)(e): personal data retained in Glacier for years longer than any documented business or legal purpose.
A Glacier archive containing 7 years of application logs typically includes personal data that was:
- Collected for user authentication — IP addresses, session tokens — with a processing purpose that ended when the user session closed
- Collected for debugging — stack traces with user identifiers — with a processing purpose that ended when the bug was fixed
- Collected for analytics — user behavior events — with a retention period defined by a "30-day analytics window" that expired 6 years and 11 months ago
The existence of a financial records retention obligation does not extend to the application logs archived in the same Glacier vault. Each data category has its own retention justification and its own maximum retention period under Art. 5(1)(e).
The practical implication: Before implementing Glacier archival for personal data, you need a retention schedule that maps each data category to a specific retention period with a documented legal basis. The S3 Lifecycle policy should implement those specific periods, not a single blanket archival policy applied to everything.
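One way to sketch that per-category mapping in code. The categories, periods, and the `build_lifecycle_rules` helper below are illustrative assumptions, not a legal template or an AWS API:

```python
# Map each data category prefix to (retention days, documented legal basis).
# These entries are examples - your schedule must come from your own
# records-of-processing documentation.
RETENTION_SCHEDULE = {
    "application-logs/":  (90,   "legitimate interest: security monitoring"),
    "financial-records/": (2555, "legal obligation: AML record keeping"),
    "analytics-events/":  (30,   "consent: 30-day analytics window"),
}

def build_lifecycle_rules(schedule: dict) -> tuple[list, dict]:
    """Turn a {prefix: (days, basis)} schedule into per-prefix S3 lifecycle
    rules plus a retention manifest.

    The legal basis is returned separately because it is not part of the
    S3 lifecycle API - it belongs in your retention manifest.
    """
    rules, manifest = [], {}
    for prefix, (days, basis) in schedule.items():
        rules.append({
            "ID": f"expire-{prefix.rstrip('/')}-{days}d",
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},
            "Expiration": {"Days": days},
        })
        manifest[prefix] = {
            "retention_period_days": days,
            "retention_basis": basis,
        }
    return rules, manifest

rules, manifest = build_lifecycle_rules(RETENTION_SCHEDULE)
```

The resulting `rules` list can be passed as `LifecycleConfiguration={"Rules": rules}` to `put_bucket_lifecycle_configuration`, giving each category its own expiry instead of one blanket policy.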
EU-Native Alternatives for Long-Term Archival
Hetzner Object Storage
Hetzner Online GmbH is a German-incorporated company headquartered in Gunzenhausen, Bavaria. Hetzner Object Storage uses the S3-compatible API, making migration straightforward, and stores data in Hetzner's Falkenstein, Nuremberg, and Helsinki data centers — outside CLOUD Act jurisdiction.
```python
import boto3

# Hetzner Object Storage - S3-compatible API
hetzner_s3 = boto3.client(
    's3',
    endpoint_url='https://fsn1.your-objectstorage.com',  # Falkenstein
    aws_access_key_id='<hetzner-access-key>',
    aws_secret_access_key='<hetzner-secret-key>',
    region_name='eu-central'
)

# Archive bucket lifecycle policy (standard S3 lifecycle API)
hetzner_s3.put_bucket_lifecycle_configuration(
    Bucket='financial-records-archive',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'delete-after-7-years',
                'Status': 'Enabled',
                'Filter': {'Prefix': ''},
                'Expiration': {'Days': 2555}
            }
        ]
    }
)

# Upload with custom retention metadata
hetzner_s3.put_object(
    Bucket='financial-records-archive',
    Key='2026/01/transaction-log-2026-01-15.gz',
    Body=compressed_data,  # gzip-compressed bytes prepared upstream
    Metadata={
        'gdpr-retention-basis': 'amld-article-40',
        'gdpr-retention-expires': '2033-01-15',
        'gdpr-data-categories': 'financial-records',
        'gdpr-erasure-applicable': 'false'
    }
)
```
Hetzner Object Storage does not currently offer a native Glacier-equivalent cold storage tier with separate pricing. All objects are stored at the standard object storage rate. For deep archival cost optimization, use object compression and batching before upload.
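A minimal sketch of that batching step, using only the Python standard library (the member names are illustrative):

```python
import io
import tarfile

def batch_archive(files: dict) -> bytes:
    """Bundle many small objects into one gzip-compressed tar archive.

    files: {archive_member_name: raw_bytes}. Fewer, larger objects reduce
    per-request overhead and total storage on flat-rate object storage.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

blob = batch_archive({
    "2026/01/15/app.log": b"line1\nline2\n" * 1000,
    "2026/01/15/audit.log": b"event\n" * 1000,
})
# blob is uploaded as a single object, e.g. Key='2026/01/15/logs.tar.gz'
```

Note that batching trades retrieval granularity for cost: a per-user erasure request then requires rewriting the batch without the affected records, so batch along boundaries that match your erasure obligations.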
The key difference from Glacier works in GDPR's favor: Hetzner does not offer a Vault Lock equivalent with immutable WORM semantics, so you retain the ability to fulfill GDPR Art. 17 erasure requests at any time, while relying on Hetzner's replication architecture for durability.
OVHcloud Cold Archive
OVHcloud (OVH SAS) is a French company headquartered in Roubaix, France. OVHcloud's Cold Archive offering specifically targets the Glacier Deep Archive use case: very low per-GB cost for data that is accessed infrequently or for regulatory compliance.
```bash
# OVHcloud Cold Archive exposes S3-compatible and OpenStack Swift APIs;
# behind the API, long-term data is moved to tape-based storage.
# Using rclone with the OVHcloud S3-compatible endpoint:
rclone config create ovhcloud-archive s3 \
  provider=OVH \
  access_key_id="<access-key>" \
  secret_access_key="<secret-key>" \
  endpoint="https://s3.gra.perf.cloud.ovh.net" \
  region="gra"

# Sync archive data to OVHcloud Cold Archive
rclone sync /local/archive/ ovhcloud-archive:cold-archive-bucket/ \
  --transfers=4 \
  --checkers=8 \
  --log-level INFO
```
OVHcloud Cold Archive uses tape storage for the lowest-cost tier, similar to Glacier Deep Archive's economics. Retrieval takes hours rather than milliseconds. For regulatory compliance archives accessed annually for audits, this is acceptable.
OVHcloud has no US parent company: OVH Groupe is an independent French company listed on Euronext Paris. The CLOUD Act does not apply. OVHcloud's data processing agreement covers GDPR Article 28 processor requirements.
MinIO on Hetzner Dedicated Servers
For organizations that require full control over the archival stack, MinIO running on Hetzner dedicated servers provides S3-compatible object storage on infrastructure you fully control.
```yaml
# docker-compose.yml for a MinIO archival node on Hetzner
version: '3.8'

services:
  minio:
    image: minio/minio:RELEASE.2026-04-01T00-00-00Z
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ACCESS_KEY}
      MINIO_ROOT_PASSWORD: ${MINIO_SECRET_KEY}
    # Pass each mounted drive explicitly so MinIO erasure-codes across
    # all four disks (MinIO expands the {1...4} range syntax)
    command: server /data/disk{1...4} --console-address ":9001"
    volumes:
      - /mnt/hetzner-ssd-1:/data/disk1
      - /mnt/hetzner-ssd-2:/data/disk2
      - /mnt/hetzner-ssd-3:/data/disk3
      - /mnt/hetzner-ssd-4:/data/disk4
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 8G
```
MinIO supports object locking with WORM semantics (via the x-amz-object-lock-mode header, on buckets created with object locking enabled), which is the per-object equivalent of Glacier Vault Lock. The critical difference for GDPR compliance is that MinIO object locks are applied at the object level rather than the vault level, and with explicit expiry dates. This allows you to:
- Lock only the specific records subject to mandatory retention obligations
- Leave co-mingled personal data without locks, preserving Art. 17 erasure capability
- Implement retention-period-specific locks per data category
```python
import os
from datetime import datetime, timezone

import boto3

# MinIO object locking: per-object WORM with expiry
s3_client = boto3.client(
    's3',
    endpoint_url='https://minio.your-eu-server.de',
    aws_access_key_id=os.environ['MINIO_ACCESS_KEY'],
    aws_secret_access_key=os.environ['MINIO_SECRET_KEY'],
)

# Lock a financial record for 7-year regulatory retention
# (the target bucket must have been created with object locking enabled)
s3_client.put_object(
    Bucket='financial-records',
    Key='2026/01/transaction-log-2026-01-15.gz',
    Body=compressed_data,  # gzip-compressed bytes prepared upstream
    ObjectLockMode='COMPLIANCE',
    ObjectLockRetainUntilDate=datetime(2033, 1, 15, tzinfo=timezone.utc)
)

# Store non-mandatory personal data WITHOUT a lock (erasure remains possible)
s3_client.put_object(
    Bucket='application-archives',
    Key='2026/01/app-log-2026-01-15.gz',
    Body=compressed_data
    # No ObjectLockMode - can be deleted on an Art. 17 erasure request
)
```
This architecture satisfies both regulatory retention requirements (locked records cannot be deleted prematurely) and GDPR Art. 17 (unlocked records can be deleted on request).
Wasabi EU (Frankfurt)
Wasabi Technologies has a data center in Frankfurt, Germany. Wasabi offers S3-compatible storage with no egress fees and pricing competitive with Glacier Instant Retrieval for frequently-accessed archives.
Note that Wasabi Technologies, Inc. is headquartered in Boston, Massachusetts — a US corporation. The CLOUD Act applies to US companies regardless of where their servers are located. Wasabi Frankfurt is not CLOUD Act-exempt. For data sovereignty, Wasabi EU does not provide the same protection as Hetzner Object Storage or OVHcloud Cold Archive.
For teams whose primary concern is cost rather than CLOUD Act exposure, Wasabi EU is an improvement over AWS Glacier on regional reliability and egress costs. For teams whose primary concern is EU data sovereignty, use an EU-incorporated provider.
Building a GDPR-Compliant Archival Architecture
Principle 1: Segregate by Erasure Eligibility
Do not archive personal data subject to Art. 17 erasure rights in the same vault/bucket as data subject to mandatory retention obligations.
```text
archive-structure/
├── mandatory-retention/        # WORM lock applied, Art. 17(3)(b) exception
│   ├── financial-records/
│   ├── audit-logs-compliance/
│   └── legal-hold/
└── operational-retention/      # No WORM lock, Art. 17 erasure possible
    ├── application-logs/
    ├── user-behavioral-data/
    └── backup-snapshots/
```
Principle 2: Document Retention Justification per Object Class
Each data category in your archive needs a documented retention justification. Store this as metadata on the bucket or as a JSON manifest:
```json
{
  "bucket": "operational-retention",
  "data_categories": [
    {
      "prefix": "application-logs/",
      "personal_data": true,
      "data_types": ["ip_address", "user_id", "session_id"],
      "retention_basis": "gdpr_art_6_1_f_legitimate_interest",
      "retention_period_days": 90,
      "erasure_applicable": true,
      "erasure_mechanism": "s3_delete_by_user_id_prefix"
    },
    {
      "prefix": "financial-records/",
      "personal_data": true,
      "data_types": ["transaction_id", "account_number", "amount"],
      "retention_basis": "legal_obligation_amld",
      "retention_period_days": 2555,
      "erasure_applicable": false,
      "erasure_exception": "gdpr_art_17_3_b_legal_obligation"
    }
  ]
}
```
Principle 3: Implement Erasure by Design
For archival data where Art. 17 applies, design the deletion mechanism before you archive:
```python
def fulfill_erasure_request(user_id: str, s3_client, bucket: str, prefix: str) -> int:
    """
    Fulfill a GDPR Art. 17 erasure request for a specific user ID.

    Deletes all objects matching the user_id prefix in non-WORM buckets.
    """
    paginator = s3_client.get_paginator('list_objects_v2')
    pages = paginator.paginate(
        Bucket=bucket,
        Prefix=f"{prefix}{user_id}/"
    )

    objects_to_delete = []
    for page in pages:
        for obj in page.get('Contents', []):
            objects_to_delete.append({'Key': obj['Key']})

    # delete_objects accepts at most 1000 keys per request, so batch
    for i in range(0, len(objects_to_delete), 1000):
        s3_client.delete_objects(
            Bucket=bucket,
            Delete={
                'Objects': objects_to_delete[i:i + 1000],
                'Quiet': True
            }
        )

    print(f"Deleted {len(objects_to_delete)} objects for user {user_id}")
    return len(objects_to_delete)
```
If your archival data is not organized by user ID prefix — if it's in bulk logs where user IDs are embedded in record content — your erasure mechanism becomes significantly more complex. The easiest path is to design archival structures that support per-user deletion from the beginning.
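A sketch of the key-layout convention that makes per-user deletion tractable. `archive_key` is a hypothetical helper; the prefix and user IDs are examples:

```python
def archive_key(prefix: str, user_id: str, day: str, name: str) -> str:
    """Build an archive key whose second path segment is the user ID,
    so every object belonging to one user shares a listable prefix."""
    return f"{prefix}{user_id}/{day}/{name}"

# With this layout, an Art. 17 request for user 'u-42' reduces to listing
# and deleting everything under 'application-logs/u-42/'.
keys = [
    archive_key("application-logs/", "u-42", "2026-01-15", "session.log.gz"),
    archive_key("application-logs/", "u-42", "2026-01-16", "session.log.gz"),
    archive_key("application-logs/", "u-99", "2026-01-15", "session.log.gz"),
]
user_42_keys = [k for k in keys if k.startswith("application-logs/u-42/")]
```

The same convention is what `fulfill_erasure_request` above relies on: the `Prefix` filter does the per-user scoping, so no record-level scanning is needed.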
Migrating From AWS S3 Glacier to EU-Sovereign Archival
Step 1: Audit Existing Glacier Archives
```python
import boto3

glacier = boto3.client('glacier', region_name='eu-central-1')

# List all vaults (list_vaults returns up to 10 vaults per call;
# follow the response 'Marker' to paginate larger accounts)
vaults = glacier.list_vaults(accountId='-')['VaultList']

for vault in vaults:
    print(f"Vault: {vault['VaultName']}")
    print(f"  Size: {vault['SizeInBytes'] / 1e9:.2f} GB")
    print(f"  Archives: {vault['NumberOfArchives']}")

    # Check for Vault Lock
    try:
        lock = glacier.get_vault_lock(
            accountId='-',
            vaultName=vault['VaultName']
        )
        print(f"  Vault Lock: {lock['State']}")
    except glacier.exceptions.ResourceNotFoundException:
        print("  Vault Lock: None")
```
Critical finding: If Vault Lock state is "Locked", you cannot delete data from that vault before the retention period expires. Identify which personal data categories are in locked vaults and assess which are covered by Art. 17(3) exceptions vs. which may require regulatory guidance to address.
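A hedged sketch of that triage step, assuming the audit output has been annotated with data categories from your retention manifest. The field names below are assumptions for illustration, not Glacier API output:

```python
def triage_locked_vaults(vaults: list) -> tuple[list, list]:
    """Split audited vaults into 'defensible' (every data category covered
    by an Art. 17(3) exception) and 'problematic' (locked WORM storage
    holding personal data with no documented erasure exception).

    Each vault dict is assumed to carry 'name', 'lock_state', and a
    'data_categories' list whose entries have an 'erasure_exception' field.
    """
    defensible, problematic = [], []
    for vault in vaults:
        if vault["lock_state"] != "Locked":
            continue  # unlocked vaults can still honour erasure requests
        if all(cat["erasure_exception"] for cat in vault["data_categories"]):
            defensible.append(vault["name"])
        else:
            problematic.append(vault["name"])
    return defensible, problematic
```

Vaults in the `problematic` list are the ones where regulatory guidance may be needed, since the lock cannot be lifted before its expiry.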
Step 2: Restore Archives for Migration
Glacier restores require a restore request before data can be downloaded:
```python
import boto3

s3 = boto3.client('s3', region_name='eu-central-1')

# Initiate restore for a Glacier-class object
s3.restore_object(
    Bucket='my-glacier-bucket',
    Key='archived-data/2024-logs.tar.gz',
    RestoreRequest={
        'Days': 7,  # Keep the restored copy available for 7 days
        'GlacierJobParameters': {
            'Tier': 'Standard'  # 3-5 hours for Flexible Retrieval
        }
    }
)

# Check restore status
response = s3.head_object(
    Bucket='my-glacier-bucket',
    Key='archived-data/2024-logs.tar.gz'
)
restore_status = response.get('Restore', 'Not initiated')
print(f"Restore status: {restore_status}")
```
For Deep Archive objects, use Bulk retrieval (12-48 hours) to minimize retrieval costs during migration.
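The tier choice can be sketched as a small helper, using the approximate Deep Archive retrieval windows quoted above. `choose_retrieval_tier` is a hypothetical function, not an AWS API:

```python
# Approximate Deep Archive retrieval windows in hours
# (Standard ~12h, Bulk up to ~48h). Bulk is the cheapest tier.
TIER_HOURS = {"Standard": 12, "Bulk": 48}

def choose_retrieval_tier(deadline_hours: float) -> str:
    """Pick the cheapest Deep Archive retrieval tier that still meets
    the migration deadline."""
    if deadline_hours >= TIER_HOURS["Bulk"]:
        return "Bulk"  # cheapest - preferred for non-urgent bulk migration
    return "Standard"
```

For a migration planned weeks in advance, `choose_retrieval_tier(24 * 7)` returns `"Bulk"`, which is the recommendation made above; `Tier` in the `RestoreRequest` is set accordingly.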
Step 3: Transfer to EU-Sovereign Archival
Once restored, use rclone to transfer to your EU-sovereign destination:
```bash
# Configure rclone for Hetzner Object Storage
rclone config create hetzner-archive s3 \
  provider=Other \
  env_auth=false \
  access_key_id="<hetzner-key>" \
  secret_access_key="<hetzner-secret>" \
  endpoint="https://fsn1.your-objectstorage.com"

# Transfer restored data from AWS to Hetzner
# (assumes an rclone remote named "s3" is already configured for AWS)
rclone copy s3:my-glacier-bucket/ hetzner-archive:eu-archive-bucket/ \
  --transfers=8 \
  --s3-upload-concurrency=4 \
  --progress \
  --log-level INFO \
  --log-file=/var/log/glacier-migration.log
```
Deploying EU-Sovereign Archival Infrastructure on sota.io
For teams that want managed archival infrastructure rather than direct object storage accounts, sota.io provides deployment of MinIO and compatible object storage stacks on EU-sovereign compute.
Deploying MinIO on sota.io means:
- The object storage API endpoint, administration, and key management run on EU-incorporated infrastructure
- No US parent company in the data path — CLOUD Act does not apply
- Git-push deployment for MinIO configuration changes
- Persistent volumes for archival data that survive container restarts
- Private networking between MinIO and your application services
For organizations that need to replace Glacier in an existing S3 Lifecycle architecture, MinIO's S3-compatible API means the transition requires only a change to the S3 endpoint URL in your application configuration — no producer code changes.
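One way to sketch that endpoint swap, assuming credentials and the endpoint come from environment variables. The variable names (`ARCHIVE_*`) are illustrative, not a sota.io or AWS convention:

```python
import os

def s3_client_kwargs() -> dict:
    """Build boto3 S3 client kwargs from the environment so the archival
    target can be swapped (AWS Glacier -> MinIO/Hetzner) without touching
    producer code."""
    kwargs = {
        "aws_access_key_id": os.environ["ARCHIVE_ACCESS_KEY"],
        "aws_secret_access_key": os.environ["ARCHIVE_SECRET_KEY"],
    }
    # Leaving ARCHIVE_S3_ENDPOINT unset falls back to the AWS default
    endpoint = os.environ.get("ARCHIVE_S3_ENDPOINT")
    if endpoint:
        kwargs["endpoint_url"] = endpoint
    return kwargs

# Application code stays identical on either backend:
#   s3 = boto3.client('s3', **s3_client_kwargs())
```

Cutting over then means changing one environment variable in deployment configuration, which is what makes the S3-compatible migration path low-risk.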
Summary
AWS S3 Glacier provides cost-effective long-term archival storage with one critical flaw for GDPR compliance: it routes archived personal data through US-jurisdiction infrastructure subject to the CLOUD Act. Glacier Vault Lock compounds this by creating an immutable archive that structurally conflicts with GDPR Art. 17 right to erasure when personal data subject to erasure rights is co-mingled with compliance-mandated records.
EU-native alternatives — Hetzner Object Storage, OVHcloud Cold Archive, and MinIO on EU infrastructure — provide equivalent archival economics without CLOUD Act exposure. MinIO with per-object locking (rather than vault-level locking) enables GDPR-compliant WORM archival by applying immutability only to records genuinely subject to mandatory retention obligations, while preserving erasure capability for co-mingled personal data.
The architectural lesson from Glacier's GDPR conflict is general: archival systems built for US regulatory compliance (SEC 17a-4, CFTC Rule 1.31) are optimized for irrefutability, not for erasure. GDPR compliance requires the opposite capability: the ability to delete specific personal data on request, at any layer in the data stack, including cold archives designed to be permanent.
This post is part of the sota.io AWS EU Alternative Series. Related posts: AWS S3 EU Alternative, AWS Backup EU Alternative, AWS EBS EU Alternative.
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.