AWS CloudHSM EU Alternative 2026: Hardware Security Modules, FIPS 140-3, and the GDPR/CLOUD Act Gap
Post #754 in the sota.io EU Compliance Series
AWS CloudHSM provides dedicated hardware security modules (HSMs) — tamper-resistant physical devices that generate, store, and manage cryptographic keys. Unlike AWS KMS, where Amazon controls the underlying key material, CloudHSM gives you exclusive control: only you have access to keys within the HSM. The HSM hardware itself runs in dedicated, single-tenant compute in your chosen AWS region. You initialize the HSM, set the administrator credentials, and create the crypto users. Amazon cannot access the keys inside.
This dedicated model makes CloudHSM popular for regulated industries — financial services requiring PCI DSS HSM controls, healthcare organizations protecting encryption keys for medical records, government workloads requiring FIPS 140-3 Level 3 validation, and PKI deployments issuing certificates from hardware-protected root CAs.
AWS CloudHSM is available in European regions: eu-west-1 (Ireland), eu-central-1 (Frankfurt), eu-west-3 (Paris), eu-north-1 (Stockholm). The physical HSM appliances run in Europe. Your cryptographic keys never leave the hardware unencrypted. Many security architects treat this as meeting GDPR Art.32 requirements for appropriate technical measures.
The problem is structural. Amazon Web Services, Inc. is a Delaware corporation headquartered in Seattle, Washington. The CLOUD Act (18 U.S.C. § 2713) compels US companies to produce data stored or controlled anywhere in the world when ordered by US authorities. While the cryptographic key material itself is protected inside the HSM hardware, the surrounding management plane — cluster configurations, audit logs, backup encryption, initialization records, user credential metadata — is managed by the US legal entity. A valid government order can reach this operational data regardless of which AWS region hosts the HSM hardware.
There is a second gap that compliance teams frequently miss: FIPS 140-3 certification and GDPR compliance are orthogonal requirements. FIPS validates cryptographic module security — the hardware randomness, key zeroization on tamper, algorithm correctness. FIPS says nothing about the organizational structure of the company managing the surrounding infrastructure, the legal jurisdiction governing operational data, or the applicability of the CLOUD Act to service configuration metadata. A FIPS-validated module operated by a US company remains subject to US legal process for its surrounding operational state.
What AWS CloudHSM Stores Outside the HSM Hardware
The cryptographic keys inside CloudHSM hardware are genuinely inaccessible to Amazon. The problem lies in what CloudHSM stores outside the tamper-resistant hardware boundary — in AWS-managed service state, CloudWatch, S3, and the management plane — and what that data reveals about your cryptographic architecture.
HSM Cluster Configurations as Art.30 Cryptographic Architecture Records
When you create a CloudHSM cluster, AWS stores the cluster configuration in the CloudHSM service: the cluster ID, the HSM count, the VPC and subnet placement, the IP addresses assigned to each HSM, the ENI (Elastic Network Interface) identifiers, and the cluster state history. These configurations are stored by Amazon Web Services, Inc. under US jurisdiction.
For organizations using CloudHSM to protect personal data — encrypting patient records, signing financial transactions referencing account holders, protecting authentication tokens for identified users — the cluster configuration is an Art.30 processing record. It documents the cryptographic infrastructure protecting personal data: where HSMs are placed, how many exist, which VPCs (and thus which applications handling personal data) have access.
GDPR Art.30(1)(g) requires records of processing activities to include "where possible, a general description of the technical and organisational security measures" — which is exactly what HSM cluster configurations document. The irony: this Art.30 record now exists in AWS-managed service state under CLOUD Act compulsion scope.
A cluster configuration entry includes:
ClusterId: cluster-abc12345
State: ACTIVE
HsmType: hsm1.medium
SubnetMapping:
  eu-central-1a: subnet-0abc1234def56789
  eu-central-1b: subnet-0fed9876cba54321
Hsms:
  - HsmId: hsm-xyz98765
    AvailabilityZone: eu-central-1a
    EniIp: 10.0.1.42
    State: ACTIVE
BackupPolicy: DEFAULT
CreateTimestamp: 2026-01-15T09:32:14Z
This metadata maps the cryptographic infrastructure protecting personal data — stored by a US company.
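Every field above is retrievable on demand through the CloudHSM v2 `DescribeClusters` API — the management plane this section describes. A minimal sketch of pulling the placement-revealing fields out of a cluster entry (the summary helper is our own illustration; the field names follow the API response shape):

```python
def cluster_metadata_summary(cluster: dict) -> dict:
    """Reduce a DescribeClusters entry to the fields that map
    cryptographic infrastructure to network placement."""
    return {
        "cluster_id": cluster["ClusterId"],
        "subnets": sorted(cluster.get("SubnetMapping", {}).values()),
        "hsm_ips": [h.get("EniIp") for h in cluster.get("Hsms", [])],
    }

# Retrieval goes through the AWS management plane (US legal entity):
# client = boto3.client("cloudhsmv2", region_name="eu-central-1")
# for c in client.describe_clusters()["Clusters"]:
#     print(cluster_metadata_summary(c))
```

The point is not that this API is insecure — it is that the data it serves lives with the operator, not inside the HSM boundary.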
CloudWatch HSM Audit Logs as Art.32 Security Evidence Under US Jurisdiction
CloudHSM publishes detailed audit logs to Amazon CloudWatch Logs. These logs record every cryptographic and administrative operation performed on the HSM: key generation events, key deletion events, login successes and failures, user management operations (create/delete HSM user), role assignments within the HSM, and session management.
CloudWatch logs are stored in AWS-managed infrastructure under US legal control. For a CloudHSM deployment protecting personal data, these audit logs constitute Art.32 security monitoring records — evidence that appropriate access controls and audit trails exist for cryptographic operations on personal data.
CloudHSM audit log entries include:
{
  "EventVersion": "1.0",
  "UserType": "CU",
  "UserName": "app-encryption-user",
  "Action": "generateKey",
  "Result": 0,
  "OpaqueHandle": "6",
  "RequestId": "req-abc123",
  "SessionIdentifier": 10,
  "Cluster": "cluster-abc12345",
  "Time": "2026-05-01T14:23:07.123Z"
}
The UserName field reveals which application service account performed which cryptographic operation. When the HSM manages keys for database encryption protecting medical records, these logs document who ran which cryptographic operations and when — GDPR Art.32 security records, stored in CloudWatch and reachable via CLOUD Act process.
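These logs are also queryable in bulk. A hedged sketch of a CloudWatch Logs Insights query over the audit stream — the query-builder helper is our own, and the per-cluster log group name is an assumption based on the `/aws/cloudhsm/<cluster-id>` pattern:

```python
def hsm_key_event_query(username: str) -> str:
    """CloudWatch Logs Insights query: key-management operations
    recorded for one HSM crypto user (CU)."""
    return (
        "fields @timestamp, UserName, Action, Result"
        f" | filter UserType = 'CU' and UserName = '{username}'"
        " | filter Action in ['generateKey', 'deleteKey']"
        " | sort @timestamp desc"
    )

# Running the query goes through the CloudWatch Logs API -- the
# US-operated management plane this section describes:
# logs = boto3.client("logs", region_name="eu-central-1")
# logs.start_query(
#     logGroupName="/aws/cloudhsm/cluster-abc12345",  # assumed log group
#     startTime=..., endTime=...,
#     queryString=hsm_key_event_query("app-encryption-user"),
# )
```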
HSM Backup Encryption: The KMS Trust Root Problem
CloudHSM automatically creates encrypted backups of HSM state (HSM configuration and cryptographic objects are backed up in an encrypted form). AWS stores these backups in an S3 bucket managed by AWS CloudHSM. Critically: AWS CloudHSM backups are encrypted using an AWS-managed key — a key derived from the HSM's internal wrapping key combined with AWS-controlled backup infrastructure.
The backup encryption architecture means:
- HSM backups are stored in AWS S3 (US company, CLOUD Act applies)
- The encryption wrapping involves AWS-managed key material in the backup chain
- AWS can restore HSM cluster state from backups (this is necessary for cluster recovery), meaning the backup decryption capability exists within AWS infrastructure
This creates an Art.28 processor dependency gap: the processor (AWS) maintains backup decryption capability for your HSM state. In a CLOUD Act compelled production scenario, backup material could be produced, and if the US government can compel both the backup data and the decryption mechanism, the keys inside become accessible — the opposite of what a dedicated HSM is designed to guarantee.
For organizations using CloudHSM to protect Art.9 special category data (medical records, biometric templates, genetic data), this backup architecture gap undermines the security architecture intended to meet GDPR Art.32 requirements.
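The backup inventory itself is served by the same US-operated API (`DescribeBackups`). A small sketch, assuming the documented response shape (`BackupId`, `BackupState`); the filtering helper is our own illustration:

```python
def backups_in_scope(backups: list[dict]) -> list[str]:
    """Backup IDs whose encrypted HSM state sits in AWS-managed
    storage -- i.e. every backup the service has not deleted."""
    return [b["BackupId"] for b in backups
            if b.get("BackupState") != "DELETED"]

# client = boto3.client("cloudhsmv2", region_name="eu-central-1")
# print(backups_in_scope(client.describe_backups()["Backups"]))
```

Each ID returned corresponds to a copy of HSM state held outside your control perimeter.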
CloudHSM User Credential Metadata as Art.28 Processor Records
CloudHSM maintains a separate user management system inside the hardware, but the service-level management — connecting your CloudHSM users to your AWS account, managing the cluster IAM policies, tracking which IAM roles have cloudhsm:* permissions — is stored in AWS IAM and the CloudHSM service management plane.
IAM policies governing CloudHSM access include:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudhsm:CreateHsm",
        "cloudhsm:DeleteHsm",
        "cloudhsm:DescribeClusters"
      ],
      "Resource": "arn:aws:cloudhsm:eu-central-1:123456789012:cluster/cluster-abc12345"
    }
  ]
}
These IAM policies are Art.28 processor-level records: they define who within your organization has authority over the HSM managing encryption keys for personal data. The organizational cryptographic authority structure is documented in AWS IAM — US jurisdiction.
Under Art.28(3)(h), processors must "make available to the controller all information necessary to demonstrate compliance with the obligations" of the article. IAM policies governing HSM access are part of this audit trail — stored by the US legal entity operating the management plane.
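As a quick way to surface these records in your own account, a sketch that scans an IAM policy document for HSM-authority grants — the helper is our own illustration, not an AWS API:

```python
import json

def cloudhsm_actions(policy_json: str) -> list[str]:
    """Extract cloudhsm:* actions from an IAM policy document --
    the HSM-authority records an Art.28(3)(h) audit would cover."""
    doc = json.loads(policy_json)
    actions = []
    for stmt in doc.get("Statement", []):
        acts = stmt.get("Action", [])
        if isinstance(acts, str):
            acts = [acts]
        actions += [a for a in acts if a.startswith("cloudhsm:")]
    return actions

# Feed it policy documents retrieved via the IAM API, e.g.:
# iam = boto3.client("iam")
# version = iam.get_policy_version(PolicyArn=arn, VersionId=vid)
# cloudhsm_actions(json.dumps(version["PolicyVersion"]["Document"]))
```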
HSM Initialization Records and Certificate Authority Structures
CloudHSM initialization requires creating a cluster CSR (Certificate Signing Request) and issuing an HSM certificate that authenticates the hardware to your applications. The initialization workflow generates:
- A cluster CSR identifying the HSM cluster
- AWS-signed certificates authenticating HSM hardware authenticity (signed by an AWS CA)
- Customer HSM certificates for mutual TLS between client SDK and HSM
The certificate chain rooting CloudHSM's hardware attestation goes through AWS Certificate Authority infrastructure — a US company's PKI. When your applications use the CloudHSM client SDK, they validate HSM authenticity against an AWS-issued certificate chain. The trust root for your dedicated HSM hardware is a US corporate PKI.
For GDPR Art.32(1)(a) — "pseudonymisation and encryption of personal data" — the encryption protecting personal data depends on HSM keys whose hardware authenticity is anchored in a US corporate certificate authority. A CLOUD Act order could compel production of CA records that document the certificate issuance for the HSM hardware authenticating your encryption infrastructure.
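You can inspect that trust root directly: `DescribeClusters` returns the certificate material, including the `AwsHardwareCertificate`. A hedged sketch — the command-builder helper is our own, and the execution lines are commented because they require live AWS credentials:

```python
def issuer_inspect_cmd(pem_path: str) -> list[str]:
    """openssl invocation to print issuer and subject of the AWS
    hardware attestation certificate."""
    return ["openssl", "x509", "-in", pem_path,
            "-noout", "-issuer", "-subject"]

# Fetch the chain via the management plane, then inspect it:
# certs = boto3.client("cloudhsmv2", region_name="eu-central-1") \
#     .describe_clusters()["Clusters"][0]["Certificates"]
# pathlib.Path("aws-hw.pem").write_text(certs["AwsHardwareCertificate"])
# subprocess.run(issuer_inspect_cmd("aws-hw.pem"))
```

The issuer printed by that inspection is the AWS CA — the US corporate PKI anchoring your hardware attestation.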
CloudFormation and Infrastructure-as-Code Templates as Art.25 Design Records
Organizations deploying CloudHSM through infrastructure-as-code (CloudFormation, Terraform with AWS provider, CDK) create template documents specifying the complete HSM architecture: cluster configuration, VPC placement, subnet selection, HSM count, backup policies, IAM roles, and application integration points.
CloudFormation templates stored in S3, deployed via the CloudFormation service, and tracked in CloudFormation stacks are managed by AWS — US jurisdiction. A CloudFormation template for a CloudHSM deployment protecting personal data is an Art.25 Privacy-by-Design documentation artifact: it records the security architecture decisions made to protect personal data through encryption. That documentation now lives in a US company's infrastructure stack management service.
Resources:
  PatientDataHSMCluster:
    Type: AWS::CloudHSM::Cluster
    Properties:
      HsmType: hsm1.medium
      SourceBackupId: !Ref HsmBackupId
      SubnetIds:
        - !Ref PrivateSubnetEU1
        - !Ref PrivateSubnetEU2
  PatientDataHSM:
    Type: AWS::CloudHSM::HSM
    Properties:
      ClusterId: !Ref PatientDataHSMCluster
      AvailabilityZone: eu-central-1a
The template name PatientDataHSMCluster signals the personal data being protected — part of the Art.25-by-design record now stored in AWS CloudFormation service state under US jurisdiction.
EU-Native HSM Alternatives for 2026
Genuine GDPR-compliant HSM deployments require hardware or software operated by European legal entities, with key backup and management infrastructure under EU jurisdiction.
Thales Luna Network HSM (Thales Group, France)
Thales Group (headquartered in Paris, France) manufactures the Luna Network HSM series — FIPS 140-3 Level 3 validated, Common Criteria EAL4+ certified. Luna HSMs are widely deployed in European financial services and government.
# Luna HSM client initialization (PedClient for remote PED)
vtl createCert -n my-client
vtl haAdmin listServers
# PKCS#11 application integration
export ChrystokiConfigurationPath=/etc/Chrystoki.conf
pkcs11-tool --module /usr/lib/libCryptoki2_64.so \
--list-slots \
--login --pin <user-pin>
Thales Luna HSMs can be deployed in colocation facilities within EU member states, with remote management through Thales infrastructure operated under French and EU law. No US corporate parent, no CLOUD Act exposure for management plane operations.
Nitrokey HSM2 (Nitrokey GmbH, Germany)
Nitrokey (Berlin, Germany) produces the HSM2 — a USB-attached hardware security module for smaller deployments. FIPS 140-2 validated, OpenSC-compatible, and entirely open-source firmware. Ideal for root CA key protection, code signing keys, and certificate management in EU-sovereign deployments.
# Initialize Nitrokey HSM2
sc-hsm-tool --initialize --so-pin <8-digit-so-pin> --pin <6-digit-pin>
# Generate key on hardware
pkcs11-tool --module /usr/lib/opensc-pkcs11.so \
--login --pin <pin> \
--keypairgen \
--key-type EC:prime256v1 \
--id 01 \
--label "patient-record-signing-key"
# Sign operation stays on hardware
pkcs11-tool --module /usr/lib/opensc-pkcs11.so \
--login --pin <pin> \
--sign \
--id 01 \
--mechanism ECDSA \
-i data.bin -o sig.bin
Nitrokey HSM2 keeps cryptographic operations entirely within EU hardware operated by a German company. Key material never leaves the device. No cloud management plane. No backup infrastructure under US corporate control.
SoftHSM2 (OpenDNSSEC Project) on EU Infrastructure
SoftHSM2 is an open-source software HSM implementing PKCS#11 — suitable for development, testing, and smaller production deployments where physical tamper resistance is not required. Deployed on Hetzner, OVHcloud, or IONOS infrastructure, SoftHSM2 operates under EU jurisdiction with no US corporate involvement.
# Install and initialize SoftHSM2
apt-get install softhsm2
# Initialize token
softhsm2-util --init-token --slot 0 \
--label "patient-data-keys" \
--so-pin <so-pin> \
--pin <user-pin>
# Generate AES-256 key for database encryption
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so \
--login --pin <user-pin> \
--keygen \
--key-type AES:32 \
--id 10 \
--label "patient-db-encryption-key"
# Application uses key via PKCS#11 without key extraction
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so \
--login --pin <user-pin> \
--encrypt \
--id 10 \
--mechanism AES-CBC \
-i plaintext.bin -o ciphertext.bin
SoftHSM2 tokens are stored as files — back up using LUKS-encrypted volumes on EU infrastructure. No AWS, no US corporate involvement in key protection or backup management.
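A minimal sketch of that backup step, assuming the default Debian token path and an already-mounted LUKS volume (both paths are illustrative):

```python
import datetime
import pathlib
import shutil

def backup_softhsm_tokens(token_dir: str = "/var/lib/softhsm/tokens",
                          luks_mount: str = "/mnt/keyvault") -> str:
    """Archive the SoftHSM2 token directory onto a LUKS-encrypted
    volume mounted on EU infrastructure. Returns the archive path."""
    stamp = datetime.date.today().isoformat()
    dest = pathlib.Path(luks_mount) / f"softhsm-tokens-{stamp}"
    # make_archive appends .tar.gz; token files include the PIN-protected
    # key material, so the target volume must itself be encrypted
    return shutil.make_archive(str(dest), "gztar", root_dir=token_dir)
```

Run it from a cron job after unlocking and mounting the LUKS volume; the entire backup chain stays on hardware you control.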
HashiCorp Vault with EU HSM Backend
HashiCorp Vault (now part of IBM — check your DPA) can be deployed on EU infrastructure using Vault's HSM seal integration. For fully EU-sovereign deployments, combine Vault on Hetzner with a Thales Luna or Nitrokey HSM2 as the seal backend:
# vault.hcl — Vault configuration with HSM seal
seal "pkcs11" {
  lib            = "/usr/lib/libCryptoki2_64.so"
  slot           = "0"
  pin            = "user-pin"
  key_label      = "vault-seal-key"
  hmac_key_label = "vault-hmac-key"
  mechanism      = "0x1085" # CKM_AES_CBC_PAD
}

storage "raft" {
  path    = "/opt/vault/data"
  node_id = "vault-eu-01"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/opt/vault/tls/vault.crt"
  tls_key_file  = "/opt/vault/tls/vault.key"
}
This configuration stores Vault's unseal key material in an EU-operated HSM — Thales for FIPS Level 3, SoftHSM2 for development — with Vault data in Raft storage on EU infrastructure. HashiCorp Vault Community Edition is source-available under the BSL (no longer OSI-approved open source, but fully auditable). Full EU-sovereign key management.
Securosys Primus HSM (Securosys SA, Switzerland)
Securosys (Zurich, Switzerland) produces the Primus HSM series — FIPS 140-2 Level 3 and CC EAL4+ certified. The Primus CloudsHSM service offers HSM-as-a-Service with datacenters in Switzerland (Swiss law, not EU law but strong data protection framework). For organizations requiring EU member-state jurisdiction, Securosys offers Primus hardware for on-premises deployment in colocation facilities within EU.
The Securosys REST API enables cloud-native HSM integration without AWS dependency:
import base64

import requests

# Securosys Primus REST API (on-premises EU deployment).
# Endpoint paths are illustrative — consult the Primus REST API
# reference for the exact routes in your firmware version.
PRIMUS_URL = "https://primus-hsm.your-eu-datacenter.eu:2300"
PRIMUS_CREDENTIALS = ("api-user", "api-password")
CA_BUNDLE = "/etc/primus/ca.crt"

# Generate RSA-4096 key on hardware
response = requests.post(
    f"{PRIMUS_URL}/crypto/v1/generate",
    json={
        "id": "patient-signing-key-001",
        "algorithm": "RSA",
        "size": 4096,
        "policy": {
            "use": ["sign", "verify"],
            "extractable": False
        }
    },
    auth=PRIMUS_CREDENTIALS,
    verify=CA_BUNDLE
)
response.raise_for_status()
key_id = response.json()["id"]

# Sign operation stays on EU hardware
data_to_sign = b"..."  # payload to be signed
sign_response = requests.post(
    f"{PRIMUS_URL}/crypto/v1/sign",
    json={
        "keyId": key_id,
        "signatureType": "SHA256_WITH_RSA",
        "payload": base64.b64encode(data_to_sign).decode()
    },
    auth=PRIMUS_CREDENTIALS,
    verify=CA_BUNDLE
)
The FIPS 140-3 vs GDPR Compliance Matrix
FIPS 140-3 validation covers the security of the cryptographic module itself. Six of its requirements map naturally onto GDPR:
| FIPS 140-3 Requirement | GDPR Relevance |
|---|---|
| Level 3: Physical tamper resistance | Art.32(1)(a): Physical protection of cryptographic keys |
| Level 3: Identity-based authentication | Art.32(1)(b): Access controls for processing systems |
| Approved algorithms only | Art.32(1)(a): Appropriate encryption standards |
| Key zeroization on tamper | Art.32(1)(a): Data protection on breach |
| Operational environment controls | Art.32(1)(b): Environmental security |
| Self-tests at startup | Art.32(1)(d): Regular testing of technical measures |
FIPS 140-3 does NOT address:
| Gap | GDPR Article |
|---|---|
| Corporate jurisdiction of the module operator | Art.46 (Third Country Transfers) |
| Legal access by the module operator's government | Art.46 + CLOUD Act |
| Operational metadata under operator's legal control | Art.30, Art.32 |
| Backup encryption under operator's key management | Art.28(3)(g) |
| Certificate authority trust chain jurisdiction | Art.32(1)(a) |
| Audit log storage jurisdiction | Art.30(1)(g) |
The compliance gap is not in the FIPS-validated cryptographic operations inside the HSM. The gap is in everything surrounding the HSM that a US-headquartered company manages.
Migration Path: CloudHSM to EU-Sovereign HSM
Step 1: Inventory Key Usage
import boto3

# CloudHSM v2 is exposed via the 'cloudhsmv2' service name
cloudhsm_client = boto3.client('cloudhsmv2', region_name='eu-central-1')
clusters = cloudhsm_client.describe_clusters()['Clusters']

tagging_client = boto3.client('resourcegroupstaggingapi',
                              region_name='eu-central-1')

for cluster in clusters:
    print(f"Cluster: {cluster['ClusterId']}")
    print(f"HSMs: {len(cluster['Hsms'])}")
    # Document which applications use this cluster via tag inspection
    tags = tagging_client.get_resources(
        TagFilters=[{'Key': 'CloudHSMCluster',
                     'Values': [cluster['ClusterId']]}]
    )
    print(f"Tagged resources using cluster: "
          f"{len(tags['ResourceTagMappingList'])}")
Step 2: Export Public Key Material
Key material inside CloudHSM is non-extractable (by design — keys never leave in cleartext). Migration requires generating new keys on the EU HSM and re-encrypting data:
# On target EU HSM: generate replacement keys
pkcs11-tool --module /usr/lib/libCryptoki2_64.so \
--login --pin <pin> \
--keypairgen \
--key-type RSA:4096 \
--id 01 \
--label "migrated-patient-signing-key"
# Export new public key for certificate issuance
pkcs11-tool --module /usr/lib/libCryptoki2_64.so \
--read-object --type pubkey --id 01 \
-o new-public-key.der
Step 3: Re-encrypt Sensitive Data
For AES keys protecting at-rest data, the migration path is decrypt-with-old-key, re-encrypt-with-new-key:
# Re-encryption sketch using python-pkcs11 (pip install python-pkcs11)
import pkcs11
from pkcs11 import Attribute, KeyType, Mechanism

lib = pkcs11.lib('/usr/lib/softhsm/libsofthsm2.so')
token = lib.get_token(token_label='patient-data-keys')

with token.open(user_pin='your-pin') as session:
    # Generate the replacement key on the EU HSM (non-extractable)
    new_key = session.generate_key(
        KeyType.AES, 256,
        label='patient-db-key-v2',
        store=True,
        template={Attribute.EXTRACTABLE: False}
    )
    # old_key: handle to the outgoing key, opened through the CloudHSM
    # client's PKCS#11 library in its own session — CloudHSM keys are
    # non-extractable, so decryption must go through that client.
    # patient_records: your application's encrypted-record collection.
    for record in patient_records:
        plaintext = old_key.decrypt(
            record.encrypted_data,
            mechanism=Mechanism.AES_CBC_PAD,
            mechanism_param=record.iv)  # IV stored with the ciphertext
        new_iv = session.generate_random(128)  # fresh 128-bit IV
        record.encrypted_data = new_key.encrypt(
            plaintext,
            mechanism=Mechanism.AES_CBC_PAD,
            mechanism_param=new_iv)
        record.iv = new_iv
        record.key_version = 'eu-hsm-v2'
        record.save()
Conclusion
AWS CloudHSM provides genuine cryptographic key isolation inside tamper-resistant hardware — the key material itself is inaccessible to Amazon. But the surrounding management infrastructure is not isolated from US legal jurisdiction: cluster configurations, CloudWatch audit logs, HSM backup encryption, initialization certificate chains, and infrastructure-as-code templates are all held by Amazon Web Services, Inc. under CLOUD Act scope.
FIPS 140-3 Level 3 validation certifies the cryptographic module security. It does not certify that the organizational and legal environment surrounding the module meets GDPR Art.46 requirements for avoiding unauthorized third-country transfers of operational metadata.
For organizations processing special category data under Art.9 — medical records, biometric data, health information — where the cryptographic architecture protecting that data must itself meet GDPR requirements, the combination of FIPS compliance and US corporate jurisdiction creates a gap that compliance teams often discover only during DPA audits.
EU-sovereign alternatives — Thales Luna (French company), Nitrokey HSM2 (German company), SoftHSM2 on Hetzner, Securosys Primus (Swiss company) — provide FIPS-equivalent security without the US corporate jurisdiction layer over operational metadata. The key management architecture protecting personal data can be designed from the start to stay within European legal frameworks.
Running AWS CloudHSM for key management protecting personal data? sota.io deploys your application on EU-sovereign infrastructure — compute, storage, and networking under European legal entities only, with no US corporate parent. Start free.
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.