2026-04-29 · 13 min read

AWS EventBridge EU Alternative 2026: Event Buses, CLOUD Act, and GDPR Compliance

Post #704 in the sota.io EU Compliance Series

AWS EventBridge is the event routing layer for modern AWS architectures. It sits between your services and routes events based on rules, filters payloads, connects to external SaaS partners, pipes events between integrations, and schedules future executions. A mid-size SaaS application might use EventBridge to handle order placement events, trigger downstream microservices, replay historical events during incidents, and schedule recurring jobs — all through a single managed event bus.

EventBridge abstracts away the complexity of event routing at the cost of moving your event data into AWS-managed infrastructure. That infrastructure is controlled by an American corporation.

Amazon Web Services, Inc. is a Delaware corporation headquartered in Seattle, Washington. The CLOUD Act (18 U.S.C. § 2713) extends US government jurisdiction to all data stored, routed, or processed by US companies — regardless of where the physical servers are located. An EventBridge bus in eu-west-1 (Ireland) or eu-central-1 (Frankfurt) is AWS infrastructure. When US law enforcement issues a CLOUD Act order to Amazon, the geographic location of the data center is not a defense.

EventBridge introduces GDPR risk in ways that differ from SQS or SNS. The risk is not primarily in subscription lists or transient message payloads. EventBridge's most significant GDPR exposure comes from two features often overlooked in architecture reviews: the Event Archive (which stores a complete history of your event stream, indefinitely if configured that way) and EventBridge Scheduler (which stores future execution payloads, potentially containing personal data, until they fire). Combined with API Destinations that store outbound credentials and Schema Registry that encodes your data model, EventBridge creates a multi-layer personal data footprint that exceeds what most developers expect from a routing service.

This analysis covers what EventBridge stores under US jurisdiction, the GDPR risk surface for each EventBridge component, and the EU-native alternatives that give EU event-driven architectures genuine data sovereignty.

What AWS EventBridge Stores Under US Jurisdiction

Event Buses and Event Payloads in Transit

The default event bus in every AWS account receives all AWS service events — EC2 instance state changes, S3 object creation, CodePipeline stage transitions. Custom event buses receive application events published via PutEvents. Partner event buses receive events from external SaaS partners (Zendesk, Auth0, Stripe, etc.).

While EventBridge is primarily a routing service — not a persistent store like SQS — events are held in EventBridge infrastructure during rule evaluation and target delivery. Failed deliveries are retried with exponential backoff for up to 24 hours. During the retry window, the event payload is held in AWS infrastructure under US jurisdiction.

Application events frequently contain personal data. A user registration event might include the user's email, IP address, and account identifiers. An order event might contain the customer's name, delivery address, and payment method reference. A session event might contain authentication tokens or device fingerprints. All of these pass through EventBridge infrastructure under US jurisdiction on every invocation.
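
A minimal sketch of how personal data ends up in an event payload; the boto3 PutEvents call is the standard API, but the bus name and Detail fields here are illustrative assumptions:

import json

import boto3

events = boto3.client("events", region_name="eu-central-1")

# Illustrative registration event; any one of these Detail fields makes the
# PutEvents call a transfer of personal data to US-controlled infrastructure
events.put_events(
    Entries=[{
        "Source": "app.users",
        "DetailType": "user.registered",
        "EventBusName": "application-events",
        "Detail": json.dumps({
            "userId": "u-10423",
            "email": "anna@example.eu",
            "ipAddress": "203.0.113.7",
            "countryCode": "DE",
        }),
    }]
)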

For high-volume applications, the volume of personal data transiting EventBridge can be substantial even though no single event is retained for long in the default path.

Event Archive: The Biggest GDPR Risk in EventBridge

EventBridge Event Archive stores a replayable history of events. You can configure an archive with an event pattern that filters which events are written to it and a retention period (a fixed number of days, or indefinite retention).

When an archive is enabled, every matching event published to the bus is written to the archive and held under US jurisdiction for the configured retention period. For teams using indefinite retention — common in organizations that want to replay events during incidents or audit past behavior — this means a complete, persistent history of every application event is held by AWS.

Consider what this means for a SaaS application: every user action encoded as an event, every order placed, every authentication event, every account modification — all in an archive held by a US entity subject to CLOUD Act compulsion. The Archive converts EventBridge from a transient router into a persistent data store for personal data, with indefinite retention being the common configuration.

GDPR Art.5(1)(e) storage limitation requires that personal data be kept no longer than necessary for the stated purpose. An indefinitely retained event archive almost certainly violates this principle for events containing personal data. A GDPR audit of an application using EventBridge with indefinite archive retention would need to justify — for each event type containing personal data — why indefinite retention is necessary. For most application events, it is not.
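
If an archive is genuinely needed, retention can at least be made finite at creation time. A hedged boto3 sketch, with the archive name, bus ARN, 30-day figure, and event pattern as placeholders:

import json

import boto3

events = boto3.client("events", region_name="eu-central-1")

# RetentionDays=0 means indefinite retention; avoid it for personal-data events
events.create_archive(
    ArchiveName="app-events-30d",
    EventSourceArn="arn:aws:events:eu-central-1:123456789012:event-bus/application-events",
    RetentionDays=30,  # align with the documented retention schedule for this data category
    EventPattern=json.dumps({"source": ["app.orders"]}),  # archive only what is needed
)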

EventBridge Pipes: Point-to-Point Integrations with Filter State

EventBridge Pipes create point-to-point integrations between event sources (SQS, Kinesis, DynamoDB Streams, Kafka) and targets (Lambda, Step Functions, HTTP APIs, other event buses). Pipes support filtering (to exclude events), enrichment (to add data via Lambda, API Gateway, or Step Functions before delivery), and transformation (to reshape the event payload).

Pipes are stateless routing configurations, but three aspects create personal data exposure:

Filter patterns. Pipe filter patterns are stored configurations that may encode knowledge of your data model — for example, filtering only events where userId is present, or where region equals a specific EU country code. Filter patterns stored in EventBridge reveal the shape of personal data flowing through your pipes.

Enrichment function parameters. When Pipes call Lambda or API Gateway for enrichment, the event payload is passed to the enrichment service. The enrichment service's response is merged into the event before delivery. If enrichment involves a database lookup — fetching additional user attributes to enrich an event before routing it — the full enriched payload (potentially richer in personal data than the original event) is held in EventBridge during the enrichment call.

Dead letter queues. Pipes support dead letter queues for failed event delivery. Events that cannot be delivered after all retries are sent to the dead letter SQS queue. Personal data in these failed events is then held in the DLQ under the SQS retention policy (up to 14 days). DLQ events may require manual intervention, meaning personal data can sit in the DLQ until an operator processes it.
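
To make the filter-pattern point concrete: a pattern of the following shape, passed as the pipe's FilterCriteria, already documents that the pipe handles user-level data. The field name and the SQS-style "body" key are assumptions about the event shape:

import json

# Illustrative Pipe filter: match only messages that carry a userId
# (for an SQS source the pattern is evaluated against the message "body")
user_events_filter = {
    "Filters": [
        {"Pattern": json.dumps({"body": {"userId": [{"exists": True}]}})}
    ]
}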

EventBridge Scheduler: Stored Payloads Until Execution

EventBridge Scheduler (launched in 2022) replaces CloudWatch Events scheduled rules for scheduled invocations. It supports one-time schedules, recurring schedules defined with cron or rate expressions, flexible time windows, and time-zone-aware execution.

Each scheduled invocation stores a target payload — the data that will be delivered to the target (Lambda, SQS, HTTP endpoint, etc.) when the schedule fires. This payload is stored in EventBridge Scheduler infrastructure under US jurisdiction from the time the schedule is created until it executes.

For applications that use Scheduler with dynamic payloads — for example, scheduling a "send reminder email" invocation that includes the user's email address and reminder content, or scheduling a "process payment" invocation that includes payment method tokens — the scheduler is a persistent store of personal data. A "send reminder 7 days after signup" schedule for each user creates one scheduler entry per user, each holding personal data, stored for up to 7 days in EventBridge Scheduler under US jurisdiction.

GDPR Art.5(1)(e) storage limitation applies here too: personal data stored in Scheduler payloads must be kept only as long as necessary for the scheduling purpose. If a user deletes their account before the scheduled invocation fires, the Scheduler entry must be explicitly cancelled — otherwise, personal data remains stored in EventBridge Scheduler for a user who has exercised their right to erasure under GDPR Art.17.

This makes EventBridge Scheduler a privacy engineering challenge: every user-specific scheduled invocation becomes a data subject linkage that must be tracked and cancelled on account deletion. Many implementations overlook this.
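
A sketch of the cleanup this implies, assuming the application keeps a mapping from user accounts to schedule names (the mapping itself and the naming convention are not shown here):

import boto3
from botocore.exceptions import ClientError

scheduler = boto3.client("scheduler", region_name="eu-central-1")

def delete_user_schedules(schedule_names: list[str]) -> None:
    """Call from the account-deletion flow so Scheduler payloads honour Art.17."""
    for name in schedule_names:
        try:
            scheduler.delete_schedule(Name=name)
        except ClientError as err:
            # A schedule that already fired or was already deleted is fine to skip
            if err.response["Error"]["Code"] != "ResourceNotFoundException":
                raise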

API Destinations: Stored Credentials and Outbound Request Logs

EventBridge API Destinations route events to external HTTP endpoints — third-party webhooks, partner APIs, internal services over HTTP. API Destinations support three authentication types: Basic auth (username and password), OAuth client credentials, and API key.

These credentials are stored in EventBridge Connections — AWS-managed credential stores for API Destinations. While AWS encrypts these credentials at rest, they remain under US jurisdiction and accessible to AWS under CLOUD Act compulsion. If your API Destination authenticates to an EU partner service using credentials stored in EventBridge Connections, those credentials are held by a US entity.
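
For context, this is roughly how such a credential ends up in an AWS-managed Connection; the connection name, header name, and key value are placeholders:

import boto3

events = boto3.client("events", region_name="eu-central-1")

# The API key below is stored in an AWS-managed Connection, i.e. held by a
# US entity even if the API Destination endpoint is an EU partner service
events.create_connection(
    Name="partner-webhook",
    AuthorizationType="API_KEY",
    AuthParameters={
        "ApiKeyAuthParameters": {
            "ApiKeyName": "x-api-key",
            "ApiKeyValue": "example-secret-value",
        }
    },
)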

API Destination invocation failures are retried with exponential backoff. The event payload — potentially containing personal data — is held during the retry window. Failed invocations beyond the retry limit can be routed to an SQS dead letter queue, creating SQS-backed storage of undelivered personal data.

Schema Registry: Encoding Your Data Model

EventBridge Schema Registry automatically discovers event schemas from your event bus traffic and stores them as structured schema definitions (OpenAPI 3.0 or JSONSchema draft 4). Schemas are stored under US jurisdiction and describe the shape of your events — including field names, types, and nesting.

While schemas do not directly contain personal data, they encode your data model. A schema for a user registration event reveals that your application processes userEmail, ipAddress, countryCode, and accountId — information that, under Schrems II TIA requirements, could be relevant to assessing the risk of CLOUD Act exposure for that event stream.

Schema Registry also generates code bindings for Go, Java, Python, and TypeScript, which are stored and served from AWS infrastructure. For sensitive data models, maintaining schemas in AWS Schema Registry may not align with internal data governance policies around where data structure definitions are stored.
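
A quick way to review what the registry has learned about your data model is to list the discovered schemas; a hedged boto3 sketch (pagination omitted):

import boto3

schemas = boto3.client("schemas", region_name="eu-central-1")

# Auto-discovered schemas land in the "discovered-schemas" registry;
# inspect their names (and contents via describe_schema) for personal-data fields
response = schemas.list_schemas(RegistryName="discovered-schemas")
for schema in response.get("Schemas", []):
    print(schema["SchemaName"])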

GDPR Analysis: The EventBridge Risk Surface

Chapter V transfers and event volume. Every PutEvents call that includes personal data in the event payload constitutes a transfer of personal data to the United States. For event-driven architectures where application actions are modeled as events, this can mean thousands of GDPR Chapter V transfers per second. AWS SCCs cover these transfers, but each transfer requires a lawful basis and must be documented in your Art.30 ROPA.

Article 5(1)(e): Storage limitation and Event Archive. Indefinite event archive retention is a structural violation of GDPR's storage limitation principle for events containing personal data. If you use EventBridge Archive with personal data events, you must either: set a retention period aligned with the retention schedule for that data category, implement event filtering to exclude personal data from the archive, or justify indefinite retention under a specific legal basis (which most application event archives cannot satisfy).

Article 17: Right to erasure and Scheduler. Every user-specific EventBridge Scheduler entry that holds personal data must be cancelled when the user exercises their right to erasure. This requires maintaining a mapping between user accounts and their Scheduler schedule IDs — an operational requirement often absent from EventBridge-based implementations.

Article 28: Processor obligations. The AWS DPA for EventBridge contains the same CLOUD Act carve-out as all AWS services: AWS is legally required to comply with valid government orders and may be unable to notify you. For EventBridge Archive, this means a complete history of your application's event stream — potentially including every user action encoded as an event — can be produced to US law enforcement without your notification.

Article 30: ROPA scope. A single EventBridge bus powering multiple event-driven workflows requires multiple Art.30 entries — one per event type containing personal data. For applications with 50+ event types, each with different retention profiles and legal bases, the ROPA maintenance burden for EventBridge is substantial.

EU-Native EventBridge Alternatives

NATS JetStream on Hetzner — EventBridge Bus Equivalent

NATS with JetStream is the closest functional equivalent to EventBridge for EU-native deployments. NATS provides subject-based publish/subscribe routing with wildcard filtering, JetStream persistence with per-stream retention limits, message replay from stored streams, and a single lightweight binary that is simple to self-host.

Deploying NATS on Hetzner Cloud (German company, German/Finnish data centers) gives you a complete event routing infrastructure with no US entity in the data path.

# docker-compose.yml — NATS JetStream
services:
  nats:
    image: nats:2.10-alpine
    command:
      - "-js"
      - "-m"
      - "8222"
      - "--store_dir=/data"
    ports:
      - "4222:4222"
      - "8222:8222"
    volumes:
      - nats-data:/data

volumes:
  nats-data:

# events.py — publish and consume with the nats-py client (pip install nats-py)
import asyncio
import json

import nats

async def publish_event(event_type: str, payload: dict):
    nc = await nats.connect("nats://nats:4222")
    js = nc.jetstream()

    # Equivalent to EventBridge PutEvents
    await js.publish(
        f"events.{event_type}",
        json.dumps(payload).encode()
    )
    await nc.drain()

async def handle_order_placed(msg):
    order = json.loads(msg.data)
    # ... process the order ...
    await msg.ack()

# Consumer equivalent to an EventBridge Rule + Target
async def consume_events():
    nc = await nats.connect("nats://nats:4222")
    js = nc.jetstream()

    await js.subscribe(
        "events.order.placed",
        cb=handle_order_placed,
        durable="order-processor"
    )

For EventBridge-style filtering, NATS subject hierarchies provide natural routing: events.user.registered, events.order.placed, events.payment.completed. Consumers subscribe to specific subjects or wildcard patterns (events.order.*).
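
Continuing the nats-py sketch above (same imports and connection URL), a wildcard subscription is a rough equivalent of an EventBridge rule that matches a whole event family:

# Wildcard consumer for an entire event family
async def consume_order_family():
    nc = await nats.connect("nats://nats:4222")
    js = nc.jetstream()

    async def handle_any_order(msg):
        event = json.loads(msg.data)
        # ... route or process the event; msg.subject names the concrete type ...
        await msg.ack()

    await js.subscribe(
        "events.order.*",   # matches events.order.placed, events.order.shipped, ...
        cb=handle_any_order,
        durable="order-family"
    )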

For retention control — critical for GDPR Art.5(1)(e) compliance — NATS JetStream stream retention is configured per-stream with explicit max-age or max-message limits:

# Retain events for 7 days maximum (GDPR Art.5(1)(e) alignment)
nats stream add EVENTS \
  --subjects "events.>" \
  --retention limits \
  --max-age 7d \
  --storage file \
  --replicas 1

RabbitMQ on Hetzner — EventBridge Pipes Equivalent

For Pipes-style point-to-point integrations with filtering and enrichment, RabbitMQ's exchanges and routing keys provide a functionally equivalent model. RabbitMQ is available as a managed service from CloudAMQP (84codes AB, Sweden) with EU data centers, or self-hosted on Hetzner.

import pika
import json

# Topic exchange as the EventBridge bus equivalent; routing keys act as rules
connection = pika.BlockingConnection(
    pika.ConnectionParameters('rabbitmq')
)
channel = connection.channel()

channel.exchange_declare(exchange='app_events', exchange_type='topic')

# Example event payload (fields for illustration only)
order_event = {'orderId': 'o-1042', 'customerRegion': 'eu', 'total': 49.90}

# Publish with a routing key for pattern-based routing
channel.basic_publish(
    exchange='app_events',
    routing_key='order.placed.eu',
    body=json.dumps(order_event),
    properties=pika.BasicProperties(delivery_mode=2)  # persistent message
)

# Consumer with selective routing (EventBridge rule equivalent)
channel.queue_declare(queue='eu_order_processor', durable=True)
channel.queue_bind(
    exchange='app_events',
    queue='eu_order_processor',
    routing_key='order.placed.eu'  # only EU orders
)

Temporal.io on EU Infrastructure — EventBridge Scheduler Equivalent

For EventBridge Scheduler use cases — scheduled future executions, delayed processing, cron jobs — Temporal.io provides a more capable EU-deployable alternative. Temporal separates workflow code from execution state and supports durable timers for delayed execution, cron schedules, automatic retries with full execution history, and first-class cancellation and querying of running workflows.

Temporal Cloud has EU region support (AWS eu-central-1 based — note this introduces the same AWS jurisdiction question). For full sovereignty, self-host Temporal on Hetzner:

# docker-compose.yml — minimal Temporal
services:
  temporal:
    image: temporalio/auto-setup:1.23
    environment:
      - DB=postgres12
      - DB_PORT=5432
      - POSTGRES_USER=temporal
      - POSTGRES_PWD=temporal
      - POSTGRES_SEEDS=postgresql
    ports:
      - "7233:7233"
    depends_on:
      - postgresql

  temporal-ui:
    image: temporalio/ui:2.26
    environment:
      - TEMPORAL_ADDRESS=temporal:7233
    ports:
      - "8080:8080"

  postgresql:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=temporal
      - POSTGRES_PASSWORD=temporal

# reminders.py — temporalio Python SDK; a Worker polling the "reminders"
# task queue must be running elsewhere for these workflows to execute
import asyncio
from datetime import timedelta

from temporalio import workflow, activity
from temporalio.client import Client

@activity.defn
async def send_reminder_email(user_email: str, content: str):
    # Send email via EU provider (Brevo etc.)
    pass

@workflow.defn
class ReminderWorkflow:
    @workflow.run
    async def run(self, user_email: str, content: str, delay_days: int) -> None:
        # Durable timer, the equivalent of an EventBridge Scheduler one-time schedule
        await asyncio.sleep(timedelta(days=delay_days).total_seconds())
        await workflow.execute_activity(
            send_reminder_email,
            args=[user_email, content],
            start_to_close_timeout=timedelta(minutes=5),
        )

# Schedule a reminder (equivalent to Scheduler.create_schedule)
async def schedule_reminder(user_email: str, delay_days: int):
    client = await Client.connect("temporal:7233")
    await client.start_workflow(
        ReminderWorkflow.run,
        args=[user_email, "Your weekly summary", delay_days],
        id=f"reminder-{user_email}",
        task_queue="reminders",
    )

# Cancel on user deletion (GDPR Art.17 compliance)
async def cancel_user_schedules(user_email: str):
    client = await Client.connect("temporal:7233")
    handle = client.get_workflow_handle(f"reminder-{user_email}")
    await handle.cancel()

Temporal's workflow handle approach makes GDPR Art.17 (right to erasure) implementable: each user's scheduled workflows are addressable by ID and can be cancelled atomically.

Apache Kafka on Scaleway — EventBridge Archive Equivalent

For EventBridge Archive use cases — event replay, audit logs, historical event access — Apache Kafka on Scaleway (French company) provides EU-resident event stream storage with configurable retention policies.

Scaleway Managed Kafka is available in fr-par (Paris) and nl-ams (Amsterdam) regions. Kafka topics support topic-level retention configuration:

# Create topic with 7-day retention (604800000 ms), GDPR-aligned
kafka-topics.sh --create \
  --topic application-events \
  --partitions 3 \
  --replication-factor 1 \
  --config retention.ms=604800000 \
  --bootstrap-server kafka:9092

# Verify retention configuration
kafka-configs.sh --describe \
  --entity-type topics \
  --entity-name application-events \
  --bootstrap-server kafka:9092

For event replay (the primary justification for EventBridge Archive), Kafka consumer groups support offset management — consumers can seek to any offset and replay from any point in the retained window:

from datetime import datetime, timedelta

from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    'bootstrap.servers': 'kafka.fr-par.scaleway.com:9092',
    'group.id': 'incident-replay',
    'auto.offset.reset': 'earliest'
})

# Replay events from the last 6 hours
six_hours_ago = datetime.now() - timedelta(hours=6)
timestamp_ms = int(six_hours_ago.timestamp() * 1000)

# Look up the offsets matching the timestamp, then assign the consumer to them
offsets = consumer.offsets_for_times(
    [TopicPartition('application-events', 0, timestamp_ms)]
)
consumer.assign(offsets)  # assign() starts fetching from the returned offsets

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None:
        break  # caught up for this replay window
    # ... reprocess the event ...

sota.io: EU-Native Event-Driven Architecture

Deploying an event-driven architecture on sota.io means running your complete event infrastructure — NATS JetStream, RabbitMQ, Temporal, Kafka — on Hetzner infrastructure with no US corporate entity in the data path.

For the common EventBridge use case — routing application events to multiple downstream services — a NATS deployment on sota.io provides subject-based routing with wildcard subscriptions, JetStream persistence with operator-set retention limits, stream replay for incident analysis, and storage that stays on EU-resident Hetzner volumes.

The GDPR compliance posture for EU event-driven architectures on sota.io is structurally different from EventBridge: retention is operator-controlled, not vendor-default; credentials are not stored in a US-controlled connection registry; and there is no indefinite archive accumulating personal data under US jurisdiction.

Migration Checklist: AWS EventBridge → EU Alternative

Before migrating, audit your EventBridge usage across all components:

  1. Inventory all event buses: List default bus, custom buses, and partner event buses. Document event volume and personal data presence per bus.
  2. Audit Event Archives: For each archive, document retention period, event filter pattern, and whether archived events contain personal data. Flag indefinite-retention archives for immediate review under GDPR Art.5(1)(e); a short audit sketch follows this list.
  3. List all Scheduler entries: Identify user-specific schedules holding personal data in payloads. Map schedule IDs to user accounts for Art.17 (right to erasure) compliance.
  4. Review API Destination connections: List all connection credentials stored in EventBridge. Rotate credentials stored in EventBridge to invalidate existing copies after migration.
  5. Enumerate EventBridge Pipes: For each pipe, document source, enrichment steps, and target. Identify filter patterns encoding personal data field names.
  6. Check Schema Registry: Review stored schemas for sensitive field names. Remove schemas that encode personal data model details if not needed by development tools.
  7. Update Art.30 ROPA: Create one ROPA entry per event type containing personal data, documenting retention period, legal basis, and transfer mechanism.
  8. Test GDPR Art.17 flows: For every user-specific Scheduler or Pipe configuration, verify cancellation works on account deletion before going live with the EU alternative.
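
A hedged sketch for step 2, assuming boto3 access to the account (in EventBridge, a RetentionDays of 0 means the archive keeps events indefinitely):

import boto3

events = boto3.client("events", region_name="eu-central-1")

# Flag archives with indefinite retention (pagination via NextToken omitted)
response = events.list_archives()
for archive in response.get("Archives", []):
    retention = archive.get("RetentionDays", 0)
    if retention == 0:
        print(f"REVIEW: {archive['ArchiveName']} retains events indefinitely")
    else:
        print(f"OK: {archive['ArchiveName']} retains events for {retention} days")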

Conclusion

AWS EventBridge is the glue layer of AWS-native event-driven architectures. Its convenience — managed routing, zero-infrastructure pub/sub, built-in scheduling, partner integrations — comes at the cost of routing all application events through US-controlled infrastructure under CLOUD Act jurisdiction.

The GDPR exposure is spread across multiple EventBridge features: event payloads in transit during delivery retries, archives potentially holding indefinite personal data history, Scheduler entries storing personal data until future execution, API Destination credentials stored in AWS connections, and Schema Registry encoding your data model. Each feature requires its own GDPR control, and Event Archive in particular deserves immediate attention in any GDPR compliance review.

The EU-native alternatives address each use case: NATS JetStream for event routing and archiving, RabbitMQ for complex routing patterns, Temporal for scheduled workflows with first-class cancellation support, and Kafka on Scaleway for high-throughput event streaming with EU residency. All are deployable on Hetzner or Scaleway infrastructure — no US corporate entity in the data path.

AWS messaging trilogy complete: AWS SQS EU Alternative 2026 · AWS SNS EU Alternative 2026 · AWS EventBridge EU Alternative 2026


sota.io is a European PaaS platform for teams building GDPR-compliant event-driven architectures. Deploy NATS, RabbitMQ, Temporal, and Kafka on EU-resident Hetzner infrastructure — no US-parent entity, no CLOUD Act exposure, full data sovereignty. Start deploying →
