AWS Fraud Detector EU Alternative 2026: Art.22 DPIA Mandatory and the Automated Decision-Making Compliance Gap
Post #786 in the sota.io EU Compliance Series
Every time AWS Fraud Detector blocks a transaction, flags an account, or assigns a fraud risk score, it makes an automated decision with significant financial effects on the data subject. Under GDPR Art.22 and Art.35, that triggers a mandatory Data Protection Impact Assessment — an obligation most teams building on AWS Fraud Detector have never fulfilled.
Add the CLOUD Act: Amazon is a US company, and every fraud model you train on European customer transaction data can be compelled by US law enforcement without your knowledge. This is the complete GDPR analysis of AWS Fraud Detector and the best EU-sovereign alternatives for 2026.
What AWS Fraud Detector Does and Why It's a GDPR Hotspot
AWS Fraud Detector is a managed machine learning service that builds fraud detection models from your historical transaction data. You upload labelled fraud examples, Amazon trains a model in the background, and the service exposes an API endpoint that returns a real-time fraud score and outcome (approve, review, block) for each new event.
The architecture is clean. The compliance exposure is not.
Three overlapping GDPR triggers activate simultaneously:
- Automated decision-making (Art.22): scores and block decisions happen without human review
- Profiling (Art.4(4)): building a behavioral model of each individual's transaction patterns
- Sensitive inference (Art.9): payment patterns can reveal medical conditions, religious practices, union membership
No other AWS service combines all three in a single API call.
AWS Fraud Detector and the CLOUD Act
Amazon Web Services, Inc. is incorporated in Delaware and headquartered in Seattle, Washington. The CLOUD Act (18 U.S.C. § 2713) allows US law enforcement to compel data from US-incorporated cloud providers regardless of where that data is stored.
For AWS Fraud Detector, this means the following assets are CLOUD Act-reachable:
- Your fraud model training data — the labelled transaction history you uploaded to train the detector, including customer identifiers, amounts, IP addresses, and device fingerprints
- Stored event data — Fraud Detector retains event records for model retraining and drift detection
- Model artefacts — the trained model parameters themselves can reveal statistical patterns in your customer base
- Prediction logs — every fraud score returned and whether the transaction was ultimately approved or blocked
A National Security Letter targeting Amazon does not require judicial oversight. It arrives with a gag order. You will not be notified. Your customers will not be notified. Their transaction histories will have been disclosed to US intelligence or law enforcement without any GDPR notification obligation being triggered on Amazon's side.
SCCs under Art.46 GDPR provide a transfer mechanism — they do not eliminate the surveillance exposure.
Art.22 — Automated Decision-Making: The Compliance Gap Nobody Talks About
This is the highest-risk GDPR obligation AWS Fraud Detector creates, and it is routinely overlooked.
GDPR Art.22(1) states: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
When AWS Fraud Detector automatically blocks a payment, denies a loan application, or flags an account for suspension, that decision:
- Produces a significant effect: access to financial services is repeatedly cited by the EDPB as a paradigm example of "significant" impact
- Is based solely on automated processing: the ML model returns a score and an outcome label; no human reviews the decision before it executes
- Constitutes profiling: the model was built by analyzing patterns across the data subject's transaction history
Art.22 applies. The data subject has rights:
- The right to obtain human intervention: they can require a human to review the automated decision
- The right to express their point of view: before the decision is finalised
- The right to contest the decision: after the fact
Unless you have built explicit mechanisms for all three into your fraud workflow, you are in violation of Art.22(3).
The Art.22(2)(a) Exception — What It Actually Requires
There is a contract-based exception: Art.22(2)(a) permits automated decisions that are "necessary for entering into, or performance of, a contract between the data subject and a data controller." Payment fraud detection to protect a transaction the customer initiated fits this exception — in principle.
But the exception comes with conditions that most implementations miss:
- Suitable safeguards: explicit documentation of what safeguards protect the data subject's interests. A link to your privacy policy is not a safeguard. A defined escalation path to human review is.
- At least the right to obtain human intervention: the data subject must be able to request human review. This requires a real process — a support channel, a documented SLA, a trained agent who can actually override the model.
- Express their point of view: before or immediately after the decision, the subject must have a channel to provide context.
The Art.29 Working Party's Guidelines on Automated Individual Decision-Making and Profiling (WP251rev.01, endorsed by the EDPB) make clear: invoking Art.22(2)(a) without implementing the Art.22(3) safeguards is not compliant. If you use AWS Fraud Detector to block transactions automatically, you need a documented human-override workflow — and it must actually work.
Art.35 — DPIA Is Not Optional
Where Art.22 automated decision-making applies, a DPIA under Art.35 is mandatory. Art.35(3)(a) explicitly names:
"a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person"
AWS Fraud Detector is textbook Art.35(3)(a). Running it without a prior DPIA is a per se GDPR violation — before any fraud decision is made. Supervisory authority fines under Art.83(4) for failure to conduct a required DPIA can reach €10 million or 2% of global annual turnover, whichever is higher.
A compliant DPIA for a fraud detection system must document:
- The categories of personal data processed (transaction amounts, timestamps, IP addresses, device fingerprints, geolocation, behavioural patterns)
- The source of training data and its legal basis
- The logic of the automated decision: what inputs feed the model, what outputs it produces, what threshold determines each outcome
- The accuracy of the model: false positive rate (legitimate transactions blocked), false negative rate (fraudulent transactions passed)
- The impact of false positives on data subjects: financial harm, reputational harm, access to essential services
- Measures to mitigate each identified risk
- The Art.22(3) safeguards in place
- Consultation with the DPO (where appointed) — mandatory under Art.35(2)
None of this documentation can be generated after you deploy. The DPIA must precede deployment — Art.35(1): "prior to the processing."
Art.9 — When Payment Patterns Become Special Category Data
GDPR Art.9 protects special categories of personal data including health data, religious beliefs, and political opinions. AWS Fraud Detector's training data — transaction histories — can implicitly contain all of these:
- Health data: recurring payments to oncology clinics, pharmacy chains, mental health providers
- Religious data: donations to religious organisations, purchases at religious retailers, transaction absences during religious observances
- Union membership: payments to trade union membership portals or associated retailers
- Political opinions: donations to political parties or campaigning organisations
A fraud model trained on this data learns correlations between these patterns and fraud risk. If the model learns that a pattern of payments associated with health difficulties correlates with higher chargeback rates (a real correlation in some datasets), it will systematically disadvantage a protected class.
This creates both an Art.9 processing obligation (processing special categories requires explicit consent or another Art.9(2) basis, plus the Art.22 safeguards) and a potential indirect discrimination risk under Art.21 of the EU Charter of Fundamental Rights.
EU Supervisory Authorities — particularly the French CNIL and Dutch AP — have begun examining ML-based scoring systems for discriminatory correlations. A DPIA for AWS Fraud Detector should include explicit analysis of whether training data contains Art.9 signals and what mitigations are in place.
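One practical mitigation is to suppress Art.9-adjacent signals before the feature set ever reaches the model. A minimal sketch — the merchant category code (MCC) mapping below is illustrative, not an exhaustive or authoritative list:

```python
# Suppress potential Art.9 signals before model training.
# The MCC-to-category mapping here is illustrative only.
SENSITIVE_MCCS = {
    "8062": "health",    # hospitals
    "5912": "health",    # pharmacies
    "8661": "religion",  # religious organisations
    "8651": "politics",  # political organisations
}

def suppress_art9_signals(transactions):
    """Replace sensitive merchant categories with a neutral bucket
    so the model cannot learn correlations on Art.9 attributes."""
    cleaned = []
    for txn in transactions:
        txn = dict(txn)  # copy; do not mutate the input records
        if txn.get("mcc") in SENSITIVE_MCCS:
            txn["mcc"] = "SUPPRESSED"
        cleaned.append(txn)
    return cleaned

sample = [{"id": "t1", "mcc": "5912", "amount": 40.0},
          {"id": "t2", "mcc": "5411", "amount": 12.5}]
result = suppress_art9_signals(sample)
print(result[0]["mcc"])
```

Document the suppression step in the DPIA itself: it is exactly the kind of mitigation Art.35(7)(d) asks you to evidence.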
Art.13 and Art.14 — Transparency Obligations
Data subjects have the right to know they are being profiled for fraud. Art.13(2)(f) and Art.14(2)(g) require you to inform individuals, at the time their data is collected:
- That their data will be subject to automated decision-making
- The logic involved
- The significance and envisaged consequences for the data subject
"You may be subject to fraud screening" in your T&Cs does not satisfy this obligation. The EDPB's Guidelines 03/2020 require "meaningful information about the logic involved" — not a technical description of the model, but enough for a reasonable person to understand what factors are evaluated.
AWS Fraud Detector's model cards do not automatically generate this disclosure. You must write and publish it yourself, tailored to your specific model configuration.
Art.5(1)(e) — Storage Limitation on Fraud Data
Fraud Detector retains event data to retrain models and detect drift. This creates an open-ended retention obligation that conflicts with Art.5(1)(e)'s requirement that personal data be "kept in a form which permits identification of data subjects for no longer than is necessary."
The tension: fraud models improve with more data and longer historical windows. GDPR requires deletion when it is no longer necessary. These two imperatives conflict.
A compliant approach requires:
- A documented retention schedule for fraud event data
- Automatic deletion of event records beyond the retention window
- Model retraining documentation that does not require retaining indefinitely identified transaction data
- Pseudonymisation of event data before training — so the model learns patterns without retaining identifiable records beyond their necessity period
AWS Fraud Detector's default behaviour retains all events indefinitely. You must implement deletion workflows manually.
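The deletion workflow itself can be a simple scheduled job. A minimal sketch against SQLite — the table and column names are assumptions for illustration; in production this would run against your event store on a cron or Celery beat schedule:

```python
# Art.5(1)(e) retention enforcement: purge fraud events past the
# documented retention window. Table/column names are illustrative.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365  # must match your documented retention schedule

def purge_expired_events(conn):
    cutoff = (datetime.now(timezone.utc)
              - timedelta(days=RETENTION_DAYS)).isoformat()
    cur = conn.execute(
        "DELETE FROM fraud_events WHERE event_time < ?", (cutoff,)
    )
    conn.commit()
    return cur.rowcount  # deleted rows, for the audit log

# Demo with an in-memory database: one expired record, one current
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fraud_events (id TEXT, event_time TEXT)")
old = (datetime.now(timezone.utc) - timedelta(days=400)).isoformat()
new = datetime.now(timezone.utc).isoformat()
conn.executemany("INSERT INTO fraud_events VALUES (?, ?)",
                 [("e1", old), ("e2", new)])
deleted = purge_expired_events(conn)
print(deleted)
```

Log each purge run (timestamp, row count, cutoff) so you can show a supervisory authority the schedule is actually enforced, not just documented.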
The EU-Sovereign Fraud Detection Alternatives
SEON (Recommended — EU-Based)
SEON (https://seon.io) is a Budapest-based fraud prevention company, founded in 2017. It operates entirely within the EU and processes transaction data under Hungarian and EU law.
Key GDPR advantages:
- EU-incorporated controller/processor — no CLOUD Act exposure
- Transparent model explanations: SEON provides per-decision feature breakdowns for Art.22 compliance
- Built-in Art.22(3) tools: dispute and human-review workflows in the dashboard
- DPA readily available, EU jurisdiction in contract
- DPIA-ready documentation provided to enterprise customers
SEON covers device intelligence, email analysis, IP reputation, social signal enrichment, and custom rule engines. It integrates with any payment stack via REST API:
```python
import requests

response = requests.post(
    "https://api.seon.io/SeonRestService/fraud-api/v2/",
    headers={
        "X-API-KEY": "your-eu-api-key",
        "Content-Type": "application/json"
    },
    json={
        "config": {
            "ip": True,
            "email": True,
            "phone": True
        },
        "ip": "203.0.113.42",
        "email": "user@example.com",
        "transaction_id": "txn_001",
        "amount": 149.99,
        "currency": "EUR"
    }
)

result = response.json()
# result["data"]["fraud_score"] — 0-100 score
# result["data"]["state"] — APPROVE / REVIEW / DECLINE
# result["data"]["applied_rules"] — human-readable explanation for Art.22
```
Nethone (EU-Based Behavioural Intelligence)
Nethone (https://nethone.com), now part of Mangopay, is a Warsaw-based fraud prevention company. Mangopay SA is incorporated in Luxembourg — EU law applies throughout.
Nethone specialises in behavioural biometrics: typing cadence, mouse movement, device sensor patterns. Its Profiler SDK collects these signals client-side and scores them server-side.
GDPR advantages:
- Full DPIA template provided as part of enterprise onboarding
- Behavioural signals are pseudonymised before transmission
- EU data residency (AWS Frankfurt or Hetzner) available as contractual requirement
- Model explanations API: each score includes contributing factors in human-readable form
Particularly strong for: e-commerce, digital banking, online gaming — use cases where behavioural biometrics provide a signal without relying on transaction history.
Self-Hosted: Apache Flink + ML Pipeline (Maximum Control)
For organisations with significant data engineering capacity, building a self-hosted fraud detection pipeline on EU infrastructure provides maximum GDPR control:
```python
# Example: Flink + Python ML fraud scorer on Hetzner
# Feature engineering in Flink, scoring in Python
import joblib
import numpy as np
import shap
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.kafka import KafkaSource
from pyflink.common.serialization import SimpleStringSchema

env = StreamExecutionEnvironment.get_execution_environment()

# Kafka source — all within EU Hetzner cluster
source = KafkaSource.builder() \
    .set_bootstrap_servers("kafka.eu.internal:9092") \
    .set_topics("transactions") \
    .set_value_only_deserializer(SimpleStringSchema()) \
    .build()

# Load EU-trained supervised model (e.g. XGBoost or scikit-learn
# classifier; predict_proba requires a classifier, not IsolationForest)
model = joblib.load("/models/fraud_model_eu.pkl")

class FraudScorer:
    def __init__(self, feature_names):
        self.feature_names = feature_names
        self.explainer = shap.TreeExplainer(model)

    def extract_features(self, transaction):
        # Map the raw event to the model's feature vector
        return [transaction[name] for name in self.feature_names]

    def score(self, transaction):
        features = self.extract_features(transaction)
        risk_score = model.predict_proba([features])[0][1]
        explanation = self.explain(features)
        return {
            "transaction_id": transaction["id"],
            "risk_score": float(risk_score),
            "explanation": explanation,  # Art.22 transparency
            "outcome": "BLOCK" if risk_score > 0.85 else
                       "REVIEW" if risk_score > 0.6 else "APPROVE"
        }

    def explain(self, features):
        # SHAP values for Art.22 meaningful explanation.
        # Older SHAP versions return a per-class list for classifiers;
        # newer versions return a single array.
        shap_values = self.explainer.shap_values(np.array([features]))
        row = (shap_values[1][0] if isinstance(shap_values, list)
               else shap_values[0])
        return {k: float(v) for k, v in zip(self.feature_names, row)}
```
Benefits of the self-hosted approach:
- All training data stays on EU infrastructure under your control
- SHAP-based explanations satisfy Art.22 transparency requirements
- Retention schedule enforced by your own database — Art.5(1)(e) compliance via DELETE jobs
- No third-party processor — simplified Art.28 chain
- Model governance under your internal MLOps process — DPIA documents your own system
Suitable ML libraries for EU-hosted fraud detection:
- scikit-learn — IsolationForest for unsupervised anomaly detection
- XGBoost / LightGBM — gradient boosting for supervised classification
- SHAP — model explanation for Art.22 compliance
- Feast (https://feast.dev) — EU-hosted feature store (deploy on Hetzner/Scaleway)
- MLflow — experiment tracking and model registry
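As a starting point, the IsolationForest route needs no fraud labels at all. A minimal sketch with synthetic features (amount in EUR, transactions per hour) — thresholds and feature choice are illustrative, not a tuned configuration:

```python
# Unsupervised anomaly scoring with scikit-learn's IsolationForest.
# Feature values below are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated "normal" transactions: ~EUR 50, ~1 txn/hour velocity
normal = np.column_stack([rng.normal(50, 10, 500),
                          rng.normal(1, 0.3, 500)])

model = IsolationForest(n_estimators=100, contamination=0.01,
                        random_state=42)
model.fit(normal)

inlier = np.array([[52.0, 1.1]])
outlier = np.array([[5000.0, 40.0]])  # huge amount, extreme velocity
# decision_function: lower score = more anomalous
print(model.decision_function(inlier), model.decision_function(outlier))
```

`predict` returns -1 for anomalies and 1 for inliers; you would map the continuous `decision_function` score onto your APPROVE / REVIEW / BLOCK thresholds and document those thresholds in the DPIA.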
Building Art.22(3) Safeguards into Your Fraud Stack
Regardless of whether you use SEON, Nethone, or a self-hosted solution, your fraud workflow needs these Art.22(3) components:
1. Human Intervention Pathway
```python
# Django view — human review request
from datetime import timedelta

from django.utils.timezone import now
from rest_framework.response import Response
from rest_framework.views import APIView

# FraudDecision, ReviewRequest and the notify_fraud_ops Celery task
# are app-specific models/tasks assumed to exist in your project.

class FraudDecisionReviewView(APIView):
    def post(self, request, transaction_id):
        decision = FraudDecision.objects.get(
            transaction_id=transaction_id,
            data_subject=request.user
        )
        ReviewRequest.objects.create(
            fraud_decision=decision,
            reason=request.data.get("reason"),
            contact_email=request.user.email,
            sla_deadline=now() + timedelta(hours=72)  # document SLA
        )
        # Notify fraud ops team (EU-based support)
        notify_fraud_ops.delay(decision.id)
        return Response({"status": "review_requested",
                         "reference": decision.review_reference})
```
2. Point-of-View Expression Channel
The customer must be able to submit context before a final decision. For a blocked payment flow:
- Show the block reason (high-level — do not expose model weights)
- Provide a form: "Tell us more about this transaction"
- Process the submission within a documented timeframe
- Override the model decision if the human reviewer accepts the explanation
3. Contest Mechanism
A post-decision appeal pathway that:
- Documents the original model score and inputs
- Records the customer's contest reason
- Logs the human reviewer's decision and rationale
- Retains the audit trail for supervisory authority inspection
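The contest record itself can be a small, append-only structure. A minimal sketch — the class and field names are illustrative, not any specific framework's API:

```python
# Audit-trail record for a contested fraud decision. Names are
# illustrative; persist this to an append-only store in production.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ContestRecord:
    transaction_id: str
    model_score: float
    model_inputs: dict
    contest_reason: str
    reviewer_decision: str = "PENDING"   # UPHELD / OVERRIDDEN
    reviewer_rationale: str = ""
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def resolve(self, decision, rationale):
        # Record the human reviewer's outcome for SA inspection
        self.reviewer_decision = decision
        self.reviewer_rationale = rationale
        return asdict(self)

rec = ContestRecord("txn_001", 0.91, {"amount": 149.99},
                    "Legitimate purchase, card was not stolen")
audit = rec.resolve("OVERRIDDEN", "Customer verified identity by phone")
print(audit["reviewer_decision"])
```

Keeping the original model score and inputs alongside the human outcome is what lets you demonstrate to a supervisory authority that the override pathway genuinely functions.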
Art.30 Documentation Template
```
Processing activity: Real-time fraud scoring and automated transaction decisions
Controller: [Your organisation]
Processor: [SEON Fraud Fighters Kft. / Nethone (Mangopay SA) / self-hosted]
Legal basis: Art.6(1)(b) — performance of contract (fraud protection)
             Art.22(2)(a) — automated processing necessary for contract performance
Art.22 safeguards documented: [DPIA reference], human override process [SOP-FRAUD-01]
Special category data (Art.9): Transaction history analysed for fraud patterns.
  Mitigations: [pseudonymisation before training, Art.9 signal suppression in features]
DPIA status: Completed [date], DPO consulted [date], supervisory authority
  pre-consultation: [N/A / Submitted to [SA] on [date]]
Retention: Event records: 12 months; model artefacts: 36 months or model retirement
Third-country transfers: None (EU infrastructure only) / SCCs in place with [processor]
Art.13/14 disclosures: Published at [URL/privacy-policy#automated-decisions]
```
The Comparison: AWS Fraud Detector vs EU Alternatives
| Criterion | AWS Fraud Detector | SEON (EU) | Nethone/Mangopay (EU) | Self-Hosted (EU) |
|---|---|---|---|---|
| Jurisdiction | US (Amazon, Delaware) | HU (EU law) | LU (EU law) | Your EU infra |
| CLOUD Act exposure | Yes | No | No | No |
| Art.22 DPIA required | Yes (mandatory) | Yes (but DPIA template provided) | Yes (but template provided) | Yes (you document your own) |
| Art.22(3) safeguards built in | No — must build | Yes (dispute workflow) | Yes (review dashboard) | Must build |
| Model explainability | Basic feature importance | Per-decision rule breakdown | SHAP-style behavioural scores | SHAP (with open-source libs) |
| Art.35 pre-deployment DPIA | Your obligation | Supported | Supported | Your obligation |
| Art.9 signal handling | Manual | Configurable suppression | Pseudonymisation by default | Full control |
| Retention control | Manual (events retained indefinitely by default) | Configurable | Configurable | Full control |
| DPA availability | AWS DPA (US entity) | EU DPA, HU jurisdiction | EU DPA, LU jurisdiction | N/A |
| Training data CLOUD Act risk | Yes | No | No | No |
| Cost model | Per prediction + model training | Per API call (volume pricing) | Per event | Infra + engineering |
Migration Guide: AWS Fraud Detector → SEON
Phase 1: Export and audit your training data (1 week)
```python
import hashlib

import boto3
import pandas as pd

frauddetector = boto3.client("frauddetector", region_name="eu-west-1")

# Export stored event predictions. ListEventPredictions filters are
# optional; paginating without filters lists all stored predictions.
events = []
paginator = frauddetector.get_paginator("list_event_predictions")
for page in paginator.paginate():
    events.extend(page["eventPredictionSummaries"])

df = pd.DataFrame(events)

# Audit: which columns contain personal data?
# Before moving to EU provider: pseudonymise customer_id
df["customer_id"] = df["customer_id"].apply(
    lambda x: hashlib.sha256(x.encode()).hexdigest()
)
df.to_parquet("fraud_training_eu.parquet")
```
Phase 2: Configure SEON with equivalent rules
```python
import requests

# Create a SEON ruleset mirroring your AWS Fraud Detector rules.
# NOTE: the endpoint path and rule schema below are illustrative;
# consult SEON's current API documentation for the exact rules API.
seon_rules = [
    {
        "name": "high_velocity_check",
        "type": "TRANSACTION_FREQUENCY",
        "threshold": 10,
        "window_minutes": 60,
        "action": "REVIEW"
    },
    {
        "name": "new_device_high_amount",
        "type": "DEVICE_AGE_AND_AMOUNT",
        "device_age_days": 1,
        "amount_eur": 500,
        "action": "REVIEW"
    }
]

response = requests.post(
    "https://api.seon.io/SeonRestService/rules/",
    headers={"X-API-KEY": "your-key"},
    json={"rules": seon_rules}
)
```
Phase 3: Run in parallel (shadow mode)
```python
import boto3
import requests

frauddetector = boto3.client("frauddetector", region_name="eu-west-1")

# build_seon_payload and log_comparison are app-specific helpers:
# one maps your transaction dict to SEON's request schema, the other
# persists both scores for the validation analysis.

def score_transaction(transaction):
    # Score with both systems during migration
    aws_score = frauddetector.get_event_prediction(
        detectorId="your-detector",
        eventId=transaction["id"],
        eventTypeName="transaction_event",
        entities=[{"entityType": "customer",
                   "entityId": transaction["customer_id"]}],
        eventTimestamp=transaction["timestamp"],
        eventVariables=transaction["features"]
    )
    seon_score = requests.post(
        "https://api.seon.io/SeonRestService/fraud-api/v2/",
        headers={"X-API-KEY": "your-eu-key"},
        json=build_seon_payload(transaction)
    ).json()
    log_comparison(aws_score, seon_score, transaction["id"])
    return seon_score  # Switch to SEON after validation period
```
Phase 4: Complete the DPIA for SEON
SEON provides a DPIA template for enterprise customers. At minimum your DPIA must document:
- Purpose: fraud prevention for [product category]
- Legal basis: Art.6(1)(b) + Art.22(2)(a)
- Data flows: transaction data from [payment processor] → SEON API (EU endpoint) → fraud score → your application
- CLOUD Act risk: Eliminated — SEON is EU-incorporated, no US subprocessors in the EU data path
- Art.22(3) safeguards: [Link to your human review process SOP]
- Residual risks and mitigations: [Document any remaining risks]
Conclusion
AWS Fraud Detector is technically capable. Its compliance exposure is substantial. The combination of mandatory DPIA under Art.35(3)(a), Art.22 automated decision-making obligations that require human override workflows most teams have never built, and CLOUD Act exposure of your customers' transaction history makes it a difficult choice for EU-regulated financial services, payments, and e-commerce.
SEON is the simplest migration path: EU jurisdiction, built-in Art.22(3) tools, DPIA template provided. Nethone/Mangopay is the right choice for behavioural biometrics use cases. Self-hosted Flink+ML gives maximum control for organisations with the engineering capacity to maintain it.
The Art.22 safeguards are not optional. Every automated fraud block that fires without a human-override pathway is a potential GDPR violation — multiply that by transaction volume and you have a material regulatory exposure.
sota.io runs entirely on EU infrastructure with no US parent company. Your customer data stays in EU jurisdiction — no CLOUD Act exposure, no surprise compelled disclosures. Sign up and deploy your first project in 60 seconds.
EU-Native Hosting
Ready to move to EU-sovereign infrastructure?
sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.