eIDAS 2.0 × EU AI Act: Digital Identity Wallet High-Risk AI Compliance Developer Guide (2026)
The EU Digital Identity Wallet (EUDIW) is mandatory in every EU Member State by late 2026 under Regulation (EU) 2024/1183 (eIDAS 2.0). Simultaneously, the EU AI Act's high-risk classification under Annex III No. 1 covers AI systems used for biometric identification and categorisation of natural persons. When your application uses the EUDIW to authenticate users — matching a face, verifying a PID (Person Identification Data), or processing biometric attributes — you are operating at the intersection of both frameworks.
This is not a theoretical overlap. Identity verification AI is specifically addressed in EU AI Act Annex III, point 1(a), which covers remote biometric identification systems. The EUDIW's selective disclosure mechanism (presenting specific attributes from a Verifiable Credential) can trigger this classification whenever an AI model performs biometric identification on the disclosed attributes.
This guide gives you the engineering runbook for operating an AI-driven relying party application that accepts EUDIW credentials while staying compliant with both eIDAS 2.0 and the EU AI Act.
1. The Regulatory Landscape: Two Frameworks, One Identity Transaction
eIDAS 2.0 (Regulation EU 2024/1183) — What Changed
eIDAS 1.0 (Regulation EU 910/2014) created a framework for national electronic identification schemes. eIDAS 2.0 goes further:
- Art. 5a: Every EU Member State must offer citizens a European Digital Identity Wallet by 26 October 2026
- Art. 5b: Very large online platforms designated under the DSA must accept the EUDIW as an authentication method
- Art. 12: Relying parties (you, the developer accepting EUDIW credentials) must register with a national registration authority
- Art. 17: Technical requirements — your relying-party software must implement the EUDIW protocol stack (OpenID4VP + ISO 18013-5 mdoc)
- Art. 6a: EUDIW must support selective disclosure — users reveal only the attributes required
- Security Requirement: EUDIW must be certified at Common Criteria EAL 4+ or equivalent
EU AI Act (Regulation EU 2024/1689) — Annex III High-Risk Classification
Article 6(2) combined with Annex III No. 1 classifies AI systems as high-risk when they are used for:
- (1a) Biometric identification of natural persons remotely (e.g., comparing a facial image against a selfie taken during onboarding)
- (1b) Biometric categorisation based on sensitive attributes (inferring nationality, ethnicity, political opinion from biometric data)
Exception: AI systems used solely to confirm that a specific natural person is the person they claim to be (1:1 biometric verification against a known template) are expressly excluded from Annex III No. 1(a). Such systems may still be high-risk under other Annex III entries, and sectoral rules (banking, healthcare, public services) impose their own requirements.
Where EUDIW + AI becomes high-risk:
- Your system uses AI to verify that the face presented during a EUDIW presentation matches the photo in the PID attribute
- Your system uses ML to detect anomalies in the credential presentation (fraud detection model)
- Your system infers additional attributes not explicitly disclosed (e.g., inferring age category from voice during an EUDIW transaction)
2. Dual Compliance Matrix: eIDAS 2.0 vs. EU AI Act
| Requirement | eIDAS 2.0 Reference | EU AI Act Reference |
|---|---|---|
| Registration / Authorization | Art. 12 (relying party registration) | Art. 43 (conformity assessment) + Art. 49 (EU database registration) |
| Technical documentation | Art. 17 (interoperability specs) | Art. 11 (technical documentation, Annex IV) |
| Logging & audit trails | Art. 12(4) (transaction logs, 5y retention) | Art. 12 (logging, automatic recording) |
| Transparency to users | Art. 6a(6) (attribute disclosure consent) | Art. 13 (transparency to deployers) |
| Human oversight | — | Art. 14 (human oversight measures) |
| Accuracy & robustness | Art. 17 (protocol-level correctness) | Art. 15 (accuracy, robustness, cybersecurity) |
| Data minimisation | Art. 6a(5) (selective disclosure) | Art. 10(5) (data governance, minimal personal data) |
| Fundamental rights impact | — | Art. 27 (fundamental rights impact assessment) |
| Incident reporting | eIDAS Art. 12 (report breaches to national authority) | Art. 73 (serious incident reporting to market surveillance) |
| Post-market monitoring | — | Art. 72 (post-market monitoring plan) |
| CE Marking (EUDIW software) | CC EAL 4+ certification | Art. 48 (CE marking) + Art. 47 (EU Declaration of Conformity) |
3. eIDAS 2.0 Relying-Party Obligations
Before your application can accept EUDIW presentations, you must complete the eIDAS 2.0 relying-party registration process:
Step 1: National Registration (Art. 12)
Each EU Member State implements its own relying-party registration portal. Requirements typically include:
- Legal entity registration — you must be a legal entity (not an individual developer)
- Attribute request justification — you must state which PID/mDL/custom attestation attributes you request and why
- Purpose limitation binding — your registration is bound to a stated purpose; using attributes beyond your registered purpose violates eIDAS 2.0 and GDPR simultaneously
- Technical endpoint registration — your openid4vp:// or https:// callback endpoint must be registered
# Example: Registered attribute request manifest (NOT to be sent to wallet directly)
REGISTERED_ATTRIBUTE_REQUESTS = {
"purpose": "Age verification for alcohol retail service",
"attributes": [
{"namespace": "eu.europa.ec.eudi.pid.1", "identifier": "age_over_18", "required": True},
],
"relying_party_id": "RP-DE-2024-0042",
"registered_callback": "https://yourapp.eu/eudiw/callback",
"registration_authority": "BSI (Germany)",
"registration_date": "2026-01-15",
"expiry_date": "2028-01-15",
}
Step 2: OpenID4VP Integration (Art. 17)
The EUDIW protocol uses OpenID for Verifiable Presentations (OID4VP) combined with ISO 18013-5 mdoc format for the PID (Person Identification Data). Your relying-party SDK must implement:
- Authorization Request: openid4vp://authorize?client_id=...&presentation_definition=...
- Presentation Definition: JSON structure defining which credentials and attributes you request
- Response handling: VP Token decryption, mdoc structure parsing, signature verification
- Issuer certificate chain verification: Your code must verify the credential issuer chain back to a nationally-approved Trust Anchor
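As an illustration, a minimal OID4VP authorization request for the age-verification purpose could be assembled as below. Field names follow the OpenID4VP and DIF Presentation Exchange drafts; the final EUDIW toolbox profile may differ, and the nonce here is a placeholder (generate one cryptographically per request):

```python
import json
from urllib.parse import urlencode

# Minimal presentation definition requesting only age_over_18 from the PID
# (illustrative; check the current EUDIW reference implementation profile)
AGE_CHECK_PD = {
    "id": "age-verification",
    "input_descriptors": [{
        "id": "eu.europa.ec.eudi.pid.1",
        "format": {"mso_mdoc": {"alg": ["ES256"]}},
        "constraints": {
            "limit_disclosure": "required",  # selective disclosure: only listed fields
            "fields": [{"path": ["$['eu.europa.ec.eudi.pid.1']['age_over_18']"]}],
        },
    }],
}

def build_authorization_request(client_id: str, pd: dict) -> str:
    """Assemble an openid4vp:// authorization request URI (sketch)."""
    params = {
        "client_id": client_id,
        "response_type": "vp_token",
        "response_mode": "direct_post",
        "presentation_definition": json.dumps(pd),
        "nonce": "REPLACE-WITH-RANDOM-NONCE",  # binds the presentation to this session
    }
    return "openid4vp://authorize?" + urlencode(params)

uri = build_authorization_request("RP-DE-2024-0042", AGE_CHECK_PD)
```

The wallet responds with a VP Token at your registered callback; the parsing and signature-verification steps listed above then apply to that response.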
4. EU AI Act Obligations for AI-Augmented EUDIW Flows
If your EUDIW integration involves any machine-learning model processing the disclosed attributes (face matching, anomaly detection, fraud scoring), you must comply with EU AI Act Art. 9–17 high-risk requirements:
Art. 9: Risk Management System
You must establish, implement, document, and maintain a risk management system throughout the lifecycle of the AI system:
from datetime import datetime

class EUDIWAIRiskManagementSystem:
"""
EU AI Act Art. 9 risk management system for EUDIW AI components.
Must be documented, iterative, and maintained post-deployment.
"""
def __init__(self, system_id: str):
self.system_id = system_id
self.risks = []
self.mitigations = []
self.residual_risks = []
def identify_risk(self, risk_id: str, description: str,
likelihood: str, severity: str) -> dict:
"""Art. 9(2)(a): Identify and analyse known/foreseeable risks."""
risk = {
"risk_id": risk_id,
"description": description,
"likelihood": likelihood, # "low" | "medium" | "high"
"severity": severity, # "low" | "medium" | "high" | "critical"
"identified_at": datetime.utcnow().isoformat(),
}
self.risks.append(risk)
return risk
def add_mitigation(self, risk_id: str, measure: str,
residual_risk: str) -> None:
"""Art. 9(2)(c): Apply risk management measures."""
self.mitigations.append({
"risk_id": risk_id,
"measure": measure,
"residual_risk": residual_risk,
"implemented_at": datetime.utcnow().isoformat(),
})
def is_residual_risk_acceptable(self) -> bool:
"""Art. 9(5): Benefits must outweigh residual risks."""
# Provider decision — must be documented in technical file
critical_unmitigated = [
r for r in self.risks
if r["severity"] == "critical" and
not any(m["risk_id"] == r["risk_id"] for m in self.mitigations)
]
return len(critical_unmitigated) == 0
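A concrete Art. 9(2)(a) register for a EUDIW face-matching component might start like this. The entries and the helper function are this guide's own illustrative examples, not an exhaustive assessment:

```python
# Illustrative risk register entries for a EUDIW face-matching component.
FACE_MATCH_RISK_REGISTER = [
    {"risk_id": "R-001",
     "description": "Higher false-rejection rate for under-represented skin tones",
     "likelihood": "medium", "severity": "critical",
     "mitigation": "Demographic accuracy benchmarking + per-group threshold review"},
    {"risk_id": "R-002",
     "description": "Presentation attack (printed photo / replay) passes liveness check",
     "likelihood": "medium", "severity": "high",
     "mitigation": "Tested presentation-attack detection + manual review escalation"},
    {"risk_id": "R-003",
     "description": "Model drift after wallet app camera pipeline update",
     "likelihood": "high", "severity": "medium",
     "mitigation": "Post-market monitoring with rolling accuracy metrics (Art. 72)"},
]

def unmitigated_critical(register: list[dict]) -> list[str]:
    """Return IDs of critical risks lacking a mitigation (Art. 9(5) gate)."""
    return [r["risk_id"] for r in register
            if r["severity"] == "critical" and not r.get("mitigation")]
```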
Art. 10: Data Governance
Training data for your EUDIW-AI model must meet Art. 10(3) requirements — this has direct implications for any face-matching model trained on EU resident data:
- Training data must cover relevant demographic groups — a face-matching model must be evaluated for accuracy across gender, age group, and skin tone to prevent discriminatory error rates
- EU AI Act Art. 10(5): Processing special categories of personal data (including biometric data) during AI development is permitted only where strictly necessary for bias detection and correction, subject to the safeguards listed in Art. 10(5) and a valid GDPR Art. 9(2) legal basis
- Data lineage documentation — every training dataset must be traceable: source, collection date, annotation method, demographic composition
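The lineage requirement can be captured as one structured record per training dataset. The schema and the sample values below are this guide's own sketch, not a format mandated by Art. 10:

```python
from dataclasses import dataclass

@dataclass
class TrainingDatasetRecord:
    """Illustrative Art. 10 data-governance record for one training dataset."""
    dataset_id: str
    source: str
    collection_date: str
    annotation_method: str
    demographic_composition: dict  # group -> share of samples
    gdpr_legal_basis: str

    def coverage_gaps(self, min_share: float = 0.05) -> list[str]:
        """Flag demographic groups below a minimum representation share."""
        return [g for g, share in self.demographic_composition.items()
                if share < min_share]

record = TrainingDatasetRecord(
    dataset_id="faces-eu-2025-q3",
    source="Consented enrollment captures (EU residents)",
    collection_date="2025-09-30",
    annotation_method="Dual human annotation + adjudication",
    demographic_composition={"18-30": 0.35, "31-50": 0.40, "51-70": 0.22, "70+": 0.03},
    gdpr_legal_basis="GDPR Art. 9(2) basis + AI Act Art. 10(5) bias-detection derogation",
)
```

Flagged gaps (here, the 70+ age group) feed directly into the bias evaluation evidence your Annex IV technical file must contain.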
Art. 12: Automatic Logging
High-risk AI systems must log every inference automatically, with sufficient granularity to enable post-hoc audit:
import hashlib
import json
import logging
from datetime import datetime
from typing import Optional
logger = logging.getLogger("eudiw_ai_audit")
class EUDIWAIAuditLogger:
"""
EU AI Act Art. 12 automatic logging for EUDIW AI inference events.
Logs must be kept for 5 years (relying-party eIDAS requirement) or
the period specified in your conformity assessment — whichever is longer.
"""
def log_inference(
self,
session_id: str,
model_id: str,
model_version: str,
input_attributes: dict,
output_decision: str,
confidence_score: float,
processing_time_ms: int,
user_pseudonym: Optional[str] = None,
) -> str:
"""
Log an AI inference event. Returns the audit record ID.
IMPORTANT: Never log raw biometric data (face images, fingerprints).
Log only derived features or hashes for audit integrity.
"""
# Hash input attributes — never log raw biometric values
input_hash = hashlib.sha256(
json.dumps(input_attributes, sort_keys=True).encode()
).hexdigest()
record = {
"audit_id": hashlib.sha256(
f"{session_id}{model_id}{datetime.utcnow().isoformat()}".encode()
).hexdigest()[:16],
"timestamp_utc": datetime.utcnow().isoformat() + "Z",
"session_id": session_id,
"model_id": model_id,
"model_version": model_version,
"input_hash": input_hash,
"output_decision": output_decision,
"confidence_score": round(confidence_score, 4),
"processing_time_ms": processing_time_ms,
"user_pseudonym": user_pseudonym,
"regulation_basis": "EU AI Act Art.12 + eIDAS 2.0 Art.12(4)",
}
logger.info(json.dumps(record))
return record["audit_id"]
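To make the resulting log stream tamper-evident for post-hoc audit, one option is a simple hash chain over the records. This is a minimal sketch; a production deployment might instead rely on a qualified timestamping service or append-only (WORM) storage:

```python
import hashlib
import json

def chain_records(records: list[dict]) -> list[dict]:
    """Link audit records into a hash chain so later tampering is detectable."""
    prev = "0" * 64
    chained = []
    for rec in records:
        body = json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        chained.append({**rec, "prev_hash": prev, "record_hash": digest})
        prev = digest
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Recompute every link; any edited record breaks all subsequent hashes."""
    prev = "0" * 64
    for rec in chained:
        body = {k: v for k, v in rec.items() if k not in ("prev_hash", "record_hash")}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev or rec["record_hash"] != expected:
            return False
        prev = expected
    return True
```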
Art. 13: Transparency
Deployers (organisations using your EUDIW AI system) must receive plain-language "instructions for use" documentation covering:
- The system's purpose, intended use, and known limitations
- The demographic groups where accuracy may differ
- Human oversight measures they must implement
- Circumstances under which the system should not be used (e.g., low-light conditions for face matching)
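That documentation can be versioned alongside the model release as structured data. The fields and values below are illustrative, not a schema prescribed by Art. 13:

```python
# Illustrative Art. 13 "instructions for use" skeleton handed to deployers.
# Field names are this guide's own convention, not a mandated format.
INSTRUCTIONS_FOR_USE = {
    "intended_purpose": "1:1 face verification against the EUDIW PID portrait",
    "known_limitations": [
        "Accuracy degrades in low-light capture conditions",
        "Not evaluated for subjects under 16",
    ],
    "demographic_accuracy_notes": "Error rates vary by age group; see technical file test report",
    "required_human_oversight": "Manual review queue for mid-confidence decisions",
    "do_not_use_when": ["Low-light capture", "Severely degraded reference portrait"],
}
```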
Art. 14: Human Oversight
For EUDIW identity verification, human oversight means:
- A person must be able to intervene and override the AI decision (cannot be a fully automated final decision on identity)
- The system must produce explanations the oversight person can understand (not just a binary accept/reject)
- There must be a defined escalation path when the AI is uncertain (e.g., confidence < threshold → manual review queue)
HUMAN_OVERSIGHT_CONFIG = {
"auto_approve_threshold": 0.97, # Above this: AI decision is logged, auto-approved
"review_queue_threshold": 0.85, # Between 0.85-0.97: human review queue
"auto_reject_threshold": 0.70, # Below 0.70: auto-reject with human notification
"max_queue_wait_seconds": 300, # Art. 14(1): oversight must be feasible in real-time
"escalation_contact": "identity-review@yourorg.eu",
"explanation_required": True, # Art. 14(4)(c): oversight person must be able to interpret the output
}
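Under those thresholds, decision routing might look like the sketch below. Note that the 0.70-0.85 band is not specified in the config above; treating it as a rejection the user can re-open is this guide's assumption, not an Art. 14 mandate:

```python
# Hypothetical routing helper applying the oversight thresholds.
def route_decision(confidence: float) -> str:
    """Map a verification confidence score to an oversight action."""
    if confidence >= 0.97:
        return "auto_approve"        # logged, no human in the loop
    if confidence >= 0.85:
        return "human_review"        # queued for manual identity review
    if confidence >= 0.70:
        return "reject_reviewable"   # rejected, user may request a manual re-check
    return "auto_reject_notify"      # rejected, oversight contact notified
```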
5. The CLOUD Act Sovereignty Paradox in EUDIW Deployments
eIDAS 2.0 is built on the premise of EU digital sovereignty: your identity data stays under EU law. But if your EUDIW relying-party backend runs on AWS, Azure (US), or GCP infrastructure operated by US-parent entities, the CLOUD Act (18 U.S.C. § 2713) creates a direct conflict.
The Technical Problem
Under CLOUD Act, US authorities can compel a US-controlled cloud provider to produce data held on EU servers, regardless of EU data residency guarantees. This means:
- The transaction logs required by eIDAS 2.0 Art. 12(4) (5-year retention)
- The AI audit logs required by EU AI Act Art. 12
- The PID attribute data processed during EUDIW presentations
...are all potentially reachable by US law enforcement without going through EU mutual legal assistance treaties (MLATs).
The Legal Conflict
| Requirement | eIDAS 2.0 / EU AI Act | CLOUD Act |
|---|---|---|
| Data location | EU-resident infrastructure | US-parent can access EU servers |
| Data access | EU authority approval via GDPR/NIS2 MLATs | Direct US DOJ/FBI subpoena |
| Sovereignty guarantee | GDPR Arts. 44–46 (data transfer restrictions) | Overrides local law per § 2713 |
| User rights | Art. 6a eIDAS 2.0 (user controls disclosure) | User has no CLOUD Act standing |
The Mitigation: EU-Sovereign Infrastructure
The only technically sound mitigation is running your EUDIW relying-party backend — including all AI inference and audit logging — on EU-incorporated infrastructure with no US-parent company:
# Infrastructure compliance declaration for EUDIW relying-party deployment
INFRASTRUCTURE_COMPLIANCE = {
"cloud_provider": "sota.io", # EU-incorporated, no US parent
"data_processing_region": "EU-West (Frankfurt)",
"cloud_act_exposure": False, # No US parent entity
"gdpr_chapter_v": "No transfer — processing stays in EU",
"eidas2_sovereignty": "Compliant — no foreign law access vector",
"eu_ai_act_art12_logs": "EU-resident, CLOUD Act-free retention",
"audit_log_retention_years": 5,
"relying_party_registration": "BSI Germany / national authority",
}
6. Python: EUDIWAIComplianceValidator
The following class implements dual-compliance checks across both frameworks for a EUDIW relying-party AI system:
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
@dataclass
class ComplianceCheck:
framework: str
article: str
requirement: str
status: str # "PASS" | "FAIL" | "WARNING" | "UNKNOWN"
finding: str
severity: str # "critical" | "high" | "medium" | "low"
@dataclass
class EUDIWAIComplianceReport:
system_id: str
generated_at: str
checks: list[ComplianceCheck] = field(default_factory=list)
@property
def critical_failures(self):
return [c for c in self.checks if c.status == "FAIL" and c.severity == "critical"]
@property
def overall_status(self):
if self.critical_failures:
return "NON-COMPLIANT"
failures = [c for c in self.checks if c.status == "FAIL"]
if failures:
return "PARTIAL"
return "COMPLIANT"
class EUDIWAIComplianceValidator:
"""
Dual compliance validator for EUDIW relying-party AI systems.
Covers eIDAS 2.0 (Regulation EU 2024/1183) and EU AI Act (Regulation EU 2024/1689).
"""
def validate(
self,
system_config: dict,
risk_management: Optional[dict] = None,
infrastructure: Optional[dict] = None,
) -> EUDIWAIComplianceReport:
report = EUDIWAIComplianceReport(
system_id=system_config.get("system_id", "unknown"),
generated_at=datetime.utcnow().isoformat() + "Z",
)
self._check_eidas2_registration(report, system_config)
self._check_eidas2_protocol(report, system_config)
self._check_ai_act_classification(report, system_config)
self._check_ai_act_risk_management(report, risk_management)
self._check_ai_act_logging(report, system_config)
self._check_cloud_act_exposure(report, infrastructure)
self._check_human_oversight(report, system_config)
return report
def _check_eidas2_registration(self, report: EUDIWAIComplianceReport, config: dict):
has_registration = bool(config.get("relying_party_registration_id"))
report.checks.append(ComplianceCheck(
framework="eIDAS 2.0",
article="Art. 12",
requirement="Relying party registered with national authority",
status="PASS" if has_registration else "FAIL",
finding=(
f"Registration ID: {config.get('relying_party_registration_id')}"
if has_registration
else "No relying-party registration found in config. Required before accepting EUDIW credentials."
),
severity="critical",
))
def _check_eidas2_protocol(self, report: EUDIWAIComplianceReport, config: dict):
protocol = config.get("presentation_protocol", "")
supported_protocols = ["openid4vp", "iso18013-5"]
status = "PASS" if any(p in protocol.lower() for p in supported_protocols) else "FAIL"
report.checks.append(ComplianceCheck(
framework="eIDAS 2.0",
article="Art. 17",
requirement="EUDIW protocol stack (OID4VP + ISO 18013-5 mdoc)",
status=status,
finding=(
f"Protocol: {protocol} — compliant"
if status == "PASS"
else f"Protocol '{protocol}' not recognised. Implement OpenID4VP or ISO 18013-5."
),
severity="critical",
))
def _check_ai_act_classification(self, report: EUDIWAIComplianceReport, config: dict):
uses_biometric_ai = config.get("uses_biometric_ai", False)
is_classified_high_risk = config.get("ai_risk_classification") == "high-risk"
if uses_biometric_ai and not is_classified_high_risk:
status, finding = "FAIL", (
"System uses biometric AI (face matching, liveness detection) but is not classified as high-risk. "
"EU AI Act Annex III No.1(a) requires high-risk classification for biometric identification AI."
)
elif uses_biometric_ai and is_classified_high_risk:
status, finding = "PASS", "Biometric AI correctly classified as high-risk per Annex III No. 1(a)"
else:
status, finding = "PASS", "No biometric AI detected — high-risk classification not required"
report.checks.append(ComplianceCheck(
framework="EU AI Act",
article="Art. 6(2) + Annex III No.1",
requirement="Correct risk classification for biometric AI",
status=status,
finding=finding,
severity="critical",
))
def _check_ai_act_risk_management(self, report: EUDIWAIComplianceReport, rm: Optional[dict]):
has_rm = bool(rm and rm.get("documented", False) and rm.get("iterative", False))
report.checks.append(ComplianceCheck(
framework="EU AI Act",
article="Art. 9",
requirement="Documented iterative risk management system",
status="PASS" if has_rm else "FAIL",
finding=(
"Risk management system documented and marked iterative"
if has_rm
else "Risk management system absent or incomplete. Art.9 requires iterative documentation throughout lifecycle."
),
severity="critical",
))
def _check_ai_act_logging(self, report: EUDIWAIComplianceReport, config: dict):
has_logging = config.get("automatic_logging_enabled", False)
retention_years = config.get("log_retention_years", 0)
min_retention = 5 # eIDAS 2.0 Art.12(4) requirement
if has_logging and retention_years >= min_retention:
status = "PASS"
finding = f"Automatic logging enabled, {retention_years}y retention (minimum {min_retention}y)"
elif has_logging and retention_years < min_retention:
status = "WARNING"
finding = f"Logging enabled but retention {retention_years}y < required {min_retention}y (eIDAS 2.0 Art.12(4))"
else:
status = "FAIL"
finding = "Automatic inference logging not enabled. EU AI Act Art.12 + eIDAS 2.0 Art.12(4) both require audit logs."
report.checks.append(ComplianceCheck(
framework="EU AI Act + eIDAS 2.0",
article="Art. 12 (both)",
requirement="Automatic logging with 5-year retention",
status=status,
finding=finding,
severity="high",
))
def _check_cloud_act_exposure(self, report: EUDIWAIComplianceReport, infra: Optional[dict]):
if not infra:
report.checks.append(ComplianceCheck(
framework="eIDAS 2.0 + GDPR",
article="Art. 44-46 GDPR / eIDAS 2.0 Sovereignty",
requirement="No CLOUD Act exposure for EUDIW transaction data",
status="UNKNOWN",
finding="Infrastructure config not provided. Verify: no US-parent cloud provider.",
severity="high",
))
return
cloud_act_exposure = infra.get("cloud_act_exposure", True)
report.checks.append(ComplianceCheck(
framework="eIDAS 2.0 + GDPR",
article="Art. 44-46 GDPR / eIDAS 2.0 Sovereignty",
requirement="No CLOUD Act exposure for EUDIW transaction data",
status="FAIL" if cloud_act_exposure else "PASS",
finding=(
"Infrastructure has CLOUD Act exposure (US-parent cloud provider). "
"EUDIW transaction logs and AI audit logs may be compelled by US authorities. "
"Migrate to EU-incorporated infrastructure (e.g., sota.io)."
if cloud_act_exposure
else f"No CLOUD Act exposure — {infra.get('cloud_provider', 'unknown')} is EU-incorporated"
),
severity="critical" if cloud_act_exposure else "low",
))
def _check_human_oversight(self, report: EUDIWAIComplianceReport, config: dict):
uses_biometric_ai = config.get("uses_biometric_ai", False)
has_oversight = config.get("human_oversight_enabled", False)
if uses_biometric_ai and not has_oversight:
status = "FAIL"
finding = "Biometric AI active without human oversight mechanism. EU AI Act Art.14 requires override capability."
elif uses_biometric_ai and has_oversight:
status = "PASS"
finding = "Human oversight enabled for biometric AI decisions"
else:
status = "PASS"
finding = "No biometric AI — human oversight Art.14 requirement not triggered"
report.checks.append(ComplianceCheck(
framework="EU AI Act",
article="Art. 14",
requirement="Human oversight for high-risk AI decisions",
status=status,
finding=finding,
severity="high",
))
def print_report(self, report: EUDIWAIComplianceReport) -> None:
print(f"\n{'='*70}")
print(f"EUDIW × EU AI Act Compliance Report")
print(f"System: {report.system_id} | Generated: {report.generated_at}")
print(f"Overall Status: {report.overall_status}")
print(f"{'='*70}")
for check in report.checks:
icon = {"PASS": "✓", "FAIL": "✗", "WARNING": "⚠", "UNKNOWN": "?"}[check.status]
print(f"\n{icon} [{check.status}] {check.framework} — {check.article}")
print(f" Requirement: {check.requirement}")
print(f" Finding: {check.finding}")
if check.status != "PASS":
print(f" Severity: {check.severity.upper()}")
# Example usage
if __name__ == "__main__":
validator = EUDIWAIComplianceValidator()
report = validator.validate(
system_config={
"system_id": "eudiw-onboarding-service-v2",
"relying_party_registration_id": "RP-DE-2024-0042",
"presentation_protocol": "openid4vp",
"uses_biometric_ai": True,
"ai_risk_classification": "high-risk",
"automatic_logging_enabled": True,
"log_retention_years": 5,
"human_oversight_enabled": True,
},
risk_management={"documented": True, "iterative": True},
infrastructure={
"cloud_provider": "sota.io",
"cloud_act_exposure": False,
},
)
validator.print_report(report)
7. The Fundamental Rights Impact Assessment (Art. 27)
EU AI Act Art. 27 requires certain deployers of high-risk AI systems (bodies governed by public law, private entities providing public services, and deployers of specific Annex III systems such as creditworthiness assessment) to conduct a Fundamental Rights Impact Assessment (FRIA) before first use. For EUDIW AI systems deployed in public services or financial onboarding, plan for a FRIA:
Rights at stake:
- Art. 8 EU Charter: Right to private life — biometric processing is inherently privacy-invasive
- Art. 21 EU Charter: Non-discrimination — a face-matching model with unequal error rates by demographic group directly discriminates
- Art. 47 EU Charter: Right to effective judicial remedy — automated identity rejection with no appeal mechanism may violate Art. 47
- Art. 25 GDPR: Data protection by design and by default — the FRIA should feed back into the system's architecture
The FRIA must be documented and available to national market surveillance authorities on request (Art. 74).
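In practice the FRIA can be kept as a structured record next to the technical documentation. The structure and sample values below are this guide's own illustration; Art. 27 prescribes the content of the assessment, not a schema:

```python
# Illustrative FRIA record (EU AI Act Art. 27) — field names and sample
# values are this guide's convention, not a mandated format.
FRIA_RECORD = {
    "system": "eudiw-onboarding-service-v2",
    "deployment_context": "Age verification for alcohol retail service",
    "affected_persons": "All customers presenting a EUDIW credential",
    "rights_assessed": {
        "charter_art_8_privacy": "Biometric data processed transiently, never stored",
        "charter_art_21_non_discrimination": "Per-group error-rate deltas benchmarked and documented",
        "charter_art_47_remedy": "Manual appeal path after any automated rejection",
    },
    "residual_concerns": ["Accuracy not yet benchmarked for ages 75+"],
    "notified_to_market_surveillance": True,  # Art. 27 notification of results
}
```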
8. Timeline: When Do These Requirements Apply?
| Obligation | Deadline | Regulation |
|---|---|---|
| eIDAS 2.0 Member State EUDIW deployment | 26 October 2026 | Regulation EU 2024/1183 Art. 5a |
| Very large platform EUDIW acceptance | 12 months after toolbox completion (est. Q3 2026) | eIDAS 2.0 Art. 5b |
| EU AI Act prohibited practices ban | February 2025 (already in force) | EU AI Act Art. 113 |
| EU AI Act high-risk obligations | August 2026 | EU AI Act Art. 113 |
| EU AI Act GPAI obligations | August 2025 (already in force) | EU AI Act Art. 113 |
| National market surveillance activation | August 2026 | EU AI Act Art. 74 |
The critical window is August 2026: both eIDAS 2.0 EUDIW deployment (October 2026) and EU AI Act high-risk obligations (August 2026) converge in the same quarter. Development teams integrating EUDIW with AI components need to start the conformity assessment process now to meet both deadlines.
9. 25-Item Developer Compliance Checklist
eIDAS 2.0 Relying-Party Checklist
- 1. Legal entity registered as relying party with national authority (Art. 12)
- 2. Attribute request scope formally justified and documented (Art. 12)
- 3. OpenID4VP presentation request correctly formatted (Art. 17)
- 4. ISO 18013-5 mdoc parsing implemented for PID credentials (Art. 17)
- 5. Issuer certificate chain verified against national Trust Anchor (Art. 17)
- 6. Only attributes listed in registration request are processed (Art. 6a(5), GDPR purpose limitation)
- 7. Transaction logs stored with 5-year retention (Art. 12(4))
- 8. Relying-party registration renewed before expiry date
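Checklist item 6 (process only registered attributes) can also be enforced defensively in code. REGISTERED_SCOPE below mirrors the example registration manifest earlier in this guide and is illustrative:

```python
# Illustrative scope filter for disclosed EUDIW attributes.
REGISTERED_SCOPE = {"eu.europa.ec.eudi.pid.1": {"age_over_18"}}

def filter_to_registered_scope(disclosed: dict) -> dict:
    """Drop any disclosed attribute outside the registered request scope.

    Even if a wallet over-discloses, unregistered attributes must never
    reach downstream processing (Art. 6a(5) + GDPR purpose limitation).
    """
    return {
        ns: {k: v for k, v in attrs.items() if k in REGISTERED_SCOPE.get(ns, set())}
        for ns, attrs in disclosed.items()
        if ns in REGISTERED_SCOPE
    }
```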
EU AI Act High-Risk Checklist (if biometric AI used)
- 9. AI system formally classified as high-risk under Annex III No. 1(a) (Art. 6(2))
- 10. Technical documentation prepared per Annex IV format (Art. 11)
- 11. Risk management system established and documented as iterative (Art. 9)
- 12. Known/foreseeable risks identified and mitigated (Art. 9(2))
- 13. Residual risk acceptable vs. benefits — decision documented (Art. 9(5))
- 14. Training data governance documentation complete (Art. 10(2))
- 15. Demographic bias evaluation completed and documented (Art. 10(4))
- 16. No raw biometric data in AI audit logs — only hashes or derived features (Art. 12 + GDPR)
- 17. Automatic inference logging enabled with structured output (Art. 12)
- 18. Transparency documentation provided to deployers (Art. 13)
- 19. Human oversight mechanism implemented with override capability (Art. 14)
- 20. Confidence threshold defined for human review escalation (Art. 14(4))
- 21. Accuracy benchmarked across demographic groups (Art. 15(4))
- 22. EU Declaration of Conformity prepared (Art. 47)
- 23. System registered in EU AI Act database if used by public authorities (Art. 49)
- 24. Fundamental Rights Impact Assessment completed if public-sector or financial deployment (Art. 27)
- 25. Infrastructure is EU-sovereign — no CLOUD Act exposure for biometric AI logs
Where to Run Your EUDIW AI Backend
EUDIW transaction logs, AI audit records, and PID-derived data are among the most sensitive categories of personal information. Running this infrastructure on US-parent cloud providers creates a structural CLOUD Act conflict with eIDAS 2.0's sovereignty guarantees and GDPR Art. 44–46.
sota.io is an EU-native PaaS with no US parent entity. EUDIW relying-party backends, AI inference services, and 5-year audit log storage all run under EU law exclusively — no CLOUD Act exposure, no transatlantic data transfer, no foreign court orders. Deploy from your terminal in minutes.
# Deploy your EUDIW relying-party service to EU-sovereign infrastructure
sota deploy --region eu-west --project eudiw-relying-party
This article covers Regulation (EU) 2024/1183 (eIDAS 2.0) and Regulation (EU) 2024/1689 (EU AI Act) as of April 2026. The EUDIW toolbox technical specifications are under active development by the European Commission; protocol details may evolve before October 2026 deployment deadlines.
See Also
- NIS2 Art.23 Incident Reporting: 24h/72h/1-Month Timelines — EUDIW relying-party operators in NIS2-covered sectors must report identity infrastructure incidents to NCAs on the Art.23 timeline
- NIS2 Essential Entity vs Important Entity Classification — Trust service providers and digital infrastructure operators are NIS2 Essential Entities regardless of size
- NIS2 Art.23 + GDPR Art.33 Dual Reporting — EUDIW breaches involving PID or biometric data trigger simultaneous NIS2 and GDPR reporting obligations
- DORA Art.19 Major ICT Incident Reporting: 4h/24h/5-Day — Financial-sector EUDIW deployments (banking, payment services) may also be subject to DORA's stricter 4-hour ICT incident notification