EU AI Act + DORA: Dual Compliance for Financial Sector AI Systems (2026)
A European bank deploys a credit-scoring AI that evaluates loan applications in real time. It is high-risk under EU AI Act Annex III point 5(b) (creditworthiness assessment and credit scoring). It is also an ICT system under DORA because it processes data and supports a financial service. When the system generates a wrong output that causes a service disruption, both regulations impose incident reporting requirements, with different timelines, different competent authorities, and different disclosure obligations.
This is not an edge case. Every AI system used by a financial entity in scope of DORA simultaneously falls under the EU AI Act. The two regulations overlap in significant areas — ICT risk management, testing, third-party oversight, human oversight — and diverge in others. Running two separate compliance programmes is expensive and creates gaps. Running one unified programme requires understanding exactly where the two frameworks align and where they require separate action.
This guide maps DORA against the EU AI Act for financial sector AI systems: scope, risk management, third-party obligations, incident reporting, resilience testing, and the practical dual-compliance strategy.
Who Is in Scope of Both Regulations?
DORA Scope (Art.2)
DORA applies to financial entities as defined in Art.2(2), including:
- Credit institutions (banks, savings institutions)
- Payment institutions and electronic money institutions
- Investment firms and trading venues
- Insurance and reinsurance undertakings
- Occupational pension institutions (IORPs)
- Crypto-asset service providers (CASPs) under MiCA
- Central counterparties (CCPs) and central securities depositories (CSDs)
- Credit rating agencies and crowdfunding service providers
Microenterprises employing fewer than 10 persons with annual turnover under €2 million qualify for the simplified ICT risk management framework under Art.16, which reduces (but does not eliminate) DORA obligations.
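The scope gate can be sketched as a one-line check; the function name and return strings are illustrative, and the thresholds paraphrase the microenterprise definition:

```python
def dora_framework(headcount: int, annual_turnover_eur: int) -> str:
    """Illustrative check: microenterprises (fewer than 10 persons and
    turnover/balance sheet up to EUR 2M) get the simplified Art.16 framework."""
    if headcount < 10 and annual_turnover_eur <= 2_000_000:
        return "simplified (Art.16)"
    return "full ICT risk management framework"

print(dora_framework(8, 1_500_000))     # simplified (Art.16)
print(dora_framework(250, 40_000_000))  # full ICT risk management framework
```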
EU AI Act Scope
The EU AI Act applies to providers and deployers of AI systems whose systems are placed on the EU market or used within the EU, regardless of where the provider is established. A US bank serving EU clients through an AI credit-scoring system is in scope. A Frankfurt insurance subsidiary of a UK firm using AI for claims triage is in scope.
The overlap: Any financial entity in scope of DORA that places an AI system on the EU market or uses one within the EU is simultaneously subject to both regulations. For most European banks, insurers, and investment firms, this covers virtually all AI deployments.
AI Systems as ICT Assets Under DORA
DORA does not define "artificial intelligence" and does not use the phrase "AI system." AI systems fall within DORA's framework because DORA applies to all ICT systems — defined broadly in Art.3(3) as "software or hardware, including IT infrastructure" that supports business processes.
Under DORA Art.8(1), financial entities must identify all ICT assets including software, and maintain an inventory that maps dependencies and interconnections. An AI model deployed in production — whether hosted internally or accessed via API — is an ICT asset for DORA purposes.
This means:
- AI systems must appear in the ICT asset register (Art.8(1))
- AI systems must be included in ICT risk assessments (Art.8(2))
- Dependencies on AI providers must be covered in ICT third-party risk management (Art.28)
- AI system failures must be classified and potentially reported as ICT-related incidents (Art.17)
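The Art.8(1) registration duty can be sketched as a minimal asset-register entry with a dependency map; the field names and register structure here are illustrative, not taken from DORA:

```python
from dataclasses import dataclass, field

@dataclass
class ICTAssetEntry:
    """Illustrative ICT asset register entry for an AI system (DORA Art.8(1))."""
    asset_id: str
    name: str
    asset_type: str              # e.g. "ai_model", "api_dependency"
    business_function: str       # business function the asset supports
    criticality: str             # "critical" | "significant" | "standard"
    dependencies: list[str] = field(default_factory=list)  # upstream ICT assets

# A credit-scoring model and the external inference API it depends on
register: dict[str, ICTAssetEntry] = {}
for entry in [
    ICTAssetEntry("AST-001", "RetailCreditScorer", "ai_model",
                  "retail_lending", "critical", ["AST-002"]),
    ICTAssetEntry("AST-002", "VendorInferenceAPI", "api_dependency",
                  "retail_lending", "critical"),
]:
    register[entry.asset_id] = entry

# Single-point-of-failure check: critical assets other assets depend on
spofs = [a.asset_id for a in register.values()
         if a.criticality == "critical"
         and any(a.asset_id in other.dependencies for other in register.values())]
print(spofs)  # ['AST-002']
```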
DORA Art.8 ICT Risk Management vs EU AI Act Art.9 Risk Management
The most significant operational overlap between DORA and the EU AI Act occurs in risk management. Both regulations require a systematic risk management process for AI-based ICT systems. They are complementary but not identical.
DORA Art.6: ICT Risk Management Framework
DORA Art.6 requires financial entities to maintain an ICT Risk Management Framework — a documented set of tools, methods, processes, and policies. The framework must address:
- Identification (Art.8): Classify ICT assets by criticality, map business functions supported, identify single points of failure
- Protection (Art.9): Implement ICT security controls, access management, encryption, patch management
- Detection (Art.10): Establish mechanisms to promptly detect anomalous activities affecting ICT systems
- Response and recovery (Art.11): Define and test ICT response and recovery plans
- Communication (Art.14): Define ICT-related incident communication plans
EU AI Act Art.9: Risk Management System
For high-risk AI systems, Art.9 requires a continuous iterative risk management system throughout the system lifecycle. It must include:
- Identification and analysis of known and reasonably foreseeable risks
- Estimation and evaluation of risks that may emerge in intended use and reasonably foreseeable misuse
- Risk control measures through design and development, and in deployment
- Residual risk assessment and user information requirements
Where They Align and Where They Diverge
| Dimension | DORA Art.8 ICT Risk | EU AI Act Art.9 Risk |
|---|---|---|
| Scope | All ICT systems | High-risk AI systems only |
| Risk horizon | ICT operational risks (availability, integrity, confidentiality) | AI-specific risks (bias, accuracy drift, unsafe outputs, misuse) |
| Lifecycle | Continuous, annual review | Continuous, updated pre-deployment and on material change |
| Documentation | ICT risk register, asset inventory | Technical documentation under Annex IV |
| Testing | Resilience testing under Arts.24-27 | Pre-market testing under Art.9; post-market monitoring under Art.72 |
| Competent authority | National financial supervisor (Lead Overseer for designated CTPPs) | National market surveillance authority |
Practical implication: A high-risk AI credit-scoring system requires two formal risk assessments — one under DORA's ICT Risk Management Framework and one under EU AI Act Art.9. The DORA assessment focuses on operational resilience (what happens if the system is unavailable or corrupted). The EU AI Act assessment focuses on AI-specific risks (what happens if the system generates biased or inaccurate outputs that discriminate against borrowers).
The most efficient approach is a unified risk register with sections for each regulatory framework, sharing underlying data (system description, interfaces, dependencies) while maintaining separate risk taxonomies.
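A sketch of that structure, with illustrative names: shared system metadata carried once, two separate risk lists on top of it:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    risk_id: str
    description: str
    framework: str   # "DORA" or "EU AI Act"
    severity: str

@dataclass
class UnifiedRiskEntry:
    """Shared system description with framework-specific risk taxonomies."""
    system_name: str
    interfaces: list[str]
    dependencies: list[str]
    dora_risks: list[Risk] = field(default_factory=list)    # operational resilience
    ai_act_risks: list[Risk] = field(default_factory=list)  # AI-specific risks

entry = UnifiedRiskEntry(
    system_name="RetailCreditScorer",
    interfaces=["loan-origination-api"],
    dependencies=["vendor-inference-api"],
)
entry.dora_risks.append(
    Risk("R-01", "Vendor API outage halts loan decisions", "DORA", "high"))
entry.ai_act_risks.append(
    Risk("R-02", "Accuracy drift disadvantages a borrower group", "EU AI Act", "high"))

# One register, two reporting views
by_framework = {"DORA": entry.dora_risks, "EU AI Act": entry.ai_act_risks}
print({k: len(v) for k, v in by_framework.items()})  # {'DORA': 1, 'EU AI Act': 1}
```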
DORA Art.28-30: ICT Third-Party Risk and EU AI Act Deployer Obligations
When AI Providers Become ICT Third-Party Service Providers
Under DORA Art.3(18), an ICT third-party service provider (TPSP) is any undertaking providing ICT services to a financial entity. When a bank accesses an AI model via API — from a cloud AI provider, an AI-as-a-service platform, or a specialized fintech — that provider is an ICT TPSP for DORA purposes.
DORA Art.28 requires financial entities to:
- Maintain a register of information on all contractual arrangements with ICT TPSPs (Art.28(3))
- Conduct due diligence before entering ICT arrangements and annually thereafter (Art.28(4))
- Ensure contracts include mandatory provisions on service levels, audit rights, data location, and business continuity (Art.30)
- Assess whether ICT arrangements support critical or important functions (CIF) — triggering enhanced oversight requirements
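The Art.30 contract-review step can be sketched as a completeness check; the provision keys below paraphrase the Art.30 themes and are not the regulation's wording:

```python
# Paraphrased Art.30 themes, not the regulation's exact list or wording
ART30_PROVISIONS = {
    "service_levels",
    "data_location",
    "audit_rights",
    "business_continuity",
    "exit_strategy",
}

def missing_art30_provisions(contract_clauses: set[str]) -> set[str]:
    """Return Art.30 themes not yet covered by the contract under review."""
    return ART30_PROVISIONS - contract_clauses

vendor_contract = {"service_levels", "data_location", "audit_rights"}
print(sorted(missing_art30_provisions(vendor_contract)))
# ['business_continuity', 'exit_strategy']
```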
Critical ICT Third-Party Service Providers (CTPPs) Under Art.31
DORA Art.31 establishes a designation mechanism for Critical TPSPs: providers whose ICT services are systemic to EU financial stability. Once designated by the Joint Committee of ESAs, CTPPs are subject to direct oversight by a Lead Overseer (EBA, ESMA, or EIOPA).
For AI providers, the CTPP designation risk is real: a cloud AI provider that supplies credit-scoring models to a majority of European retail banks could qualify. Financial entities using CTPP-designated AI providers benefit from the Lead Overseer's audit and remediation powers — but must also track CTPP compliance status and update their own risk assessments when CTPPs receive oversight recommendations.
EU AI Act Art.26: Deployer Obligations When Using Third-Party AI
Under EU AI Act Art.26, a deployer who uses a third-party AI system for a high-risk application (e.g., the bank using a vendor credit-scoring model) retains specific obligations regardless of the vendor's own compliance:
- Use the system in accordance with the provider's instructions for use (Art.26(1))
- Assign human oversight to natural persons with the necessary competence, training, and authority (Art.26(2))
- Conduct a fundamental rights impact assessment where required (Art.27)
- Monitor the system's operation and report serious incidents to the provider and competent authorities (Art.26(5), Art.73)
Key distinction: DORA Art.28 due diligence focuses on the provider's operational resilience (can the AI service stay available?). EU AI Act Art.26 deployer obligations focus on appropriate use of the AI system (are outputs used correctly? Are humans in the loop?).
A bank that performs thorough DORA due diligence on an AI credit-scoring vendor — reviewing their BCP, SLAs, and security controls — but fails to implement EU AI Act Art.14 human oversight for individual credit decisions has a compliance gap under EU AI Act regardless of DORA compliance.
Dual Incident Reporting Timelines
When an AI system failure constitutes both a DORA ICT-related incident and an EU AI Act serious incident, different reporting timelines apply.
DORA Art.17-20: ICT-Related Incident Reporting
DORA Art.17 requires financial entities to classify ICT-related incidents as major if they meet thresholds based on:
- Number of clients or financial counterparts affected
- Duration of the incident
- Geographic spread
- Data loss
- Reputational, financial, and economic impact (Art.18)
For major ICT-related incidents, Art.19 establishes a three-stage reporting timeline:
- Initial notification: Within 4 hours of classifying the incident as major, and no later than 24 hours after becoming aware of it
- Intermediate report: Within 72 hours of the initial notification
- Final report: Within 1 month of the intermediate report
Reports go to the competent authority designated under DORA (typically the national financial supervisor, such as BaFin in Germany or the ACPR in France), which then shares them with the ESAs.
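The three-stage cadence can be derived mechanically from the classification timestamp. A small sketch, with the "one month" final-report window approximated as 30 days (the actual deadlines sit in the DORA RTS, not in this code):

```python
from datetime import datetime, timedelta

def dora_reporting_deadlines(classified_at: datetime) -> dict[str, datetime]:
    """Derive the Art.19 reporting chain from the classification timestamp.
    The 'one month' final-report window is approximated as 30 days."""
    initial = classified_at + timedelta(hours=4)
    intermediate = initial + timedelta(hours=72)
    final = intermediate + timedelta(days=30)
    return {"initial": initial, "intermediate": intermediate, "final": final}

d = dora_reporting_deadlines(datetime(2026, 3, 1, 9, 0))
print(d["initial"])       # 2026-03-01 13:00:00
print(d["intermediate"])  # 2026-03-04 13:00:00
print(d["final"])         # 2026-04-03 13:00:00
```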
EU AI Act Art.73: Serious Incident Reporting for High-Risk AI
Under EU AI Act Art.73, providers of high-risk AI systems must report serious incidents to the market surveillance authority. A "serious incident" under Art.3(49) means:
- An incident or malfunctioning of a high-risk AI system that directly or indirectly leads to the death of a person or serious harm to a person's health
- A serious and irreversible disruption of the management or operation of critical infrastructure
- An infringement of obligations under Union law intended to protect fundamental rights
- Serious harm to property or the environment
Art.73(2) specifies reporting immediately after establishing a causal link (or the reasonable likelihood of one), and in any case within 15 days of becoming aware of the serious incident. Shorter deadlines apply for a serious and irreversible disruption of critical infrastructure (2 days, Art.73(3)) and for a death (10 days, Art.73(4)).
The Dual-Reporting Problem
An AI credit-scoring system failure that results in 50,000 loan decisions being suspended for 8 hours could simultaneously be:
- A major ICT-related incident under DORA (large number of clients affected, extended duration) requiring a 4-hour initial notification to the financial supervisor
- A serious incident under EU AI Act Art.73 if the suspension caused denial of essential services to natural persons, requiring reporting to the market surveillance authority within 15 days
Financial entities must prepare dual incident response procedures with clear decision trees:
- Is this an ICT-related incident under DORA Art.17? → Apply DORA classification criteria
- Does this also constitute a serious incident under EU AI Act Art.3(49)? → Apply AI Act serious incident criteria
- Do both apply? → Initiate parallel notification tracks with different competent authorities and different timelines
The 4-hour DORA initial notification deadline is significantly more demanding than the EU AI Act's 15-day window. Building unified incident management that satisfies both means letting the DORA timeline govern the combined response.
DORA Art.24-27: Digital Operational Resilience Testing and EU AI Act Testing
DORA Threat-Led Penetration Testing (TLPT)
DORA's testing chapter (Arts.24-27) requires financial entities to maintain a digital operational resilience testing programme: appropriate tests on all ICT systems and applications supporting critical or important functions at least yearly (Art.24(6)), and Threat-Led Penetration Testing (TLPT) at least every 3 years (Art.26(1)).
TLPT follows the TIBER-EU methodology and must be conducted by independent certified testers. AI systems supporting critical or important functions are in scope. TLPT for AI systems should include:
- Adversarial prompting and input manipulation tests
- API endpoint abuse testing
- Model confidence manipulation tests
- Data poisoning simulation (for systems with online learning)
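A TLPT-style harness for the AI-specific tests might start from checks like the following. Everything here is a sketch: the model client is a stub and all test names and thresholds are hypothetical:

```python
def score(features: dict) -> float:
    """Stub standing in for the deployed credit-scoring model."""
    base = 0.5 + 0.3 * features.get("income_band", 0) - 0.2 * features.get("defaults", 0)
    return max(0.0, min(1.0, base))  # clamp to the valid score range

def test_input_perturbation(tolerance: float = 0.15) -> bool:
    """Irrelevant input changes should not swing the score materially."""
    baseline = score({"income_band": 1, "defaults": 0})
    perturbed = score({"income_band": 1, "defaults": 0, "irrelevant_field": 999})
    return abs(baseline - perturbed) <= tolerance

def test_out_of_range_inputs() -> bool:
    """Nonsense inputs must stay inside the valid score range, not crash."""
    s = score({"income_band": -10**6, "defaults": 10**6})
    return 0.0 <= s <= 1.0

results = {t.__name__: t() for t in (test_input_perturbation, test_out_of_range_inputs)}
print(results)  # {'test_input_perturbation': True, 'test_out_of_range_inputs': True}
```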
EU AI Act Testing Requirements
For high-risk AI systems, Art.9(7) requires testing procedures to ensure the system meets requirements before market placement. Post-market monitoring under Art.72 requires ongoing collection of data on performance after deployment.
Unlike DORA's annual testing obligation, the EU AI Act does not prescribe a testing frequency — it requires testing "throughout the development lifecycle" and ongoing monitoring. However, any substantial modification of a high-risk AI system triggers a new conformity assessment under Art.43(4).
Unified testing approach: Build a single AI Resilience Testing Programme that satisfies both:
- Annual adversarial testing covering DORA ICT operational requirements (availability, integrity, security)
- Continuous bias and accuracy monitoring covering EU AI Act Art.9 risk management requirements
- TLPT every 3 years covering both ICT threat scenarios and AI-specific attack vectors
Python DORAAIActComplianceChecker
```python
from dataclasses import dataclass, field
from enum import Enum


class FinancialEntityCategory(Enum):
    CREDIT_INSTITUTION = "credit_institution"
    INVESTMENT_FIRM = "investment_firm"
    INSURANCE_UNDERTAKING = "insurance_undertaking"
    PAYMENT_INSTITUTION = "payment_institution"
    CRYPTO_ASSET_SERVICE_PROVIDER = "casp"
    MICROENTERPRISE = "microenterprise"  # <10 persons, <2M EUR — simplified DORA


class ICTRiskLevel(Enum):
    CRITICAL = "critical"  # AI supports critical or important function
    SIGNIFICANT = "significant"
    STANDARD = "standard"


class AIActRiskCategory(Enum):
    HIGH_RISK = "high_risk"  # Annex III classification
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"
    GPAI = "gpai"  # General purpose AI model


class IncidentType(Enum):
    ICT_INCIDENT_ONLY = "ict_only"  # DORA only
    AI_SERIOUS_INCIDENT_ONLY = "ai_only"  # EU AI Act only
    DUAL_INCIDENT = "dual"  # Both regulations triggered
    NO_INCIDENT = "none"


@dataclass
class DORAAIActProfile:
    """Complete profile of a financial entity's AI system for dual compliance."""
    entity_category: FinancialEntityCategory
    ai_system_name: str
    ict_risk_level: ICTRiskLevel
    ai_act_category: AIActRiskCategory
    supports_critical_function: bool  # DORA critical or important function
    is_third_party_ai: bool  # External AI API provider
    provider_is_ctpp_designated: bool = False
    has_human_oversight: bool = False  # EU AI Act Art.14
    has_ict_risk_register_entry: bool = False
    has_eu_ai_act_technical_doc: bool = False
    incident_response_covers_both: bool = False
    annual_resilience_tests_run: bool = False
    post_market_monitoring_active: bool = False


@dataclass
class DualComplianceGap:
    regulation: str  # "DORA" | "EU AI Act" | "Both"
    article: str
    description: str
    severity: str  # "critical" | "high" | "medium"
    remediation: str


@dataclass
class ComplianceAssessment:
    profile: DORAAIActProfile
    gaps: list[DualComplianceGap] = field(default_factory=list)
    dora_compliant: bool = False
    ai_act_compliant: bool = False
    overall_risk_score: int = 0  # 0-100


def classify_incident(
    clients_affected: int,
    duration_hours: float,
    geographic_spread: str,  # "local" | "national" | "cross_border"
    caused_harm_to_persons: bool,
    disrupted_critical_infrastructure: bool,
) -> IncidentType:
    """Determine which incident reporting obligations are triggered."""
    # DORA major incident thresholds (simplified — see RTS for full criteria)
    dora_major = (
        clients_affected > 1000
        or duration_hours > 4.0
        or geographic_spread == "cross_border"
    )
    # EU AI Act serious incident criteria (Art.3(49))
    ai_serious = caused_harm_to_persons or disrupted_critical_infrastructure
    if dora_major and ai_serious:
        return IncidentType.DUAL_INCIDENT
    elif dora_major:
        return IncidentType.ICT_INCIDENT_ONLY
    elif ai_serious:
        return IncidentType.AI_SERIOUS_INCIDENT_ONLY
    return IncidentType.NO_INCIDENT


def get_reporting_deadline(incident_type: IncidentType) -> dict:
    """Return reporting deadlines by incident type."""
    deadlines = {
        IncidentType.DUAL_INCIDENT: {
            "DORA_initial": "4 hours from classification (Art.19(4)(a))",
            "DORA_intermediate": "72 hours from initial (Art.19(4)(b))",
            "DORA_final": "1 month from intermediate (Art.19(4)(c))",
            "AI_Act_serious": "15 days from awareness (Art.73(2))",
            "note": "DORA 4-hour initial deadline governs combined response cadence",
        },
        IncidentType.ICT_INCIDENT_ONLY: {
            "DORA_initial": "4 hours from classification",
            "DORA_intermediate": "72 hours",
            "DORA_final": "1 month",
        },
        IncidentType.AI_SERIOUS_INCIDENT_ONLY: {
            "AI_Act_serious": "15 days from awareness (Art.73(2))",
        },
        IncidentType.NO_INCIDENT: {"note": "No mandatory external reporting required"},
    }
    return deadlines.get(incident_type, {})


def assess_dual_compliance(profile: DORAAIActProfile) -> ComplianceAssessment:
    """Assess DORA + EU AI Act dual compliance gaps for a financial entity AI system."""
    assessment = ComplianceAssessment(profile=profile)
    gaps = []
    # DORA Art.8 — ICT Risk Register
    if not profile.has_ict_risk_register_entry:
        gaps.append(DualComplianceGap(
            regulation="DORA",
            article="Art.8(1)",
            description="AI system not included in ICT asset register",
            severity="critical",
            remediation="Add AI system to ICT register: name, criticality, business function, dependencies",
        ))
    # DORA Art.28 — Third-party AI provider due diligence
    if profile.is_third_party_ai and not profile.provider_is_ctpp_designated:
        gaps.append(DualComplianceGap(
            regulation="DORA",
            article="Art.28(4)",
            description="Third-party AI provider not assessed for CTPP designation risk",
            severity="high",
            remediation="Assess provider: market share, systemic importance, sub-outsourcing chain. Flag for annual CTPP register check.",
        ))
    # EU AI Act Art.11 — Technical documentation
    if profile.ai_act_category == AIActRiskCategory.HIGH_RISK and not profile.has_eu_ai_act_technical_doc:
        gaps.append(DualComplianceGap(
            regulation="EU AI Act",
            article="Art.11 + Annex IV",
            description="High-risk AI system missing technical documentation",
            severity="critical",
            remediation="Prepare Annex IV documentation: system description, training data, testing, accuracy metrics, intended use, residual risks",
        ))
    # EU AI Act Art.14 — Human oversight
    if profile.ai_act_category == AIActRiskCategory.HIGH_RISK and not profile.has_human_oversight:
        gaps.append(DualComplianceGap(
            regulation="EU AI Act",
            article="Art.14",
            description="High-risk AI system deployed without human oversight measures",
            severity="critical",
            remediation="Implement: human review for high-stakes outputs, override capability, competence training for operators",
        ))
    # Dual — Incident response
    if not profile.incident_response_covers_both:
        gaps.append(DualComplianceGap(
            regulation="Both",
            article="DORA Art.17 + AI Act Art.73",
            description="Incident response procedure does not address dual reporting obligations",
            severity="high",
            remediation="Add decision tree: classify under both frameworks, identify competent authorities, set 4-hour DORA trigger as primary timeline",
        ))
    # DORA Art.24 — Annual resilience tests
    if not profile.annual_resilience_tests_run and profile.supports_critical_function:
        gaps.append(DualComplianceGap(
            regulation="DORA",
            article="Art.24(6)",
            description="AI system supporting critical function not included in annual resilience tests",
            severity="high",
            remediation="Include AI system in DORA annual ICT testing programme. Cover adversarial inputs, API failure, model degradation scenarios.",
        ))
    # EU AI Act Art.72 — Post-market monitoring
    if profile.ai_act_category == AIActRiskCategory.HIGH_RISK and not profile.post_market_monitoring_active:
        gaps.append(DualComplianceGap(
            regulation="EU AI Act",
            article="Art.72",
            description="No post-market monitoring system for high-risk AI outputs",
            severity="high",
            remediation="Implement: accuracy drift monitoring, bias metrics, output distribution tracking, quarterly review process",
        ))
    assessment.gaps = gaps
    assessment.overall_risk_score = min(100, len(gaps) * 15)
    assessment.dora_compliant = not any(
        g.regulation in ("DORA", "Both") and g.severity == "critical" for g in gaps
    )
    assessment.ai_act_compliant = not any(
        g.regulation in ("EU AI Act", "Both") and g.severity == "critical" for g in gaps
    )
    return assessment


# Example: credit-scoring AI at a European bank
profile = DORAAIActProfile(
    entity_category=FinancialEntityCategory.CREDIT_INSTITUTION,
    ai_system_name="RetailCreditScorer v3.2",
    ict_risk_level=ICTRiskLevel.CRITICAL,
    ai_act_category=AIActRiskCategory.HIGH_RISK,
    supports_critical_function=True,
    is_third_party_ai=True,
    has_human_oversight=True,
    has_ict_risk_register_entry=True,
    has_eu_ai_act_technical_doc=False,  # Missing!
    incident_response_covers_both=False,  # Gap
    annual_resilience_tests_run=False,  # Gap
    post_market_monitoring_active=True,
)
result = assess_dual_compliance(profile)
print(f"DORA compliant: {result.dora_compliant}")
print(f"EU AI Act compliant: {result.ai_act_compliant}")
print(f"Risk score: {result.overall_risk_score}/100")
for gap in result.gaps:
    print(f"  [{gap.severity.upper()}] {gap.regulation} {gap.article}: {gap.description}")
```
25-Item DORA + EU AI Act Dual Compliance Checklist
Part A — DORA Scope and ICT Risk Foundation (Items 1-5)
1. AI system in DORA scope: Confirm the financial entity is in DORA Art.2(2) scope. Identify whether the microenterprise simplified framework (Art.16) applies.
2. AI as ICT asset: Register all AI systems in the ICT asset register (Art.8(1)) with criticality classification (critical/significant/standard).
3. Business function mapping: Map each AI system to the business function it supports. Determine whether the function is "critical or important" under DORA Art.3(22).
4. DORA ICT risk assessment: Include the AI system in the formal ICT risk assessment (Art.8(2)). Document single points of failure and concentration risk from AI API dependencies.
5. ICT Risk Management Framework coverage: Verify the AI system is covered by all five DORA pillars: identify, protect, detect, respond, recover (Arts.8-11, 14).
Part B — EU AI Act Risk Classification (Items 6-10)
6. Annex III classification check: Apply EU AI Act Art.6(2) + Annex III to determine whether the AI system is high-risk. Financial sector triggers: creditworthiness assessment and credit scoring (point 5(b)), employment/worker management (point 4), critical infrastructure (point 2).
7. Prohibited use check: Confirm the AI system does not use prohibited practices (Art.5): subliminal manipulation, social scoring, real-time remote biometric identification in publicly accessible spaces.
8. GPAI layer identification: If the AI system wraps a GPAI model (e.g., an LLM via API), document value-chain responsibilities (Art.25) and the GPAI provider's obligations (Art.53) in the technical documentation.
9. Technical documentation (Annex IV): Prepare and maintain technical documentation covering the Annex IV elements, including intended purpose, performance metrics, data governance, and residual risks.
10. EU declaration of conformity: Register the high-risk AI system in the EU database (Art.71) before deployment. Prepare the EU declaration of conformity (Art.47).
Part C — Third-Party AI Provider Management (Items 11-15)
11. DORA TPSP register: Enter the AI API provider in the register of information on ICT third-party service providers (Art.28(3)) with service type, criticality, and contractual arrangement reference.
12. DORA Art.30 contractual provisions: Verify the AI provider contract includes: service level descriptions, data location specifications, audit rights for the financial entity and competent authority, business continuity and exit plan.
13. CTPP designation monitoring: Check the European Supervisory Authorities' published CTPP list. If the AI provider is designated, integrate Lead Overseer oversight recommendations into your own risk assessment (Art.31(5)).
14. EU AI Act Art.26 deployer obligations: If using third-party high-risk AI, maintain documentation of: compliance with the provider's instructions for use (Art.26(1)), fundamental rights impact assessment where required (Art.27), and operator competence training.
15. Concentration risk assessment: Assess whether dependency on a single AI provider creates ICT concentration risk under DORA Art.29. Consider a multi-provider strategy for critical AI functions.
Part D — Dual Incident Response (Items 16-20)
16. Incident classification decision tree: Build a decision tree classifying ICT-related incidents (DORA Art.17) AND serious AI incidents (EU AI Act Art.3(49)) from a single event description.
17. DORA major incident thresholds: Configure monitoring systems to alert when DORA major incident thresholds are approached: client impact numbers, service duration, geographic spread (Art.18 + RTS thresholds).
18. 4-hour DORA initial notification: Establish internal escalation that reaches competent authority notification within 4 hours of classifying a major ICT-related incident. Designate a 24/7 incident notification owner.
19. EU AI Act serious incident track: Build a parallel track: if an AI system output caused harm or disrupted critical infrastructure, initiate notification to the market surveillance authority within the Art.73 deadlines (15 days as a rule, shorter for critical-infrastructure disruption or death).
20. Incident log for regulators: Maintain a unified incident log satisfying both DORA Art.19 reporting content requirements and EU AI Act Art.72 post-market monitoring documentation requirements.
Part E — Testing and Monitoring (Items 21-25)
21. DORA annual ICT testing: Include the AI system in the annual ICT testing programme (Art.24(6)). Cover: availability tests (API downtime simulation), integrity tests (corrupted model output detection), security tests (adversarial input resilience).
22. DORA TLPT scope inclusion: For AI systems supporting critical functions at scope-eligible financial entities, include in the TIBER-EU Threat-Led Penetration Test every 3 years (Art.26). Test AI-specific attack vectors.
23. EU AI Act post-market monitoring: Implement an Art.72 post-market monitoring system for high-risk AI: accuracy drift detection, protected attribute bias tracking, output distribution monitoring, quarterly review process.
24. Human oversight competence: Train all operators designated to oversee high-risk AI outputs (Art.14). Document training records. Refresh training on material model updates.
25. Joint audit trail: Maintain an audit trail satisfying both DORA Art.8 system documentation requirements and EU AI Act Art.12 logging requirements. Store logs on EU-sovereign infrastructure so that confidentiality protections (Art.78) are not undermined by third-country jurisdictional compellability.
Four Common Dual-Compliance Mistakes
Mistake 1: Treating DORA Compliance as Sufficient for EU AI Act
DORA due diligence on an AI vendor establishes their operational reliability — availability SLAs, BCP, security controls. It does not establish whether the AI system produces accurate, unbiased, and legally compliant outputs. A bank that passes DORA audit but lacks EU AI Act technical documentation, Art.14 human oversight, and post-market monitoring is fully DORA-compliant and fully EU AI Act non-compliant simultaneously.
Mistake 2: Single Competent Authority Assumption
DORA incident reports go to the financial supervisor (e.g. BaFin in Germany, the AMF or ACPR in France). EU AI Act serious incident reports go to the market surveillance authority — which in many member states is a different body than the financial supervisor. Prepare contact lists, notification templates, and escalation paths for both authorities before an incident occurs.
Mistake 3: DORA Art.30 Contract = EU AI Act Supplier Agreement
DORA Art.30 mandates specific contractual provisions with ICT TPSPs — audit rights, SLAs, data location. These do not cover EU AI Act deployer obligations: provider technical documentation access, conformity assessment status, instructions for use, notification of system changes. AI vendor contracts must satisfy both checklists independently.
Mistake 4: Annual DORA Test = Post-Market Monitoring
DORA annual resilience tests are point-in-time assessments of whether the system can withstand operational shocks. EU AI Act Art.72 post-market monitoring is continuous tracking of whether deployed outputs remain accurate and unbiased over time. A credit-scoring model can pass a full DORA availability test while exhibiting 18 months of accuracy drift that discriminates against a protected group. Both regimes require separate ongoing programmes.
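Continuous monitoring of the kind Art.72 expects can start from something as simple as a rolling accuracy check against a baseline. A minimal sketch; window size and margin are illustrative policy choices, not regulatory values:

```python
from collections import deque

class AccuracyDriftMonitor:
    """Flags drift when rolling accuracy falls below baseline by a margin.
    Window and margin are illustrative policy choices."""
    def __init__(self, baseline: float, window: int = 100, margin: float = 0.05):
        self.baseline = baseline
        self.margin = margin
        self.outcomes = deque(maxlen=window)  # 1 = correct decision, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def drifted(self) -> bool:
        if not self.outcomes:
            return False
        recent = sum(self.outcomes) / len(self.outcomes)
        return recent < self.baseline - self.margin

monitor = AccuracyDriftMonitor(baseline=0.90, window=50, margin=0.05)
for _ in range(50):
    monitor.record(correct=True)
print(monitor.drifted())   # False — rolling accuracy 1.0
for _ in range(50):
    monitor.record(correct=False)
print(monitor.drifted())   # True — rolling accuracy 0.0 < 0.85
```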
Hosting EU Financial AI: Jurisdictional Risk Under DORA Art.30 and EU AI Act Art.78
DORA Art.30(2)(e) requires ICT contracts to include data location provisions. For AI systems, "data" includes not only customer data but also model inference logs, audit trails, and incident records. Under the EU AI Act, market surveillance authorities may access confidential technical documentation and logs (Art.74), and what they obtain is covered by the confidentiality regime of Art.78.
When AI system logs are stored on infrastructure hosted by a US parent company, cloud provider, or group entity under US jurisdiction, they are potentially subject to compelled disclosure under the US CLOUD Act, independent of EU confidentiality protections. A Frankfurt bank's AI credit-scoring audit log stored in AWS us-east-1 can be reached by a US federal law enforcement CLOUD Act order, potentially without notification to the EU entity or its competent authority.
EU-sovereign infrastructure — hosted on servers physically within the EU by an EU-controlled entity — removes this jurisdictional exposure. DORA Art.30(2)(e) data location provisions and EU AI Act Art.78 confidentiality expectations should translate into EU-sovereign hosting as a contractual and technical requirement for AI systems processing sensitive financial and personal data.
sota.io is the EU-native PaaS: servers in Frankfurt and Amsterdam, ISO 27001 certification pathway, GDPR-compliant managed PostgreSQL. Financial entities can deploy AI inference services, audit log storage, and post-market monitoring systems with contractual EU-sovereignty guarantees that satisfy both DORA data location requirements and EU AI Act Art.78 confidentiality protections.
Implementation Roadmap
Immediate (before EU AI Act high-risk obligations, August 2026):
- Complete Annex III risk classification for all AI systems
- Add all AI systems to DORA ICT asset register
- Enter all AI API providers in DORA TPSP register
- Prepare technical documentation for high-risk AI systems
Short-term (before December 2026):
- Implement human oversight measures for all high-risk AI systems
- Build dual incident response procedure with competent authority contacts for both frameworks
- Launch post-market monitoring for high-risk AI outputs
- Complete DORA Art.30 contract review for all AI API providers
Ongoing:
- Annual DORA ICT testing including AI-specific adversarial tests
- Quarterly EU AI Act post-market monitoring reviews
- CTPP designation list monitoring
- Substantial modification assessment triggering a new EU AI Act conformity assessment under Art.43(4)
Summary
Financial entities deploying AI systems face the EU AI Act and DORA as two parallel, overlapping compliance frameworks that neither replaces nor exempts from the other. The most critical integration points are:
- Risk management: Unified register, separate taxonomies (ICT operational vs AI-specific risks)
- Third-party oversight: DORA TPSP register + EU AI Act deployer obligations as complementary, not alternative
- Incident response: Dual classification tree with DORA's 4-hour initial notification as the primary timeline driver
- Testing: Combined programme covering DORA resilience testing and EU AI Act post-market monitoring
- Infrastructure: EU-sovereign hosting satisfying both DORA data location provisions and EU AI Act Art.78 audit log confidentiality
The Python DORAAIActComplianceChecker and 25-item checklist above provide a structured starting point for dual compliance gap analysis. Financial entities with AI in scope of both regulations should begin this assessment now — EU AI Act high-risk obligations apply from August 2, 2026.
See Also
- EU AI Act Agentic AI Systems: Provider, Deployer, and High-Risk Classification — Financial sector agentic deployments (credit-scoring agents, automated trading systems) are simultaneously ICT systems under DORA and high-risk AI under Annex III point 5(b); checkpoint architecture and kill-switch requirements apply under both frameworks
- EU AI Act Art.53(1)(d): Cybersecurity and Physical Infrastructure for GPAI Systemic Risk Models — DORA Art.30(2)(e) data location provisions and EU AI Act Art.53(1)(d) physical infrastructure protection both point to EU-sovereign hosting as the single infrastructure decision that satisfies both obligations
- EU AI Act Art.9: Risk Management System for High-Risk AI Systems — Art.9 risk management and DORA Art.8 ICT risk management run in parallel for financial AI; this guide covers the Art.9 requirements that complement DORA's ICT risk framework