EU AI Liability Directive (AILD) + Product Liability Directive 2024: Developer Guide
Until 2024, an EU developer whose AI system caused harm faced a fragmented patchwork of national tort laws. Germany used the Produkthaftungsgesetz. France relied on Code civil Article 1245. The Netherlands applied Burgerlijk Wetboek Book 6. Each produced different outcomes for identical harms.
That fragmentation is ending. Two EU directives — the AI Liability Directive (AILD, COM/2022/0496) and the updated Product Liability Directive (PLD, Directive 2024/2853) — create a harmonised EU-wide liability framework for AI systems. Combined with the EU AI Act already in force, developers now face three overlapping obligations with compounding legal exposure.
This guide explains what each directive requires, how they interact with the AI Act, what this means for your technical architecture, and why your infrastructure jurisdiction is now a legal decision — not just a procurement one.
The Two New Liability Regimes
EU AI Liability Directive (AILD) — Fault-Based Liability
The AI Liability Directive (COM/2022/0496, proposed 28 September 2022) is the EU's answer to a specific problem: AI systems are complex black boxes that make it nearly impossible for injured parties to prove causation under traditional tort law. AILD does not eliminate fault requirements — but it dramatically changes the evidentiary burden around causation.
AILD targets two actors from the EU AI Act taxonomy:
- Providers (AI Act Art. 3(3)): those who develop or place an AI system on the market
- Deployers (AI Act Art. 3(4)): those who use an AI system in a professional context
Article 3 — Disclosure of Evidence:
The most practically significant mechanism. A claimant who wants to sue an AI provider can request a national court to order disclosure of evidence about the high-risk AI system — even before establishing liability. The provider must hand over:
- Training data documentation
- Testing and validation records
- Risk management system documentation (AI Act Art. 9)
- Post-market monitoring logs (AI Act Art. 72)
- Any incident records the provider holds about the system
Courts can order disclosure if the claimant demonstrates they are unable to access the evidence independently and that the evidence is plausibly relevant to their claim. For providers of high-risk AI systems (AI Act Annex III), courts treat this information asymmetry as the default assumption: the claimant is presumed unable to obtain the evidence on their own.
Critically: failure to comply with a disclosure order — or failure to preserve the evidence in the first place — creates an adverse inference. Courts may presume the missing evidence was unfavourable to the provider.
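Because a disclosure gap becomes an adverse inference, the Art. 3 categories can be audited before any litigation exists. A minimal sketch, assuming an internal evidence manifest keyed by category (the category labels and manifest structure are illustrative, not AILD terminology):

```python
# Sketch: pre-litigation audit of AILD Art. 3 disclosure readiness.
# Category names and the manifest structure are illustrative assumptions.
REQUIRED_CATEGORIES = [
    "training_data_documentation",
    "testing_validation_records",
    "risk_management_documentation",   # AI Act Art. 9
    "post_market_monitoring_logs",     # AI Act Art. 72
    "incident_records",
]

def disclosure_gaps(manifest: dict) -> list[str]:
    """Return categories with no preserved evidence.

    Each gap is an adverse-inference risk: under AILD Art. 3, missing
    evidence may be presumed unfavourable to the provider.
    """
    return [c for c in REQUIRED_CATEGORIES if not manifest.get(c)]

manifest = {
    "training_data_documentation": ["datasheet_v3.pdf"],
    "risk_management_documentation": ["risk_register_2026.yaml"],
    "post_market_monitoring_logs": ["2026-04.jsonl"],
}
gaps = disclosure_gaps(manifest)
# gaps: testing_validation_records and incident_records are unpreserved
```

Run against every production deployment, not just the flagship system: the disclosure order attaches to the system that caused the harm, whichever one that turns out to be.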
Article 4 — Rebuttable Presumption of Causality:
Where a claimant establishes:
- The defendant violated a duty of care imposed by an AI-specific legal obligation (including EU AI Act requirements), and
- It is reasonably likely — given the circumstances — that the violation caused the harm, and
- Establishing causation would be excessively difficult for the claimant (e.g., due to the technical complexity of the AI system)
...then the court presumes causation. The burden shifts: the provider must prove their AI system did not cause the harm.
What constitutes a "duty of care" under an AI-specific obligation? The AI Act provides the primary source:
| AI Act Violation | AILD Art. 4 Trigger |
|---|---|
| Art. 9: Missing risk management system | Presumed causation if harm occurred in risk area |
| Art. 10: Data governance failures | Presumed causation for data-quality-related harm |
| Art. 11: Missing technical documentation | Adverse inference (Art. 3) + causation presumption |
| Art. 13: Insufficient transparency | Presumed causation for user-decision harm |
| Art. 14: Missing human oversight | Presumed causation for autonomous-decision harm |
| Art. 72: No post-market monitoring | Presumed causation for undetected defects |
The combination is powerful. If your high-risk AI system causes harm and you have not implemented the AI Act's Art. 9 risk management requirements, a national court will presume your system caused the harm — and you carry the burden to prove it did not.
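The table above is mechanical enough to encode. A sketch of a compliance-gap-to-presumption mapping, assuming your compliance tracker records established violations as labels (the labels and function are illustrative, not legal terms):

```python
# Sketch: which AILD Art. 4 causation presumption each AI Act compliance
# gap triggers. Mirrors the table above; keys are illustrative labels.
PRESUMPTION_TRIGGERS = {
    "art_9_risk_management": "harm in an unmanaged risk area",
    "art_10_data_governance": "data-quality-related harm",
    "art_11_documentation": "adverse inference + causation presumption",
    "art_13_transparency": "user-decision harm",
    "art_14_human_oversight": "autonomous-decision harm",
    "art_72_monitoring": "undetected defects",
}

def triggered_presumptions(violations: set[str]) -> dict[str, str]:
    """Map each established AI Act violation to the presumption it triggers."""
    return {v: PRESUMPTION_TRIGGERS[v] for v in violations
            if v in PRESUMPTION_TRIGGERS}

exposure = triggered_presumptions({"art_9_risk_management", "art_72_monitoring"})
# Two active presumptions: the burden of disproving causation is now yours.
```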
AILD Scope and Timeline:
AILD applies to non-contractual civil liability claims for AI-caused harm. It does not cover contractual claims (those stay in contract law). It covers both high-risk AI (Annex III) and non-high-risk AI, though the causation presumption in Art. 4 has a higher threshold for non-high-risk systems.
AILD does not cap damages. Combined with the AI Act's administrative fines (up to €35M or 7% of global turnover for providers), a single incident can create unbounded legal exposure from two separate directions simultaneously.
Updated Product Liability Directive (PLD 2024/2853) — Strict Liability
The revised Product Liability Directive (Directive 2024/2853, adopted November 2024) replaces the original 1985 directive and explicitly addresses the digital economy. The most significant change for AI developers: software is now a product.
The original 1985 directive was ambiguous on whether software constituted a "product" under EU law. Courts across Member States reached contradictory conclusions. PLD 2024 ends the ambiguity:
"Products within the scope of this Directive include software, including when incorporated in other products. 'Software' includes operating systems, firmware, computer programs, applications and AI systems."
This is a categorical change. Under PLD 2024:
- Standalone AI systems (deployed as software) = products subject to strict liability
- AI systems embedded in hardware (sensors, medical devices, vehicles) = products subject to strict liability
- AI system updates released after market placement = new potential defect liability windows
Strict Liability — No Fault Required:
PLD creates liability for defective products without requiring proof of fault. A claimant needs to prove only:
- The product was defective
- The defect caused the damage
- The damage falls within PLD's scope (personal injury, including medically recognised psychological harm; property damage; destruction or corruption of data not used exclusively for professional purposes)
A product is defective when it does not provide the safety that persons are generally entitled to expect, taking into account:
- The presentation of the product (what you promised it would do)
- The reasonably expected uses (including misuse)
- The time it was placed on the market
For AI systems, this translates to: if your AI system behaves in a way that falls below the safety expectations created by your documentation, marketing, or reasonable inference — it is defective under PLD, regardless of your internal intent.
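That test can be made concrete: compare the performance your documentation claims against the performance you actually observe. A minimal sketch under the assumption that both are recorded as metric dictionaries (the field names and helper are illustrative):

```python
# Sketch: a PLD-style defect flag. Does observed behaviour fall below the
# safety expectation your own documentation created? Names are assumptions.
def below_documented_safety(claimed: dict, observed: dict,
                            tolerance: float = 0.0) -> list[str]:
    """Return metrics where observed performance falls short of the claim."""
    return [
        metric
        for metric, promised in claimed.items()
        if observed.get(metric, 0.0) < promised - tolerance
    ]

claimed = {"accuracy": 0.95, "recall_critical_class": 0.99}
observed = {"accuracy": 0.96, "recall_critical_class": 0.91}
shortfalls = below_documented_safety(claimed, observed)
# Only recall_critical_class falls short of its documented claim.
```

Note the asymmetry: overclaiming in documentation manufactures defects; accurate, conservative claims shrink the gap a claimant can point to.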
Technical Complexity Burden Shift:
PLD 2024 includes a specific rule for technical complexity cases. If a claimant can show it is excessively difficult to establish the defect or causal link — particularly due to technical or scientific complexity — courts may apply a rebuttable presumption of defect or causation. This parallels the AILD Art. 4 mechanism and applies to AI systems specifically because they are identified as a paradigm example of technical complexity.
PLD Scope and Timeline:
- Adopted: November 2024
- Transposition deadline: 9 December 2026 (Member States must implement by this date)
- Applies to: products placed on the EU market after the transposition deadline
- Limitation period: 3 years from awareness of the damage, the defect, and the identity of the liable party; 10-year long-stop from market placement (extended to 25 years for personal injuries whose symptoms are slow to emerge)
- Damage caps: none. Personal injury damages are unlimited, and PLD 2024 removes the 1985 directive's €500 lower threshold for property damage.
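The limitation arithmetic is simple enough to sketch. A simplified illustration of the standard 3-year/10-year windows (national transposition may adjust details, and the 25-year latent-injury extension is omitted here):

```python
from datetime import date

# Sketch of the PLD 2024 limitation windows: 3 years from the claimant's
# awareness of damage, defect, and liable party; 10-year long-stop from
# market placement. Simplified illustration, not legal advice.
def claim_window(awareness: date, market_placement: date) -> tuple[date, date]:
    """Return (limitation_end, long_stop_end); a claim must fit inside both."""
    limitation_end = awareness.replace(year=awareness.year + 3)
    long_stop_end = market_placement.replace(year=market_placement.year + 10)
    return limitation_end, long_stop_end

lim, stop = claim_window(awareness=date(2028, 3, 1),
                         market_placement=date(2027, 1, 15))
# lim = 2031-03-01, stop = 2037-01-15
```

The practical consequence for retention policy: evidence must survive at least the 10-year long-stop from market placement, which is why the checklist later in this guide uses a 10-year minimum.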
The AI Act Intersection: Triple Liability Exposure
Developers of AI systems subject to the EU AI Act Annex III (high-risk AI) now face obligations under three simultaneous regimes:
| Regime | Type | Enforced by | Maximum Exposure |
|---|---|---|---|
| EU AI Act | Administrative fines | National market surveillance | €35M / 7% global turnover |
| AILD | Civil liability (fault-based) | Injured parties via courts | Unlimited damages |
| PLD 2024 | Civil liability (strict) | Injured parties via courts | Unlimited (personal injury) |
The regimes are not independent. They share:
Common Evidence Base: AI Act Art. 11 technical documentation is the central exhibit in both AILD Art. 3 disclosure requests and PLD defect analysis. If your documentation is missing, inadequate, or inaccurate — this single failure simultaneously triggers:
- AI Act administrative fine for non-compliance (Art. 99)
- AILD Art. 3 adverse inference (missing docs presumed unfavourable)
- PLD defect evidence (your product does not match your documentation)
Causation Chain: AI Act Art. 9 risk management system failures create AILD Art. 4 causation presumptions. PLD's reasonable safety expectations are calibrated against your AI Act conformity. A high-risk AI system that passes an internal AI Act conformity assessment but later causes harm has a stronger PLD defense than one with no conformity process at all.
Post-Market Monitoring: AI Act Art. 72 requires providers of high-risk AI to conduct post-market monitoring. This same monitoring data is:
- AILD Art. 3 disclosable evidence
- PLD "state of the art" (development risk) defense material: evidence that the defect could not have been discovered given the objective state of scientific and technical knowledge at the time
- AILD Art. 4 counter-evidence (monitoring shows you took reasonable precautions)
Practical Developer Obligations
1. Technical Documentation as Legal Artefact
AI Act Art. 11 requires technical documentation before market placement. Under the combined AILD/PLD regime, this documentation serves dual purposes: regulatory compliance and your primary legal defence.
Documentation must cover:
- System description: Purpose, architecture, training methodology
- Risk assessment: Identified risks, risk mitigation measures, residual risks (Art. 9)
- Performance metrics: Accuracy, reliability under various conditions
- Limitations: Known failure modes, operating conditions, out-of-scope uses
- Training data: Sources, preprocessing steps, quality assurance measures (Art. 10)
- Validation methodology: Test set composition, performance benchmarks, edge cases
- Human oversight mechanisms: How the system can be overridden, monitored, or stopped (Art. 14)
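A documentation package can be validated against this section list before market placement. A sketch, assuming the package is tracked as a dictionary of section contents (the section keys are illustrative shorthand for the Art. 11 requirements above):

```python
# Sketch: validate an Art. 11 technical documentation package against the
# sections listed above. Section keys are illustrative assumptions.
REQUIRED_SECTIONS = {
    "system_description",
    "risk_assessment",          # Art. 9
    "performance_metrics",
    "limitations",
    "training_data",            # Art. 10
    "validation_methodology",
    "human_oversight",          # Art. 14
}

def missing_sections(doc_package: dict) -> set[str]:
    """Sections absent or empty. Each is simultaneously a regulatory gap,
    an AILD Art. 3 adverse-inference risk, and PLD defect evidence."""
    return {s for s in REQUIRED_SECTIONS if not doc_package.get(s)}

package = {
    "system_description": "Chest X-ray triage model, v2.1.x",
    "risk_assessment": "risk_register_2026.yaml",
    "limitations": "",  # present but empty counts as missing
}
gaps = missing_sections(package)
```

Wire this into CI so a release cannot ship while `gaps` is non-empty; versioning the package alongside the model makes each release's documentation state reconstructable in litigation.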
Store this documentation in a jurisdiction where you control disclosure. Under AILD Art. 3, EU courts can order disclosure of evidence. This is court-controlled. But if your documentation sits on US cloud infrastructure, US courts and the US government have parallel access routes via the CLOUD Act — and they do not require an EU court order.
2. Risk Management System as Liability Shield
AI Act Art. 9 requires a continuous risk management system throughout the AI system's lifecycle. Under AILD Art. 4, a properly implemented risk management system is your primary defense against the causation presumption:
If you can show:
- You identified the risk category that caused the harm in your risk assessment
- You implemented mitigation measures appropriate to the risk
- You monitored those mitigations post-deployment
- The harm was caused by misuse outside your documented operating conditions
...then the AILD Art. 4 causation presumption either does not apply, or you, as defendant, can rebut it.
# Risk management documentation structure — legally relevant fields
risk_entry = {
"risk_id": "R-042",
"category": "false_positive_rate_high_load",
"description": "System accuracy degrades above 10,000 concurrent requests",
"identified_date": "2025-09-15",
"severity": "high",
"likelihood": "medium",
"mitigation": "Rate limiting enforced at 8,000 req/s; load balancer circuit breaker",
"mitigation_implemented_date": "2025-10-01",
"residual_risk": "low",
"monitoring_metric": "accuracy_under_load_p99",
"monitoring_threshold": 0.85,
"last_reviewed": "2026-04-01",
"review_outcome": "within_parameters",
}
Every identified risk that you mitigated is a risk category where the causation presumption has been rebutted in advance. Every unidentified risk in a reasonably foreseeable category is a liability gap.
3. Post-Market Monitoring as Continuous Evidence
AI Act Art. 72 requires a post-market monitoring system. For AILD and PLD purposes, this monitoring creates an ongoing evidentiary record of your system's behaviour after deployment.
Critically: gaps in monitoring logs are evidence of negligence.
Under AILD Art. 3, if a claimant requests disclosure of your post-market monitoring records and you have none — or only intermittent records — courts will draw adverse inferences. Under PLD, the absence of monitoring undermines your "state of the art" defense (that you could not have known of the defect at the time).
# Post-market monitoring log structure — AI Act Art. 72 + AILD-compliant
monitoring_event = {
"timestamp": "2026-04-09T14:32:15Z",
"system_version": "2.1.4",
"deployment_id": "prod-eu-west-01",
"metric": "false_positive_rate",
"value": 0.031,
"threshold": 0.05,
"status": "within_parameters",
"input_category": "medical_imaging_chest_xray",
"anomaly_detected": False,
"human_review_required": False,
"jurisdiction": "EU",
"log_hash": "sha256:a3f8...", # Tamper-evident logging for legal admissibility
}
Store monitoring logs with tamper-evident hashing. In litigation, the authenticity of your monitoring records will be challenged. A hash chain from log entry to log entry demonstrates the records were not retroactively modified.
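The hash chain described above is a few lines of standard-library Python. A minimal sketch (the entry fields follow the `monitoring_event` example; the `genesis` seed and helper names are illustrative choices):

```python
import hashlib
import json

# Sketch: a tamper-evident hash chain over monitoring log entries. Each
# entry's hash covers its content plus the previous entry's hash, so a
# retroactive edit invalidates every subsequent link.
def chain_entries(entries: list[dict]) -> list[dict]:
    """Attach a chained sha256 log_hash to each entry, in order."""
    prev_hash = "genesis"
    chained = []
    for entry in entries:
        payload = json.dumps(entry, sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({**entry, "log_hash": f"sha256:{prev_hash}"})
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Recompute the chain; any modified entry breaks all later hashes."""
    prev_hash = "genesis"
    for entry in chained:
        content = {k: v for k, v in entry.items() if k != "log_hash"}
        payload = json.dumps(content, sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        if entry["log_hash"] != f"sha256:{prev_hash}":
            return False
    return True

logs = chain_entries([
    {"timestamp": "2026-04-09T14:32:15Z", "metric": "false_positive_rate", "value": 0.031},
    {"timestamp": "2026-04-09T14:33:15Z", "metric": "false_positive_rate", "value": 0.029},
])
```

For stronger guarantees, periodically anchor the latest hash somewhere outside your own control (a timestamping service, or a record held by external counsel), so that even wholesale regeneration of the chain can be detected.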
4. Incident Response as Liability Management
Under PLD 2024, a product recall or corrective action after discovering a defect does not eliminate liability for harm already done, but it contains future exposure: an effective, documented corrective action cuts off the accumulation of new PLD claims for incidents occurring after it takes effect.
Under AILD Art. 4, a prompt, documented response to a detected failure can rebut the causation presumption for subsequent incidents by showing you identified and addressed the risk.
Post-incident documentation checklist:
- Timestamp of first awareness (starts limitation period under PLD)
- Root cause analysis (rebuts causation presumption for future incidents)
- Scope of affected users or deployments
- Mitigation action taken (patch, rate limit, operational restriction)
- Communication to affected deployers (AI Act Art. 72(2) requires notification)
- Updated risk management entry with lessons learned
- Verification of mitigation effectiveness (post-deployment monitoring data)
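The checklist above maps naturally onto a structured incident record, in the same style as the earlier `risk_entry` and `monitoring_event` examples. A sketch with illustrative field names and values:

```python
from datetime import datetime, timezone

# Sketch: incident record mirroring the checklist above. Field names are
# illustrative. first_awareness matters most: it starts the PLD
# limitation clock, so it must be a precise, preserved timestamp.
incident_record = {
    "incident_id": "INC-2026-007",
    "first_awareness": datetime(2026, 4, 9, 14, 40,
                                tzinfo=timezone.utc).isoformat(),
    "root_cause": "Accuracy regression in v2.1.4 under high concurrency",
    "affected_scope": ["prod-eu-west-01"],
    "mitigation": "Rollback to v2.1.3; rate limit lowered to 6,000 req/s",
    "deployers_notified": True,        # per the checklist's Art. 72(2) item
    "risk_register_update": "R-042 severity and likelihood re-reviewed",
    "mitigation_verified": False,      # pending post-deployment monitoring
}
```

Keeping `mitigation_verified` as an explicit open flag until monitoring data confirms the fix is deliberate: a record that claims effectiveness without verification data is itself attackable in discovery.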
Infrastructure Jurisdiction: The CLOUD Act Problem
AILD Art. 3 gives EU courts the power to order evidence disclosure. This is deliberate: EU judicial process controls what evidence reaches the claimant. But this protection is jurisdictional.
If your AI system's technical documentation and monitoring logs are stored on US cloud infrastructure (AWS, Azure, GCP, or any US-incorporated provider's global infrastructure), US authorities have a parallel access route via the CLOUD Act (18 U.S.C. § 2713):
"A provider of electronic communication service or remote computing service shall comply with the obligations of this chapter to preserve, backup, or disclose the contents of a wire or electronic communication and any record or other information pertaining to a customer or subscriber within such provider's possession, custody, or control, regardless of whether such communication, record, or other information is located within or outside of the United States."
In an AI liability dispute involving a US company, US plaintiff, or US regulatory interest, US courts can compel your AWS-hosted monitoring logs without an EU court order — bypassing the AILD Art. 3 jurisdictional protection entirely. Your GDPR-compliant data storage in Frankfurt still sits behind CLOUD Act exposure if the infrastructure is a US-incorporated provider.
The liability implication: US discovery can surface internal documentation (risk assessments, validation failures, known defects, internal communications) that you might successfully resist producing under EU judicial process. A document an EU court would weigh carefully before ordering disclosed in an AILD proceeding can become independently accessible through US discovery.
EU-native infrastructure removes this parallel access route:
| Infrastructure | AILD Art. 3 Disclosure | CLOUD Act Access |
|---|---|---|
| AWS / Azure / GCP (US-incorporated) | EU court-ordered | US court-compelled (parallel) |
| EU-native PaaS (EU-incorporated) | EU court-ordered | No CLOUD Act jurisdiction |
This is not a theoretical edge case. The European Data Protection Board has repeatedly noted that CLOUD Act exposure affects all data stored by US-incorporated cloud providers, regardless of physical server location. Legal teams at EU enterprises deploying high-risk AI systems under the AI Act are increasingly requiring EU-native infrastructure specifically to contain the evidence perimeter under AILD.
Liability Architecture: The Developer Checklist
For developers building EU-deployed AI systems under the combined AILD/PLD/AI Act regime:
Pre-Market (before deployment):
- Technical documentation (AI Act Art. 11): complete, accurate, versioned — this is your primary AILD Art. 3 disclosure package
- Risk management system (Art. 9): documented risk register, mitigations, residual risks — this is your AILD Art. 4 causation rebuttal
- Conformity assessment (Art. 43 for Annex III): passes before market placement — PLD defect standard calibrated against this
- Transparency documentation (Art. 13): accurate capability and limitation statements — PLD "legitimate expectations" baseline
- Human oversight mechanisms (Art. 14): technically implemented and documented
- Data governance (Art. 10): training data documented, quality measures recorded — AILD Art. 3 disclosable
Post-Market (ongoing):
- Post-market monitoring (Art. 72): continuous, logged, tamper-evident — continuous AILD evidence preservation
- Serious incident reporting (Art. 73): logged immediately with timestamp — starts PLD limitation period clock
- Corrective action records: documented with root cause analysis and effectiveness verification
Infrastructure:
- Technical documentation stored in EU jurisdiction (CLOUD Act perimeter)
- Monitoring logs with tamper-evident hashing (legal admissibility)
- Evidence preservation policy: minimum 10-year retention (PLD long-stop limitation period)
Enforcement Exposure Summary:
| Directive | Trigger | Maximum |
|---|---|---|
| AI Act | Non-conformity of high-risk AI | €35M or 7% global turnover |
| AILD | Fault + causation (presumed) + damage | Unlimited civil damages |
| PLD 2024 | Defect + causation + damage | Unlimited (personal injury) |
| GDPR | Data processing in AI system | €20M or 4% global turnover |
The maximum combined exposure from a single high-risk AI incident causing personal injury is: AI Act fine + unlimited AILD damages + unlimited PLD damages + GDPR fine. There is no cap. There is no single regulation that creates a ceiling across all four regimes simultaneously.
Key Takeaways
AILD changes the evidentiary landscape, not fault requirements. You still need to be at fault — but proving causation becomes the court's job once you have violated an AI Act obligation. Document your compliance to prevent the presumption from triggering.
PLD 2024 makes software defects strict liability. No fault required. The question is whether your AI system delivered the safety people were entitled to expect. Your conformity documentation sets that expectation — for better or worse.
AI Act compliance is your primary liability defense. Art. 9, Art. 10, Art. 11, Art. 13, Art. 14, and Art. 72 compliance is not just regulatory overhead. It is the evidence base that rebuts AILD causation presumptions, satisfies PLD defect analysis, and limits your exposure under both civil regimes.
Your infrastructure jurisdiction is a legal decision. AI systems operating under AILD/PLD/AI Act are the subject of EU court proceedings. Evidence stored on US-incorporated cloud infrastructure sits behind a CLOUD Act exposure that EU judicial process cannot control. EU-native infrastructure contains your evidence perimeter within the jurisdiction where your legal proceedings take place.
The intersection point is technical documentation. Art. 11 documentation is simultaneously: your AI Act conformity evidence, your AILD Art. 3 disclosure package, your PLD defect defense, and your CLOUD Act exposure control parameter. Its accuracy, completeness, and jurisdictional storage are the single most legally consequential technical decision in high-risk AI development.
See Also
- EU AI Act 2026 Conformity Assessment: High-Risk AI Developer Guide — Art. 43 Path A/B, Annex III categories, Provider vs. Deployer obligations
- EU NIS2 + AI Act: The Double Compliance Burden for Critical Infrastructure Developers — NIS2 Art.21 ICT-risk × AI Act Annex III: Unified Risk Register, Dual Incident Reporting
- GDPR Article 25 Privacy by Design: Developer Implementation Guide — 6-Layer implementation guide for API, DB, Auth, and infrastructure
- EU AI Act Article 9 Risk Management: Technical Implementation — Risk register structure, monitoring integration, acceptable risk thresholds
- CLOUD Act vs EU Data Protection: What Developers Need to Know — Why server location is not enough and what EU-native hosting actually protects