2026-04-14·14 min read·sota.io team

EU AI Liability Directive Proposal COM(2022)496: Fault-Based AI Liability Developer Guide (2026)

The EU AI Liability Directive (ALD) proposal — Commission Document COM(2022)496 — is the fault-based companion to the new Product Liability Directive. Where the new PLD establishes strict liability for defective AI products regardless of fault, the ALD targets a different but equally important category: claims under national tort law where a claimant alleges that an AI system caused harm through negligence, breach of duty, or wrongful conduct.

The ALD is still a proposal as of 2026, proceeding through the European legislative process. Its progress has been slower than the EU AI Act and the new PLD, but it represents the third pillar of the EU's comprehensive AI liability framework — and its core mechanism, the rebuttable presumption of causal link, directly connects EU AI Act compliance to the outcome of civil litigation.

For developers, the ALD matters for a concrete reason: if your AI system violates a duty imposed by the EU AI Act — say, the risk management obligation in Art.9, the data governance requirements in Art.10, or the transparency obligations in Art.13 — and a person suffers harm of the type those obligations were designed to prevent, the ALD creates a presumption that your violation caused the harm. You must then disprove causation or demonstrate that the violation was not material. The EU AI Act compliance record you maintain for regulatory purposes becomes your primary defense in tort litigation.

This guide explains the ALD's architecture, how the two-track EU liability framework works in practice, which AI Act obligations trigger the presumption, and what development teams need to build in their compliance infrastructure.

The Two-Track EU AI Liability Framework

Understanding the ALD requires understanding how it fits with the new PLD. The two instruments address different liability categories and different legal theories.

Track 1 — Strict liability under the new PLD (Directive (EU) 2024/2853): Applies when an AI system qualifies as a defective product. No need to prove fault. The claimant establishes defectiveness, damage, and causation. The new PLD covers death and personal injury (including medically recognised psychological harm), damage to property, and destruction or corruption of data. The AI provider is liable as manufacturer regardless of care taken, and EU AI Act non-compliance triggers a presumption of defectiveness.

Track 2 — Fault-based liability under the ALD (COM(2022)496): Applies to tort claims under national law (negligence, breach of statutory duty, wrongful act) that fall outside the new PLD's scope — for example, purely economic loss, discrimination claims, claims by businesses rather than consumers, or situations where strict liability under the PLD does not apply. The claimant must prove fault, damage, and causation — but the ALD eases causation through the rebuttable presumption mechanism.

The two tracks are not mutually exclusive. A single incident involving a defective AI system might generate both a PLD strict liability claim and an ALD tort claim. Development teams need to understand both.
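As a rough triage sketch, the two-track split can be expressed in code. The track assignments below are simplifications drawn from the descriptions above (assumptions for illustration, not legal advice):

```python
from enum import Enum

class Harm(Enum):
    PHYSICAL_INJURY = "physical_injury"
    DATA_DESTRUCTION = "data_destruction"
    PURE_ECONOMIC_LOSS = "pure_economic_loss"
    DISCRIMINATION = "discrimination"

# Simplifying assumption for this sketch: the PLD track covers consumer
# claims for injury and data destruction; a fault-based national tort
# claim (the ALD track) remains available in parallel for any harm type.
def candidate_tracks(harm: Harm, claimant_is_consumer: bool) -> list[str]:
    tracks = []
    if claimant_is_consumer and harm in (Harm.PHYSICAL_INJURY, Harm.DATA_DESTRUCTION):
        tracks.append("PLD strict liability (Directive (EU) 2024/2853)")
    tracks.append("ALD fault-based claim (COM(2022)496)")
    return tracks
```

On this sketch, a business claimant alleging pure economic loss has only the fault-based track, while an injured consumer can pursue both.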

What the ALD Covers: Scope and Applicable Claims

The ALD applies to non-contractual fault-based civil liability claims for AI-caused harm brought under national law. The key elements:

"Non-contractual": The ALD does not apply to breach of contract claims. It covers tort claims — claims by parties who do not have a direct contract with the AI provider, or claims that are framed in tort rather than contract even where a contract exists.

"Fault-based": The claimant must allege a wrongful act, negligence, or breach of a legal duty. This is the key distinction from the new PLD's strict liability. Under the ALD, the defendant's fault is in issue — but the ALD makes it substantially easier to establish the causal link between fault and harm.

"National law": The ALD does not create a uniform EU tort cause of action. It harmonizes the evidentiary tools (disclosure obligations and presumption) available within existing national tort frameworks. German negligence law, French tort law, and Dutch onrechtmatige daad claims are all within scope — the ALD adds the presumption and disclosure tools to each.

"AI-caused harm": The harm must have been caused by an AI system as defined in the EU AI Act — a machine-based system that operates with some degree of autonomy and infers from its inputs how to generate outputs (predictions, recommendations, content, decisions). Both high-risk and non-high-risk AI systems are within scope, though the presumption mechanism operates differently depending on risk level.

The Rebuttable Presumption of Causal Link

The ALD's key innovation is the rebuttable presumption of causal link. It works in two stages.

Stage 1: Establishing the Conditions for the Presumption

For the presumption to apply, the claimant must establish three conditions:

Condition 1 — Relevant AI Act breach. The defendant (AI provider or deployer) violated a duty imposed by the EU AI Act, or by a national implementing measure. The breach does not need to be a formal regulatory finding or administrative decision — the claimant can establish the breach directly in civil proceedings through evidence including technical documentation, audit records, and expert testimony.

Condition 2 — Harm. The claimant suffered damage. The type of harm that qualifies depends on the national tort law. The ALD does not limit compensable harm — physical, economic, psychological, and discriminatory harm can all qualify depending on the applicable national law.

Condition 3 — Plausible connection to the breach. The claimant must demonstrate that it is reasonably likely, on the circumstances of the case, that the AI Act breach caused or contributed to the harm. This is a lower threshold than the full causation proof that would otherwise be required, but it is not automatic: the claimant must show why the particular AI Act violation could plausibly have led to the specific harm.

When all three conditions are met, causation is presumed. The burden shifts to the defendant.
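The three conditions reduce to a conjunction, which can be made explicit in a small predicate. This is a sketch with assumed field names; in real proceedings, "plausibility" is a judicial assessment, not a boolean:

```python
from dataclasses import dataclass

@dataclass
class PresumptionFacts:
    ai_act_breach_established: bool     # Condition 1: relevant AI Act duty violated
    harm_suffered: bool                 # Condition 2: damage under national tort law
    breach_plausibly_caused_harm: bool  # Condition 3: plausible causal connection

def causation_presumed(facts: PresumptionFacts) -> bool:
    """All three conditions must hold; only then does the burden
    shift to the defendant to rebut the presumed causal link."""
    return (facts.ai_act_breach_established
            and facts.harm_suffered
            and facts.breach_plausibly_caused_harm)
```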

Stage 2: Rebutting the Presumption

The defendant can rebut the presumption by demonstrating that the AI Act violation did not in fact cause the harm. This requires showing either that the harm had an independent cause unconnected to the violation, or that the violated obligation could not have influenced the system output (or failure to produce an output) that gave rise to the harm.

The rebuttal is not easy in practice. The presumption shifts the burden to the defendant, who typically controls the technical evidence. A defendant who cannot explain the causal chain — because documentation is incomplete, because internal testing was inadequate, because the AI system's behavior was not monitored — will struggle to rebut a well-founded presumption.

Which AI Act Obligations Trigger the Presumption

The presumption applies when a duty imposed by the EU AI Act is breached. The specific obligations that most commonly generate exposure are those most directly connected to safety and harm prevention.

High-Risk AI System Obligations (Art.9–17)

Art.9 — Risk Management System. The obligation to maintain a continuous risk management process covering all known and foreseeable risks. Failure to identify a foreseeable risk class that subsequently materialized as harm is a paradigmatic trigger: the claimant argues that proper risk management would have identified the risk, mitigation measures would have been implemented, and the harm would have been avoided.

Art.10 — Data Governance. Requirements for training, validation, and testing data quality, bias examination, and data management practices. Where an AI system causes discriminatory harm and data governance failures contributed — biased training data, inadequate bias examination — Art.10 violations are a strong presumption trigger. The obligation is explicitly designed to prevent discrimination: the scope-of-protection element is met when the harm is discriminatory.

Art.11 — Technical Documentation. The obligation to maintain complete Annex IV technical documentation. While Art.11 itself is procedural, failure to maintain documentation complicates the defendant's ability to rebut presumptions triggered by other violations — and can independently support disclosure orders under the ALD's information disclosure mechanism.

Art.12 — Logging. Automatic logging of events sufficient to trace AI system outputs over time. Where an AI system causes harm and the logging obligation was not met, the defendant may be unable to reconstruct what the AI system did, making rebuttal of any presumption practically impossible.

Art.13 — Transparency and Provision of Information. The obligation to provide deployers with sufficient information about the AI system's capabilities, limitations, and intended use. Where a deployer relied on inadequate information and harm resulted from a limitation that was not disclosed, Art.13 violations trigger the presumption.

Art.14 — Human Oversight. The obligation to design high-risk AI systems to enable effective human oversight, including override mechanisms. Where an AI system made a harmful automated decision and the human oversight mechanism was inadequate or absent, Art.14 is a trigger. This is particularly relevant for high-stakes automated decision-making in credit, employment, and healthcare contexts.

Art.15 — Accuracy, Robustness, and Cybersecurity. The obligation to design systems to be accurate, resilient, and cyber-secure throughout their lifecycle. Where harm resulted from AI system instability, adversarial manipulation, or accuracy failures in a deployment context where accuracy was safety-critical, Art.15 violations are central.

GPAI Model Obligations (Art.53–55)

For general-purpose AI models, the ALD applies to obligations including the transparency and documentation duties of Art.53 (technical documentation and information for downstream providers) and, for models with systemic risk, the additional evaluation and risk mitigation obligations of Art.55.

Where a downstream deployer causes harm using a GPAI model, and the model provider failed to provide adequate transparency documentation under Art.53, the GPAI provider may face ALD exposure on a theory that inadequate disclosure was a fault that contributed to the deployer's harmful use.

Deployer Obligations (Art.26)

The ALD applies to deployers as well as providers. Deployer obligations under Art.26 — using the AI system according to provider instructions, maintaining oversight, implementing human review procedures — can trigger the presumption when the deployer used the system contrary to its instructions for use, fed it input data outside its intended purpose, or failed to monitor its operation and suspend use when risks emerged.

The Information Disclosure Obligation

Parallel to the rebuttable presumption, the ALD establishes an information disclosure obligation that operates in any AI-related fault liability claim — not only when the presumption conditions are met.

When a claimant makes a plausible case that they have been harmed by an AI system and needs access to evidence held by the defendant to substantiate their claim, national courts must have the power to order the disclosure of relevant evidence. The disclosure order is available before and during proceedings.

Disclosure can reach the core of the AI Act compliance record: Annex IV technical documentation, Art.12 logging records, Art.9 risk management files, Art.10 data governance records, and Art.72 post-market monitoring reports.

For defendants, this disclosure power transforms internal compliance documentation from a regulatory file into potential litigation evidence. Documents that show incomplete risk assessment, inadequate bias examination, or missing monitoring become discoverable and potentially determinative.

The disclosure obligation has limits: courts must balance disclosure against trade secrets and confidentiality interests. The ALD proposal includes a mechanism for courts to order disclosure in a way that protects legitimate confidentiality — for example, to independent experts rather than directly to the claimant. But the core principle is that AI providers cannot use opacity to prevent injured parties from accessing the evidence needed to establish their claims.
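One practical preparation step is to tag each compliance artifact with its confidentiality status in advance, so a protective-disclosure argument (for example, expert-only access) can be made per document rather than improvised under a court deadline. A minimal sketch, with assumed field names:

```python
from dataclasses import dataclass

@dataclass
class ComplianceArtifact:
    name: str
    ai_act_article: str
    contains_trade_secrets: bool

def disclosure_plan(artifacts: list[ComplianceArtifact]) -> dict[str, list[str]]:
    """Split artifacts into direct disclosure vs. expert-only disclosure
    (protecting trade secrets via disclosure to independent experts)."""
    plan: dict[str, list[str]] = {"direct": [], "expert_only": []}
    for a in artifacts:
        bucket = "expert_only" if a.contains_trade_secrets else "direct"
        plan[bucket].append(f"{a.ai_act_article}: {a.name}")
    return plan
```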

ALD Defense Strategy: Building the Compliance Record

For AI developers, the ALD's practical implication is that EU AI Act compliance infrastructure directly affects litigation outcomes. Building the right compliance record is simultaneously regulatory compliance and litigation preparation.

What the Defense Record Must Show

To rebut an ALD presumption, the defendant needs to demonstrate that the relevant AI Act obligations were complied with, or that any non-compliance was not causally connected to the harm. This requires:

Risk identification coverage. Art.9 risk management records must show that the class of risk that materialized was identified in the risk analysis. A risk management document that identifies only theoretical risks without reference to the specific deployment context will not rebut a presumption that an unidentified risk caused harm.

Data quality documentation. Art.10 data governance records must show what bias examination was conducted, on which datasets, using which methods, with which findings, and what mitigations were applied. "We checked for bias" is not adequate; the documentation must show the specific bias categories examined (Art.10(2)), the populations represented in training data, and the steps taken to address identified imbalances.

Capability and limitation disclosure. Art.13 documentation must show what the provider communicated to the deployer about system limitations. If a harm occurred because the deployer relied on an AI output in a context where the AI system's limitations made it unsuitable, the Art.13 record showing that limitations were accurately disclosed can rebut the presumption.

Human oversight evidence. Art.14 records must show what override mechanisms were implemented, how they were tested, whether they were used in practice, and whether the deployer was adequately trained to use them. An Art.14 defense requires evidence that oversight was genuinely effective, not merely that a "human in the loop" was nominally present.

Post-market monitoring records. Art.72 post-market monitoring documentation showing that the AI system's behavior was tracked after deployment, that anomalies were investigated, and that remediation steps were taken when problems were identified supports both the risk management and accuracy defenses.
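The five record categories above lend themselves to a trivial completeness check over the defense file; the category labels below are assumptions of this sketch:

```python
# The five defense-record categories discussed above.
REQUIRED_RECORDS = {
    "art9_risk_identification",
    "art10_data_quality",
    "art13_limitation_disclosure",
    "art14_oversight_evidence",
    "art72_post_market_monitoring",
}

def missing_defense_records(on_file: set[str]) -> set[str]:
    """Return the defense-record categories not yet present in the file."""
    return REQUIRED_RECORDS - on_file
```

A non-empty result flags where a presumption rebuttal would currently have no documentary support.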

Breaking the Scope-of-Protection Element

One of the most useful defenses against the ALD presumption is the scope-of-protection argument: the AI Act obligation that was violated was not designed to protect against the type of harm that occurred. This defense requires understanding the precise purpose of each AI Act obligation.

For example: Art.12 logging obligations are designed to enable traceability of AI outputs. If an Art.12 violation is alleged in connection with a data privacy claim, the scope-of-protection defense may succeed if logging failures are not plausibly connected to the privacy harm — the logging obligation protects against untraceable decisions, not against data processing violations.

Building scope-of-protection arguments requires a systematic mapping of AI Act obligations to their protective purposes — which is also the analysis needed to prioritize compliance investment.
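That mapping is naturally maintained as data. The article-to-harm assignments below are illustrative assumptions for the sketch, not settled interpretations:

```python
# Illustrative mapping of AI Act obligations to the harm types each is
# designed to protect against (assumptions, not settled law).
PROTECTIVE_PURPOSE: dict[str, set[str]] = {
    "Art.9":  {"physical_injury", "economic_loss", "discrimination"},
    "Art.10": {"discrimination"},
    "Art.12": {"untraceable_decision"},
    "Art.13": {"physical_injury", "economic_loss"},
    "Art.14": {"physical_injury", "economic_loss", "discrimination"},
    "Art.15": {"physical_injury", "economic_loss"},
}

def scope_of_protection_defense_available(article: str, alleged_harm: str) -> bool:
    """The defense is available when the violated obligation was not
    designed to protect against the alleged harm type."""
    return alleged_harm not in PROTECTIVE_PURPOSE.get(article, set())
```

On this mapping, an Art.12 logging breach alleged in a privacy claim supports the defense, while an Art.10 breach alleged in a discrimination claim does not.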

Interaction with National Tort Law

The ALD harmonizes tools (disclosure and presumption) but does not create a single EU tort cause of action. The underlying liability claim remains governed by national law.

This means the fault element — the standard of care, what constitutes a wrongful act — is still determined by the applicable national law. In Germany, the fault standard under §823 BGB applies; in France, the faute analysis under Article 1240 of the Civil Code; in the Netherlands, the onrechtmatige daad standard.

What the ALD changes is the evidential position once fault is established or presumed. After the claimant establishes an AI Act breach that triggers the presumption, the ALD provides the court with a standardized mechanism for shifting the causation burden — regardless of what national law says about causation proof.

For multi-jurisdiction deployments, this means the fault analysis differs by country but the presumption mechanism is uniform. A technical compliance record that satisfies the ALD presumption rebuttal standard should work across member states, even if the underlying tort cause of action differs.

Current Status: Proposal, Not Law

The ALD proposal was published in September 2022. As of 2026, it is in the legislative process but has not been adopted. Key developments in the legislative process:

The European Parliament's Legal Affairs Committee (JURI) published its report and proposed amendments. The EP's amendments generally strengthened claimant protections — including extending the presumption to non-high-risk AI systems and broadening the types of harm covered. The Council working party on civil law has engaged with the proposal.

The ALD has been the slower-moving component of the EU AI liability package compared to the new PLD (which was adopted in 2024). Its interaction with national tort law — which varies significantly across member states — created more legislative complexity than the PLD's product liability harmonization.

For development teams, the practical implication of the ALD's pending status is: the compliance infrastructure you build for the EU AI Act and for the new PLD is also the compliance infrastructure that will serve as your ALD defense when the directive is adopted. Building that infrastructure now reduces the risk of exposure under all three instruments.

How ALD Interacts with the New PLD

For a single AI-related incident, both the new PLD and the ALD may be relevant. The interaction works as follows:

PLD claim: The claimant establishes that the AI system was defective (or benefits from the presumption of defectiveness if conformity assessment was not completed), that damage occurred, and that causation exists. No fault required.

ALD claim (simultaneous): The same claimant brings a parallel fault-based claim under national tort law, invoking the ALD presumption based on the AI Act violation that also triggered the PLD defectiveness presumption.

In practice, a claimant with a viable PLD strict liability claim may prefer to pursue that claim — it eliminates the need to prove fault. The ALD becomes most relevant when the harm falls outside the PLD's scope: pure economic loss, discrimination claims, claims by businesses rather than consumers, or claims against deployers rather than product manufacturers.

Python Tooling for ALD Exposure Assessment

The sketch below encodes the exposure assessment as data classes, scoring documentation quality per obligation.

from dataclasses import dataclass, field
from enum import Enum


class HarmType(Enum):
    PHYSICAL_INJURY = "physical_injury"
    PSYCHOLOGICAL_HARM = "psychological_harm"
    ECONOMIC_LOSS = "economic_loss"
    DISCRIMINATION = "discrimination"
    DATA_LOSS = "data_loss"
    REPUTATIONAL = "reputational"
    FUNDAMENTAL_RIGHTS = "fundamental_rights"


class ComplianceStatus(Enum):
    COMPLIANT = "compliant"
    PARTIAL = "partial"
    NON_COMPLIANT = "non_compliant"
    UNKNOWN = "unknown"


@dataclass
class ALDObligationStatus:
    """Status of a single AI Act obligation relevant to ALD presumption."""
    article: str
    obligation_summary: str
    compliance_status: ComplianceStatus
    documentation_quality: int  # 0-10: how well-documented is compliance
    harm_types_protected: list[HarmType]
    rebuttal_evidence: list[str] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)

    def presumption_risk(self, alleged_harm: HarmType) -> str:
        """Assess presumption trigger risk for a specific harm type."""
        if alleged_harm not in self.harm_types_protected:
            return "low"  # scope-of-protection defense available
        if self.compliance_status == ComplianceStatus.COMPLIANT:
            return "low"
        if self.compliance_status == ComplianceStatus.PARTIAL:
            return "medium"
        if self.compliance_status == ComplianceStatus.NON_COMPLIANT:
            return "high"
        return "unknown"


@dataclass
class ALDExposureAssessment:
    """Full ALD exposure assessment for an AI system."""
    system_name: str
    is_high_risk: bool
    deployment_jurisdictions: list[str]
    obligation_statuses: list[ALDObligationStatus]

    def overall_exposure(self) -> str:
        """Aggregate ALD exposure level."""
        high_count = sum(
            1 for o in self.obligation_statuses
            if o.compliance_status == ComplianceStatus.NON_COMPLIANT
        )
        partial_count = sum(
            1 for o in self.obligation_statuses
            if o.compliance_status == ComplianceStatus.PARTIAL
        )
        if high_count >= 2:
            return "HIGH — multiple non-compliant obligations, presumption likely in harm scenarios"
        if high_count == 1 or partial_count >= 3:
            return "MEDIUM — targeted remediation required for key obligations"
        return "LOW — compliance record adequate for ALD rebuttal"

    def priority_gaps(self) -> list[tuple[str, list[str]]]:
        """Return obligations with highest ALD exposure and specific gaps."""
        priority = [
            (o.article, o.gaps)
            for o in self.obligation_statuses
            if o.compliance_status in (ComplianceStatus.NON_COMPLIANT, ComplianceStatus.PARTIAL)
            and o.documentation_quality < 7
        ]
        return sorted(priority, key=lambda x: len(x[1]), reverse=True)

    def rebuttal_strength(self) -> int:
        """Score 0-10: how strong is the overall rebuttal evidence record."""
        scores = [o.documentation_quality for o in self.obligation_statuses]
        if not scores:
            return 0
        return int(sum(scores) / len(scores))


def assess_ald_exposure(system: ALDExposureAssessment) -> dict:
    """Generate ALD exposure report."""
    return {
        "system": system.system_name,
        "exposure_level": system.overall_exposure(),
        "rebuttal_strength": f"{system.rebuttal_strength()}/10",
        "priority_gaps": system.priority_gaps(),
        "recommendation": (
            "File ALD compliance evidence package alongside AI Act documentation"
            if system.rebuttal_strength() >= 7
            else "Urgent: strengthen compliance documentation before ALD adoption"
        ),
    }

30-Item ALD Readiness Checklist

ALD Scope Assessment (1–5)

  1. Have you mapped your AI systems to identify which are deployed in EU jurisdictions where the ALD will apply?
  2. Have you identified the national tort law frameworks in each deployment jurisdiction (DE: BGB §823, FR: CC Art.1240, NL: BW Art.6:162)?
  3. Have you assessed whether your AI deployments create ALD exposure as provider, deployer, or both?
  4. Have you identified the AI Act obligations most relevant to your deployment context (Art.9–15 for high-risk, Art.53–55 for GPAI)?
  5. Have you mapped each relevant AI Act obligation to the harm types it is designed to protect against (scope-of-protection analysis)?

Presumption Trigger Mitigation (6–12)

  6. Is your Art.9 risk management system documented in a way that shows the risk identification process, not just the conclusion?
  7. Does your Art.9 record cover deployment-context-specific risks (not just generic AI risk categories)?
  8. Does your Art.10 data governance documentation show what bias categories were examined (Art.10(2) protected characteristics)?
  9. Does your Art.10 record include the outcome of bias examination and the mitigations applied?
  10. Does your Art.13 transparency documentation explicitly disclose performance limitations in contexts relevant to deployer decision-making?
  11. Is your Art.14 human oversight implementation documented with evidence of actual operation (not just design specification)?
  12. Do your Art.15 accuracy records show performance benchmarks relevant to the harm types most likely in your deployment context?

Information Disclosure Preparation (13–17)

  13. Is your Annex IV technical documentation complete and current for each high-risk AI system?
  14. Are trade secret protections identified within your technical documentation (to support confidential disclosure arguments)?
  15. Do your post-market monitoring records (Art.72) show what anomalies were detected and how they were investigated?
  16. Are incident reports (Art.73) complete and consistent with post-market monitoring records?
  17. Are data processing agreements and data provenance records sufficient to reconstruct training data composition if ordered in disclosure?

Rebuttal Evidence Building (18–24)

  18. For each AI Act obligation, can you demonstrate specific compliance actions (not just assertions of compliance)?
  19. Do you have pre-deployment validation records showing what the AI system was tested against and what failure modes were identified?
  20. Do you have records showing what deployers were told about system limitations (supporting Art.13 scope-of-protection defense)?
  21. Are there records showing that the harm type alleged was not in the class of risks the violated obligation was designed to address?
  22. Are there independent audit records or third-party assessments that support compliance claims?
  23. For deployer-caused harm scenarios, do provider-to-deployer communications establish what the deployer was required to do under Art.26?
  24. Do logging records (Art.12) allow you to reconstruct AI system behavior at the time of the alleged harm-causing event?

Cross-Track Coordination (25–28)

  25. Have you coordinated ALD compliance documentation with new PLD documentation (both reference the same technical files)?
  26. Have you assessed whether an incident that triggers PLD strict liability also creates ALD fault-based exposure?
  27. Are your AI Act conformity assessment files (Art.43) current — a gap triggers both PLD presumption of defectiveness and ALD presumption of causation?
  28. Do your contracts with deployers allocate ALD exposure clearly, including indemnification for deployer-caused violations of Art.26?

Monitoring and Updates (29–30)

  29. Do you have a process for monitoring ALD legislative progress and updating compliance documentation as the final text is adopted?
  30. Have you briefed your product and engineering teams on the connection between AI Act compliance failures and ALD litigation exposure?

What Developers Should Do Now

The ALD has not been adopted, but its eventual adoption is expected and the compliance infrastructure it demands overlaps entirely with what the EU AI Act and new PLD already require. Actions to take now:

Complete your AI Act compliance documentation. Every Annex IV file you maintain, every Art.9 risk management record you keep, every Art.10 bias examination you document is simultaneously EU AI Act compliance and ALD rebuttal evidence. There is no separate "ALD compliance program" — it is the same program.

Audit your documentation quality, not just existence. Having a risk management document is not the same as having a risk management document that demonstrates the methodology used, the risks identified, the deployment-context-specific analysis, and the mitigations applied. The ALD presumption rebuttal requires quality documentation, not just documentation presence.

Map obligations to harm types. The scope-of-protection defense — that the obligation violated was not designed to protect against the type of harm alleged — requires knowing precisely what each AI Act obligation is for. Build this mapping now; it informs both compliance prioritization and litigation defense.

Brief your legal team on the ALD mechanism. Product liability counsel and data protection counsel are often familiar with PLD claims and GDPR enforcement. The ALD presumption mechanism and its interaction with AI Act compliance is new territory for most litigation teams. Briefing in advance of adoption is more efficient than scrambling after an incident.

Monitor legislative progress. The final ALD text may differ from the proposal in ways that affect compliance strategy — particularly regarding scope (high-risk only vs. all AI), harm types covered, and the strength of the presumption. Track JURI committee outputs and Council working party positions as the legislative process continues.

The EU's AI liability framework — the new PLD, the ALD, and the EU AI Act's own administrative penalty regime — creates an interlocking system where compliance failures compound across enforcement channels simultaneously. A single Art.9 risk management failure can trigger regulatory enforcement under the AI Act, strict liability under the new PLD, and fault presumption under the ALD. The organizations that treat AI Act compliance as primarily a documentation exercise will discover, when litigation arrives, that documentation quality is the difference between a successful rebuttal and an unrebutted presumption.