2026-04-14 · 14 min read · sota.io team

EU AI Act Art.112: Repeal of Directive 85/374/EEC — AI Product Liability Developer Guide (2026)

Article 112 of the EU AI Act is a single sentence that carries enormous consequences for the liability landscape of AI development in Europe. It formally repeals Directive 85/374/EEC — the Product Liability Directive that had governed defective product liability in the EU since 1985. The repeal takes effect alongside the application of the new Product Liability Directive, Directive (EU) 2024/2853, which was specifically redesigned to cover software and AI systems as products in a way the 1985 directive never did.

For AI developers, Art.112 is not abstract housekeeping. It signals a fundamental shift in who bears liability when an AI system causes harm, how claimants can establish defectiveness, and what documentation AI providers must preserve and produce. The same technical documentation and risk management records that the EU AI Act requires as compliance artifacts become your primary defense in product liability litigation under the new framework.

This guide explains why the old PLD failed for AI, what the new PLD changes, how Art.112 connects the AI Act to the product liability regime, and what development teams need to do before these rules apply to their products.

Why Directive 85/374/EEC Failed for AI

The 1985 Product Liability Directive was written in a world of physical products. Its core architecture — manufacturer liable for damage caused by a defect in a product, strict liability without need to prove fault — worked well for faulty toasters and defective car brakes. It had four problems when applied to AI:

Problem 1: Software was ambiguous. The 1985 PLD defined "product" as all movables. Case law in various member states diverged on whether software was a "product" under this definition — particularly standalone software not embedded in physical goods. AI systems, which are predominantly software, sat in a legal grey zone. Some courts said yes; others said no. The AI developer had uncertain liability exposure.

Problem 2: "Defect" was not designed for probabilistic systems. The PLD defined a product as defective when it "does not provide the safety which a person is entitled to expect." For physical products, this is relatively clear — a brake pad that fails under normal use is defective. For AI systems that operate probabilistically, produce emergent behaviors, and may produce different outputs for identical inputs depending on internal state, the "expected safety" standard was practically difficult to apply. Claimants struggled to prove that an AI output was a "defect" rather than a stochastic feature.
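
To make Problem 2 concrete, here is a deliberately toy sketch (toy_model and its response distribution are invented for illustration): the same input can yield different outputs, including a rare harmful one, purely through sampling.

import random

# Toy stand-in for a generative model: it samples a response from a fixed
# distribution rather than computing a deterministic answer.
RESPONSES = {
    "accurate summary": 0.90,
    "subtly wrong summary": 0.08,
    "harmful fabrication": 0.02,  # rare, but a designed-in possibility
}

def toy_model(prompt: str) -> str:
    """Identical input, potentially different output on every call."""
    return random.choices(
        list(RESPONSES), weights=list(RESPONSES.values()), k=1
    )[0]

outputs = {toy_model("Summarize this contract clause.") for _ in range(1000)}
print(outputs)  # typically all three responses appear
# Is the 2% harmful output a "defect", or an inherent property of the
# sampling process? The 1985 "expected safety" test had no answer.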

Problem 3: The development risks defense. The 1985 PLD included an optional defense (adopted by most member states) for "development risks" — where the state of scientific and technical knowledge at the time of putting the product into circulation was not such that the defect could have been discovered. AI systems, particularly those using large language models or deep learning, routinely produce outputs that developers could not have predicted when the model was trained. The development risks defense potentially insulated AI developers from liability for harms that were genuinely unforeseeable at training time — even when the harm was substantial.

Problem 4: Damage scope excluded data. The 1985 PLD covered death, personal injury, and damage to tangible property over a threshold. It explicitly excluded damage to the defective product itself, and in practice courts did not extend coverage to destruction of data, economic loss from automated decisions, or harm to fundamental rights. AI systems can cause precisely these categories of harm — wrongful credit denial, discriminatory hiring decisions, defamatory synthetic content — none of which fitted neatly within the 1985 damage categories.

The combination of these four gaps meant that a consumer harmed by a defective AI system often had no viable product liability claim, even when the harm was clear and the AI provider was commercially sophisticated.

The New Product Liability Directive (EU) 2024/2853

The new PLD, adopted in October 2024, addresses all four of the 1985 directive's AI gaps directly. Member states must transpose it by 9 December 2026, shortly after the AI Act's main application date of 2 August 2026, making Art.112's repeal part of a coordinated legislative package designed to cover the full lifecycle of AI liability.

Software and AI Systems as Products

The new PLD explicitly defines "product" to include software — including AI systems. Article 4(1) of Directive (EU) 2024/2853 covers all movables and software, removing the ambiguity that plagued the 1985 regime. An AI model, an AI application, a recommendation system, a chatbot — all qualify as "products" subject to the new PLD's strict liability framework.

The "product" definition is technology-neutral: it covers both embedded AI (AI running on a physical device) and standalone software AI (AI accessed as a service). For AI-as-a-service providers, the new PLD is particularly significant — you are now squarely within strict liability territory for defective outputs, not in the grey zone the 1985 directive created.

Defect Redefined for AI

The new PLD redefines "defective product" in a way that accounts for AI characteristics. A product is defective when it "does not provide the safety that the public at large is entitled to expect", taking into account all circumstances, including (per Article 7(2)):

- the presentation of the product, including instructions for installation, use, and maintenance
- reasonably foreseeable use and misuse of the product
- the effect of any ability to continue to learn or acquire new features after the product is placed on the market or put into service
- the reasonably foreseeable effect of other products used together with the product, including interconnection
- the moment the product was placed on the market or, where the manufacturer retains control afterwards, the moment that control ended
- relevant product safety requirements, including safety-relevant cybersecurity requirements
- the specific needs of the group of users for whom the product is intended

The explicit reference to post-deployment learning is significant for AI developers. An AI model that behaves safely when deployed but drifts into harmful behavior through continued learning cannot use "it was fine when we shipped it" as a complete defense. If the learning capability was inherent in the product design and the drift was foreseeable, the defect analysis applies to the full operational lifecycle.
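
One way to operationalize that lifecycle obligation, sketched here with invented names (output_distribution, total_variation, DRIFT_THRESHOLD are illustrative choices): compare the deployed system's output distribution against a baseline frozen at release, and log any divergence as a post-market monitoring event.

from collections import Counter

def output_distribution(outputs: list[str]) -> dict[str, float]:
    """Empirical distribution over categorized system outputs."""
    counts = Counter(outputs)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Total variation distance between two output distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Baseline captured at release time vs. recent production traffic.
baseline = output_distribution(["approve"] * 90 + ["deny"] * 10)
current = output_distribution(["approve"] * 70 + ["deny"] * 30)

DRIFT_THRESHOLD = 0.1  # illustrative; tune per system and risk level
drift = total_variation(baseline, current)
if drift > DRIFT_THRESHOLD:
    print(f"Behavioral drift detected (TV distance {drift:.2f}); "
          "record as an Art.72 post-market monitoring event")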

Presumption of Defectiveness and Evidence Disclosure

The most consequential change for AI developers is the new PLD's approach to evidence and causation. Claimants under the 1985 directive faced a significant practical barrier: they had to prove that the product was defective and that the defect caused the damage. For AI systems, where the technical documentation is held exclusively by the provider and the causal chain between model behavior and output is opaque, this burden was often insurmountable.

The new PLD introduces a presumption of defectiveness, built on two interlocking mechanisms:

Evidence disclosure obligation. Where the claimant faces "excessive difficulty" in proving defectiveness due to technical or scientific complexity, national courts must order the defendant to disclose relevant evidence in their possession. This means AI providers can be ordered to produce training data documentation, model validation reports, risk assessments, incident logs, and other technical documentation as part of litigation.

Presumption triggers. If the defendant fails to comply with the disclosure order — or if the court determines that the product falls within the scope of an AI Act conformity assessment obligation that was not complied with — there is a presumption that the product is defective. The burden shifts to the defendant to prove non-defectiveness.

For AI developers, the practical implication is direct: your technical documentation under the EU AI Act (Art.11, Annex IV), your risk management records (Art.9), your post-market monitoring reports (Art.72), and your conformity assessment files (Art.43) will be subject to disclosure in product liability litigation. Complete, well-maintained documentation serves two purposes simultaneously — AI Act compliance and liability defense. Gaps in documentation that create AI Act compliance problems also create product liability exposure.
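
One way to rehearse that scenario before litigation forces it: keep a registry mapping disclosable documentation categories to their storage locations, and dry-run a disclosure order against it. The sketch below uses hypothetical registry keys and paths.

# Hypothetical registry: documentation categories a court could order
# disclosed under the new PLD, mapped to internal storage locations.
DISCLOSURE_REGISTRY = {
    "technical_documentation_annex_iv": "docs/annex-iv/",
    "risk_management_records_art9": "risk/assessments/",
    "post_market_monitoring_art72": "monitoring/reports/",
    "conformity_assessment_art43": None,  # gap: nothing to produce
}

def disclosure_dry_run(requested: list[str]) -> dict[str, str]:
    """Simulate responding to a court-ordered evidence disclosure."""
    report = {}
    for category in requested:
        location = DISCLOSURE_REGISTRY.get(category)
        if location:
            report[category] = f"producible from {location}"
        else:
            # Failure to comply with a disclosure order triggers the
            # presumption of defectiveness under the new PLD.
            report[category] = "GAP: cannot produce (presumption risk)"
    return report

order = ["technical_documentation_annex_iv", "conformity_assessment_art43"]
for category, status in disclosure_dry_run(order).items():
    print(f"{category}: {status}")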

Damage Scope Extended to Data and Fundamental Rights

The new PLD expands the categories of compensable damage to include:

Destruction of data: Damage caused by the permanent loss or corruption of digital data that is not used exclusively for professional purposes is now compensable. This is directly relevant for AI systems that process, transform, or generate data.

Medically recognized psychological harm: Psychological damage recognized by a medical professional is compensable, not just physical injury. AI systems causing documented psychological harm — through harassment, targeted content, or discriminatory treatment — can generate covered damage.

The new PLD also removes the 1985 directive's €500 lower threshold for property damage claims, and the extension to data and psychological damage opens liability pathways that did not exist under the 1985 framework.

Development Risks Defense Narrowed

The new PLD retains the development risks defense but narrows it for software and AI systems. The defense is not available where the defect resulted from characteristics of the product that the producer could have discovered through reasonable testing before putting the product into circulation — even if the general state of scientific knowledge did not allow discovery of that specific defect type.

For AI systems, this means that pre-deployment testing and validation records become evidence in the development risks defense analysis. If a provider conducted limited pre-deployment evaluation and a foreseeable defect was not detected, the defense may fail. Robust pre-deployment testing — and documentation of that testing — strengthens the development risks defense.
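
This argues for treating test evidence as a durable record rather than a transient CI artifact. A minimal sketch, with illustrative field names rather than a prescribed schema:

from dataclasses import dataclass
from datetime import date

@dataclass
class PreDeploymentTest:
    """One test record preserved as development-risks-defense evidence."""
    test_name: str
    executed_on: date
    risk_addressed: str      # which foreseeable defect class was probed
    coverage_notes: str      # inputs and populations exercised
    passed: bool
    evidence_location: str   # where logs and reports are archived

tests = [
    PreDeploymentTest(
        test_name="bias evaluation across protected attributes",
        executed_on=date(2025, 11, 2),
        risk_addressed="discriminatory output",
        coverage_notes="demographic slices across intended EU markets",
        passed=True,
        evidence_location="s3://compliance/evals/bias-2025-11-02/",
    ),
]

# The question a court will ask: could reasonable testing have discovered
# the defect? Foreseeable risks with no test evidence are the weak point.
foreseeable = {"discriminatory output", "behavioral drift", "prompt injection"}
untested = foreseeable - {t.risk_addressed for t in tests}
print("Foreseeable risks with no test evidence:", untested)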

How Art.112 Connects AI Act Compliance to Product Liability

Article 112's function is structural: it removes the 1985 PLD and triggers the application of the new PLD. But the substantive connection between the EU AI Act and the new product liability framework runs deeper than the repeal mechanism.

AI Act Non-Compliance as Defectiveness Evidence

The new PLD explicitly includes AI Act conformity assessment non-compliance as a trigger for the defectiveness presumption. Specifically, defectiveness is presumed where:

- the defendant fails to comply with a court-ordered disclosure of evidence;
- the claimant demonstrates that the product does not comply with mandatory product safety requirements intended to protect against the risk of the damage that occurred (for high-risk AI systems, the AI Act's requirements are precisely such mandatory requirements); or
- the claimant demonstrates that the damage was caused by an obvious malfunction during reasonably foreseeable use.

This creates a direct incentive structure: EU AI Act compliance is not only a regulatory obligation — it is active litigation insurance. The compliance artifacts you maintain to satisfy GPAI or high-risk AI obligations directly reduce your liability exposure under the new PLD.

Provider/Deployer vs. Manufacturer/Distributor

The EU AI Act's provider/deployer distinction maps onto the new PLD's manufacturer/distributor framework in a way that AI developers must understand:

AI provider = manufacturer. Under the new PLD, the entity that places a product on the market or puts it into service bears primary strict liability. Under the AI Act, the "provider" who develops and places the AI system on the market bears primary compliance obligations. These align. The AI Act provider bears both regulatory and product liability exposure.

AI deployer = potentially a distributor or operator. Deployers who modify an AI system, integrate it into another product, or use it in ways that change the original system's risk profile may become co-manufacturers under the new PLD. The AI Act's Art.25 provision — where a deployer who makes substantial modifications becomes a new provider — corresponds to the new PLD's treatment of substantial modification as triggering manufacturer liability.

Software components. Where an AI system is a component within a larger system (a credit scoring API embedded in a banking application, a computer vision model embedded in industrial equipment), the new PLD's component manufacturer provisions apply. The AI provider may bear liability as a component manufacturer even where the end product is sold by another party.
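
The mapping can be condensed into a decision rule. The sketch below is a simplification (the role labels and decision order are our own, and real classification needs legal review):

def pld_liability_role(
    ai_act_role: str,                    # "provider" or "deployer"
    substantial_modification: bool,
    embedded_in_third_party_product: bool,
) -> str:
    """Rough mapping from AI Act roles to new-PLD liability roles."""
    if ai_act_role == "deployer" and substantial_modification:
        # AI Act Art.25: substantial modification makes the deployer a
        # provider, carrying manufacturer-level strict liability.
        return "manufacturer (via substantial modification)"
    if ai_act_role == "provider" and embedded_in_third_party_product:
        return "component manufacturer"
    if ai_act_role == "provider":
        return "manufacturer"
    return "operator/distributor (fact-specific)"

print(pld_liability_role("deployer", True, False))
# -> manufacturer (via substantial modification)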

Exculpation Through Documentation

The new PLD's primary avenue for defendant exculpation is proving non-defectiveness. For AI systems, the most effective path to non-defectiveness proof is comprehensive AI Act compliance documentation:

- Art.11 / Annex IV technical documentation, establishing the safety the product was designed to provide
- Art.9 risk management records, showing which risks were foreseen and how they were mitigated
- Art.12 logging records, reconstructing what the system actually did
- pre-deployment testing and validation records, supporting the development risks defense
- Art.72 post-market monitoring reports, demonstrating lifecycle vigilance
- Art.43 conformity assessment files, rebutting the non-compliance presumption trigger

Each of these maps onto a field in the assessment sketch below.

AI Liability Directive: The Parallel Track

Alongside the new PLD and the AI Act, the EU legislative package included a proposed AI Liability Directive (ALD) covering non-contractual fault-based liability. While the PLD covers strict liability (no need to prove fault) for defective AI products, the ALD was designed for cases where fault-based liability applies: negligence, breach of statutory duties, and similar claims.

The ALD proposal (COM(2022) 496) was never adopted, and the Commission slated it for withdrawal in its 2025 work programme. Its mechanisms nonetheless remain instructive for developers planning liability frameworks:

Rebuttable presumption of causal link: Where a defendant has violated an AI Act obligation and that violation is plausibly linked to the damage, the ALD would create a rebuttable presumption that the violation caused the damage. This connects AI Act non-compliance directly to civil liability in a fault-based framework.

Disclosure of high-risk AI documentation: Like the new PLD, the ALD proposed court-ordered disclosure of documentation for high-risk AI systems — again making AI Act technical documentation central to litigation.

For developers today, the practical guidance is the same whatever ultimately happens to the ALD: build and maintain complete AI Act compliance documentation, because that documentation is your liability defense across all potential legal frameworks.

Python Tooling for AI Product Liability Risk Assessment
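
The sketch below pulls these threads into a single exposure check: AI Act documentation artifacts become boolean fields, and PLD exposure is scored from the gaps. The thresholds, severity labels, and scoring weights are illustrative choices, not anything either directive prescribes.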

from dataclasses import dataclass, field
from datetime import date
from typing import Optional
from enum import Enum


class AISystemRiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"


class LiabilityRole(Enum):
    PROVIDER_MANUFACTURER = "provider_manufacturer"
    DEPLOYER_OPERATOR = "deployer_operator"
    COMPONENT_MANUFACTURER = "component_manufacturer"
    DISTRIBUTOR = "distributor"


@dataclass
class ProductLiabilityDocumentation:
    """AI Act documentation assets that function as PLD liability protection."""
    technical_documentation_art11: bool = False
    risk_management_records_art9: bool = False
    logging_capability_art12: bool = False
    human_oversight_art14: bool = False
    post_market_monitoring_art72: bool = False
    conformity_assessment_art43: bool = False
    eu_database_registration_art71: bool = False
    declaration_of_conformity_art47: bool = False
    pre_deployment_test_records: bool = False
    incident_response_procedures: bool = False

    def completeness_score(self) -> float:
        """Returns documentation completeness as 0.0-1.0."""
        fields = [
            self.technical_documentation_art11,
            self.risk_management_records_art9,
            self.logging_capability_art12,
            self.human_oversight_art14,
            self.post_market_monitoring_art72,
            self.conformity_assessment_art43,
            self.eu_database_registration_art71,
            self.declaration_of_conformity_art47,
            self.pre_deployment_test_records,
            self.incident_response_procedures,
        ]
        return sum(1 for f in fields if f) / len(fields)


@dataclass
class AIProductLiabilityAssessment:
    """Assesses product liability exposure under new PLD (EU) 2024/2853 + EU AI Act Art.112."""
    system_name: str
    risk_level: AISystemRiskLevel
    liability_role: LiabilityRole
    deployment_date: Optional[date]
    documentation: ProductLiabilityDocumentation = field(
        default_factory=ProductLiabilityDocumentation
    )
    post_deployment_learning: bool = False
    processes_personal_data: bool = False
    can_cause_physical_harm: bool = False

    def assess_exposure(self) -> dict:
        """Run full PLD liability exposure analysis."""
        risks = []
        mitigations = []
        
        # Check presumption of defectiveness triggers
        if self.risk_level == AISystemRiskLevel.HIGH_RISK:
            if not self.documentation.conformity_assessment_art43:
                risks.append(
                    "CRITICAL: High-risk AI without conformity assessment — "
                    "automatic presumption of defectiveness under new PLD Art.X"
                )
            if not self.documentation.eu_database_registration_art71:
                risks.append(
                    "HIGH: No EU database registration — non-compliance evidence in PLD claims"
                )
        
        # Check documentation completeness for evidence disclosure risk
        completeness = self.documentation.completeness_score()
        if completeness < 0.7:
            risks.append(
                f"HIGH: Documentation completeness {completeness:.0%} — "
                "evidence disclosure order may expose gaps and trigger defectiveness presumption"
            )
        elif completeness >= 0.9:
            mitigations.append(
                f"Documentation completeness {completeness:.0%} — "
                "strong position for evidence disclosure scenario"
            )
        
        # Post-deployment learning risk
        if self.post_deployment_learning:
            risks.append(
                "MEDIUM: System continues learning after deployment — "
                "new PLD explicitly covers post-deployment defects from continued learning; "
                "ensure monitoring plan (Art.72) covers behavioral drift"
            )
        
        # Physical harm multiplies PLD exposure
        if self.can_cause_physical_harm:
            risks.append(
                "HIGH: Physical harm potential — strict liability applies under new PLD; "
                "no fault required; documentation is primary defense"
            )
        
        # Role-specific risks
        if self.liability_role == LiabilityRole.COMPONENT_MANUFACTURER:
            risks.append(
                "MEDIUM: Component manufacturer — liable for defects in your component "
                "even when integrated product sold by another party; "
                "ensure component-level documentation covers AI Act Art.11 Annex IV"
            )
        
        # Mitigations from complete documentation
        if self.documentation.technical_documentation_art11:
            mitigations.append("Art.11 technical documentation — primary evidence of expected safety standard")
        if self.documentation.risk_management_records_art9:
            mitigations.append("Art.9 risk management records — foreseeability and mitigation evidence")
        if self.documentation.pre_deployment_test_records:
            mitigations.append("Pre-deployment test records — supports development risks defense")
        
        return {
            "system": self.system_name,
            "risk_level": self.risk_level.value,
            "liability_role": self.liability_role.value,
            "pld_exposure": "HIGH" if len(risks) > 2 else "MEDIUM" if risks else "LOW",
            "presumption_of_defectiveness_risk": any("presumption" in r for r in risks),
            "documentation_completeness": f"{self.documentation.completeness_score():.0%}",
            "risks": risks,
            "mitigations": mitigations,
            "days_to_ai_act_deadline": (date(2026, 8, 2) - date.today()).days,
        }


def assess_ai_product_portfolio(systems: list[AIProductLiabilityAssessment]) -> dict:
    """Assess PLD liability exposure across a portfolio of AI products."""
    results = [s.assess_exposure() for s in systems]
    
    high_exposure = sum(1 for r in results if r["pld_exposure"] == "HIGH")
    presumption_risk = sum(1 for r in results if r["presumption_of_defectiveness_risk"])
    avg_doc = sum(
        float(r["documentation_completeness"].strip("%")) for r in results
    ) / len(results) if results else 0
    
    return {
        "total_systems": len(systems),
        "high_pld_exposure": high_exposure,
        "presumption_risk_count": presumption_risk,
        "average_documentation_completeness": f"{avg_doc:.0f}%",
        "portfolio_summary": results,
    }


# Example: assessing an AI recommendation system
recommendation_ai = AIProductLiabilityAssessment(
    system_name="Content Recommendation Engine v2",
    risk_level=AISystemRiskLevel.LIMITED_RISK,
    liability_role=LiabilityRole.PROVIDER_MANUFACTURER,
    deployment_date=date(2025, 9, 1),
    documentation=ProductLiabilityDocumentation(
        technical_documentation_art11=True,
        risk_management_records_art9=True,
        logging_capability_art12=True,
        human_oversight_art14=False,  # gap
        post_market_monitoring_art72=True,
        conformity_assessment_art43=False,  # not required for limited risk
        eu_database_registration_art71=False,  # not required for limited risk
        declaration_of_conformity_art47=False,
        pre_deployment_test_records=True,
        incident_response_procedures=True,
    ),
    post_deployment_learning=True,
    processes_personal_data=True,
    can_cause_physical_harm=False,
)

result = recommendation_ai.assess_exposure()
print(f"System: {result['system']}")
print(f"PLD Exposure: {result['pld_exposure']}")
print(f"Documentation: {result['documentation_completeness']}")
for risk in result["risks"]:
    print(f"  RISK: {risk}")
for mit in result["mitigations"]:
    print(f"  MITIGATION: {mit}")

30-Item Art.112 / New PLD Product Liability Readiness Checklist

Understanding Your Product Liability Exposure (New PLD Scope)

Defect Analysis and Expected Safety Standard

Documentation for Evidence Disclosure Scenarios

High-Risk AI Specific Obligations (Presumption of Defectiveness Triggers)

Human Oversight and Override Documentation

Provider/Deployer Contractual Allocation

Incident Response and Post-Incident Documentation


Article 112's formal repeal of the 1985 Product Liability Directive closes a chapter in European product regulation that was written before AI existed. The practical consequence for AI developers is that the liability gap that once insulated software providers from strict liability is gone. What replaces it is a framework where your AI Act compliance artifacts — the technical documentation, risk management records, conformity assessments, and post-market monitoring reports you maintain to satisfy regulatory obligations — simultaneously function as your primary liability defense. Building compliant AI is now, unambiguously, risk management in both the regulatory and litigation sense of the term.