2026-04-12·15 min read·sota.io team

EU AI Act Art.86: Right to Explanation for AI Decisions — Developer Guide (2026)

When a bank's AI model rejects a mortgage application, the applicant has always had some rights under GDPR. But GDPR Article 22 — the right not to be subject to automated decision-making — only applies to purely automated decisions. If a human loan officer meaningfully reviews the AI's output before making the final call, GDPR Art.22 does not apply. The human-in-the-loop removes the protection.

Article 86 of the EU AI Act closes that gap.

Under Art.86, any natural person subject to a high-risk AI decision that produces legal effects or significantly affects them has the right to a clear and meaningful explanation of the AI's role in that decision — regardless of whether a human was involved. This is a fundamentally new obligation that extends beyond GDPR and applies directly to the developers and deployers of Annex III high-risk AI systems.

This guide explains what Art.86 requires, who owes what obligation, how providers must architect explainability, and what deployers must deliver to affected persons.

What Article 86 Actually Says

Article 86(1) — The Core Right:

Any natural person who has been subject to a decision taken by the deployer based significantly on the output of a high-risk AI system listed in Annex III, except systems listed in point 2 of that Annex [critical infrastructure], which produces legal effects or similarly significantly affects that person, has the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.

The operative conditions are:

  1. Natural person — legal persons (companies) have no Art.86 rights
  2. Decision based significantly on AI output — incidental AI use is not enough
  3. High-risk AI system listed in Annex III — only systems in the Annex III high-risk categories qualify, and point 2 (critical infrastructure) is carved out
  4. Legal effects or significant impact — the threshold separates high-stakes from trivial decisions
  5. Right to explanation — not a right to reverse the decision, but to understand it

Article 86(2) — Provider Enablement Obligation:

Providers of high-risk AI systems must design systems in a way that enables deployers to comply with Art.86(1). This is not optional — it is a conformity requirement under Chapter III, Section 2 of the Act.

Article 86(3) — Scope Limitation:

The right under Art.86(1) does not apply in circumstances where the AI system output constitutes solely a preparatory act that has no direct legal or similar significant effect on the natural person (preliminary screening, early-stage filtering). The threshold is significant reliance, not any reliance.
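The threshold test above can be sketched as a small triage function. This is a minimal sketch; the enum values and parameter names are mine, not terms from the Act.

```python
from enum import Enum


class Art86Trigger(Enum):
    LEGAL_EFFECT = "legal_effect"              # contract refused, entitlement denied
    SIGNIFICANT_IMPACT = "significant_impact"  # material effect on life circumstances
    PREPARATORY_ONLY = "preparatory_only"      # below the Art.86 threshold


def assess_art86_trigger(
    produces_legal_effect: bool,
    similarly_significant_effect: bool,
    ai_output_relied_on_significantly: bool,
) -> Art86Trigger:
    """Classify a decision against the Art.86(1)/(3) threshold.

    Art.86 bites only if the deployer relied significantly on the AI output
    AND the decision has legal or similarly significant effects; otherwise
    the AI's role is a solely preparatory act outside the right.
    """
    if not ai_output_relied_on_significantly:
        return Art86Trigger.PREPARATORY_ONLY
    if produces_legal_effect:
        return Art86Trigger.LEGAL_EFFECT
    if similarly_significant_effect:
        return Art86Trigger.SIGNIFICANT_IMPACT
    return Art86Trigger.PREPARATORY_ONLY
```

A mortgage refusal driven by an AI risk score maps to `LEGAL_EFFECT`; an early-stage keyword filter whose output a human fully re-assesses maps to `PREPARATORY_ONLY`.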

Art.86 vs GDPR Art.22: The Two-Tier Framework

Understanding Art.86 requires understanding what it adds on top of GDPR Article 22.

| Dimension | GDPR Art.22 | AI Act Art.86 |
| --- | --- | --- |
| Trigger | Solely automated decision | Decision based significantly on AI (human can be involved) |
| Core right | Right not to be subject to automated decision | Right to explanation of the AI's role |
| Human review | Defeats Art.22 protection | Irrelevant — Art.86 applies regardless |
| AI types | Any automated processing | Only Annex III high-risk AI systems |
| Obligation holder | Data controller (Art.4 GDPR) | Deployer (AI Act) + provider (enablement) |
| What must be provided | Meaningful information, logic involved, significance | Role of AI in procedure + main elements of decision |
| Timeline | "Without undue delay" | Not specified (deployer must define) |
| Cross-reference | Art.22(3): right to human review | Art.86: independent of Art.22 |

The combined framework for high-risk AI: GDPR Art.22 protects against solely automated decisions, while Art.86 guarantees an explanation whenever a high-risk AI system significantly shaped a consequential decision, with or without human review.

Why this matters for developers: The addition of human review — sometimes done to escape GDPR Art.22 liability — does not escape AI Act Art.86. The person affected can still demand explanation of what the AI contributed to the decision.
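The two-tier logic can be sketched as a regime lookup. A minimal sketch; the function name and regime labels are mine, and "substantive human review" stands in for the meaningful-involvement test under GDPR Art.22.

```python
def applicable_regimes(
    annex_iii_high_risk: bool,
    substantive_human_review: bool,
    significant_effect: bool,
) -> set[str]:
    """Which explanation regimes attach to a single decision.

    GDPR Art.22 bites only on solely automated decisions (no substantive
    human review); AI Act Art.86 bites on Annex III high-risk decisions
    with legal or similarly significant effects, human review or not.
    """
    regimes: set[str] = set()
    if not significant_effect:
        return regimes
    if not substantive_human_review:
        regimes.add("GDPR Art.22")
    if annex_iii_high_risk:
        regimes.add("AI Act Art.86")
    return regimes
```

The key row: a high-risk decision with substantive human review still returns `{"AI Act Art.86"}`, which is exactly the gap Art.86 closes.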

Who Owes What: Provider vs Deployer Obligations

Provider Obligations (Art.86(2))

Providers must design their high-risk AI systems to enable deployers to comply with Art.86(1). This means:

  1. Explanation generation capability: The system must be able to produce, for any individual decision, a human-readable account of:

    • Which input features were most influential
    • The confidence or probability of the AI's output
    • What factors in the person's data drove the result
    • Any counterfactual information (what would have changed the decision)
  2. API-accessible explanation output: The explanation cannot be siloed in an internal dashboard. The system must expose it to the deployer in a structured format.

  3. Technical documentation (Art.11) must include: The explainability methodology, the level of explanation granularity, and the accuracy-explainability trade-off decisions made during system design.

  4. Post-market monitoring (Art.72) must track: Whether deployed explanations are understood by affected persons (feedback loop obligation).

Deployer Obligations (Art.86(1))

Deployers — the entities that use the AI system in practice — hold the primary Art.86 obligation. They must:

  1. Deliver the explanation upon request: When an affected person invokes their Art.86 right, the deployer must provide a clear and meaningful explanation of the AI's role in that specific decision.

  2. Maintain decision records: The deployer must be able to reconstruct, for any past high-risk AI decision, what the AI output was and what explanation can be given. This requires logging individual decisions with their AI outputs.

  3. Define a request handling procedure: Art.86 requires a mechanism for affected persons to invoke the right — typically a written request to a designated contact, with a defined response window.

  4. Human oversight (Art.26(2)): Deployers must ensure that humans with competence, authority, and time to review AI outputs are involved — especially for high-impact decisions. The Art.86 explanation must cover what that human oversight consisted of.
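Deployer obligation 2 (decision records) can be sketched as an append-only log written at decision time. Field names and the JSONL format are illustrative assumptions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone
from typing import Optional


def log_decision_record(
    log_path: str,
    decision_id: str,
    ai_output: str,
    input_features: dict[str, str],
    human_reviewer: Optional[str],
) -> dict:
    """Append one high-risk AI decision to an append-only JSONL log.

    Captures at decision time everything needed to reconstruct an
    Art.86 explanation later: the AI output, the inputs it saw, and
    who (if anyone) reviewed it.
    """
    record = {
        "decision_id": decision_id,
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "ai_output": ai_output,
        "input_features": input_features,
        "human_reviewer": human_reviewer,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Writing the record at decision time, rather than reconstructing it on request, is what makes retrospective Art.86 fulfilment possible for past decisions.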

The Art.86 threshold is not met by all high-risk AI use. The decision must produce legal effects (a contract refused, an entitlement denied, a status changed) or similarly significant effects on the person's circumstances.

Examples that meet the threshold: a mortgage application declined on an AI risk score, a job candidate rejected after AI ranking, a social benefit refused following algorithmic eligibility assessment.

Examples that do NOT meet the threshold: preliminary screening whose output a human independently re-assesses in full, early-stage filtering with no direct effect on the person, and other solely preparatory acts.

What a Meaningful Explanation Must Include

Article 86(1) requires explanation of two things: (1) the role of the AI system in the decision-making procedure, and (2) the main elements of the decision taken.

The Regulation does not enumerate specific elements. Based on the structure of Art.86 read alongside Art.13 (transparency to deployers) and the GDPR Art.22 framework, a complete Art.86 explanation should include:

| Explanation element | Substance | Format |
| --- | --- | --- |
| AI's functional role | Was the AI the sole input, a significant factor, or one of several? | Qualitative description |
| Decision output | What did the AI system output for this person (score, probability, classification)? | Numeric or categorical |
| Key influencing factors | Which personal data attributes most influenced the AI's output? | Ranked feature list |
| Counterfactual | What would have changed the AI's output? (SHAP/LIME-derived) | Concrete alternative |
| Human review description | What did the human reviewer evaluate? What authority did they have? | Procedural narrative |
| Final decision basis | How was the AI output used in the final decision? | Decision rationale |
| Challenge mechanism | How can the person challenge the decision or request human review? | Contact + procedure |

Sector-Specific Implementation

Credit and Financial Decisions (Annex III §5)

High-risk AI systems used in credit scoring and creditworthiness assessment fall under Annex III Category 5(b); AI used for risk assessment and pricing in life and health insurance falls under Category 5(c). Art.86 applies with full force across both.

What the explanation must cover: the risk score produced for the applicant, the financial factors that drove it (for example debt-to-income ratio or length of credit history), the counterfactual that would change the outcome, and how the underwriter used the score in the final decision.

Regulatory overlay: GDPR Art.22 may also apply if no human review occurs. MiFID II and CRD VI impose additional explainability requirements for AI in investment advice and credit underwriting. Art.86 adds a layer but does not replace these.

Recruitment and Employment (Annex III §4)

AI systems used for CV screening, candidate ranking, aptitude testing, or employee monitoring fall under Annex III Category 4. This is among the highest-volume Art.86 activation scenarios.

What the explanation must cover: the AI's role in screening or ranking the candidate, the attributes that drove the ranking, what a human recruiter actually reviewed, and how the candidate can challenge the outcome.

Regulatory overlay: GDPR Art.22 may apply for automated screening without human review. The EU Pay Transparency Directive (2023/970) adds context where AI assists in pay-band decisions. EEOC principles in cross-border hiring contexts may impose additional requirements.

Social Benefits and Public Services (Annex III §5)

Algorithmic systems used by public bodies for benefit eligibility, fraud detection, or social service allocation fall under Annex III Category 5(a)/(c). Art.86 applies.

What the explanation must cover: the eligibility or fraud-risk output produced for the person, the data attributes that drove it, the caseworker's review, and the route to challenge the determination.

Note: Public authorities deploying Annex III AI systems are subject to Art.86. The Charter of Fundamental Rights Art.41 (right to good administration) creates an independent administrative law obligation that aligns with Art.86.

Healthcare and Medical Risk Assessment (Annex III §5 adjacent + Medical Device Regulation)

AI systems used in clinical decision support, patient triage, or medical risk stratification that significantly affect patient care pathways may activate Art.86 — particularly where they filter access to care or prioritise treatment.

What the explanation must cover: the AI's role in the triage or risk-stratification decision, the clinical inputs that drove it, the clinician's review, and how the output affected the care pathway.

Regulatory overlay: The EU Medical Device Regulation (MDR/IVDR) imposes additional requirements for AI used as medical software. Art.86 applies as an additional layer where MDR-regulated AI also triggers Annex III classification.

Technical Implementation: Building Art.86-Compliant Explanations

from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime, timezone
from typing import Optional
import hashlib


class ExplanationTrigger(Enum):
    LEGAL_EFFECT = "legal_effect"          # Contract, entitlement, status change
    SIGNIFICANT_IMPACT = "significant_impact"  # Material life circumstances effect
    PREPARATORY_ONLY = "preparatory_only"  # Below Art.86 threshold


class HumanReviewType(Enum):
    NONE = "none"                    # Purely automated — GDPR Art.22 also applies
    RUBBER_STAMP = "rubber_stamp"    # Human sees output, no real review
    SUBSTANTIVE = "substantive"      # Human applies independent judgement
    OVERRIDE = "override"            # Human overrode AI recommendation


@dataclass
class FeatureContribution:
    """One data attribute's contribution to the AI output."""
    feature_name: str           # Human-readable field name (e.g., "Credit utilization")
    feature_value: str          # Value for this individual (e.g., "87%")
    contribution_direction: str # "increases_risk" / "decreases_risk" / "neutral"
    contribution_magnitude: str # "high" / "medium" / "low"
    counterfactual: Optional[str] = None  # "If X were Y, the score would change by Z"


@dataclass
class Art86Explanation:
    """
    Complete Art.86 explanation for a single high-risk AI decision.
    
    Must be generated at decision time and stored for retrospective retrieval.
    Enables deployer to fulfil Art.86(1) obligation upon affected person's request.
    """
    # Decision identification
    decision_id: str
    decision_timestamp: datetime
    annex_iii_category: str         # e.g., "Category 5(b) — Credit scoring"
    
    # AI system role
    ai_role_description: str        # Narrative: what did the AI do in this process?
    ai_output_value: str            # The AI's actual output for this individual
    ai_output_confidence: Optional[float] = None  # Confidence/probability if available
    
    # Feature contributions (SHAP/LIME-derived)
    feature_contributions: list[FeatureContribution] = field(default_factory=list)
    
    # Counterfactual
    primary_counterfactual: Optional[str] = None  # Most actionable "what if" scenario
    
    # Human oversight
    human_review_type: HumanReviewType = HumanReviewType.NONE
    human_review_description: Optional[str] = None  # What the reviewer evaluated
    
    # Final decision
    final_decision: str             # The actual outcome for the person
    decision_basis: str             # How AI output was used in final decision
    
    # Challenge mechanism
    challenge_contact: str          # Where to send Art.86 requests
    challenge_deadline_days: int = 30  # Days within which deployer must respond
    
    # Trigger assessment
    trigger_type: ExplanationTrigger = ExplanationTrigger.LEGAL_EFFECT
    
    def generate_person_facing_summary(self) -> str:
        """
        Generate the human-readable Art.86 explanation for the affected person.
        Clear, non-technical language required.
        """
        lines = [
            f"DECISION EXPLANATION — {self.annex_iii_category}",
            f"Decision date: {self.decision_timestamp.strftime('%Y-%m-%d')}",
            "",
            "HOW AI WAS USED:",
            self.ai_role_description,
            "",
            f"AI SYSTEM OUTPUT: {self.ai_output_value}",
        ]
        
        if self.ai_output_confidence is not None:
            lines.append(f"Confidence level: {self.ai_output_confidence:.0%}")
        
        if self.feature_contributions:
            lines.extend(["", "MAIN FACTORS THAT INFLUENCED THIS RESULT:"])
            for fc in sorted(
                self.feature_contributions,
                key=lambda x: {"high": 0, "medium": 1, "low": 2}.get(x.contribution_magnitude, 2)
            )[:5]:  # Top 5 factors
                lines.append(
                    f"  • {fc.feature_name} ({fc.feature_value}): "
                    f"{fc.contribution_direction.replace('_', ' ')} — "
                    f"{fc.contribution_magnitude} influence"
                )
                if fc.counterfactual:
                    lines.append(f"    → {fc.counterfactual}")
        
        if self.primary_counterfactual:
            lines.extend(["", f"WHAT COULD CHANGE THIS: {self.primary_counterfactual}"])
        
        lines.extend([
            "",
            "HUMAN REVIEW:",
            self.human_review_description or self.human_review_type.value,
            "",
            f"DECISION TAKEN: {self.final_decision}",
            f"BASIS: {self.decision_basis}",
            "",
            "HOW TO CHALLENGE THIS DECISION:",
            f"Contact: {self.challenge_contact}",
            f"Response deadline: {self.challenge_deadline_days} days",
        ])
        
        return "\n".join(lines)
    
    def to_audit_record(self) -> dict:
        """Serialise to audit log format for Art.86 request fulfillment."""
        return {
            "decision_id": self.decision_id,
            "timestamp": self.decision_timestamp.isoformat(),
            "annex_iii_category": self.annex_iii_category,
            "ai_role": self.ai_role_description,
            "ai_output": self.ai_output_value,
            "feature_count": len(self.feature_contributions),
            "human_review_type": self.human_review_type.value,
            "final_decision": self.final_decision,
            "trigger_type": self.trigger_type.value,
            "explanation_hash": hashlib.sha256(
                self.generate_person_facing_summary().encode()
            ).hexdigest()[:16],
        }


class Art86RequestHandler:
    """
    Handles incoming Art.86 explanation requests from affected persons.
    Deployer-side fulfilment logic.
    """
    
    def __init__(self, explanation_store: dict[str, Art86Explanation]):
        self.store = explanation_store  # decision_id → Art86Explanation
        self.request_log: list[dict] = []
    
    def receive_request(
        self,
        person_id: str,
        decision_id: str,
        received_at: Optional[datetime] = None,
    ) -> dict:
        """
        Process an Art.86 explanation request.
        Returns status and explanation if found.
        """
        received_at = received_at or datetime.now(timezone.utc)
        
        explanation = self.store.get(decision_id)
        
        request_record = {
            "request_id": f"art86_{decision_id}_{received_at.strftime('%Y%m%d%H%M%S')}",
            "person_id_hash": hashlib.sha256(person_id.encode()).hexdigest()[:12],
            "decision_id": decision_id,
            "received_at": received_at.isoformat(),
            "status": "fulfilled" if explanation else "not_found",
            "trigger_type": explanation.trigger_type.value if explanation else None,
        }
        
        self.request_log.append(request_record)
        
        if not explanation:
            return {
                "status": "not_found",
                "message": "No decision record found for this ID. "
                           "If you believe this is an error, contact us.",
                "request_id": request_record["request_id"],
            }
        
        if explanation.trigger_type == ExplanationTrigger.PREPARATORY_ONLY:
            return {
                "status": "below_threshold",
                "message": "The AI system's involvement in this case was preparatory "
                           "and did not produce legal or significant effects. "
                           "Art.86 does not apply. If you believe otherwise, contact us.",
                "request_id": request_record["request_id"],
            }
        
        return {
            "status": "fulfilled",
            "request_id": request_record["request_id"],
            "explanation": explanation.generate_person_facing_summary(),
            "generated_at": received_at.isoformat(),
        }
    
    def compliance_metrics(self) -> dict:
        """PMM-compatible metrics for Art.86 request volume and response quality."""
        total = len(self.request_log)
        fulfilled = sum(1 for r in self.request_log if r["status"] == "fulfilled")
        below_threshold = sum(1 for r in self.request_log if r["status"] == "below_threshold")
        not_found = sum(1 for r in self.request_log if r["status"] == "not_found")
        
        return {
            "total_requests": total,
            "fulfilled": fulfilled,
            "below_threshold": below_threshold,
            "not_found": not_found,
            "fulfilment_rate": fulfilled / total if total > 0 else None,
            "art86_activation_rate": (fulfilled + below_threshold) / total if total > 0 else None,
        }


# Usage example — Credit Scoring deployer
explanation = Art86Explanation(
    decision_id="loan_2026_04_12_001",
    decision_timestamp=datetime(2026, 4, 12, 10, 30, 0, tzinfo=timezone.utc),
    annex_iii_category="Category 5(b) — Credit scoring and creditworthiness assessment",
    ai_role_description=(
        "An AI credit scoring system assessed your application. "
        "The system analysed your financial history and produced a risk score. "
        "A human underwriter reviewed this score before making the final decision."
    ),
    ai_output_value="Credit risk score: 720/1000 (below minimum threshold of 750 for this product)",
    ai_output_confidence=0.84,
    feature_contributions=[
        FeatureContribution(
            feature_name="Current debt-to-income ratio",
            feature_value="52%",
            contribution_direction="increases_risk",
            contribution_magnitude="high",
            counterfactual="Reducing existing debt by €15,000 would improve your score by approximately 40 points.",
        ),
        FeatureContribution(
            feature_name="Length of credit history",
            feature_value="3 years",
            contribution_direction="increases_risk",
            contribution_magnitude="medium",
            counterfactual="A credit history of 5+ years would reduce risk scoring by approximately 15 points.",
        ),
        FeatureContribution(
            feature_name="Payment consistency (24 months)",
            feature_value="100% on-time",
            contribution_direction="decreases_risk",
            contribution_magnitude="high",
        ),
    ],
    primary_counterfactual=(
        "If your debt-to-income ratio were below 40%, your application would likely "
        "meet the minimum threshold for this product."
    ),
    human_review_type=HumanReviewType.SUBSTANTIVE,
    human_review_description=(
        "An underwriter reviewed the AI risk score alongside your income documentation "
        "and applied the bank's credit policy. The underwriter confirmed the AI assessment."
    ),
    final_decision="Mortgage application declined",
    decision_basis=(
        "AI risk score (720) below product minimum (750). "
        "Underwriter confirmed AI assessment. Credit policy applied consistently."
    ),
    challenge_contact="creditdecisions@example-bank.eu",
    challenge_deadline_days=30,
    trigger_type=ExplanationTrigger.LEGAL_EFFECT,
)

print(explanation.generate_person_facing_summary())

CLOUD Act × Art.86: The Explanation Log Risk

Article 86 requires deployers to maintain decision records with explanation capability — that is, they must store, for every high-risk AI decision, enough data to generate an Art.86-compliant explanation on request. These records typically include the individual's input features, the AI output and score, feature attributions, human reviewer notes, and request and challenge correspondence.

CLOUD Act dual-compellability table for Art.86 records:

| Data category | CLOUD Act compellability | EU regulatory compellability | Risk with US infrastructure |
| --- | --- | --- | --- |
| Individual input features (personal data) | HIGH (US person targeting) | HIGH (GDPR + AI Act Art.26) | Dual legal order exposure |
| AI model output / score | HIGH | HIGH (Art.86 evidence) | Critical: MSA subpoena + US DOJ |
| Feature attributions (SHAP values) | MEDIUM | HIGH (Art.86 explanation basis) | Regulatory evidence at risk |
| Human reviewer notes | MEDIUM | HIGH (Art.86 oversight record) | Confidential process exposed |
| Explanation request log | LOW | MEDIUM (Art.86 audit trail) | Volume patterns disclosed |
| Challenge/dispute correspondence | MEDIUM | HIGH (rights enforcement trail) | Legal privilege risk |

EU-native infrastructure = single legal order. Explanation records stored with a European cloud provider operate under GDPR and the AI Act exclusively. There is no concurrent CLOUD Act compellability mechanism for data stored by EU-controlled entities on EU territory.

For high-risk AI deployers in credit, healthcare, or public services — where explanation records contain sensitive personal data and form the basis for regulatory audit — this is not a theoretical risk. US federal agencies can and do issue CLOUD Act orders for financial and healthcare records held by US cloud providers.
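One way to operationalise this is a residency guard in front of the explanation store. This is a hypothetical sketch: the provider classes and policy mapping are illustrative assumptions, not a statement of any vendor's legal posture.

```python
# Provider classes a deployer might define for its own storage policy.
EU_SINGLE_LEGAL_ORDER = {"eu_native"}  # EU-controlled entity, EU territory


class ResidencyPolicyError(RuntimeError):
    """Raised when Art.86 records would land on dual-exposure infrastructure."""


def persist_art86_record(record: dict, provider_class: str) -> str:
    """Refuse to persist explanation records outside a single EU legal order.

    Returns a storage key on success (actual persistence is stubbed out).
    """
    if provider_class not in EU_SINGLE_LEGAL_ORDER:
        raise ResidencyPolicyError(
            f"Art.86 records must not be stored on {provider_class!r}: "
            "concurrent CLOUD Act and EU compellability exposure"
        )
    return f"art86/{record['decision_id']}"
```

Enforcing the policy at write time, rather than auditing after the fact, means non-compliant storage paths fail loudly in testing.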

Art.86 × Art.99: Fine Exposure

Failure to comply with Art.86 creates two distinct fine pathways:

Pathway 1 — Art.99 Tier 2 (via Art.26 deployer obligations): Article 26 requires deployers to comply with their obligations under the Act, including Art.86. Non-compliance with the obligations listed in Art.99(4), which covers deployers as well as providers, is punishable by administrative fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher, applied through the national implementing provisions. The Act's enforcement provisions allow MSAs to act against deployers directly.

Pathway 2 — GDPR (where Art.22 overlap exists): For purely automated decisions where GDPR Art.22 also applies, failure to provide the Art.22(3) safeguards exposes the organisation to GDPR fines of up to €20M or 4% of global annual turnover (Art.83(5) GDPR). The GDPR and AI Act are legally independent regimes — the same failure can trigger fines under both.

Combined exposure for deployers using high-risk AI for purely automated significant decisions: up to €15M or 3% of worldwide turnover under the AI Act, plus up to €20M or 4% under the GDPR, imposed by different authorities under independent regimes.

Deployers cannot escape by adding a nominal human review that is not substantive. The Article 29 Working Party guidelines on automated decision-making (endorsed by the EDPB) make clear that human involvement must be meaningful — a rubber-stamp review by someone who cannot meaningfully evaluate the AI output does not escape the GDPR Art.22 regime. And Art.86 applies regardless.
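The ceiling arithmetic can be sketched directly. The figures are the statutory maxima under AI Act Art.99(4) (€15M or 3%) and GDPR Art.83(5) (€20M or 4%); actual fines depend on the facts and the authorities involved.

```python
def max_fine_exposure(global_annual_turnover_eur: float) -> dict:
    """Worst-case combined ceiling for a deployer whose purely automated
    high-risk decision breaches both regimes. Each regime takes the
    higher of its fixed cap and its turnover percentage.
    """
    ai_act = max(15_000_000, 0.03 * global_annual_turnover_eur)
    gdpr = max(20_000_000, 0.04 * global_annual_turnover_eur)
    return {"ai_act_eur": ai_act, "gdpr_eur": gdpr, "combined_eur": ai_act + gdpr}
```

For a bank with €2B turnover the percentages dominate (€60M + €80M); for a small deployer the fixed caps of €15M and €20M apply.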

Art.86 in the AI Act Enforcement Pipeline

Article 86 sits at the intersection of the enforcement chapters. The full pipeline for an Art.86 violation:

Affected person requests explanation
        ↓
Deployer fails to provide (or provides inadequate explanation)
        ↓
Person complains to national DPA (GDPR) or MSA (AI Act)
        ↓
MSA opens Art.79(1) investigation → Art.79(2) corrective measures
        ↓
If violation confirmed: Art.99 Tier 2 fine (deployer) + Tier 1 fine (provider, if Art.86(2) breach)
        ↓
MSA reports to Commission under Art.84 → feeds Art.85 review data

The Art.86 audit trail — explanation records, request logs, challenge correspondence — becomes evidence in MSA investigations. If that audit trail is incomplete or has been modified, Art.99(5) (non-cooperation) can add another fine layer.
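One way to make modification of the audit trail detectable is a hash chain over the request log. A minimal sketch with illustrative field names; production systems would anchor the chain in write-once storage.

```python
import hashlib
import json


def append_chained(log: list, entry: dict) -> dict:
    """Append a request-log entry whose hash commits to all prior entries,
    so any later modification of earlier entries breaks the chain."""
    prev_hash = log[-1]["chain_hash"] if log else "genesis"
    payload = json.dumps(entry, sort_keys=True)
    chained = dict(entry)
    chained["prev_hash"] = prev_hash
    chained["chain_hash"] = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append(chained)
    return chained


def verify_chain(log: list) -> bool:
    """Recompute the chain from the start; False if any entry was altered."""
    prev = "genesis"
    for e in log:
        entry = {k: v for k, v in e.items() if k not in ("prev_hash", "chain_hash")}
        payload = json.dumps(entry, sort_keys=True)
        if e["prev_hash"] != prev:
            return False
        if e["chain_hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = e["chain_hash"]
    return True
```

A complete, verifiable chain also protects the deployer: it demonstrates to the MSA that the records produced are the records written at the time.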

Developer Checklist: Art.86 Compliance

Provider Obligations (Art.86(2) — design-time):

  • Per-decision explanation generation: feature attributions, confidence, counterfactuals
  • Explanation output exposed to deployers in a structured, API-accessible format
  • Art.11 technical documentation covering explainability methodology and accuracy-explainability trade-offs
  • Art.72 post-market monitoring of whether explanations are understood by affected persons

Deployer Obligations (Art.86(1) — operational):

  • A documented request-handling procedure with a designated contact and defined response window
  • Per-decision logging of AI outputs sufficient to reconstruct explanations for past decisions
  • Person-facing explanations in clear, non-technical language
  • Substantive human oversight per Art.26(2), recorded so the explanation can describe it

Infrastructure (CLOUD Act risk mitigation):

  • Explanation records, request logs, and challenge correspondence held under a single EU legal order
  • No US-controlled cloud provider in the storage path for Art.86 records

Intersection with GDPR Art.22:

  • Decisions mapped as solely automated (Art.22 + Art.86) or substantively human-reviewed (Art.86 only)
  • Human review verified as genuine, not rubber-stamp

Monitoring and Audit:

  • Request volume, fulfilment rate, and below-threshold rate tracked as compliance metrics
  • Art.86 audit trail kept complete and tamper-evident for MSA investigations


EU-native infrastructure keeps your Art.86 explanation records in a single legal order — no CLOUD Act compellability, no dual-regime exposure. Deploy on sota.io

