2026-04-25 · 15 min read · sota.io team

EU AI Act Art.85: Right of Recourse for Persons Subject to Decisions Based on High-Risk AI Systems — Developer Compliance Guide (2026)

EU AI Act Article 85 is the Regulation's individual rights counterweight to the deployment obligations imposed elsewhere. Where Art.14 requires human oversight, Art.13 mandates transparency, and Art.26 governs deployer responsibilities, Art.85 gives the natural person on the receiving end of an AI-influenced decision a direct legal instrument: the right to obtain an explanation and seek recourse when that decision affects them significantly.

For deployers of Annex III high-risk AI systems, Art.85 is not a passive compliance checkbox. It requires actively implementing intake mechanisms, explanation pipelines, and human review workflows — and having them ready before the system makes its first consequential decision. A deployer who cannot respond to an Art.85 request is non-compliant regardless of how well-documented the underlying AI system is.

Art.85 became applicable as part of the EU AI Act's phased entry into force, with full application on 2 August 2026. Deployers must have compliant recourse mechanisms in place before this date.


Art.85 in the Final Provisions Architecture

Art.85 occupies a significant position in the EU AI Act's closing chapters — sitting immediately after Art.84 (Commission evaluation cycle) and alongside Art.86 (transparency obligations) and the final transitional provisions. This placement signals Art.85's function: it is not a technical requirement for AI system design, but an enforcement-facing individual right that applies at the moment a decision using an AI system reaches a natural person.

The architectural relationship is layered. Art.13 (transparency and information provision) ensures that persons interacting with a high-risk AI system receive information about its capabilities and limitations. Art.14 (human oversight) ensures that qualified humans can review and override AI outputs during operation. Art.85 adds the post-decision layer: once a decision has been made, the affected person can demand to understand the AI's role in it and can challenge the outcome through formal channels.

| Layer | Provision | When it applies |
|---|---|---|
| Pre-deployment | Art.9 RMS, Art.10 data governance | Design phase |
| At use | Art.13 transparency, Art.14 oversight | During operation |
| Post-decision | Art.85 right of recourse | After decision taken |
| Systemic | Art.74–80 market surveillance | Ongoing enforcement |

Art.85(1): The Core Right

Art.85(1) establishes the primary entitlement: any natural person who is subject to a decision taken by a deployer that is significantly based on the output of a high-risk AI system listed in Annex III and that produces legal effects or similarly significant effects on that person has the right to obtain from the deployer:

(a) An explanation of the role of the AI system in the decision-making procedure — specifically, what function the AI system performed, how its output was used, and whether human review was applied.

(b) The main parameters of the AI system's output that influenced the decision — not a technical model dump, but a meaningful account of which factors the AI system weighted and how these translated into the specific decision.

(c) The main elements of the decision taken — the outcome itself and the principal reasons for it, framed in terms accessible to the person receiving the explanation.
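These three components can be modelled as a simple structure. The sketch below is illustrative only: the `Art85Explanation` class and its field names are this article's own shorthand, not terms from the Regulation.

```python
from dataclasses import dataclass

@dataclass
class Art85Explanation:
    """The three components an Art.85(1) explanation must cover (field names are illustrative)."""
    ai_role: str            # (a) what the AI system did and how its output was used
    main_parameters: str    # (b) which factors the output weighted, in accessible terms
    decision_elements: str  # (c) the outcome and its principal reasons

    def render(self) -> str:
        # One accessible paragraph per component, in the order Art.85(1) lists them.
        return (
            f"Role of the AI system: {self.ai_role}\n"
            f"Main parameters: {self.main_parameters}\n"
            f"Main elements of the decision: {self.decision_elements}"
        )

example = Art85Explanation(
    ai_role="Scored the application; a human analyst reviewed the score before deciding.",
    main_parameters="Repayment history and current credit utilisation carried the most weight.",
    decision_elements="Application declined because the score fell below the approval threshold.",
)
print(example.render())
```

Keeping the three components as separate fields makes it easy to verify that none of them is missing before an explanation is sent.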

The "significantly based on" threshold

Art.85(1) deliberately departs from the GDPR Art.22 framework, which applies only to solely automated decisions. Art.85 applies where the AI system output significantly influences the decision — covering human-in-the-loop deployments where a human reviews but typically follows the AI's recommendation. This is the practically important case: a recruitment algorithm that scores candidates, with a human interviewer who rarely overrides the score, falls within Art.85 even though a human technically makes the final call.

Deployers who structure processes to maintain nominal human involvement primarily to avoid Art.85 are exposed to supervisory challenge. The substance of how the AI output was used — not the formal structure of the process — determines whether Art.85 applies.

Unlike GDPR Art.22's "solely automated" trigger, the "significantly based on" threshold captures a broader decision class:

| Decision category | Art.85 application |
|---|---|
| Credit refusal or adverse terms | Yes — financial consequences |
| Job rejection after AI screening | Yes — employment consequences |
| Benefits denial or reduction | Yes — access to essential services |
| University admission denial | Yes — access to education |
| Insurance premium increase | Yes — financial consequences |
| AI-assisted diagnosis communicated to patient | Context-dependent |
| Content recommendation on a platform | No — not a significant effect |
| Internal risk scoring not communicated externally | No — decision not yet taken |
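A deployer can triage decisions against this scope test programmatically. The helper below is a hypothetical sketch: the `DecisionContext` fields and the `art85_in_scope` function are this article's own, not statutory criteria, but they track the conditions discussed above.

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    """Facts needed to triage a decision for Art.85 scope (illustrative only)."""
    annex_iii_system: bool         # is the AI system an Annex III high-risk system?
    significantly_based: bool      # did the AI output significantly influence the decision?
    legal_or_similar_effect: bool  # legal effects or similarly significant effects?
    decision_communicated: bool    # has a decision actually been taken and communicated?

def art85_in_scope(ctx: DecisionContext) -> bool:
    """All four conditions must hold; nominal human involvement alone does not take a decision out of scope."""
    return (
        ctx.annex_iii_system
        and ctx.significantly_based
        and ctx.legal_or_similar_effect
        and ctx.decision_communicated
    )

# A credit refusal after AI scoring with a rubber-stamp human review: in scope.
credit_refusal = DecisionContext(True, True, True, True)
# An internal risk score never acted on: no decision yet, so out of scope.
internal_score = DecisionContext(True, True, True, False)

print(art85_in_scope(credit_refusal))  # True
print(art85_in_scope(internal_score))  # False
```

Note that there is deliberately no "human in the loop" field: per the substance-over-form point above, human involvement does not appear as an exclusion criterion.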

Art.85(2): Deployer Obligations

Art.85(2) places the implementation burden squarely on the deployer — the entity that puts the high-risk AI system into use and makes or acts on the decisions. The deployer must:

Implement appropriate mechanisms to receive Art.85 requests and respond to them. "Appropriate" means the mechanism must be readily accessible to affected persons — buried contact forms or corporate complaint procedures that are not surfaced at the point of decision-making are likely insufficient.

Inform natural persons about their Art.85 right at the time the decision is communicated. This is a proactive duty: the deployer cannot wait for the person to discover their rights independently. In practice, decision letters, automated notifications, and system-generated responses must include a standardised Art.85 notice.

Maintain internal records sufficient to fulfil explanation requests retrospectively. Because Art.85 requests may arrive weeks or months after the original decision, the deployer must preserve AI system outputs and the relevant parameters in a form that enables meaningful retrospective explanation.

The time limit for responding to Art.85 requests is not expressly specified in the Article itself — Member State procedural law or the complementary GDPR framework (which provides a one-month response time for data subject access requests) is expected to govern. Deployers should plan for a 30-day SLA as a safe default.


Art.85(3)–(4): Relationship to Other Union Law

Art.85(3) establishes a without-prejudice clause: the rights in Art.85 do not replace or reduce rights available under other Union law. Where GDPR Art.22 applies (solely automated decision with legal effects), both the GDPR right (to human intervention, to express a view, to contest the decision) and the Art.85 right (to an explanation of the AI's role and parameters) apply cumulatively.

Art.85(4) addresses the relationship directly: where Union law already provides the relevant right, Art.85 does not separately mandate it. Deployers who operate under sectoral frameworks (e.g., the Consumer Credit Directive's adverse-action notice requirements, or analogous credit-decision rules in Member State law) should therefore audit whether those sector-specific rights cover the Art.85 ground — if they do, no separate Art.85 mechanism is required for that decision category.

However, given the narrow scope of most sector-specific rights, most deployers of Annex III AI systems will need a standalone Art.85 mechanism in addition to any existing complaint channels.

Art.85 vs. GDPR Art.22: A practitioner's comparison

| Dimension | GDPR Art.22 | EU AI Act Art.85 |
|---|---|---|
| Trigger | Solely automated decision | AI output significantly influences decision |
| Human involvement | None — automation only | Human review permitted and often present |
| AI system scope | Any automated processing | High-risk AI (Annex III only) |
| Explanation depth | "Meaningful information about the logic involved" | Role + main parameters + main elements of decision |
| Right to contest | Explicit right to contest | Implicit via recourse mechanism |
| Right to human intervention | Yes — explicit | Supported through the Art.14 overlap |
| Enforcement body | GDPR supervisory authority | NCA + (where GDPR applies) supervisory authority |
| Limitations | Art.22(2) contractual/legal necessity | Member State national security/defence |

Art.85 × Art.14 Human Oversight Connection

Art.85 does not exist in isolation. The Art.14 human oversight requirement (for high-risk AI systems) creates the infrastructure that makes Art.85 practically enforceable:

When an Art.85 explanation request arrives, the deployer can draw on the Art.14 human oversight records to explain both the AI system's output and the human review that did or did not occur. A deployer who cannot explain the human oversight element is likely failing Art.14 obligations as well as Art.85.

The chain in practice:

AI system output → Art.14 human oversight review → Decision taken
                                    ↓
                         Art.85 explanation request
                                    ↓
                         Explanation: AI role + parameters + human oversight applied

Art.85 × Art.13 Transparency

Art.13 (transparency and provision of information to deployers) requires providers to supply deployers with sufficient documentation to understand the AI system — including intended purposes, performance metrics, limitations, and risks. Art.85 requires deployers to pass a simplified but meaningful version of this understanding on to the affected natural person.

The Art.13 documentation is the upstream source for Art.85 explanations:

| Art.13 documentation | Art.85 explanation use |
|---|---|
| Performance characteristics and accuracy | "The AI assessed your application as below the threshold for approval based on repayment history data" |
| Known limitations and foreseeable misuse | "The AI system does not assess factors X and Y" |
| Risk categories and intended purpose | "The AI system was designed for credit risk assessment in the context of [deployer use case]" |
| Human oversight measures | "A qualified credit analyst reviewed the AI output before the decision was taken" |

Deployers who have not reviewed their Art.13 documentation from the provider are not in a position to satisfy Art.85 explanation requests meaningfully.
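One way to operationalise this mapping is to translate provider-facing documentation fields into person-facing sentences. The dictionary keys below are hypothetical; real Art.13 documentation structure varies by provider, so treat this as a sketch of the pattern rather than a schema.

```python
# Hypothetical Art.13 documentation fields (field names are this article's own).
art13_docs = {
    "intended_purpose": "credit risk assessment for consumer lending",
    "known_limitations": "does not assess rental payment history or informal income",
    "human_oversight_measures": "analyst review of all sub-threshold scores",
}

def explanation_fragments(docs: dict) -> list[str]:
    """Translate provider-facing Art.13 documentation into person-facing Art.85 language."""
    return [
        f"The AI system was designed for {docs['intended_purpose']}.",
        f"The AI system {docs['known_limitations']}.",
        f"Human oversight applied: {docs['human_oversight_measures']}.",
    ]

for line in explanation_fragments(art13_docs):
    print(line)
```

The translation step matters: handing the affected person the raw Art.13 documentation would fail the "accessible terms" requirement, while omitting it entirely would fail the "meaningful" one.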


Annex III Scope: Which Deployments Are Subject to Art.85

Art.85 applies to decisions made using Annex III high-risk AI systems — not all AI systems. Deployers should audit their deployments against the eight Annex III categories:

| Annex III category | Art.85 high-risk decisions |
|---|---|
| 1. Biometric identification and categorisation | Identity verification outcomes affecting access |
| 2. Critical infrastructure management | Resource allocation decisions affecting specific persons |
| 3. Education and vocational training | Admission denials, assessment outcomes |
| 4. Employment and workers management | Recruitment rejections, performance assessments, promotion decisions |
| 5. Access to essential private/public services | Credit refusals, insurance pricing, benefits eligibility |
| 6. Law enforcement | Profiling outputs used in investigations, risk scores shared with persons |
| 7. Migration, asylum, border control | Visa decisions, asylum assessment outputs |
| 8. Administration of justice | Risk assessments used in sentencing, bail, supervision decisions |

Categories 4 (employment), 5 (essential services), and 8 (justice) generate the highest volume of Art.85-eligible decisions in commercial deployments. Fintech, HR technology, and public administration AI products targeting the EU must implement Art.85 compliance before 2 August 2026.


CLOUD Act Implications

Art.85 explanation obligations require the deployer to access and present AI decision records retrospectively. Where these records are stored on cloud infrastructure subject to the US CLOUD Act (18 U.S.C. § 2713), they are potentially accessible to US law enforcement under a compelled disclosure order — which could be issued without the knowledge of the affected EU person.

For deployers handling recourse requests involving sensitive personal data (employment decisions, credit assessments, benefits eligibility), the intersection is significant:

| CLOUD Act scenario | Art.85 compliance impact |
|---|---|
| Decision records on US infrastructure | US government could access records before the deployer responds to an Art.85 request |
| Model parameters on US infrastructure | "Main parameters" explanation depends on records potentially subject to the CLOUD Act |
| Human review logs on US infrastructure | Art.14 oversight documentation exposed to parallel US jurisdiction |
| EU-infrastructure deployment | CLOUD Act exposure substantially reduced; Art.85 record access under EU control |

The practical recommendation is straightforward: store Art.85-relevant records on EU infrastructure. Decision logs, AI output records, human review notes, and response SLA tracking should all be hosted in the EU, ideally with a provider that is not itself subject to US jurisdiction, since the CLOUD Act attaches to providers rather than to territories. This is both an Art.85 compliance measure and an alignment with the broader EU data-sovereignty posture that Art.85's recourse framework presupposes.
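A minimal residency guard can enforce this at write time. The region identifiers below are illustrative examples only; substitute your own provider's region names. As noted above, region alone does not settle CLOUD Act exposure if the provider itself is subject to US jurisdiction.

```python
# Illustrative EU region allowlist; actual identifiers depend on your cloud provider.
EU_REGIONS = {"eu-central-1", "eu-west-1", "europe-west3", "westeurope"}

def assert_eu_residency(storage_region: str) -> None:
    """Refuse to persist Art.85-relevant records outside EU-hosted infrastructure."""
    if storage_region not in EU_REGIONS:
        raise ValueError(
            f"Region '{storage_region}' is not in the EU allowlist; "
            "Art.85 decision records must stay on EU infrastructure."
        )

assert_eu_residency("eu-central-1")  # passes silently
try:
    assert_eu_residency("us-east-1")
except ValueError as exc:
    print(exc)
```

Calling this guard in the decision-logging path (rather than auditing after the fact) ensures no Art.85-relevant record is ever written outside the allowlist.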


Python Implementation: Art85RecourseManager

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional, List, Dict, Any

class RecourseStatus(Enum):
    PENDING = "pending"
    UNDER_REVIEW = "under_review"
    EXPLANATION_PROVIDED = "explanation_provided"
    HUMAN_REVIEW_TRIGGERED = "human_review_triggered"
    RESOLVED = "resolved"
    ESCALATED_TO_NCA = "escalated_to_nca"

class DecisionCategory(Enum):
    EMPLOYMENT = "employment"          # Annex III cat. 4
    ESSENTIAL_SERVICES = "essential_services"  # Annex III cat. 5
    EDUCATION = "education"            # Annex III cat. 3
    LAW_ENFORCEMENT = "law_enforcement"  # Annex III cat. 6
    JUSTICE = "justice"                # Annex III cat. 8
    BIOMETRIC = "biometric"            # Annex III cat. 1
    MIGRATION = "migration"            # Annex III cat. 7

@dataclass
class AIDecisionRecord:
    decision_id: str
    person_pseudonym: str          # GDPR-compliant; real identity in separate store
    ai_system_id: str
    ai_system_version: str
    ai_system_provider: str
    decision_date: datetime
    decision_category: DecisionCategory
    ai_output_summary: Dict[str, Any]
    main_parameters: List[str]
    human_reviewer_applied: bool
    human_reviewer_id: Optional[str]
    decision_outcome: str
    art_13_documentation_version: str

    def generate_art85_explanation(self) -> str:
        oversight = (
            f"A qualified human reviewer ({self.human_reviewer_id}) reviewed the AI output before the final decision."
            if self.human_reviewer_applied
            else "The AI output was used directly in the decision-making process."
        )
        params = "; ".join(self.main_parameters[:5])
        return (
            f"The AI system '{self.ai_system_id}' (version {self.ai_system_version}, "
            f"provided by {self.ai_system_provider}) contributed to this decision by "
            f"analysing the relevant inputs and generating an output based on the following "
            f"main parameters: {params}. {oversight} "
            f"The main elements of the decision were: {self.decision_outcome}."
        )

@dataclass
class Art85RecourseRequest:
    request_id: str
    decision_id: str
    person_contact: str
    request_date: datetime
    request_type: str          # "explanation", "challenge", "human_review", "complaint"
    status: RecourseStatus = RecourseStatus.PENDING
    explanation_text: Optional[str] = None
    response_date: Optional[datetime] = None
    sla_deadline: Optional[datetime] = None
    nca_referral_date: Optional[datetime] = None
    internal_notes: List[str] = field(default_factory=list)

    def days_remaining(self) -> Optional[int]:
        if self.sla_deadline:
            delta = self.sla_deadline - datetime.now()
            return delta.days
        return None

    def is_overdue(self) -> bool:
        if self.sla_deadline and self.status not in [
            RecourseStatus.RESOLVED, RecourseStatus.ESCALATED_TO_NCA
        ]:
            return datetime.now() > self.sla_deadline
        return False

class Art85RecourseManager:
    """Manages the complete Art.85 right-of-recourse lifecycle for a deployer."""

    def __init__(self, deployer_id: str, sla_days: int = 30):
        self.deployer_id = deployer_id
        self.sla_days = sla_days
        self.decisions: Dict[str, AIDecisionRecord] = {}
        self.requests: Dict[str, Art85RecourseRequest] = {}

    def register_decision(self, decision: AIDecisionRecord) -> None:
        self.decisions[decision.decision_id] = decision

    def notify_person_of_rights(self, decision_id: str) -> str:
        """Returns the Art.85 rights notice text to include in decision communications."""
        return (
            f"This decision was made with the significant assistance of an AI system. "
            f"Under Article 85 of Regulation (EU) 2024/1689 (EU AI Act), you have the right "
            f"to request an explanation of the AI system's role in this decision and the main "
            f"parameters that influenced it. To exercise this right, please contact us at "
            f"[Art.85 contact point] within [applicable period]. "
            f"Reference decision ID: {decision_id}."
        )

    def receive_request(
        self,
        decision_id: str,
        person_contact: str,
        request_type: str = "explanation",
    ) -> Art85RecourseRequest:
        request = Art85RecourseRequest(
            request_id=f"ART85-{self.deployer_id}-{len(self.requests)+1:05d}",
            decision_id=decision_id,
            person_contact=person_contact,
            request_date=datetime.now(),
            request_type=request_type,
            sla_deadline=datetime.now() + timedelta(days=self.sla_days),
        )
        self.requests[request.request_id] = request
        return request

    def provide_explanation(self, request_id: str) -> str:
        request = self.requests[request_id]
        decision = self.decisions.get(request.decision_id)
        if not decision:
            return "Decision record not found. Please contact [contact point] with additional reference details."
        explanation = decision.generate_art85_explanation()
        request.explanation_text = explanation
        request.response_date = datetime.now()
        request.status = RecourseStatus.EXPLANATION_PROVIDED
        return explanation

    def trigger_human_review(self, request_id: str, reviewer_id: str) -> None:
        request = self.requests[request_id]
        request.status = RecourseStatus.HUMAN_REVIEW_TRIGGERED
        request.internal_notes.append(
            f"{datetime.now().isoformat()}: Human review triggered — reviewer {reviewer_id}"
        )

    def escalate_to_nca(self, request_id: str, nca_identifier: str) -> None:
        request = self.requests[request_id]
        request.status = RecourseStatus.ESCALATED_TO_NCA
        request.nca_referral_date = datetime.now()
        request.internal_notes.append(
            f"{datetime.now().isoformat()}: Escalated to NCA {nca_identifier}"
        )

    def compliance_summary(self) -> Dict[str, Any]:
        overdue = [r for r in self.requests.values() if r.is_overdue()]
        pending = [r for r in self.requests.values() if r.status == RecourseStatus.PENDING]
        resolved = [r for r in self.requests.values() if r.status == RecourseStatus.RESOLVED]
        nca_escalations = [
            r for r in self.requests.values() if r.status == RecourseStatus.ESCALATED_TO_NCA
        ]
        # A request is SLA-compliant once a response went out on or before its deadline.
        # Counting only RESOLVED requests would understate compliance for requests that
        # received a timely explanation but remain open (e.g. EXPLANATION_PROVIDED).
        responded_in_sla = [
            r for r in self.requests.values()
            if r.response_date and r.sla_deadline and r.response_date <= r.sla_deadline
        ]
        return {
            "deployer_id": self.deployer_id,
            "total_decisions_registered": len(self.decisions),
            "total_art85_requests": len(self.requests),
            "pending": len(pending),
            "overdue": len(overdue),
            "resolved": len(resolved),
            "nca_escalations": len(nca_escalations),
            "sla_compliance_rate": (
                len(responded_in_sla) / len(self.requests) * 100 if self.requests else 100.0
            ),
        }


def build_sample_manager() -> Art85RecourseManager:
    manager = Art85RecourseManager(deployer_id="FinTech-EU-001", sla_days=30)

    # Register a credit decision using an Annex III AI system
    manager.register_decision(AIDecisionRecord(
        decision_id="DEC-2026-08-0042",
        person_pseudonym="PSEUD-4A7F2C",
        ai_system_id="CreditScoreEngine-v3",
        ai_system_version="3.2.1",
        ai_system_provider="AI Provider GmbH",
        decision_date=datetime(2026, 8, 15, 10, 30),
        decision_category=DecisionCategory.ESSENTIAL_SERVICES,
        ai_output_summary={"score": 612, "threshold": 650, "outcome": "below_threshold"},
        main_parameters=[
            "Repayment history (weighted 35%)",
            "Current credit utilisation ratio (weighted 30%)",
            "Length of credit history (weighted 15%)",
            "Recent credit enquiries (weighted 10%)",
            "Credit mix (weighted 10%)",
        ],
        human_reviewer_applied=True,
        human_reviewer_id="REV-ANALYST-07",
        decision_outcome="Loan application declined — AI score below approval threshold; human review confirmed",
        art_13_documentation_version="v3.2.1-2026-07",
    ))

    # Receive and respond to an Art.85 request
    request = manager.receive_request(
        decision_id="DEC-2026-08-0042",
        person_contact="applicant@example.com",
        request_type="explanation",
    )
    explanation = manager.provide_explanation(request.request_id)
    print(f"Art.85 explanation generated for {request.request_id}:")
    print(explanation)
    print(f"\nCompliance summary: {manager.compliance_summary()}")
    return manager


if __name__ == "__main__":
    manager = build_sample_manager()
    summary = manager.compliance_summary()
    print(f"\nDeployer: {summary['deployer_id']}")
    print(f"Total decisions registered: {summary['total_decisions_registered']}")
    print(f"Art.85 requests received: {summary['total_art85_requests']}")
    print(f"SLA compliance rate: {summary['sla_compliance_rate']:.1f}%")
    print(f"NCA escalations: {summary['nca_escalations']}")

Member State Limitations

Art.85 permits Member States to restrict or exclude the right of recourse in specific contexts where significant public interest overrides individual transparency rights:

| Context | Permitted limitation scope |
|---|---|
| National security and defence | AI-assisted decisions in security services and the armed forces |
| Criminal proceedings | AI-assisted decisions in active criminal investigations |
| Public security | Prevention, investigation, and detection of criminal offences |
| Important public interest | Economic policy, public health in specified exceptional circumstances |

These limitations mirror the structure of GDPR Art.23(1) restrictions. A deployer relying on a Member State limitation must be able to document the specific statutory basis for the limitation — "national security" is not a blanket exemption for all government AI deployments.
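A simple gate can enforce the documentation requirement: no limitation is applied unless a specific statutory basis is recorded. The context labels and the `may_invoke_limitation` function below are this article's own sketch, not terms from the Regulation.

```python
from typing import Optional

def may_invoke_limitation(context: str, statutory_basis: Optional[str]) -> bool:
    """A Member State limitation needs a specific, citable statutory basis, not just a label."""
    permitted_contexts = {
        "national_security", "criminal_proceedings",
        "public_security", "public_interest",
    }
    # Both conditions must hold: a permitted context AND a non-empty cited basis.
    return context in permitted_contexts and bool(statutory_basis and statutory_basis.strip())

print(may_invoke_limitation("national_security", "Act XY §12(3)"))  # True
print(may_invoke_limitation("national_security", None))             # False: no cited basis
print(may_invoke_limitation("marketing", "Act XY §12(3)"))          # False: not a permitted context
```

Requiring the statutory citation at the point of refusal makes the supervisory documentation burden automatic rather than an after-the-fact reconstruction.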


Art.85 in the Enforcement Arc

From a supervisory perspective, Art.85 requests are intelligence. An NCA receiving Art.85 escalations from multiple persons regarding the same deployer or AI system has grounds to investigate potential systemic non-compliance with Art.14 (human oversight), Art.13 (transparency), or Art.26 (deployer obligations). Each individual Art.85 failure is therefore potential evidence in a wider enforcement case against the deployer.

The penalty regime under Art.99 applies to violations of deployer obligations, of which Art.85 compliance is a component. Deployers should treat Art.85 compliance infrastructure as a foundational risk management investment, not an incremental obligation.

Art.85 request received
    ↓
Deployer provides explanation within SLA?
    ├─ Yes → Art.85 satisfied
    └─ No → Person escalates to NCA
              ↓
         NCA investigates deployer
              ↓
          Art.99 penalty exposure (Art.99(4): up to EUR 15M or 3% of total worldwide annual turnover)
              ↓
         NCA may require systemic fix affecting all users of the AI system

10-Item Developer Checklist: Art.85 Compliance

  1. Annex III inventory — Identify all high-risk AI systems in your deployment that make or significantly contribute to decisions with legal or similarly significant effects on natural persons.

  2. Decision record retention — Implement a decision logging system that captures AI system outputs, main parameters, human oversight actions, and final decision outcomes. Retain records for at minimum the period during which Art.85 requests are feasible under applicable national procedural law.

  3. Art.85 notice in decision communications — Update all decision letters, automated notifications, and system-generated outputs to include a standardised Art.85 rights notice with a designated contact point.

  4. Recourse intake mechanism — Create a dedicated Art.85 intake channel (form, email, API) distinct from general complaints procedures. Document it in your DPA and compliance programme.

  5. 30-day SLA — Establish an internal SLA for responding to Art.85 requests. Default to 30 days; check whether applicable Member State procedural rules or GDPR Art.12 timelines set a shorter period.

  6. Explanation quality testing — Before go-live, test explanation outputs against the Art.85(1) requirements: role of the AI system, main parameters, main elements of decision. Have a non-technical person review whether the explanation is "meaningful."

  7. Human review escalation path — Define the escalation procedure when a person challenges the AI-assisted decision and requests human review. Ensure the human reviewer has access to the full Art.13 documentation for the AI system.

  8. GDPR Art.22 audit — Map each Art.85-covered decision against GDPR Art.22: if the process is solely automated, both GDPR Art.22 and Art.85 apply. Ensure your processes satisfy the more demanding of the two where they overlap.

  9. EU infrastructure for decision records — Store Art.85-relevant records (decision logs, AI output records, human review notes, recourse request data) on EU-hosted infrastructure to reduce CLOUD Act compelled-disclosure exposure during recourse proceedings (a provider subject to US jurisdiction may remain compellable regardless of data location).

  10. NCA escalation protocol — Establish a documented protocol for cases where Art.85 requests are escalated to the NCA. Ensure legal counsel is engaged at the escalation stage and that internal records are preserved in a form suitable for NCA review.
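Checklist item 6 can be partially automated with a keyword-based coverage check. This is a rough heuristic only, and the marker lists below are illustrative; it flags obviously incomplete explanations but does not replace the human review of whether an explanation is meaningful.

```python
# Illustrative phrase markers for each Art.85(1) component.
REQUIRED_ELEMENTS = {
    "role": ["AI system", "contributed", "role"],     # role of the AI in the procedure
    "parameters": ["main parameters", "parameters"],  # main parameters of the output
    "elements": ["main elements", "decision were"],   # main elements of the decision
}

def explanation_covers_art85(text: str) -> dict[str, bool]:
    """Check which Art.85(1) components an explanation text plausibly covers."""
    lowered = text.lower()
    return {
        component: any(marker.lower() in lowered for marker in markers)
        for component, markers in REQUIRED_ELEMENTS.items()
    }

sample = (
    "The AI system 'CreditScoreEngine-v3' contributed to this decision by analysing "
    "the relevant inputs based on the following main parameters: repayment history. "
    "The main elements of the decision were: loan declined."
)
coverage = explanation_covers_art85(sample)
print(coverage)  # all three components present
```

Wiring this check into the release pipeline for explanation templates catches regressions before they reach affected persons.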


See Also