EU AI Act Art.86: Right to Explanation for AI Decisions — Developer Guide (2026)
When a bank's AI model rejects a mortgage application, the applicant has always had some rights under GDPR. But GDPR Article 22 — the right not to be subject to automated decision-making — only applies to purely automated decisions. If a human loan officer reviews the AI's output before making the final call, GDPR Art.22 does not apply. The human-in-the-loop removes the protection.
Article 86 of the EU AI Act closes that gap.
Under Art.86, any natural person subject to a high-risk AI decision that produces legal effects or significantly affects them has the right to a clear and meaningful explanation of the AI's role in that decision — regardless of whether a human was involved. This is a fundamentally new obligation that extends beyond GDPR and applies directly to the developers and deployers of Annex III high-risk AI systems.
This guide explains what Art.86 requires, who owes what obligation, how providers must architect explainability, and what deployers must deliver to affected persons.
What Article 86 Actually Says
Article 86(1) — The Core Right:
Any natural person who has been subject to a decision taken by the deployer based significantly on the output of a high-risk AI system listed in Annex III, except systems listed in point 2 of that Annex [critical infrastructure], which produces legal effects or similarly significantly affects that person, has the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.
The operative conditions are:
- Natural person — legal persons (companies) have no Art.86 rights
- Decision based significantly on AI output — incidental AI use is not enough
- High-risk AI system listed in Annex III — only the eight high-risk categories apply
- Legal effects or significant impact — the threshold separates high-stakes from trivial decisions
- Right to explanation — not a right to reverse the decision, but to understand it
Article 86(2) — Provider Enablement Obligation:
Providers of high-risk AI systems must design systems in a way that enables deployers to comply with Art.86(1). This is not optional — it is a conformity requirement under Chapter III, Section 2 of the Act.
Article 86(3) — Scope Limitation:
The right under Art.86(1) does not apply in circumstances where the AI system output constitutes solely a preparatory act that has no direct legal or similar significant effect on the natural person (preliminary screening, early-stage filtering). The threshold is significant reliance, not any reliance.
Art.86 vs GDPR Art.22: The Two-Tier Framework
Understanding Art.86 requires understanding what it adds on top of GDPR Article 22.
| Dimension | GDPR Art.22 | AI Act Art.86 |
|---|---|---|
| Trigger | Solely automated decision | Decision based significantly on AI (human can be involved) |
| Core right | Right not to be subject to automated decision | Right to explanation of the AI's role |
| Human review | Defeats Art.22 protection | Irrelevant — Art.86 applies regardless |
| AI types | Any automated processing | Only Annex III high-risk AI systems |
| Obligation holder | Data controller (Art.4 GDPR) | Deployer (AI Act) + provider (enablement) |
| What must be provided | Meaningful information, logic involved, significance | Role of AI in procedure + main elements of decision |
| Timeline | GDPR = "without undue delay" | AI Act = not specified (deployer must define) |
| Cross-reference | GDPR Art.22(3): right to human review | AI Act Art.86: independent of Art.22 |
The combined framework for high-risk AI:
- If decision is purely automated AND made by a high-risk AI: both GDPR Art.22 and AI Act Art.86 apply
- If decision involves human review AND made by a high-risk AI: only AI Act Art.86 applies (GDPR Art.22 does not)
- If decision involves human review AND made by non-high-risk AI: neither GDPR Art.22 nor AI Act Art.86 applies
Why this matters for developers: The addition of human review — sometimes done to escape GDPR Art.22 liability — does not escape AI Act Art.86. The person affected can still demand explanation of what the AI contributed to the decision.
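The three-scenario framework above can be sketched as a small triage helper. This is illustrative only: `applicable_regimes` and its boolean flags are our own naming, and a real applicability analysis requires legal review of the Annex III classification and the GDPR Art.22(2) exceptions.

```python
def applicable_regimes(high_risk_annex_iii: bool, purely_automated: bool,
                       significant_effect: bool) -> set[str]:
    """First-pass mapping of a decision to the two explanation regimes.

    Illustrative sketch only: real applicability needs legal review
    (Annex III scope, Art.86 carve-outs, GDPR Art.22(2) exceptions).
    """
    regimes: set[str] = set()
    if not significant_effect:
        return regimes  # below both thresholds
    if purely_automated:
        regimes.add("GDPR Art.22")   # solely automated decision
    if high_risk_annex_iii:
        regimes.add("AI Act Art.86")  # applies regardless of human review
    return regimes

# Human-reviewed decision by a high-risk system: Art.86 still applies
assert applicable_regimes(True, False, True) == {"AI Act Art.86"}
# Purely automated high-risk decision: both regimes apply
assert applicable_regimes(True, True, True) == {"GDPR Art.22", "AI Act Art.86"}
```

The key property the sketch encodes is the one the table states: adding human review removes the first branch but never the second.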
Who Owes What: Provider vs Deployer Obligations
Provider Obligations (Art.86(2))
Providers must design their high-risk AI systems to enable deployers to comply with Art.86(1). This means:
- Explanation generation capability: The system must be able to produce, for any individual decision, a human-readable account of:
- Which input features were most influential
- The confidence or probability of the AI's output
- What factors in the person's data drove the result
- Any counterfactual information (what would have changed the decision)
- API-accessible explanation output: The explanation cannot be siloed in an internal dashboard. The system must expose it to the deployer in a structured format.
- Technical documentation (Art.11) must include: the explainability methodology, the level of explanation granularity, and the accuracy-explainability trade-off decisions made during system design.
- Post-market monitoring (Art.72) must track: whether deployed explanations are understood by affected persons (feedback loop obligation).
Deployer Obligations (Art.86(1))
Deployers — the entities that use the AI system in practice — hold the primary Art.86 obligation. They must:
- Deliver the explanation upon request: When an affected person invokes their Art.86 right, the deployer must provide a clear and meaningful explanation of the AI's role in that specific decision.
- Maintain decision records: The deployer must be able to reconstruct, for any past high-risk AI decision, what the AI output was and what explanation can be given. This requires logging individual decisions with their AI outputs.
- Define a request handling procedure: Art.86 requires a mechanism for affected persons to invoke the right — typically a written request to a designated contact, with a defined response window.
- Human oversight (Art.26(2)): Deployers must ensure that humans with competence, authority, and time to review AI outputs are involved — especially for high-impact decisions. The Art.86 explanation must cover what that human oversight consisted of.
What "Legal Effects or Significant Impact" Means
The Art.86 threshold is not met by all high-risk AI use. The decision must produce:
- Legal effects: The decision changes the person's legal rights, obligations, or status — credit granted/denied, employment offered/refused, benefit approved/rejected, educational placement determined, criminal risk assessment used in sentencing.
- Similarly significant effects: Effects that, while not formally legal, substantially affect the person's life circumstances, opportunities, or wellbeing — denied housing through an AI screening system, flagged for fraud investigation, excluded from insurance coverage.
Examples that meet the threshold:
- Mortgage application rejected by credit-scoring AI → legal effect (contract refused)
- CV filtered out by AI recruitment screening → significant effect (job opportunity lost)
- Social benefit reduced by algorithmic eligibility system → legal effect (entitlement reduced)
- Patient triaged as low-priority by healthcare AI → significant effect (medical access delayed)
- Individual flagged as high-risk by law enforcement AI → significant effect (investigation initiated)
Examples that do NOT meet the threshold:
- AI recommends products to a website visitor → no significant effect (commercial suggestion only)
- AI sorts customer support tickets by priority → no significant effect (internal workflow, no individual outcome)
- AI transcribes a call for a customer service agent → no significant effect (processing tool, not decision)
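The threshold test illustrated by these examples can be expressed as a first-pass triage function. This is a sketch: the enum and function names are ours, and the output is no substitute for case-by-case legal analysis.

```python
from enum import Enum

class Art86Trigger(Enum):
    """Rough triage categories for the Art.86 effects threshold (our naming)."""
    LEGAL_EFFECT = "legal_effect"
    SIGNIFICANT_IMPACT = "significant_impact"
    BELOW_THRESHOLD = "below_threshold"

def classify_effect(changes_legal_status: bool,
                    materially_affects_circumstances: bool) -> Art86Trigger:
    """First-pass Art.86 threshold triage, not a substitute for legal review."""
    if changes_legal_status:
        return Art86Trigger.LEGAL_EFFECT        # e.g. credit refused, benefit reduced
    if materially_affects_circumstances:
        return Art86Trigger.SIGNIFICANT_IMPACT  # e.g. CV filtered out, triage delayed
    return Art86Trigger.BELOW_THRESHOLD         # e.g. product recommendation

assert classify_effect(True, False) is Art86Trigger.LEGAL_EFFECT
assert classify_effect(False, True) is Art86Trigger.SIGNIFICANT_IMPACT
assert classify_effect(False, False) is Art86Trigger.BELOW_THRESHOLD
```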
What a Meaningful Explanation Must Include
Article 86(1) requires explanation of two things: (1) the role of the AI system in the decision-making procedure, and (2) the main elements of the decision taken.
The Regulation does not enumerate specific elements. Based on the structure of Art.86 read alongside Art.13 (transparency to deployers) and the GDPR Art.22 framework, a complete Art.86 explanation should include:
| Explanation Element | Substance | Format |
|---|---|---|
| AI's functional role | Was the AI the sole input, a significant factor, or one of several? | Qualitative description |
| Decision output | What did the AI system output for this person (score, probability, classification)? | Numeric or categorical |
| Key influencing factors | Which personal data attributes most influenced the AI's output? | Ranked feature list |
| Counterfactual | What would have changed the AI's output? (SHAP/LIME-derived) | Concrete alternative |
| Human review description | What did the human reviewer evaluate? What authority did they have? | Procedural narrative |
| Final decision basis | How was the AI output used in the final decision? | Decision rationale |
| Challenge mechanism | How can the person challenge the decision or request human review? | Contact + procedure |
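One way to keep draft explanations honest against this table is a completeness check over the seven elements. A sketch: the keys mirror the table rows above, not any statutory list, and the draft shown is hypothetical.

```python
# Keys mirror the seven explanation elements in the table (our naming)
REQUIRED_ELEMENTS = {
    "ai_functional_role", "decision_output", "key_factors",
    "counterfactual", "human_review", "final_decision_basis",
    "challenge_mechanism",
}

def missing_elements(explanation: dict) -> set[str]:
    """Return the elements absent or empty in a draft explanation."""
    return {k for k in REQUIRED_ELEMENTS if not explanation.get(k)}

# Hypothetical partial draft for a recruitment decision
draft = {
    "ai_functional_role": "Sole input to the shortlisting step",
    "decision_output": "Match score 0.41 (threshold 0.60)",
    "key_factors": ["years of experience", "skills overlap"],
}
assert missing_elements(draft) == {"counterfactual", "human_review",
                                   "final_decision_basis", "challenge_mechanism"}
```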
Sector-Specific Implementation
Credit and Financial Decisions (Annex III §5)
High-risk AI systems used for credit scoring and creditworthiness assessment fall under Annex III point 5(b); AI used for risk assessment and pricing in life and health insurance falls under point 5(c). Art.86 is frequently triggered in this sector.
What the explanation must cover:
- The credit score or risk classification the AI produced (e.g., "probability of default: 12.4%")
- The three to five data attributes that most influenced that score (e.g., "debt-to-income ratio, length of credit history, recent missed payments")
- What the applicant could change to improve future assessments (counterfactual)
- Whether a human underwriter reviewed the AI output and what criteria they applied
- The bank's final decision basis (AI score + underwriter judgement + credit policy)
Regulatory overlay: GDPR Art.22 may also apply if no human review occurs. MiFID II and CRD VI impose additional explainability requirements for AI in investment advice and credit underwriting. Art.86 adds a layer but does not replace these.
Recruitment and Employment (Annex III §4)
AI systems used for CV screening, candidate ranking, aptitude testing, or employee monitoring fall under Annex III Category 4. Recruitment is likely among the highest-volume scenarios in which Art.86 is invoked.
What the explanation must cover:
- Whether the AI ranked, scored, or filtered the candidate (process role)
- The factors the AI used to assess the candidate (skills matching, keywords, inferred attributes)
- What the human recruiter saw (full AI output or filtered shortlist?)
- Why the candidate was not shortlisted or not offered the position
- How to request reconsideration or submit to human review
Regulatory overlay: GDPR Art.22 may apply for automated screening without human review. The EU Pay Transparency Directive (2023/970) adds context where AI assists in pay-band decisions. EEOC principles in cross-border hiring contexts may impose additional requirements.
Social Benefits and Public Services (Annex III §5)
Algorithmic systems used by public bodies for benefit eligibility, fraud detection, or social service allocation fall under Annex III Category 5(a)/(c). Art.86 applies.
What the explanation must cover:
- The eligibility score or risk flag the AI produced
- The citizen's data that drove the output (income data, household composition, past claims)
- Whether the AI produced a recommendation or a binding decision
- What human caseworker review occurred and what standard was applied
- The specific grounds for rejection or reduction, both AI-derived and policy-based
- The appeals process and contact information
Note: Public authorities deploying Annex III AI systems are subject to Art.86. The Charter of Fundamental Rights Art.41 (right to good administration) creates an independent administrative law obligation that aligns with Art.86.
Healthcare and Medical Risk Assessment (Annex III §5 adjacent + Medical Device Regulation)
AI systems used in clinical decision support, patient triage, or medical risk stratification that significantly affect patient care pathways may activate Art.86 — particularly where they filter access to care or prioritise treatment.
What the explanation must cover:
- The risk score or classification the AI produced (e.g., "urgent: 73% probability of deterioration")
- The clinical variables that drove the output (vital signs, lab values, history)
- How the clinical team used the AI output in treatment decisions
- What alternative pathways were considered
Regulatory overlay: The EU Medical Device Regulation (MDR/IVDR) imposes additional requirements for AI used as medical software. Art.86 applies as an additional layer where MDR-regulated AI also triggers Annex III classification.
Technical Implementation: Building Art.86-Compliant Explanations
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime, timezone
from typing import Optional
import hashlib
import json
class ExplanationTrigger(Enum):
LEGAL_EFFECT = "legal_effect" # Contract, entitlement, status change
SIGNIFICANT_IMPACT = "significant_impact" # Material life circumstances effect
PREPARATORY_ONLY = "preparatory_only" # Below Art.86 threshold
class HumanReviewType(Enum):
NONE = "none" # Purely automated — GDPR Art.22 also applies
RUBBER_STAMP = "rubber_stamp" # Human sees output, no real review
SUBSTANTIVE = "substantive" # Human applies independent judgement
OVERRIDE = "override" # Human overrode AI recommendation
@dataclass
class FeatureContribution:
"""One data attribute's contribution to the AI output."""
feature_name: str # Human-readable field name (e.g., "Credit utilization")
feature_value: str # Value for this individual (e.g., "87%")
contribution_direction: str # "increases_risk" / "decreases_risk" / "neutral"
contribution_magnitude: str # "high" / "medium" / "low"
counterfactual: Optional[str] = None # "If X were Y, the score would change by Z"
@dataclass
class Art86Explanation:
"""
Complete Art.86 explanation for a single high-risk AI decision.
Must be generated at decision time and stored for retrospective retrieval.
Enables deployer to fulfil Art.86(1) obligation upon affected person's request.
"""
# Decision identification
decision_id: str
decision_timestamp: datetime
annex_iii_category: str # e.g., "Category 5(b) — Credit scoring"
# AI system role
ai_role_description: str # Narrative: what did the AI do in this process?
ai_output_value: str # The AI's actual output for this individual
ai_output_confidence: Optional[float] = None # Confidence/probability if available
# Feature contributions (SHAP/LIME-derived)
feature_contributions: list[FeatureContribution] = field(default_factory=list)
# Counterfactual
primary_counterfactual: Optional[str] = None # Most actionable "what if" scenario
# Human oversight
human_review_type: HumanReviewType = HumanReviewType.NONE
human_review_description: Optional[str] = None # What the reviewer evaluated
# Final decision
final_decision: str # The actual outcome for the person
decision_basis: str # How AI output was used in final decision
# Challenge mechanism
challenge_contact: str # Where to send Art.86 requests
challenge_deadline_days: int = 30 # Days within which deployer must respond
# Trigger assessment
trigger_type: ExplanationTrigger = ExplanationTrigger.LEGAL_EFFECT
def generate_person_facing_summary(self) -> str:
"""
Generate the human-readable Art.86 explanation for the affected person.
Clear, non-technical language required.
"""
lines = [
f"DECISION EXPLANATION — {self.annex_iii_category}",
f"Decision date: {self.decision_timestamp.strftime('%Y-%m-%d')}",
"",
"HOW AI WAS USED:",
self.ai_role_description,
"",
f"AI SYSTEM OUTPUT: {self.ai_output_value}",
]
if self.ai_output_confidence:
lines.append(f"Confidence level: {self.ai_output_confidence:.0%}")
if self.feature_contributions:
lines.extend(["", "MAIN FACTORS THAT INFLUENCED THIS RESULT:"])
for fc in sorted(
self.feature_contributions,
key=lambda x: {"high": 0, "medium": 1, "low": 2}.get(x.contribution_magnitude, 2)
)[:5]: # Top 5 factors
lines.append(
f" • {fc.feature_name} ({fc.feature_value}): "
f"{fc.contribution_direction.replace('_', ' ')} — "
f"{fc.contribution_magnitude} influence"
)
if fc.counterfactual:
lines.append(f" → {fc.counterfactual}")
if self.primary_counterfactual:
lines.extend(["", f"WHAT COULD CHANGE THIS: {self.primary_counterfactual}"])
lines.extend([
"",
"HUMAN REVIEW:",
self.human_review_description or self.human_review_type.value,
"",
f"DECISION TAKEN: {self.final_decision}",
f"BASIS: {self.decision_basis}",
"",
"HOW TO CHALLENGE THIS DECISION:",
f"Contact: {self.challenge_contact}",
f"Response deadline: {self.challenge_deadline_days} days",
])
return "\n".join(lines)
def to_audit_record(self) -> dict:
"""Serialise to audit log format for Art.86 request fulfillment."""
return {
"decision_id": self.decision_id,
"timestamp": self.decision_timestamp.isoformat(),
"annex_iii_category": self.annex_iii_category,
"ai_role": self.ai_role_description,
"ai_output": self.ai_output_value,
"feature_count": len(self.feature_contributions),
"human_review_type": self.human_review_type.value,
"final_decision": self.final_decision,
"trigger_type": self.trigger_type.value,
"explanation_hash": hashlib.sha256(
self.generate_person_facing_summary().encode()
).hexdigest()[:16],
}
class Art86RequestHandler:
"""
Handles incoming Art.86 explanation requests from affected persons.
Deployer-side fulfilment logic.
"""
def __init__(self, explanation_store: dict[str, Art86Explanation]):
self.store = explanation_store # decision_id → Art86Explanation
self.request_log: list[dict] = []
def receive_request(
self,
person_id: str,
decision_id: str,
received_at: Optional[datetime] = None,
) -> dict:
"""
Process an Art.86 explanation request.
Returns status and explanation if found.
"""
received_at = received_at or datetime.now(timezone.utc)
explanation = self.store.get(decision_id)
request_record = {
"request_id": f"art86_{decision_id}_{received_at.strftime('%Y%m%d%H%M%S')}",
"person_id_hash": hashlib.sha256(person_id.encode()).hexdigest()[:12],
"decision_id": decision_id,
"received_at": received_at.isoformat(),
"status": "fulfilled" if explanation else "not_found",
"trigger_type": explanation.trigger_type.value if explanation else None,
}
self.request_log.append(request_record)
if not explanation:
return {
"status": "not_found",
"message": "No decision record found for this ID. "
"If you believe this is an error, contact us.",
"request_id": request_record["request_id"],
}
if explanation.trigger_type == ExplanationTrigger.PREPARATORY_ONLY:
return {
"status": "below_threshold",
"message": "The AI system's involvement in this case was preparatory "
"and did not produce legal or significant effects. "
"Art.86 does not apply. If you believe otherwise, contact us.",
"request_id": request_record["request_id"],
}
return {
"status": "fulfilled",
"request_id": request_record["request_id"],
"explanation": explanation.generate_person_facing_summary(),
"generated_at": received_at.isoformat(),
}
def compliance_metrics(self) -> dict:
"""PMM-compatible metrics for Art.86 request volume and response quality."""
total = len(self.request_log)
fulfilled = sum(1 for r in self.request_log if r["status"] == "fulfilled")
below_threshold = sum(1 for r in self.request_log if r["status"] == "below_threshold")
not_found = sum(1 for r in self.request_log if r["status"] == "not_found")
return {
"total_requests": total,
"fulfilled": fulfilled,
"below_threshold": below_threshold,
"not_found": not_found,
"fulfilment_rate": fulfilled / total if total > 0 else None,
"art86_activation_rate": (fulfilled + below_threshold) / total if total > 0 else None,
}
# Usage example — Credit Scoring deployer
explanation = Art86Explanation(
decision_id="loan_2026_04_12_001",
decision_timestamp=datetime(2026, 4, 12, 10, 30, 0, tzinfo=timezone.utc),
annex_iii_category="Category 5(b) — Credit scoring and creditworthiness assessment",
ai_role_description=(
"An AI credit scoring system assessed your application. "
"The system analysed your financial history and produced a risk score. "
"A human underwriter reviewed this score before making the final decision."
),
ai_output_value="Credit risk score: 720/1000 (below minimum threshold of 750 for this product)",
ai_output_confidence=0.84,
feature_contributions=[
FeatureContribution(
feature_name="Current debt-to-income ratio",
feature_value="52%",
contribution_direction="increases_risk",
contribution_magnitude="high",
counterfactual="Reducing existing debt by €15,000 would improve your score by approximately 40 points.",
),
FeatureContribution(
feature_name="Length of credit history",
feature_value="3 years",
contribution_direction="increases_risk",
contribution_magnitude="medium",
counterfactual="A credit history of 5+ years would reduce risk scoring by approximately 15 points.",
),
FeatureContribution(
feature_name="Payment consistency (24 months)",
feature_value="100% on-time",
contribution_direction="decreases_risk",
contribution_magnitude="high",
),
],
primary_counterfactual=(
"If your debt-to-income ratio were below 40%, your application would likely "
"meet the minimum threshold for this product."
),
human_review_type=HumanReviewType.SUBSTANTIVE,
human_review_description=(
"An underwriter reviewed the AI risk score alongside your income documentation "
"and applied the bank's credit policy. The underwriter confirmed the AI assessment."
),
final_decision="Mortgage application declined",
decision_basis=(
"AI risk score (720) below product minimum (750). "
"Underwriter confirmed AI assessment. Credit policy applied consistently."
),
challenge_contact="creditdecisions@example-bank.eu",
challenge_deadline_days=30,
trigger_type=ExplanationTrigger.LEGAL_EFFECT,
)
print(explanation.generate_person_facing_summary())
CLOUD Act × Art.86: The Explanation Log Risk
Article 86 requires deployers to maintain decision records with explanation capability — that is, they must store, for every high-risk AI decision, enough data to generate an Art.86-compliant explanation on request. These records typically include:
- Individual input data used by the AI for that decision
- The AI model's output (score, classification, confidence)
- Feature attribution values (SHAP/LIME results)
- Human reviewer notes and override records
- The final decision and its basis
CLOUD Act dual-compellability table for Art.86 records:
| Data Category | CLOUD Act Compellability | EU Regulatory Compellability | Risk with US Infrastructure |
|---|---|---|---|
| Individual input features (personal data) | HIGH (US person targeting) | HIGH (GDPR + AI Act Art.26) | Dual legal order exposure |
| AI model output / score | HIGH | HIGH (Art.86 evidence) | Critical: MSA subpoena + US DOJ |
| Feature attributions (SHAP values) | MEDIUM | HIGH (Art.86 explanation basis) | Regulatory evidence at risk |
| Human reviewer notes | MEDIUM | HIGH (Art.86 oversight record) | Confidential process exposed |
| Explanation request log | LOW | MEDIUM (Art.86 audit trail) | Volume patterns disclosed |
| Challenge/dispute correspondence | MEDIUM | HIGH (rights enforcement trail) | Legal privilege risk |
EU-native infrastructure = single legal order. Explanation records stored with a European cloud provider operate under GDPR and the AI Act exclusively. There is no concurrent CLOUD Act compellability mechanism for data stored by EU-controlled entities on EU territory.
For high-risk AI deployers in credit, healthcare, or public services — where explanation records contain sensitive personal data and form the basis for regulatory audit — this is not a theoretical risk. US federal agencies can and do issue CLOUD Act orders for financial and healthcare records held by US cloud providers.
Art.86 × Art.99: Fine Exposure
Failure to comply with Art.86 creates two distinct fine pathways:
Pathway 1 — Art.99 Tier 2 (via Art.26 deployer obligations): Article 26 requires deployers to comply with their obligations under the Act, including Art.86. Non-compliance with Art.26 is punishable under Art.99(4), which expressly covers deployer obligations, with enforcement carried out by the national market surveillance authority (MSA) under the Member State's implementing provisions.
Pathway 2 — GDPR (where Art.22 overlap exists): For purely automated decisions where GDPR Art.22 also applies, failure to honour the Art.22(3) safeguards (human intervention, the right to contest) or to provide the "meaningful information about the logic involved" required by GDPR Arts.13-15 exposes the organisation to GDPR fines of up to €20M or 4% of global annual turnover (Art.83(5) GDPR). The GDPR and AI Act are legally independent regimes — the same failure can trigger fines under both.
Combined exposure for deployers using high-risk AI for purely automated significant decisions:
- AI Act Art.99 Tier 2: up to €15M or 3% global turnover
- GDPR Art.83: up to €20M or 4% global turnover
- National implementing provisions (varying by Member State)
Deployers cannot escape by adding a nominal human review that is not substantive. The Article 29 Working Party guidelines on automated decision-making (WP251) make clear that human involvement must be genuine — a rubber-stamp review by someone without the authority or competence to change the outcome does not escape the GDPR Art.22 regime. And Art.86 applies regardless.
Art.86 in the AI Act Enforcement Pipeline
Article 86 sits at the intersection of the enforcement chapters. The full pipeline for an Art.86 violation:
Affected person requests explanation
↓
Deployer fails to provide (or provides inadequate explanation)
↓
Person complains to national DPA (GDPR) or MSA (AI Act)
↓
MSA opens Art.79(1) investigation → Art.79(2) corrective measures
↓
If violation confirmed: Art.99 Tier 2 fine (deployer) + Tier 1 fine (provider, if Art.86(2) breach)
↓
MSA reports to Commission under Art.84 → feeds Art.85 review data
The Art.86 audit trail — explanation records, request logs, challenge correspondence — becomes evidence in MSA investigations. If that audit trail is incomplete or has been modified, Art.99(5) (non-cooperation) can add another fine layer.
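One way to make that audit trail tamper-evident is to hash-chain the records, so that any retroactive modification invalidates every subsequent digest. This is a design sketch under our own assumptions, not a statutory requirement; the Act requires record integrity, not this specific mechanism.

```python
import hashlib
import json

def chain_records(records: list[dict]) -> list[str]:
    """Hash-chain audit records: each digest covers the record plus the previous digest."""
    digests: list[str] = []
    prev = ""
    for rec in records:
        payload = prev + json.dumps(rec, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        digests.append(prev)
    return digests

log = [{"decision_id": "d1", "status": "fulfilled"},
       {"decision_id": "d2", "status": "below_threshold"}]
chain = chain_records(log)

# Modifying an earlier record changes every subsequent digest,
# so an auditor re-deriving the chain detects the alteration.
log[0]["status"] = "not_found"
assert chain_records(log)[1] != chain[1]
```

Storing only the latest digest alongside the log gives an MSA auditor a cheap integrity check over the whole trail.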
Developer Checklist: Art.86 Compliance
Provider Obligations (Art.86(2) — design-time):
- Explainability architecture documented in Technical Documentation (Art.11 Annex IV)
- Feature attribution method selected (SHAP/LIME/Anchors) and validated for accuracy-explainability trade-off
- Explanation API endpoint documented in instructions for use (Art.13)
- Counterfactual generation capability included in system design
- Explanation generation tested for all Annex III use case categories in scope
- Explanation quality validated against the Article 29 Working Party "meaningful information" standard (WP251)
- Confidence/probability output included where statistically meaningful
- Explanation language tested for non-expert comprehensibility
- Art.86(3) preparatory-act threshold documented — when explanation is not required
- Post-market monitoring (Art.72) includes explanation quality tracking
Deployer Obligations (Art.86(1) — operational):
- Art.86 request procedure defined and documented (who receives, who responds, in what timeframe)
- Decision record schema captures all Art.86-required elements at decision time
- Decision records retained long enough to fulfil requests (Art.26(6) requires deployers to keep automatically generated logs for at least six months; longer retention is often needed for Art.86 and sector rules)
- SHAP/feature attribution values stored per decision (not just aggregate model metrics)
- Human review documentation captured (reviewer ID, review timestamp, override flag)
- Explanation generation can be triggered from stored decision record (not only real-time)
- Art.86 request log maintained separately from decision records (privacy separation)
- Response SLA defined (30 days recommended; check national implementing law)
- Challenge and appeals process documented in public-facing policy
- Staff training includes Art.86 request handling procedure
Infrastructure (CLOUD Act risk mitigation):
- Decision records (personal data + AI outputs) stored on EU-controlled infrastructure
- Feature attribution values stored in EU jurisdiction (single legal order)
- Human reviewer notes stored in EU jurisdiction
- Art.86 request correspondence stored in EU jurisdiction
- Cloud provider contract includes no CLOUD Act third-party compellability
- Data residency verified at infrastructure level (not just contractual commitment)
- Backup and disaster recovery paths also EU-resident
- Legal hold procedures tested for EU-resident explanation records
Intersection with GDPR Art.22:
- Legal basis for AI-influenced decisions documented (GDPR Art.6 + Art.22(2))
- GDPR Art.22 applicability assessed separately from AI Act Art.86 (different triggers)
- For purely automated decisions: GDPR Art.22(3) right to human review implemented
- DPO briefed on Art.86 / GDPR Art.22 combined compliance requirements
- Privacy Impact Assessment (DPIA) updated to include Art.86 processing activities
- Art.86 explanation records assessed for data minimisation (only what is needed for explanation)
- Retention period for explanation records aligned with both GDPR and AI Act obligations
- Cross-border data transfer assessment for explanation records sent to affected persons
Monitoring and Audit:
- Art.86 request volume tracked as PMM metric (Art.72 post-market monitoring)
- Fulfilment rate tracked (requests answered vs. failed/overdue)
- Explanation adequacy spot-checked by DPO or compliance team quarterly
- MSA audit readiness tested — can you produce the explanation for any past decision within 72 hours?
- Art.84 reporting preparation includes Art.86 request statistics for annual MSA submission
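The 72-hour retrieval test and the response SLA above can be monitored with a simple overdue scan. A sketch: the record field names, including `responded_at`, are our own assumptions rather than mandated names.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def overdue_requests(request_log: list[dict], sla_days: int = 30,
                     now: Optional[datetime] = None) -> list[str]:
    """Return request_ids still unanswered past the response SLA."""
    now = now or datetime.now(timezone.utc)
    overdue = []
    for r in request_log:
        received = datetime.fromisoformat(r["received_at"])
        if r.get("responded_at") is None and now - received > timedelta(days=sla_days):
            overdue.append(r["request_id"])
    return overdue

# Hypothetical request log: one answered within SLA, one still open
log = [
    {"request_id": "art86_001", "received_at": "2026-01-02T09:00:00+00:00",
     "responded_at": "2026-01-20T09:00:00+00:00"},
    {"request_id": "art86_002", "received_at": "2026-01-05T09:00:00+00:00",
     "responded_at": None},
]
now = datetime(2026, 3, 1, tzinfo=timezone.utc)
assert overdue_requests(log, sla_days=30, now=now) == ["art86_002"]
```

Run on a schedule, this feeds the fulfilment-rate metric the post-market monitoring checklist item calls for.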
EU-native infrastructure keeps your Art.86 explanation records in a single legal order — no CLOUD Act compellability, no dual-regime exposure. Deploy on sota.io
See also: