2026-04-16 · 12 min read

EU AI Act Art.13 Transparency Obligations: Developer Guide (Instructions for Use, Chatbot Disclosure 2026)

EU AI Act Article 13 is the transparency obligation of the high-risk AI compliance chain: before your system reaches a deployer, you must hand over written instructions that give them everything they need to use it lawfully. Article 13 is also where the AI Act meets the right to explanation — because Art.13(3) instructions must enable deployers to interpret outputs and explain decisions to affected persons under Art.86.

This guide covers the full Art.13 implementation scope: the seven mandatory instructions-for-use elements, the Art.13 × Art.50 chatbot and emotion recognition intersection, the Art.13 × Art.86 right to explanation, CLOUD Act jurisdiction for documentation, and what EU-native deployments mean for single-regime transparency compliance.


Art.13 in the High-Risk AI Compliance Chain

Art.13 sits between Art.12 (logging, which creates the evidence trail) and Art.14 (human oversight, which requires the system to be designed so that deployers can effectively oversee it):

| Article | Obligation | Direction |
| --- | --- | --- |
| Art.9 | Risk Management System | Provider builds, deployer uses |
| Art.10 | Training Data Governance | Provider obligation |
| Art.11 | Technical Documentation | Provider obligation |
| Art.12 | Logging & Record-Keeping | Provider builds, deployer stores |
| Art.13 | Transparency & Instructions for Use | Provider → Deployer |
| Art.14 | Human Oversight | Provider designs, deployer activates |
| Art.15 | Accuracy, Robustness, Cybersecurity | Provider obligation |

Art.13 is the handoff document: everything the deployer needs to know to be compliant is encoded in the instructions for use. If the instructions are incomplete, the provider has violated Art.13 even if the system itself is technically correct.


Art.13(1) — Transparency Design Principle

"High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately."

Art.13(1) is a design obligation, not just a documentation obligation. The system architecture itself must support transparency:

What "sufficiently transparent" means in practice: the deployer must be able to understand what the system is for, where it performs poorly, and how to read its outputs, without needing the provider's internal engineering knowledge.

Transparency vs. interpretability distinction: transparency is system-level (the deployer understands capabilities, limitations, and output semantics), while interpretability is decision-level (a specific output can be explained to the person it affects).

Art.13 handles the first. Art.86 (right to explanation) handles the second — and depends entirely on Art.13 infrastructure being in place.


Art.13(2) — Format and Language Requirements

Art.13(2) specifies that instructions for use must be provided in an appropriate digital format or otherwise, and must contain information that is concise, complete, correct, and clear, as well as relevant, accessible, and comprehensible to deployers.

Developer implications:

| Requirement | Implementation Note |
| --- | --- |
| Digital format | Machine-readable preferred; PDF alone is not sufficient for programmatic verification |
| Language | Multi-tenant SaaS deploying to multiple Member States must localize instructions per jurisdiction |
| Comprehensible | Technical jargon without explanation fails this requirement |
| Complete | Missing any Art.13(3) element is non-compliance, not a minor gap |

Multi-language SaaS pattern: If your high-risk AI system is deployed as SaaS to EU organizations, your instructions for use must be available in the deployer's Member State language. A German hospital deploying your medical AI system must receive instructions in German. An English-only document does not satisfy Art.13(2) for German-market deployments.
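The Member State language requirement can be enforced mechanically at release time. A minimal sketch, assuming an illustrative (and deliberately partial) Member State to language mapping; the mapping and function names are hypothetical, not from the Act:

```python
# Illustrative check that instructions for use exist in the language
# required by each deployer's Member State (Art.13(2)).
# This mapping is partial and for demonstration only.
MEMBER_STATE_LANGUAGE = {
    "DE": "de", "AT": "de", "FR": "fr", "IT": "it",
    "ES": "es", "NL": "nl", "IE": "en", "PL": "pl",
}

def required_language(member_state: str) -> str:
    """Language a deployer in this Member State must receive."""
    try:
        return MEMBER_STATE_LANGUAGE[member_state]
    except KeyError:
        raise ValueError(f"No language mapping for Member State {member_state!r}")

def localization_gaps(available_languages: set[str],
                      target_states: list[str]) -> list[str]:
    """Member States whose required language is missing from the IFU bundle."""
    return [ms for ms in target_states
            if required_language(ms) not in available_languages]
```

`localization_gaps({"en", "de"}, ["DE", "FR", "IE"])` returns `["FR"]`, flagging the missing French localization before the system reaches a French deployer.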


Art.13(3) — Mandatory Instructions for Use: 7 Content Elements

Art.13(3) specifies exactly what the instructions for use must contain. This is the most implementation-dense part of Art.13.

Element 1: Provider Identity and Contact (Art.13(3)(a))

The instructions must include the provider's name, registered trade name or trademark, and contact address, plus the name and contact details of the authorized representative where one has been appointed.
This is not just a corporate header — it establishes accountability. If a deployer's compliance auditor needs to contact the provider about system behavior or an Art.86 explanation request, this element is the only mandated channel.

Element 2: System Capabilities, Limitations, and Performance (Art.13(3)(b))

This is the most technically demanding element — six subcategories:

| Subcategory | Content Required |
| --- | --- |
| (b)(i) Intended purpose | Use cases the system has been validated for, plus explicitly excluded use cases |
| (b)(ii) Performance metrics | Accuracy, robustness, cybersecurity — actual validated values, not marketing claims |
| (b)(iii) Risk circumstances | Scenarios where the system may fail or produce rights-affecting errors |
| (b)(iv) Population performance | Accuracy broken down by the demographic groups the system is designed to serve |
| (b)(v) Training data specifications | Where relevant, input data requirements tied to training/validation dataset characteristics |
| (b)(vi) Output interpretation | What each output means, how to interpret confidence levels, decision thresholds |

Why (b)(iv) must align with Art.10 documentation: If your Art.10 bias examination found disparate impact ratios (DIR) below 0.8 for specific demographic groups, your Art.13(3)(b)(iv) disclosure must reflect that. Inconsistency between Art.10 training data documentation and Art.13 instructions for use is a red flag in conformity assessment — auditors cross-check these deliberately.
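That cross-check can be automated before release. A hedged sketch, assuming your Art.10 bias examination emits disparate impact ratios per group and your instructions name the disclosed groups; the function and parameter names are illustrative:

```python
# Sketch of the Art.10 <-> Art.13(3)(b)(iv) consistency check described
# above: any group the bias examination flagged (DIR below threshold)
# must also appear in the population performance disclosure.

def undisclosed_dir_findings(
    dir_by_group: dict[str, float],   # Art.10 bias examination output
    disclosed_groups: set[str],       # groups named in Art.13(3)(b)(iv)
    threshold: float = 0.8,           # four-fifths rule
) -> list[str]:
    """Groups with disparate impact below threshold that the IFU omits."""
    return sorted(
        group for group, dir_value in dir_by_group.items()
        if dir_value < threshold and group not in disclosed_groups
    )
```

`undisclosed_dir_findings({"age_65_plus": 0.72, "baseline": 1.0}, {"baseline"})` returns `["age_65_plus"]`: a flagged group missing from the instructions, exactly the inconsistency a conformity assessment would surface.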

Element 3: Pre-Determined Changes (Art.13(3)(c))

Any changes to the system that were pre-planned at the time of the initial conformity assessment must be disclosed, for example scheduled retraining cycles or planned threshold updates that stay within the validated scope.

An empty planned-changes section is valid — it means no changes are currently planned.

Element 4: Human Oversight Measures (Art.13(3)(d))

The instructions must describe the human oversight measures referred to in Art.14, including the technical measures the provider has put in place to facilitate deployers' interpretation of the system's outputs.
This element links Art.13 directly to Art.14. The instructions are the bridge between the provider's oversight design and the deployer's oversight implementation. If Art.14 measures are complex, Art.13(3)(d) must be correspondingly detailed.
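One way to make that bridge concrete is to map each declared oversight intensity (mirroring the `continuous`/`periodic`/`exception-based` values used in the Python structure later in this guide) to deployer-side duties. The duty lists below are illustrative examples, not statutory text:

```python
# Hypothetical mapping from the provider's declared oversight intensity
# (Art.13(3)(d)) to concrete deployer-side duties under Art.14.
# Duty wording is illustrative only.
OVERSIGHT_DUTIES = {
    "continuous": [
        "Assign a trained operator to monitor decisions in real time",
        "Enable immediate override controls for each output",
    ],
    "periodic": [
        "Schedule recurring human review of sampled decisions",
        "Log review outcomes for the Art.12 audit trail",
    ],
    "exception-based": [
        "Route low-confidence or high-impact outputs to human review",
        "Document the routing thresholds in deployer procedures",
    ],
}

def deployer_duties(oversight_intensity: str) -> list[str]:
    """Duties the deployer must activate for a declared intensity level."""
    duties = OVERSIGHT_DUTIES.get(oversight_intensity)
    if duties is None:
        raise ValueError(f"Unknown oversight intensity: {oversight_intensity!r}")
    return duties
```

The point of the lookup is the failure mode: an intensity value the instructions never defined is itself an Art.13(3)(d) gap, so the function refuses to guess.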

Element 5: Computational Requirements and System Lifetime (Art.13(3)(e))

Art.13(3)(e) covers the computational and hardware resources needed, the system's expected lifetime, and the maintenance and care measures (including software updates) necessary to keep it functioning properly. This element creates a support obligation on the provider — you cannot end-of-life a high-risk AI system without updating the instructions to reflect that and giving deployers adequate notice to find an alternative.
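The lifetime disclosure becomes actionable when paired with a notice policy. A minimal sketch; the 12-month notice window is a hypothetical internal policy choice, not a figure from the AI Act:

```python
from datetime import date

# Sketch: flag when a system's declared expected lifetime (Art.13(3)(e))
# is close enough to expiry that deployers need end-of-life notice.
# The 12-month default window is a hypothetical internal policy.

def months_remaining(placed_on_market: date, lifetime_years: float,
                     today: date) -> float:
    """Whole months of declared lifetime left (month-granularity estimate)."""
    elapsed = (today.year - placed_on_market.year) * 12 \
        + (today.month - placed_on_market.month)
    return lifetime_years * 12 - elapsed

def eol_notice_due(placed_on_market: date, lifetime_years: float,
                   today: date, notice_window_months: int = 12) -> bool:
    """True if deployers should already have received end-of-life notice."""
    return months_remaining(placed_on_market, lifetime_years, today) <= notice_window_months
```

A system placed on the market in January 2022 with a declared 5-year lifetime starts triggering the notice flag from January 2026, a year before expiry.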

Element 6: Feedback and Reporting Mechanisms (Art.13(3)(f))

Where applicable, the instructions must describe mechanisms that allow deployers, affected persons, or others to report concerns or malfunctions, give feedback on real-world performance, and request explanations of decisions the system influenced.
This is the feedback loop: Art.13(3)(f) creates the channel through which Art.86 right-to-explanation requests reach the provider. Without Art.13(3)(f), there is no operationally clear path for an affected person to obtain an explanation.

Element 7: Accuracy by Population Group (Art.13(3)(g))

Where applicable, the system's accuracy must be broken down by the specific persons or groups of persons on which the system is intended to be used.
This is distinct from (b)(iv) — (b)(iv) is about known risk circumstances where performance degrades, while (g) is about measured performance stratified by population in normal operation. Both must be present where the distinction is relevant to your Annex III category.


Art.13(3) Complete Structure in Python

from dataclasses import dataclass, field
from typing import List, Optional, Dict, Any

@dataclass
class PopulationAccuracyMetric:
    group: str
    accuracy: float
    sample_size: int
    test_date: str
    metric_type: str  # "precision", "recall", "f1", "auc"

@dataclass
class RiskCircumstance:
    scenario: str
    affected_fundamental_right: str  # EU Charter Article reference
    mitigation_available: bool
    risk_level: str  # "high", "medium", "low"

@dataclass
class InstructionsForUse:
    """
    Art.13(3) compliant instructions for use structure.
    All 7 mandatory elements encoded as typed fields.
    """
    # Element 1: Provider identity (Art.13(3)(a))
    provider_name: str
    provider_address: str
    provider_contact: str
    authorized_rep_contact: Optional[str] = None

    # Element 2: Capabilities/limitations (Art.13(3)(b))
    intended_purpose: str = ""
    excluded_use_cases: List[str] = field(default_factory=list)
    accuracy: float = 0.0
    robustness_score: float = 0.0
    cybersecurity_level: str = ""
    risk_circumstances: List[RiskCircumstance] = field(default_factory=list)
    population_accuracy: List[PopulationAccuracyMetric] = field(default_factory=list)
    input_data_specification: str = ""
    output_interpretation_guide: str = ""

    # Element 3: Pre-determined changes (Art.13(3)(c))
    planned_changes: List[Dict[str, Any]] = field(default_factory=list)

    # Element 4: Human oversight (Art.13(3)(d))
    oversight_mechanisms: List[str] = field(default_factory=list)
    oversight_intensity: str = ""  # "continuous", "periodic", "exception-based"
    oversight_role_recommendation: str = ""

    # Element 5: Computational requirements (Art.13(3)(e))
    minimum_cpu_cores: int = 0
    minimum_ram_gb: int = 0
    expected_lifetime_years: float = 0.0
    maintenance_schedule: str = ""
    software_update_policy: str = ""

    # Element 6: Feedback mechanisms (Art.13(3)(f))
    concern_reporting_url: Optional[str] = None
    explanation_request_contact: Optional[str] = None

    # Element 7: Population accuracy (Art.13(3)(g))
    population_accuracy_breakdown: List[PopulationAccuracyMetric] = field(default_factory=list)

    def validate_art13_completeness(self) -> Dict[str, bool]:
        """Check all 7 Art.13(3) mandatory elements are present."""
        return {
            "13(3)(a) provider_identity": bool(
                self.provider_name and self.provider_address and self.provider_contact
            ),
            "13(3)(b) capabilities_limitations": bool(
                self.intended_purpose
                and self.accuracy > 0
                and len(self.risk_circumstances) > 0
                and self.output_interpretation_guide
            ),
            "13(3)(c) planned_changes_disclosed": True,  # Empty = no changes (valid)
            "13(3)(d) human_oversight": bool(
                self.oversight_mechanisms and self.oversight_intensity
            ),
            "13(3)(e) computational_requirements": bool(
                self.minimum_cpu_cores > 0
                and self.expected_lifetime_years > 0
                and self.maintenance_schedule
            ),
            "13(3)(f) feedback_mechanisms": True,  # "where applicable" — can be N/A
            "13(3)(g) population_accuracy": True,   # "where applicable" — can be N/A
        }

    def compliance_gaps(self) -> List[str]:
        checks = self.validate_art13_completeness()
        return [element for element, passed in checks.items() if not passed]

Art.13 × Art.50 — The Chatbot and Emotion Recognition Intersection

Art.13 governs provider → deployer transparency. Art.50 governs a different layer: system → end user transparency. When a high-risk AI system is also a chatbot or emotion recognition system, both apply simultaneously.

Art.50(1) — Chatbot Disclosure

"Providers of AI systems intended to interact directly with natural persons shall design and develop them in such a way that the natural persons concerned are informed that they are interacting with an AI system."

Who this applies to: any AI system intended to interact directly with natural persons, such as chatbots, voice assistants, and other conversational interfaces, unless it is obvious from the circumstances that the user is dealing with an AI system.

Developer obligation: the disclosure must be built into the system's design and surfaced before or at the start of the first interaction, not buried in terms of service.

Art.50(3) — Emotion Recognition

"Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system."

Art.50(3) creates a real-time disclosure obligation: affected persons must be informed before or at the time of exposure to emotion recognition AI, and the obligation binds the deployer directly. This is more demanding than Art.13, which only requires instructions to deployers. (Art.50(2), by contrast, is the separate obligation on providers to mark synthetic content as artificially generated.)

Interaction with Art.13: when a high-risk system also falls under Art.50, the Art.13 instructions for use should document how the built-in user disclosure works so the deployer can verify it is active in their deployment.

Art.13 × Art.50 System Type Matrix

| System Type | Art.13 Applies? | Art.50 Applies? | Who Owes Disclosure |
| --- | --- | --- | --- |
| High-risk AI, no user interaction | Yes | No | Provider → Deployer only |
| High-risk AI chatbot (e.g., legal aid AI) | Yes | Yes (Art.50(1)) | Provider → Deployer (Art.13) + System → User (Art.50) |
| High-risk AI + emotion recognition | Yes | Yes (Art.50(3)) | Provider → Deployer (Art.13) + System → Subject (Art.50) |
| GPAI system with chatbot interface | No | Yes (Art.50(1)) | System → User only |
| Non-high-risk AI chatbot | No | Yes (Art.50(1)) | System → User only |
| Biometric categorization (Annex III Cat.1) | Yes | Yes (Art.50(3)) | Both chains simultaneously |

Python: Art.50 Chatbot Disclosure Middleware

from typing import Optional
from enum import Enum

class DisclosureContext(Enum):
    CHATBOT_INITIAL = "art50_chatbot_initial"
    EMOTION_RECOGNITION = "art50_emotion_real_time"
    BIOMETRIC_CATEGORY = "art50_biometric"
    SYNTHETIC_CONTENT = "art50_synthetic"

def generate_art50_disclosure(
    context: DisclosureContext,
    system_name: str,
    system_purpose: str,
    language: str = "en"
) -> str:
    """
    Generate Art.50 compliant user disclosure text.
    Must be shown BEFORE or AT START of AI interaction.
    """
    templates = {
        "en": {
            DisclosureContext.CHATBOT_INITIAL: (
                f"You are interacting with an AI system ({system_name}). "
                f"This system assists with {system_purpose}. "
                "You are not speaking with a human."
            ),
            DisclosureContext.EMOTION_RECOGNITION: (
                f"Notice: {system_name} uses AI emotion recognition. "
                "Your emotional expressions are being analyzed by an AI system."
            ),
            DisclosureContext.BIOMETRIC_CATEGORY: (
                f"Notice: {system_name} uses biometric AI categorization. "
                "Your biometric data is being processed by an AI system."
            ),
        },
        "de": {
            DisclosureContext.CHATBOT_INITIAL: (
                f"Sie interagieren mit einem KI-System ({system_name}). "
                f"Dieses System unterstützt bei {system_purpose}. "
                "Sie sprechen nicht mit einem Menschen."
            ),
            DisclosureContext.EMOTION_RECOGNITION: (
                f"Hinweis: {system_name} verwendet KI-Emotionserkennung. "
                "Ihre emotionalen Ausdrücke werden von einem KI-System analysiert."
            ),
        }
    }

    lang_templates = templates.get(language, templates["en"])
    text = lang_templates.get(context, templates["en"].get(context))
    if text is None:
        # No template defined for this context (e.g. SYNTHETIC_CONTENT):
        # fail loudly rather than silently skip a mandatory disclosure.
        raise ValueError(f"No disclosure template for context: {context}")
    return text


class Art50DisclosureMiddleware:
    """
    Enforce Art.50 disclosure at session start.
    Inject into your request pipeline before first user interaction.
    """

    def __init__(self, system_name: str, system_purpose: str):
        self.system_name = system_name
        self.system_purpose = system_purpose
        self._disclosed_sessions: set = set()

    def process_interaction(
        self,
        session_id: str,
        user_language: str = "en",
        context: DisclosureContext = DisclosureContext.CHATBOT_INITIAL
    ) -> Optional[str]:
        """
        Returns disclosure text if first interaction in this session.
        Returns None if already disclosed.
        """
        if session_id not in self._disclosed_sessions:
            self._disclosed_sessions.add(session_id)
            return generate_art50_disclosure(
                context, self.system_name, self.system_purpose, user_language
            )
        return None

Art.13 × Art.86 — Right to Explanation

AI Act Art.86 creates a right to explanation for persons subject to high-risk AI decisions:

"Any affected person subject to a decision taken by the deployer that is based on the output of a high-risk AI system listed in Annex III... which produces legal effects or similarly significantly affects that person... may request from the deployer a meaningful explanation of the role the high-risk AI system played in the decision-making procedure."

How Art.13 and Art.86 connect:

  1. Art.13(3)(b)(vi): Instructions must explain how to interpret outputs → deployer can only explain outputs if Art.13 gives them the tools to do so
  2. Art.13(3)(d): Human oversight measures must be documented → deployer must know which decisions required human review before explaining
  3. Art.13(3)(f): Feedback and reporting mechanisms → the Art.86 explanation request flows through this channel back to the provider

Developer obligation from Art.86: every rights-affecting output must carry enough context (key factors, confidence, which oversight measure applied) for the deployer to assemble a meaningful explanation on request.

GDPR Art.22 × AI Act Art.86

GDPR Art.22 gives data subjects the right not to be subject to solely automated decisions with legal or similarly significant effects. AI Act Art.86 adds a complementary right to explanation after the fact.
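The complementary scopes can be sketched as a small decision helper. The boolean conditions compress both provisions heavily (they ignore, for example, the Annex III point 2 carve-out in Art.86 and the Art.22(2) exceptions), so treat this as an illustration, not legal logic:

```python
# Simplified sketch of which explanation-related rights attach to a
# decision. Conditions are deliberate simplifications of GDPR Art.22
# and AI Act Art.86, not a substitute for legal analysis.

def applicable_rights(
    solely_automated: bool,         # no meaningful human involvement
    legal_or_similar_effect: bool,  # legal effects or similarly significant
    annex_iii_high_risk: bool,      # output of an Annex III high-risk system
) -> set[str]:
    rights = set()
    if solely_automated and legal_or_similar_effect:
        rights.add("GDPR Art.22 safeguards (human intervention, contest)")
    if annex_iii_high_risk and legal_or_similar_effect:
        rights.add("AI Act Art.86 explanation of the AI system's role")
    return rights
```

Under this simplification, a solely automated, legally significant decision from an Annex III system triggers both sets of rights, while the same decision with meaningful human review triggers only the Art.86 explanation right.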

Key interactions: GDPR Art.22(3) requires safeguards such as human intervention and the right to contest the decision, while AI Act Art.86 adds the explanation itself. A single structured explanation record can serve both obligations:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DecisionExplanation:
    """
    Structured explanation fulfilling Art.86 + GDPR Art.22(3).
    Must be generatable from system outputs to enable deployer responses.
    """
    decision_id: str
    decision_outcome: str
    ai_system_contribution: str     # "The AI system recommended X based on Y"
    key_factors: List[str]          # Top factors influencing the decision
    uncertainty_level: str          # "high", "medium", "low"
    human_review_occurred: bool
    human_override_available: bool
    appeal_channel: str
    oversight_mechanism_used: str   # Which Art.14 measure was applied
    data_categories_used: List[str] # Types of personal data that influenced the decision
    consent_basis: Optional[str]    # GDPR Art.22(2) basis if automated decision


def generate_explanation_for_affected_person(
    decision_id: str,
    instructions_for_use: "InstructionsForUse",
    inference_output: dict,
    human_reviewed: bool
) -> DecisionExplanation:
    """
    Build an Art.86 + GDPR Art.22(3) explanation from system output context.
    Only possible when Art.13(3)(b)(vi) output interpretation guide is complete.
    """
    confidence = inference_output.get("confidence", 0.0)
    return DecisionExplanation(
        decision_id=decision_id,
        decision_outcome=inference_output.get("decision", ""),
        ai_system_contribution=(
            f"The AI system ({instructions_for_use.provider_name}) analyzed the provided "
            f"input data according to its intended purpose: {instructions_for_use.intended_purpose}. "
            f"The system produced an output with a confidence score of {confidence:.2%}. "
            f"{instructions_for_use.output_interpretation_guide}"
        ),
        key_factors=inference_output.get("top_factors", []),
        uncertainty_level="high" if confidence < 0.5 else ("medium" if confidence < 0.75 else "low"),
        human_review_occurred=human_reviewed,
        human_override_available=True,
        appeal_channel=instructions_for_use.explanation_request_contact or "",
        oversight_mechanism_used=instructions_for_use.oversight_intensity,
        data_categories_used=inference_output.get("data_categories", []),
        consent_basis=None
    )

Art.13 × CLOUD Act — Documentation Jurisdiction

If your Art.13 instructions for use and supporting technical documentation are stored on US-hosted infrastructure, the CLOUD Act creates a parallel access channel:

The problem: the CLOUD Act lets US authorities compel US-based service providers to disclose data in their possession, custody, or control, regardless of where it is physically stored. Instructions for use and technical documentation describe system internals, known failure modes, and population-level performance, which is exactly the material you least want reachable by a second jurisdiction.

Art.11 × Art.13 jurisdiction double-bind: Art.18 requires providers to keep the technical documentation for 10 years after the system is placed on the market. On US-hosted infrastructure, that entire retention period is simultaneously reachable by EU market surveillance authorities (Art.74) and by US legal process.

EU-native documentation chain: Storing your Art.13 instructions and Art.11 technical documentation on EU-sovereign infrastructure means EU law governs access exclusively: one regime, one set of disclosure obligations, for the full retention period.


Art.13 Compliance Checklist (30 Items)

class Art13ComplianceAuditor:
    """Audit an InstructionsForUse object against all Art.13 requirements."""

    CHECKLIST = [
        # Art.13(2) Format Requirements
        ("Format: Digital format available", lambda ifu: True),
        ("Format: Official language of deployment Member State", lambda ifu: True),
        ("Format: Concise, complete, correct, clear", lambda ifu: bool(ifu.intended_purpose)),

        # Art.13(3)(a) Provider Identity
        ("Art.13(3)(a): Provider name present", lambda ifu: bool(ifu.provider_name)),
        ("Art.13(3)(a): Provider address present", lambda ifu: bool(ifu.provider_address)),
        ("Art.13(3)(a): Provider contact present", lambda ifu: bool(ifu.provider_contact)),

        # Art.13(3)(b) Capabilities
        ("Art.13(3)(b)(i): Intended purpose documented", lambda ifu: bool(ifu.intended_purpose)),
        ("Art.13(3)(b)(i): Excluded use cases documented", lambda ifu: len(ifu.excluded_use_cases) > 0),
        ("Art.13(3)(b)(ii): Accuracy value present", lambda ifu: ifu.accuracy > 0),
        ("Art.13(3)(b)(ii): Robustness score present", lambda ifu: ifu.robustness_score > 0),
        ("Art.13(3)(b)(iii): Risk circumstances documented", lambda ifu: len(ifu.risk_circumstances) > 0),
        ("Art.13(3)(b)(iv): Population performance metrics", lambda ifu: len(ifu.population_accuracy) > 0),
        ("Art.13(3)(b)(v): Input data specification", lambda ifu: bool(ifu.input_data_specification)),
        ("Art.13(3)(b)(vi): Output interpretation guide", lambda ifu: bool(ifu.output_interpretation_guide)),

        # Art.13(3)(c) Planned Changes
        ("Art.13(3)(c): Planned changes disclosed (or none)", lambda ifu: True),

        # Art.13(3)(d) Human Oversight
        ("Art.13(3)(d): Oversight mechanisms listed", lambda ifu: len(ifu.oversight_mechanisms) > 0),
        ("Art.13(3)(d): Oversight intensity specified", lambda ifu: bool(ifu.oversight_intensity)),
        ("Art.13(3)(d): Oversight role recommendation", lambda ifu: bool(ifu.oversight_role_recommendation)),

        # Art.13(3)(e) Computational Requirements
        ("Art.13(3)(e): CPU requirements specified", lambda ifu: ifu.minimum_cpu_cores > 0),
        ("Art.13(3)(e): RAM requirements specified", lambda ifu: ifu.minimum_ram_gb > 0),
        ("Art.13(3)(e): Expected lifetime specified", lambda ifu: ifu.expected_lifetime_years > 0),
        ("Art.13(3)(e): Maintenance schedule present", lambda ifu: bool(ifu.maintenance_schedule)),
        ("Art.13(3)(e): Software update policy present", lambda ifu: bool(ifu.software_update_policy)),

        # Art.13(3)(f) Feedback Mechanisms
        ("Art.13(3)(f): Feedback channel available", lambda ifu: ifu.concern_reporting_url is not None),
        ("Art.13(3)(f): Explanation request contact", lambda ifu: ifu.explanation_request_contact is not None),

        # Art.86 Right to Explanation readiness
        ("Art.86: Output interpretable for explanation", lambda ifu: bool(ifu.output_interpretation_guide)),
        ("Art.86: Explanation channel documented", lambda ifu: ifu.explanation_request_contact is not None),

        # Art.50 intersection
        ("Art.50(1): Chatbot disclosure built into system (if applicable)", lambda ifu: True),
        ("Art.50(2): Emotion recognition notification (if applicable)", lambda ifu: True),

        # CLOUD Act risk
        ("CLOUD Act: Documentation on EU-sovereign infrastructure", lambda ifu: True),
    ]

    def audit(self, ifu: "InstructionsForUse") -> dict:
        results = {}
        for check_name, check_fn in self.CHECKLIST:
            try:
                results[check_name] = check_fn(ifu)
            except Exception:
                results[check_name] = False
        passed = sum(1 for v in results.values() if v)
        total = len(results)
        results["__summary__"] = f"{passed}/{total} checks passed ({100*passed//total}%)"
        return results

Art.13 Cross-Article Documentation Matrix

| Article | Obligation | Art.13 Instructions Element |
| --- | --- | --- |
| Art.9 Risk Management | Risk register and mitigation | Art.13(3)(b)(iii) risk circumstances |
| Art.10 Training Data | Bias examination results | Art.13(3)(b)(iv) population accuracy |
| Art.11 Technical Documentation | Full technical package | Art.13 instructions = Annex IV Section 1 |
| Art.12 Logging | Logging capabilities built-in | Art.13(3)(d) human oversight measures |
| Art.14 Human Oversight | Oversight mechanism design | Art.13(3)(d) describes to deployer |
| Art.47 Declaration of Conformity | Provider accountability | Art.13(3)(a) provider identity reference |
| Art.50 User Transparency | AI disclosure to end users | Art.13(3)(b)(vi) enables deployer Art.50 compliance |
| Art.72 Post-Market Monitoring | PMM plan reference | Art.13(3)(c) planned changes |
| Art.83 Substantial Modification | Changed system requires updated IFU | Art.13 instructions must be reissued |
| Art.86 Right to Explanation | Affected person explanation requests | Art.13(3)(f) explanation channel |
| GDPR Art.22 | Automated decision rights | Art.13(3)(b)(vi) enables GDPR-compliant explanation |

EU-Native Deployment: Single-Jurisdiction Art.13 Compliance

When you deploy a high-risk AI system on EU-sovereign infrastructure, your Art.13 compliance posture simplifies significantly:

| Compliance Dimension | US Cloud | EU-Native (sota.io) |
| --- | --- | --- |
| Art.13 documentation storage | CLOUD Act accessible | EU law governs exclusively |
| Art.11 × Art.13 10-year retention | US DOJ parallel compellability for a decade | Single EU regime |
| Art.13(3)(b)(ii) cybersecurity disclosure | US cloud certification gaps to explain | EU-framework certifiable natively |
| Art.50 chatbot disclosure language | US legal review for EU content | EU data sovereignty by default |
| Art.86 explanation data storage | GDPR Art.5(1)(f) integrity risk | GDPR-compliant by architecture |
| Market surveillance access (Art.74) | Dual-jurisdiction disclosure risk | EU authorities only |

For high-risk AI systems deployed in regulated sectors — employment (Annex III point 4), education (point 3), critical infrastructure (point 2), and essential private and public services including healthcare access (point 5) — single-jurisdiction documentation removes the CLOUD Act dual-compellability risk from your Art.11 + Art.13 documentation for the full 10-year retention period.

