2026-04-22 · 15 min read

EU AI Act Art.9: Risk Management System for High-Risk AI — Iterative Lifecycle, Residual Risk, and the Living Document Obligation (2026)

Article 9 is the operational core of the EU AI Act's high-risk compliance framework. Where Art.8 activates the obligation to comply with Art.9–15, Art.9 defines what risk management actually means in practice: a structured, documented, continuous process that must run from initial design through end-of-life, updated as the system evolves and as post-market monitoring generates new risk data.

The most important thing to understand about Art.9 is that it does not permit a point-in-time risk assessment. It mandates a risk management system — a living set of processes, documentation, and controls that the provider must maintain throughout the entire product lifecycle.

This guide covers:

  1. Why Art.9 requires a system, not a point-in-time assessment
  2. Risk identification and analysis under Art.9(2), including reasonably foreseeable misuse
  3. Risk evaluation and the residual risk acceptability standard of Art.9(3)-(4)
  4. Pre-market testing against pre-defined benchmarks under Art.9(5)-(6)
  5. Heightened protection for children and vulnerable users
  6. SME proportionality under Art.9(8)
  7. The living document obligation and post-market update triggers
  8. Integration with Art.10, Art.12, and Annex IV, with a Python reference implementation

The Art.9 RMS Structure: Not a Document, a System

The EU AI Act's choice of the word "system" is deliberate. Art.9(1) specifies:

High-risk AI systems shall be subject to a risk management system. The risk management system shall be understood as a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps...

This framing has three practical consequences:

  1. Lifecycle scope: The RMS must be active before market placement, during deployment, and updated as long as the system is in use. Placing a system on the market does not close the RMS cycle — it opens the post-market phase.

  2. Iterative requirement: The RMS must be reviewed and updated regularly, not just when incidents occur. Art.9 does not specify a minimum review frequency, but the principle of "regular systematic review" requires documented review cycles, typically quarterly or annually for stable systems, more frequently when changes are made.

  3. Documented process: The RMS is not an informal practice. Art.9 requires that it be "established, implemented, documented, and maintained" — all four verbs apply. Annex IV Section 4 requires that the technical documentation include a description of the risk management system, including the risk identification and analysis performed and the risk management measures adopted.
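
The review-cadence principle in point 2 can be sketched as a simple overdue check. The interval values below are illustrative assumptions a provider would justify in its own RMS policy, not frequencies mandated by the Act:

```python
from datetime import date
from typing import Optional

# Illustrative review intervals -- Art.9 sets no minimum frequency, so these
# values are assumptions, to be justified in the provider's RMS policy.
REVIEW_INTERVAL_DAYS = {
    "stable": 365,        # annual review for unchanged systems
    "active_change": 90,  # quarterly while the system is being modified
}

def review_overdue(last_review: date, mode: str, today: Optional[date] = None) -> bool:
    """Return True if the documented review cycle for `mode` has lapsed."""
    today = today or date.today()
    return (today - last_review).days > REVIEW_INTERVAL_DAYS[mode]
```

A provider would hook a check like this into its compliance calendar and log each completed review in the RMS update history.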


Step 1: Risk Identification and Analysis — Art.9(2)

Art.9(2) defines what must be identified and analysed in the first step of the RMS:

Providers shall identify and analyse the known and foreseeable risks that the high-risk AI system may pose to health, safety, or fundamental rights when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.

Known vs. Foreseeable Risks

The distinction between known and foreseeable risks is operationally important. Known risks are those already documented for the product category: failure modes reported in the literature, incident databases, or the provider's own prior testing. Foreseeable risks have not yet materialised but can be anticipated from the system's design, deployment context, or expected user behaviour; both must appear in the risk register.

The "reasonably foreseeable misuse" clause is significant. Art.9(2) is not limited to intended use — it requires analysis of how the system might be misused in ways that are foreseeable given the context. A document-generation AI sold for legal drafting creates a foreseeable misuse risk if users submit it to courts without human review.

Risk Scope: Health, Safety, Fundamental Rights

Art.9 explicitly scopes risk to three categories:

Category           | Examples in High-Risk AI Context
-------------------|-------------------------------------------------------------
Health             | Medical device AI: incorrect diagnostic outputs; mental health screening: false negatives
Safety             | Safety-critical system: failure under adversarial input; autonomous component: unexpected behaviour at distribution boundary
Fundamental rights | Biometric categorisation: profiling exposure; creditworthiness: discriminatory denial; employment screening: systemic exclusion of protected groups

The fundamental rights category is the widest and often the hardest to analyse. It includes rights under the EU Charter (Art.21 non-discrimination, Art.8 data protection, Art.47 effective remedy) as well as rights protected under ECHR. Providers working in Annex III categories — particularly employment, education, and access to public services — must conduct a structured fundamental rights impact analysis as part of Art.9(2).
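
To make that analysis systematic, a provider can maintain an explicit map from deployment context to the Charter articles to be assessed. The contexts and mappings below are illustrative assumptions for structuring the analysis, not an exhaustive legal list:

```python
# Illustrative context -> EU Charter article map for structuring an Art.9(2)
# fundamental rights analysis. Entries are examples, not legal advice.
CHARTER_EXPOSURE: dict[str, list[str]] = {
    "employment_screening": ["Art.21 non-discrimination", "Art.8 data protection"],
    "creditworthiness": ["Art.21 non-discrimination", "Art.47 effective remedy"],
    "biometric_categorisation": ["Art.8 data protection", "Art.21 non-discrimination"],
    "access_to_public_services": ["Art.21 non-discrimination", "Art.47 effective remedy"],
}

def charter_articles_to_assess(context: str) -> list[str]:
    """Charter articles the risk analysis must cover for a deployment context."""
    if context not in CHARTER_EXPOSURE:
        raise KeyError(f"No fundamental-rights map for '{context}'; extend the map first")
    return CHARTER_EXPOSURE[context]
```

Forcing an explicit KeyError for unmapped contexts ensures a new deployment context triggers a deliberate fundamental rights assessment rather than silently returning an empty list.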


Step 2: Risk Evaluation and Residual Risk — Art.9(3)-(4)

After identification and analysis, Art.9(3) requires that the provider:

Estimate and evaluate the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.

Art.9(4) then sets the standard for what happens after risk mitigation measures are applied:

Adopt suitable risk management measures in accordance with the provisions of the following sections and take into account the effects and possible interactions resulting from the combination of the requirements set out in those sections, so as to ensure that the overall residual risk associated with each hazard as well as the overall residual risk of the high-risk AI system is judged acceptable.

The Residual Risk Standard

The phrase "judged acceptable" in Art.9(4) is the operative compliance threshold. The EU AI Act does not require zero residual risk — it requires that residual risk be acceptable after all practicable mitigation measures have been applied.

What constitutes acceptable residual risk is not defined in the text. Providers must calibrate acceptability based on:

  1. State-of-the-art (as referenced in Art.8): If the current state of the art in the relevant product category supports a specific risk reduction technique and the provider has not implemented it, the remaining risk is unlikely to be judged acceptable.

  2. Intended purpose severity: A residual risk that is acceptable for a low-stakes Annex III application (e.g., general-purpose educational AI) is unlikely to be acceptable for a high-stakes application (e.g., AI used in access to essential public services under Art.6(2)(a)).

  3. Comparative baseline: The risk of the AI system's output versus the counterfactual (no AI system, or the previous human process the AI replaces). Art.9's framework implicitly requires comparison to the alternative.

  4. Harmonised standards: Where harmonised standards exist under Art.40 for the relevant product category, compliance with those standards creates a presumption of conformity. A provider that follows a harmonised standard's risk management methodology and reaches a "judged acceptable" conclusion is presumed compliant with Art.9(4).
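
A minimal sketch of how factors 1 and 2 above could be combined into an explicit acceptability rule; the severity ceilings are assumptions the provider would have to justify and document, since "judged acceptable" is left open by the Act:

```python
# Illustrative Art.9(4) acceptability rule. The ceilings below are assumptions,
# not thresholds from the Act: "judged acceptable" is left to the provider.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def residual_risk_acceptable(residual_severity: str,
                             sota_measures_applied: bool,
                             high_stakes: bool) -> bool:
    """Reject outright if state-of-the-art mitigations (Art.8) were skipped;
    otherwise apply a stakes-dependent severity ceiling."""
    if not sota_measures_applied:
        return False
    ceiling = SEVERITY_RANK["low"] if high_stakes else SEVERITY_RANK["medium"]
    return SEVERITY_RANK[residual_severity] <= ceiling
```

The rule encodes factor 1 as a hard gate (no state-of-the-art, no acceptance) and factor 2 as a stricter ceiling for high-stakes Annex III uses.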


Step 3: Pre-Market Testing — Art.9(5)

Art.9(5) establishes testing obligations that must be satisfied before market placement:

High-risk AI systems shall be tested for the purposes of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and they are in compliance with the requirements set out in this Section.

Testing Must Be Risk-Targeted

Art.9(5) is not a general performance testing requirement — it is specifically aimed at identifying risk management measures. This means the test protocol must be designed to probe:

  1. The specific risks identified under Art.9(2), including conditions of reasonably foreseeable misuse
  2. The effectiveness of each mitigation measure adopted under Art.9(4)
  3. System behaviour at the boundaries of the intended operating conditions

The Art.9(6) Pre-Defined Benchmark Requirement

Art.9(6) adds a specific requirement for what testing must cover:

High-risk AI systems shall be tested by using pre-defined measures and benchmarks that shall be proportionate to the intended purpose of the high-risk AI system and to the degree of risk associated with the high-risk AI system.

In practice, this means that test protocols must be documented with:

  1. The pre-defined measures and benchmarks, fixed before any test execution
  2. A proportionality rationale linking each benchmark to the intended purpose and degree of risk
  3. Explicit pass/fail thresholds for every criterion

The requirement for pre-defined benchmarks is significant. A provider that tests a model and then selects pass/fail thresholds after seeing the results has not satisfied Art.9(6). The criteria must be established before testing.
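
One way to make the "established before testing" property verifiable is to timestamp and hash the criteria when they are fixed. This is an illustrative evidentiary pattern, not a mechanism the Act prescribes:

```python
import hashlib
import json
from datetime import datetime, timezone

def freeze_criteria(criteria: dict[str, str]) -> dict:
    """Record test criteria with a UTC timestamp and content hash, so later
    audits can show the thresholds predate the test results."""
    canonical = json.dumps(criteria, sort_keys=True)
    return {
        "criteria": criteria,
        "frozen_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(canonical.encode()).hexdigest(),
    }

def criteria_unchanged(frozen: dict) -> bool:
    """Verify the criteria have not been edited since they were frozen."""
    canonical = json.dumps(frozen["criteria"], sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest() == frozen["sha256"]
```

Storing the frozen record alongside the test results gives the Annex IV documentation a tamper-evident link between criteria and outcomes.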


Step 4: Children and Vulnerable Users — Art.9(7)

Art.9(7) itself addresses information that cannot be fully determined at design time:

Where the information referred to in paragraph 2 cannot be fully determined during the design and development phase due to the nature of the high-risk AI system, providers shall further specify and update this information once the high-risk AI system is placed on the market or put into service.

The heightened protection for children, however, comes from Recital 49 and the broader Art.9(2) framework: when the intended purpose includes use by or affecting children (e.g., educational AI, AI used in childcare, AI used in access to services for minors), providers must:

  1. Identify the specific vulnerability profile of the child user population
  2. Apply heightened risk analysis to psychological, cognitive, and developmental risks
  3. Implement risk management measures calibrated to the developmental stage of the expected users
  4. Not rely solely on adult-calibrated testing datasets

The practical implication: if your Annex III AI system targets or reaches minors, your Art.9(2) risk analysis must explicitly address that population and your Art.9(6) testing must include representative minor user data (with appropriate consent and data protection compliance under Art.10).


Step 5: SME Proportionality — Art.9(8)

Art.9(8) provides a proportionality carve-out for small and medium enterprises:

Without prejudice to Article 11(4), where providers of high-risk AI systems are SMEs, including start-ups, they may carry out the relevant conformity assessment procedure with a notified body before putting the high-risk AI system into service, instead of prior to placing it on the market.

This is a timing flexibility, not a substantive exemption. SMEs are still required to have a fully compliant RMS before putting the system into service. The difference is that SMEs may do the notified body conformity assessment at service commencement rather than at market placement, which can be relevant where the distinction between "market" and "service" matters for the specific product category.

Additionally, under Art.11(4), SMEs benefit from simplified technical documentation requirements — but again, the RMS itself must be compliant.


The Living Document Obligation: Post-Market Updates

The most practically demanding aspect of Art.9 is the "living document" requirement. Art.9(1) establishes that the RMS is a "continuous iterative process", and its reference to "regular systematic review and updating" creates an ongoing obligation.

What Triggers an RMS Update

Trigger                              | Required RMS Action
-------------------------------------|----------------------------------------------------------
Significant accuracy change (drift)  | Re-run risk analysis for affected use cases; update residual risk evaluation
New misuse pattern identified        | Add to Art.9(2) foreseeable misuse inventory; assess whether existing controls remain adequate
Post-market monitoring incident      | Root cause analysis; update risk controls; document change in Annex IV Section 4
Substantial modification (Art.3(23)) | Full RMS restart — substantial modifications reset the lifecycle
New harmonised standard published    | Re-evaluate compliance; update technical documentation
Regulatory guidance update           | Assess impact on risk analysis and control measures
New deployment geography             | Re-analyse risks for the specific legal and social context

Substantial Modification Resets the Clock

Art.3(23) defines "substantial modification" as a change that affects the AI system's compliance with the EU AI Act's requirements or changes the intended purpose. When a substantial modification occurs, the provider is treated as placing a new system on the market — which means the full Art.9 RMS cycle must restart, including pre-market testing, risk analysis, and Annex IV documentation.

The key question providers face is whether a given change constitutes a substantial modification. Indicative criteria from the regulatory context:

  1. Does the change alter the intended purpose stated in the technical documentation?
  2. Does it affect compliance with any Art.9–15 requirement (accuracy, robustness, logging, human oversight)?
  3. Was the change foreseen and planned in the initial conformity assessment? If so, it is generally not substantial.
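
A triage sketch for that question, using the two limbs of Art.3(23) (changed intended purpose, compliance impact) plus the carve-out for changes foreseen at initial conformity assessment. The boolean framing is an illustrative simplification, not an official test:

```python
def is_substantial_modification(changes_intended_purpose: bool,
                                affects_art9_15_compliance: bool,
                                foreseen_in_initial_assessment: bool = False) -> bool:
    """Art.3(23) sketch: a post-market change is substantial if it alters the
    intended purpose or affects compliance, unless it was foreseen and planned
    in the initial conformity assessment."""
    if foreseen_in_initial_assessment:
        return False
    return changes_intended_purpose or affects_art9_15_compliance
```

In practice each boolean is itself the output of a documented analysis, not a snap judgement; the function only fixes the decision logic.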

Integration with Other Art.9–15 Requirements

Art.9 does not operate in isolation. Its risk identification output is the input to several other compliance requirements:

Art.9 → Art.10 (Data Governance)

Art.9(2) risk analysis often reveals data-related risks: training dataset bias, coverage gaps, distribution shift between training and deployment data. These findings must be fed into the Art.10(2) data governance process, which requires that training, validation, and testing datasets be subject to "appropriate data governance and management practices."

The practical link: risks identified in Art.9(2) that are traceable to data quality problems must appear in the Art.10 data governance documentation, and the mitigations must be reflected in both the Art.9 risk controls and the Art.10 dataset assessment.

Art.9 → Art.12 (Logging)

Art.12(1) requires that high-risk AI systems have the capability to automatically generate logs of events during operation. The Art.9 RMS should identify which events constitute risk indicators and must be logged — this list drives the Art.12 logging specification.

Without the Art.9 risk analysis determining what to monitor, an Art.12 logging implementation lacks a principled basis for what to capture.
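
A sketch of that derivation: each risk in the register names the operational events needed to monitor it, and the union of those events becomes the Art.12 logging specification. The `monitored_events` field is an assumption for this sketch, not a field of the `Risk` dataclass shown later in this post:

```python
# Illustrative Art.9 -> Art.12 bridge: derive what to log from what the risk
# register says must be monitored. `monitored_events` is an assumed field.
def derive_log_spec(risk_register: list[dict]) -> list[str]:
    """Union of risk-indicator events across the register, as a stable list."""
    events: set[str] = set()
    for risk in risk_register:
        events.update(risk.get("monitored_events", []))
    return sorted(events)
```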

Art.9 → Annex IV Section 4

Annex IV Section 4 requires the technical documentation to include a description of:

  1. The risk management system established in accordance with Art.9
  2. The risks identified and analysed, including foreseeable misuse
  3. The risk management measures adopted and the residual risk evaluation
  4. The testing procedures, pre-defined benchmarks, and results

This means the Art.9 process must produce documentation artefacts — not just internal policies. The risk register, risk evaluation reports, testing protocols, and residual risk acceptability judgements must all be documented in a form that can be included in or referenced from the Annex IV technical documentation.


Python Implementation: RiskManagementSystem Class

from dataclasses import dataclass, field
from datetime import datetime, date
from enum import Enum
from typing import Optional
import json


class RiskSeverity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


class RiskCategory(Enum):
    HEALTH = "health"
    SAFETY = "safety"
    FUNDAMENTAL_RIGHTS = "fundamental_rights"


class RiskStatus(Enum):
    IDENTIFIED = "identified"
    ANALYSED = "analysed"
    MITIGATED = "mitigated"
    RESIDUAL_ACCEPTED = "residual_accepted"
    RESIDUAL_REJECTED = "residual_rejected"
    MONITORING = "monitoring"


@dataclass
class Risk:
    risk_id: str
    description: str
    category: RiskCategory
    severity: RiskSeverity
    source: str  # "intended_use" | "foreseeable_misuse"
    identified_date: date
    status: RiskStatus = RiskStatus.IDENTIFIED
    mitigation_measures: list[str] = field(default_factory=list)
    residual_severity: Optional[RiskSeverity] = None
    residual_accepted: Optional[bool] = None
    acceptance_rationale: Optional[str] = None
    last_reviewed: Optional[date] = None


@dataclass
class TestProtocol:
    protocol_id: str
    name: str
    target_risks: list[str]  # risk_ids
    pre_defined_criteria: dict[str, str]  # criterion_name -> pass condition
    test_date: Optional[date] = None
    results: Optional[dict] = None
    passed: Optional[bool] = None


@dataclass
class RMSUpdateRecord:
    update_date: date
    trigger: str
    changes: list[str]
    reviewer: str
    next_review_date: date


class RiskManagementSystem:
    """
    Art.9-compliant risk management system for high-risk AI.
    Implements the iterative lifecycle: identify → analyse → evaluate → test → monitor.
    """

    def __init__(self, system_name: str, intended_purpose: str, annex_iii_category: str):
        self.system_name = system_name
        self.intended_purpose = intended_purpose
        self.annex_iii_category = annex_iii_category
        self.risks: dict[str, Risk] = {}
        self.test_protocols: dict[str, TestProtocol] = {}
        self.update_history: list[RMSUpdateRecord] = []
        self.created_date = date.today()
        self.version = "1.0"

    # Art.9(2): Risk identification
    def add_risk(self, risk: Risk) -> str:
        self.risks[risk.risk_id] = risk
        return risk.risk_id

    def identify_risks_from_intended_use(self, use_scenarios: list[str]) -> list[str]:
        """Generate risk IDs for documentation. Provider must populate actual risks."""
        risk_ids = []
        for i, scenario in enumerate(use_scenarios):
            risk_id = f"IU-{i+1:03d}"
            print(f"[Art.9(2)] Register risk from intended use: {scenario} → ID: {risk_id}")
            risk_ids.append(risk_id)
        return risk_ids

    def identify_foreseeable_misuse(self, misuse_scenarios: list[str]) -> list[str]:
        risk_ids = []
        for i, scenario in enumerate(misuse_scenarios):
            risk_id = f"FM-{i+1:03d}"
            print(f"[Art.9(2)] Register foreseeable misuse risk: {scenario} → ID: {risk_id}")
            risk_ids.append(risk_id)
        return risk_ids

    # Art.9(3)-(4): Risk evaluation and residual risk
    def evaluate_residual_risk(
        self,
        risk_id: str,
        residual_severity: RiskSeverity,
        accepted: bool,
        rationale: str,
    ) -> None:
        if risk_id not in self.risks:
            raise ValueError(f"Risk {risk_id} not found. Register it first via add_risk().")
        risk = self.risks[risk_id]
        risk.residual_severity = residual_severity
        risk.residual_accepted = accepted
        risk.acceptance_rationale = rationale
        risk.status = (
            RiskStatus.RESIDUAL_ACCEPTED if accepted else RiskStatus.RESIDUAL_REJECTED
        )
        risk.last_reviewed = date.today()
        if not accepted:
            raise ValueError(
                f"Risk {risk_id} residual risk not acceptable. Additional mitigation required "
                f"before market placement. Rationale: {rationale}"
            )

    def check_all_residual_risks_accepted(self) -> bool:
        """Art.9(4): All residual risks must be judged acceptable before placement."""
        unaccepted = [
            r for r in self.risks.values()
            if r.status not in (RiskStatus.RESIDUAL_ACCEPTED, RiskStatus.MONITORING)
        ]
        if unaccepted:
            ids = [r.risk_id for r in unaccepted]
            raise ValueError(
                f"[Art.9(4)] Pre-placement check failed. Risks not yet accepted: {ids}. "
                f"Resolve all residual risks before market placement."
            )
        return True

    # Art.9(5)-(6): Pre-defined testing
    def create_test_protocol(self, protocol: TestProtocol) -> None:
        """Art.9(6): Criteria must be pre-defined (before testing begins)."""
        if protocol.test_date is not None:
            raise ValueError(
                "[Art.9(6)] Test protocol must be created with pre-defined criteria BEFORE "
                "testing. Do not add test_date at protocol creation time."
            )
        self.test_protocols[protocol.protocol_id] = protocol

    def record_test_results(
        self, protocol_id: str, results: dict, test_date: Optional[date] = None
    ) -> bool:
        if protocol_id not in self.test_protocols:
            raise ValueError(f"Protocol {protocol_id} not found. Create it via create_test_protocol().")
        protocol = self.test_protocols[protocol_id]
        protocol.test_date = test_date or date.today()
        protocol.results = results

        # Evaluate against pre-defined criteria
        passed = True
        for criterion, condition in protocol.pre_defined_criteria.items():
            if criterion not in results:
                print(f"[Art.9(6)] MISSING: criterion '{criterion}' not in test results")
                passed = False
            else:
                # Simple string-based condition check; extend for numeric thresholds
                actual = str(results[criterion])
                if condition.startswith(">="):
                    threshold = float(condition[2:])
                    if float(actual) < threshold:
                        print(f"[Art.9(6)] FAIL: {criterion}={actual} < {threshold}")
                        passed = False
                elif condition.startswith("<="):
                    threshold = float(condition[2:])
                    if float(actual) > threshold:
                        print(f"[Art.9(6)] FAIL: {criterion}={actual} > {threshold}")
                        passed = False
                else:
                    if actual != condition:
                        print(f"[Art.9(6)] FAIL: {criterion}={actual} != {condition}")
                        passed = False

        protocol.passed = passed
        return passed

    # Art.9(1): Living document — update lifecycle
    def record_update(self, trigger: str, changes: list[str], reviewer: str, days_to_next_review: int = 365) -> None:
        record = RMSUpdateRecord(
            update_date=date.today(),
            trigger=trigger,
            changes=changes,
            reviewer=reviewer,
            next_review_date=date.fromordinal(date.today().toordinal() + days_to_next_review),
        )
        self.update_history.append(record)
        self.version = f"{float(self.version) + 0.1:.1f}"
        print(f"[Art.9(1)] RMS updated to v{self.version}. Next review: {record.next_review_date}")

    def generate_annex_iv_section4(self) -> dict:
        """Generate Annex IV Section 4 documentation artefact."""
        return {
            "system": self.system_name,
            "intended_purpose": self.intended_purpose,
            "annex_iii_category": self.annex_iii_category,
            "rms_version": self.version,
            "created": str(self.created_date),
            "last_updated": str(date.today()),
            "risks_identified": len(self.risks),
            "risks_accepted": sum(1 for r in self.risks.values() if r.residual_accepted),
            "risks_pending": sum(1 for r in self.risks.values() if r.residual_accepted is None),
            "test_protocols": len(self.test_protocols),
            "test_protocols_passed": sum(1 for p in self.test_protocols.values() if p.passed),
            "update_history_entries": len(self.update_history),
            "risk_register": [
                {
                    "id": r.risk_id,
                    "category": r.category.value,
                    "severity": r.severity.value,
                    "source": r.source,
                    "status": r.status.value,
                    "residual_accepted": r.residual_accepted,
                }
                for r in self.risks.values()
            ],
        }

    def pre_placement_gate(self) -> dict:
        """Full Art.9 compliance gate before market placement."""
        issues = []

        # Check all risks have residual evaluation
        for r in self.risks.values():
            if r.residual_accepted is None:
                issues.append(f"Risk {r.risk_id} has no residual risk evaluation")
            elif not r.residual_accepted:
                issues.append(f"Risk {r.risk_id} residual risk not accepted")

        # Check all test protocols have been run and passed
        for p in self.test_protocols.values():
            if p.test_date is None:
                issues.append(f"Test protocol {p.protocol_id} not yet executed")
            elif not p.passed:
                issues.append(f"Test protocol {p.protocol_id} failed — cannot place on market")

        return {
            "gate_passed": len(issues) == 0,
            "issues": issues,
            "risks_evaluated": len(self.risks),
            "protocols_run": sum(1 for p in self.test_protocols.values() if p.test_date),
        }

Usage Example

# Provider: medical AI for diagnostic support (Annex III — medical devices)
rms = RiskManagementSystem(
    system_name="DiagnosticSupportAI v2.1",
    intended_purpose="Assist radiologists in detecting pulmonary nodules on CT scans",
    annex_iii_category="Annex III(5)(a) - Medical devices",
)

# Art.9(2): Identify risks
rms.add_risk(Risk(
    risk_id="IU-001",
    description="False negative nodule detection leading to missed diagnosis",
    category=RiskCategory.HEALTH,
    severity=RiskSeverity.CRITICAL,
    source="intended_use",
    identified_date=date(2026, 1, 15),
    mitigation_measures=[
        "Minimum sensitivity threshold 0.95 on validation set",
        "Mandatory radiologist confirmation before diagnostic conclusion",
        "Flagging of low-confidence outputs",
    ],
))

rms.add_risk(Risk(
    risk_id="FM-001",
    description="AI output used as sole diagnostic basis without radiologist review",
    category=RiskCategory.HEALTH,
    severity=RiskSeverity.HIGH,
    source="foreseeable_misuse",
    identified_date=date(2026, 1, 15),
    mitigation_measures=[
        "UI explicitly labels output as 'decision support only'",
        "Technical block prevents issuing report without radiologist sign-off field",
    ],
))

# Art.9(3)-(4): Evaluate residual risk for every registered risk —
# the pre-placement gate fails if any risk lacks an accepted evaluation
rms.evaluate_residual_risk(
    "IU-001",
    residual_severity=RiskSeverity.LOW,
    accepted=True,
    rationale="Post-mitigation sensitivity 0.97 on 2,400-scan validation set. "
              "Mandatory radiologist confirmation eliminates independent misdiagnosis path.",
)

rms.evaluate_residual_risk(
    "FM-001",
    residual_severity=RiskSeverity.LOW,
    accepted=True,
    rationale="Technical sign-off block removes the sole-basis misuse path; "
              "residual exposure limited to deliberate workflow circumvention.",
)

# Art.9(6): Pre-defined test protocol
rms.create_test_protocol(TestProtocol(
    protocol_id="TP-001",
    name="Sensitivity/Specificity Validation — External Test Set",
    target_risks=["IU-001"],
    pre_defined_criteria={
        "sensitivity": ">=0.95",
        "specificity": ">=0.85",
        "auc_roc": ">=0.92",
    },
))

# Run tests (after protocol is defined)
rms.record_test_results("TP-001", {"sensitivity": "0.97", "specificity": "0.88", "auc_roc": "0.94"})

# Pre-placement gate
gate = rms.pre_placement_gate()
print(f"Gate passed: {gate['gate_passed']}")

# Annex IV Section 4 documentation
doc = rms.generate_annex_iv_section4()
print(json.dumps(doc, indent=2))

Art.9 vs. the Existing Art.9 Post

Art.9 was first covered on this site in "eu-ai-act-art-9-risk-management-system-living-document-developer-guide", published in April 2026. This 2026 deep-dive series post expands that coverage with:

  1. The residual risk acceptability standard of Art.9(4) and its calibration factors
  2. The pre-defined benchmark requirement of Art.9(6)
  3. The post-market update trigger catalogue and substantial modification analysis
  4. A full Python RiskManagementSystem reference implementation

Art.9 Implementation Checklist (25 Items)

Use this checklist to verify Art.9 compliance before market placement. Each item maps to a specific paragraph or sub-obligation.

Risk Identification — Art.9(2)

  1. Risk register established with a unique ID for every identified risk
  2. Known risks for the product category documented (literature, incident databases, internal testing)
  3. Foreseeable misuse scenarios inventoried, not just intended-use failures
  4. All three risk categories analysed: health, safety, fundamental rights
  5. Fundamental rights analysis covers the relevant EU Charter articles

Risk Evaluation — Art.9(3)-(4)

  6. Each risk estimated and evaluated for intended use and foreseeable misuse
  7. Mitigation measures documented per risk
  8. Residual risk evaluated per hazard and for the system overall
  9. Acceptability judgement recorded with rationale (state of the art, severity, comparative baseline)
  10. Harmonised standards applied where available under Art.40

Testing — Art.9(5)-(6)

  11. Test protocols target specific identified risks, not just general performance
  12. Measures and benchmarks pre-defined before any test execution
  13. Benchmarks proportionate to intended purpose and degree of risk
  14. Test results recorded against the pre-defined criteria
  15. Failed criteria trigger additional mitigation and re-testing

Living Document — Art.9(1)

  16. Documented review cycle with a defined frequency
  17. Update triggers defined (drift, incidents, misuse patterns, standards, modifications)
  18. Every RMS update versioned with date, trigger, changes, and reviewer
  19. Substantial modifications (Art.3(23)) identified and the full cycle restarted
  20. Post-market monitoring data fed back into the risk register

Annex IV Documentation

  21. RMS description included in the technical documentation (Annex IV Section 4)
  22. Risk register exportable as a documentation artefact
  23. Testing protocols and results included or referenced
  24. Residual risk acceptability judgements documented with rationale
  25. Update history maintained and available to market surveillance authorities


See Also