2026-04-14 · 13 min read

What Most Developers Get Wrong About Art.9

If you are building a high-risk AI system — a CV screening tool, a credit risk model, an educational assessment platform — you already know that EU AI Act compliance requires risk management documentation. What many developers miss: Art.9 does not describe a one-time assessment you complete before deployment and file away. It mandates a risk management system — a continuous, iterative process that must be maintained, updated, and re-executed throughout the entire lifecycle of your AI system.

The shorthand practitioners use for this is a "living document." Behind that phrase sit concrete legal obligations that determine what your Annex IV technical documentation package must contain (Annex IV requires a description of the risk management system under Art.9) and what you must do every time your model changes, your use case expands, or post-market monitoring surfaces new risk data.

This guide covers what Art.9 actually requires, the 5-step lifecycle that follows from it in practice, how to structure the Risk Register, where Art.9 intersects with Art.10 data governance, and a Python implementation for tracking compliance state.


Who Is Subject to Art.9?

Art.9 applies to providers of high-risk AI systems listed in Annex III. If you place a high-risk AI system on the EU market or put it into service in the EU, you are a provider and Art.9 is mandatory. There is no Annex VI self-assessment shortcut for Art.9 — it is required regardless of which conformity assessment track you use.

Annex III categories in scope:

- Biometric identification and categorisation
- Critical infrastructure management
- Education and vocational training (e.g. exam scoring, admissions)
- Employment and worker management (e.g. CV screening, promotion decisions)
- Access to essential private and public services (e.g. credit scoring)
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes

If your system falls under any of these categories, Art.9 applies from August 2, 2026 for new systems.


Art.9 in Full: What the Regulation Actually Requires

Art.9(1) states the core obligation: providers shall establish, implement, document, and maintain a risk management system for high-risk AI systems.

The regulation then defines what that system entails in the paragraphs that follow:

Art.9(2): The risk management system is a continuous iterative process run throughout the entire lifecycle. It must be regularly, systematically updated. This is the "living document" mandate.

Art.9(3): Identify and analyze the known and reasonably foreseeable risks the high-risk AI system can pose to health, safety, or fundamental rights when used in accordance with its intended purpose.

Art.9(4): Estimate and evaluate risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.

Art.9(5): Adopt appropriate and targeted risk management measures, in the regulation's order of preference: (a) elimination or reduction of risks, as far as technically feasible, through design and development; (b) mitigation and control measures for risks that cannot be eliminated; (c) provision of information to deployers and, where appropriate, training.

Art.9(6): Document residual risks after applying measures under Art.9(5), and communicate them to deployers and users in accordance with Art.13 and Art.14.

Art.9(7): Testing to identify the most appropriate and targeted risk management measures. Testing must be performed throughout development, in any event prior to placing on the market or putting into service, and again at later points if the system changes.

A note on GPAI: providers of general-purpose AI models are governed by Chapter V, not Art.9. Art.9 attaches only when a GPAI model is integrated into a high-risk AI system placed on the EU market or put into service; the risk management obligation then applies to that system and its provider.


The 5-Step Risk Management Lifecycle

The regulation describes risk management as iterative. In practice, this maps to 5 steps that your Risk Register must document for each identified risk:

Step 1: Risk Identification

Identify known and reasonably foreseeable risks across three dimensions:

Intended use risks: What can go wrong when the system is used exactly as designed?

Reasonably foreseeable misuse risks: What can go wrong when users deviate from intended use?

Technical risks: Infrastructure and model behavior risks

Each identified risk gets a Risk ID, description, category (intended use / misuse / technical), and affected fundamental rights.

Step 2: Risk Analysis

For each identified risk, estimate:

| Dimension     | Scale | Notes                                               |
|---------------|-------|-----------------------------------------------------|
| Severity      | 1-5   | 1 = negligible, 5 = death or fundamental rights violation |
| Likelihood    | 1-5   | 1 = rare, 5 = almost certain given use patterns     |
| Reversibility | 1-3   | 1 = fully reversible, 3 = permanent                 |
| Breadth       | 1-3   | 1 = individual, 3 = systemic/population-level       |

Risk Score = Severity × Likelihood × Reversibility × Breadth

Document the basis for each estimate — this is what regulators will examine. "Likelihood: 3 — historical HR tool audit data shows 23% base rate for proxy variable correlation in CV screening" is defensible. "Likelihood: 2 — seems unlikely" is not.
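The multiplicative score is trivial to compute, but it is worth encoding with its range checks so an out-of-scale estimate fails loudly. A minimal sketch (the function name is ours, not from the regulation):

```python
def risk_score(severity: int, likelihood: int, reversibility: int, breadth: int) -> int:
    """Step 2 multiplicative Risk Score.
    Scales: severity/likelihood 1-5, reversibility/breadth 1-3."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must be in 1-5")
    if not (1 <= reversibility <= 3 and 1 <= breadth <= 3):
        raise ValueError("reversibility and breadth must be in 1-3")
    return severity * likelihood * reversibility * breadth

print(risk_score(4, 3, 2, 2))  # RISK-001 pre-mitigation example: 48
print(risk_score(5, 5, 3, 3))  # theoretical maximum: 225
```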

Step 3: Risk Evaluation

Evaluate each risk against your acceptable risk threshold. Art.9 does not set a numerical threshold — you must define one and document the basis for it in your Risk Register.

Typical threshold approach (matching the scoring bands used in the Python implementation later in this guide):

- Score ≥ 45: UNACCEPTABLE — deployment blocked until mitigated below threshold
- Score 20-44: HIGH — mitigation mandatory, residual risk disclosure to deployers required
- Score 10-19: MEDIUM — mitigation recommended
- Score < 10: LOW — document and monitor

The threshold must be consistent and documented. Regulators can challenge threshold-setting as a proxy for whether you took risk seriously.
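Expressed as code, the evaluation step is a pure function of the score. The cut-offs below are the illustrative bands used in the Python implementation later in this guide; Art.9 does not prescribe them, so your own cut-offs must ship with a documented rationale:

```python
def risk_band(score: int) -> str:
    """Map a Risk Score (max 5 * 5 * 3 * 3 = 225) to an action band.
    Cut-offs are illustrative, not prescribed by Art.9."""
    if score >= 45:
        return "UNACCEPTABLE"  # deployment blocked until mitigated
    if score >= 20:
        return "HIGH"          # mitigation mandatory, deployer disclosure required
    if score >= 10:
        return "MEDIUM"        # mitigation recommended
    return "LOW"               # document and monitor

print(risk_band(48))  # UNACCEPTABLE
print(risk_band(24))  # HIGH
```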

Step 4: Risk Mitigation

For each risk above threshold, document the mitigation measure selected and applied:

Mitigation types (Art.9(5)(a)-(c)):

Design-level elimination (preferred under Art.9(5)(a)): remove or replace risky features (e.g. proxy variables), debias or reweight training data, constrain the model's output space.

Control measures (Art.9(5)(b)): human review gates, confidence thresholds that route low-certainty cases to manual handling, monitoring and alerting on drift.

Information provision (Art.9(5)(c)): deployer instructions per Art.13, documented residual risks, required operator training.

After mitigation, re-score the risk (residual risk). Document both the pre-mitigation score and the post-mitigation score. If residual risk remains above acceptable threshold, the system cannot be deployed.

Residual risk must be communicated to deployers per Art.9(6). This is a hard obligation, not optional. Include in your Art.13 product information sheet: what risks remain after provider controls, what deployer controls are required to manage them.

Step 5: Monitor and Update (The Living Document)

Art.9(2) requires the risk management system to be "regularly and systematically updated." This is not a vague aspiration — it has concrete triggers:

Mandatory update triggers:

- Model change: retrained weights, architecture change, new or modified training data
- Use case expansion: new intended purpose, new deployment context or market
- Post-market monitoring signal (Art.72): incident report, drift detection, bias audit finding
- Substantial modification in the sense of Art.43(4), which also re-triggers conformity assessment
- Scheduled periodic review (at minimum annually)

Update log requirement: Each update to the Risk Register must be timestamped and describe what triggered the update, what changed, and who authorized the change. This is the "living" part that regulators will examine first.


Risk Register Structure

Your Risk Register is the core artifact of Art.9 compliance. It should be a versioned document (not a spreadsheet that gets overwritten) with the following structure:

risk_management/
├── RISK-REGISTER.md          # Master register (all risks, current state)
├── RISK-REGISTER-v1.0.md     # Immutable snapshot at deployment
├── RISK-REGISTER-v1.1.md     # Snapshot after first update
├── updates/
│   ├── 2026-08-01-initial-deployment.md
│   ├── 2026-09-15-model-update-v1.1.md
│   └── 2026-11-03-post-market-signal.md
└── decisions/
    ├── threshold-rationale.md
    └── residual-risk-disclosures.md
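Snapshotting can be automated so the master register is never edited without an immutable copy being archived first. A sketch assuming the layout above (the helper name and trigger-note convention are ours, not mandated):

```python
import shutil
from datetime import date
from pathlib import Path

def snapshot_register(register_dir: str, version: str, trigger_note: str) -> Path:
    """Archive an immutable snapshot of RISK-REGISTER.md before the next edit."""
    base = Path(register_dir)
    snapshot = base / f"RISK-REGISTER-v{version}.md"
    if snapshot.exists():
        # Snapshots are evidence: never overwrite one
        raise FileExistsError(f"{snapshot.name} already archived")
    shutil.copyfile(base / "RISK-REGISTER.md", snapshot)
    # Leave a dated note in updates/ describing what triggered the snapshot
    updates = base / "updates"
    updates.mkdir(exist_ok=True)
    (updates / f"{date.today()}-{trigger_note}.md").write_text(
        f"Snapshot v{version}: {trigger_note}\n", encoding="utf-8"
    )
    return snapshot
```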

Each risk entry format:

## RISK-001: Training Data Discrimination (Employment, HR Tool)

**Category:** Intended use risk
**Affected fundamental right:** Art.21 EU Charter — Non-discrimination (sex, racial/ethnic origin)
**Affected Art.9 section:** Art.9(3)

### Analysis (v1.0 — 2026-08-01)
- Severity: 4 (significant impact on employment opportunity — protected outcome)
- Likelihood: 3 (peer-reviewed studies show 23% base rate in similar tools)
- Reversibility: 2 (hiring decision reversal requires employer action, not automatic)
- Breadth: 2 (affects cohort of applicants systematically, not single individual)
- **Risk Score: 48** — UNACCEPTABLE (threshold: ≥45)

### Mitigation Applied
- M1 (Design): Removed postcode from feature set (proxy variable for ethnicity)
- M2 (Design): Reweighted training set to achieve demographic parity within ±5%
- M3 (Control): Mandatory human review for all rejection decisions above seniority Level 3

### Residual Risk (post-mitigation)
- Severity: 3, Likelihood: 2, Reversibility: 2, Breadth: 2
- **Residual Risk Score: 24** — HIGH (below unacceptable threshold, above acceptable)
- **Disclosed to deployers:** Yes (Art.13 product sheet v1.0, Section 4)
- **Required deployer control:** Human review for Level 3+ rejection decisions

### Update History
- v1.0 (2026-08-01): Initial deployment assessment
- v1.1 (2026-09-15): Re-scored after model retrain. Likelihood reduced from 3→2 following demographic parity audit. Score now 24.

Art.9 + Art.10 Intersection: Why Data Governance Feeds Risk Management

Art.10 (Data Governance) is not separate from Art.9 — it is a primary input to the risk identification step. Art.10(2) requires training data to be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose." Art.10(5) permits processing of special categories of personal data (ethnicity, health, political opinion) for bias detection and correction — but only under strict conditions.

The Art.9 ↔ Art.10 connection:

- Art.10(2) representativeness analysis is a risk identification input: gaps between the training data and the production population are precisely the distribution-shift and discrimination risks Art.9(3) asks you to identify.
- Bias audit results produced under Art.10 are the evidence base for Art.9 likelihood estimates (the "23% base rate" citation in Step 2 is exactly this kind of input).
- Art.10(5) special-category processing is what makes mitigation verification possible: you cannot demonstrate demographic parity without lawful access to the protected attributes.

Practical implication: Your Risk Register should cross-reference your data governance documentation. RISK-001 above should link to the bias audit report produced under Art.10. This is what an integrated Annex IV package looks like: the risk management description and the data documentation pointing at the same evidence.
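One lightweight way to make the cross-reference concrete is an Evidence section in each register entry. The file paths below are illustrative placeholders for your own data governance artifacts:

```markdown
### Evidence (Art.10 cross-reference)
- Bias audit report: data_governance/bias-audit-2026-07.md (Art.10(2) representativeness check)
- Special-category processing record: data_governance/art10-5-processing-log.md (Art.10(5) basis)
```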


Art.9 in Annex IV: What Goes in Your Technical Documentation

Annex IV requires a detailed description of the risk management system under Art.9 (point 5 of Annex IV), alongside information on the monitoring, functioning and control of the AI system (point 3, which also draws on Art.12). For your technical documentation package (required before conformity assessment), the risk management description must contain:

  1. Risk register reference: Link to RISK-REGISTER.md (current version) and the snapshot at deployment date
  2. Risk management process description: Describe your 5-step lifecycle, your scoring methodology, your acceptable threshold and its rationale
  3. Mitigation measures summary: For each HIGH/UNACCEPTABLE risk, describe the primary mitigation type (design elimination, control measure, information provision)
  4. Residual risk summary: Table of all residual risks with scores, required deployer controls, and Art.13 disclosure status
  5. Update trigger list: Enumerate the conditions that require a Risk Register update
  6. Testing results: Results from Art.9(7) pre-deployment testing that validated mitigation effectiveness
  7. Version history: RISK-REGISTER.md version log with dates and update triggers

Python RiskManagementSystem

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
import json
from datetime import date

class RiskCategory(Enum):
    INTENDED_USE = "intended_use"
    REASONABLY_FORESEEABLE_MISUSE = "reasonably_foreseeable_misuse"
    TECHNICAL = "technical"

class MitigationType(Enum):
    DESIGN_ELIMINATION = "art9_5a_design"       # Art.9(5)(a): eliminate by design
    CONTROL_MEASURE = "art9_5b_control"          # Art.9(5)(b): control measure
    INFORMATION_PROVISION = "art9_5c_information" # Art.9(5)(c): inform/train

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # Cannot deploy
    HIGH = "high"                   # Mitigation mandatory + disclosure required
    MEDIUM = "medium"               # Mitigation recommended
    LOW = "low"                     # Document and monitor

@dataclass
class RiskScore:
    severity: int        # 1-5
    likelihood: int      # 1-5
    reversibility: int   # 1-3
    breadth: int         # 1-3

    @property
    def total(self) -> int:
        return self.severity * self.likelihood * self.reversibility * self.breadth

    @property
    def level(self) -> RiskLevel:
        score = self.total
        if score >= 45:
            return RiskLevel.UNACCEPTABLE
        elif score >= 20:
            return RiskLevel.HIGH
        elif score >= 10:
            return RiskLevel.MEDIUM
        else:
            return RiskLevel.LOW

@dataclass
class Mitigation:
    mitigation_id: str
    mitigation_type: MitigationType
    description: str
    implemented: bool = False
    implementation_date: Optional[date] = None

@dataclass
class Risk:
    risk_id: str
    description: str
    category: RiskCategory
    affected_rights: list[str]
    pre_mitigation: RiskScore
    mitigations: list[Mitigation] = field(default_factory=list)
    post_mitigation: Optional[RiskScore] = None
    disclosed_to_deployers: bool = False
    required_deployer_controls: list[str] = field(default_factory=list)

    @property
    def is_deployment_blocking(self) -> bool:
        """Returns True if residual risk is still unacceptable."""
        effective_score = self.post_mitigation or self.pre_mitigation
        return effective_score.level == RiskLevel.UNACCEPTABLE

    def to_register_entry(self) -> dict:
        effective = self.post_mitigation or self.pre_mitigation
        return {
            "risk_id": self.risk_id,
            "description": self.description,
            "category": self.category.value,
            "pre_mitigation_score": self.pre_mitigation.total,
            "pre_mitigation_level": self.pre_mitigation.level.value,
            "mitigations_applied": len(self.mitigations),
            "post_mitigation_score": effective.total,
            "post_mitigation_level": effective.level.value,
            "deployment_blocking": self.is_deployment_blocking,
            "disclosed_to_deployers": self.disclosed_to_deployers,
        }

class RiskManagementSystem:
    """
    EU AI Act Art.9 Risk Management System.
    Implements the 5-step lifecycle as a living document.
    """

    def __init__(self, system_name: str, system_version: str):
        self.system_name = system_name
        self.system_version = system_version
        self.risks: list[Risk] = []
        self.update_log: list[dict] = []
        self.register_version = "1.0"
        self.last_updated = date.today()

    # Step 1: Identify
    def identify_risk(self, risk: Risk) -> None:
        """Art.9(3): Identify known and reasonably foreseeable risks."""
        self.risks.append(risk)

    # Step 2 & 3: Analyze + Evaluate (captured in RiskScore)

    # Step 4: Mitigate
    def add_mitigation(self, risk_id: str, mitigation: Mitigation,
                       post_mitigation_score: RiskScore) -> None:
        """Art.9(5): Apply mitigation measure and record residual risk."""
        risk = self._get_risk(risk_id)
        risk.mitigations.append(mitigation)
        risk.post_mitigation = post_mitigation_score

    def disclose_residual_risk(self, risk_id: str,
                                deployer_controls: list[str]) -> None:
        """Art.9(6): Document residual risk disclosure to deployers."""
        risk = self._get_risk(risk_id)
        risk.disclosed_to_deployers = True
        risk.required_deployer_controls = deployer_controls

    # Step 5: Monitor and Update
    def record_update(self, trigger: str, changes: str,
                      authorized_by: str) -> None:
        """Art.9(2): Log mandatory Risk Register update."""
        self.update_log.append({
            "date": str(date.today()),
            "trigger": trigger,
            "changes": changes,
            "authorized_by": authorized_by,
            "register_version": self.register_version,
        })
        # Increment register version
        major, minor = self.register_version.split(".")
        self.register_version = f"{major}.{int(minor) + 1}"
        self.last_updated = date.today()

    def assess_deployment_readiness(self) -> dict:
        """Art.9(7): Evaluate whether system can be deployed."""
        blocking = [r for r in self.risks if r.is_deployment_blocking]
        undisclosed_high = [
            r for r in self.risks
            if (r.post_mitigation or r.pre_mitigation).level in
               (RiskLevel.HIGH, RiskLevel.UNACCEPTABLE)
            and not r.disclosed_to_deployers
        ]
        return {
            "system": self.system_name,
            "version": self.system_version,
            "total_risks": len(self.risks),
            "blocking_risks": len(blocking),
            "undisclosed_high_risks": len(undisclosed_high),
            "deployment_cleared": len(blocking) == 0 and len(undisclosed_high) == 0,
            "register_version": self.register_version,
            "last_updated": str(self.last_updated),
        }

    def export_register(self) -> list[dict]:
        return [r.to_register_entry() for r in self.risks]

    def _get_risk(self, risk_id: str) -> Risk:
        for r in self.risks:
            if r.risk_id == risk_id:
                return r
        raise ValueError(f"Risk {risk_id} not found")


# Example: CV Screening SaaS — TalentRank
rms = RiskManagementSystem("TalentRank CV Screening", "v1.0")

# Step 1+2: Identify and analyze
r1 = Risk(
    risk_id="RISK-001",
    description="Training data discrimination: proxy variable for ethnicity (postcode) in feature set",
    category=RiskCategory.INTENDED_USE,
    affected_rights=["Art.21 EU Charter — Non-discrimination"],
    pre_mitigation=RiskScore(severity=4, likelihood=3, reversibility=2, breadth=2),
)
rms.identify_risk(r1)

r2 = Risk(
    risk_id="RISK-002",
    description="Distribution shift: production applicant pool deviates from training set (different geography)",
    category=RiskCategory.TECHNICAL,
    affected_rights=["Art.21 EU Charter — Non-discrimination"],
    pre_mitigation=RiskScore(severity=3, likelihood=3, reversibility=2, breadth=2),
)
rms.identify_risk(r2)

# Step 4: Mitigate
rms.add_mitigation(
    "RISK-001",
    Mitigation(
        "M-001",
        MitigationType.DESIGN_ELIMINATION,
        "Removed postcode from feature set; reweighted training for demographic parity ±5%",
        implemented=True,
        implementation_date=date(2026, 7, 15),
    ),
    post_mitigation_score=RiskScore(severity=3, likelihood=2, reversibility=2, breadth=2),
)
rms.disclose_residual_risk(
    "RISK-001",
    ["Human review mandatory for all rejection decisions at Level 3+ seniority"]
)

# Step 5: Record update (example post-deployment update)
rms.record_update(
    trigger="Post-market monitoring signal: bias audit Q4 2026",
    changes="RISK-001 Likelihood reduced 3→2 after demographic parity audit confirms ±3% (better than ±5% target)",
    authorized_by="CTO — TalentRank",
)

result = rms.assess_deployment_readiness()
print(json.dumps(result, indent=2))
# {
#   "system": "TalentRank CV Screening",
#   "version": "v1.0",
#   "total_risks": 2,
#   "blocking_risks": 0,
#   "undisclosed_high_risks": 1,   ← RISK-002 needs deployer disclosure
#   "deployment_cleared": false,
#   "register_version": "1.1",
#   "last_updated": "2026-07-20"
# }

The undisclosed_high_risks: 1 output tells you that RISK-002 is a HIGH-level residual risk that has not yet been disclosed to deployers — a hard Art.9(6) violation if you were to deploy in this state. Fix: add rms.disclose_residual_risk("RISK-002", [...]) with the required deployer controls before deployment.


Common Implementation Mistakes

Mistake 1: Treating risk management as a one-time exercise. Risk management must survive model updates. Every time model weights change, training data changes, or a new use case is added, the Risk Register must be re-evaluated. Build this into your CI/CD: a model deployment pipeline should have a "risk register review" gate.
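A minimal version of that gate can consume the JSON emitted by assess_deployment_readiness() from the implementation shown earlier (the file-based handoff is our assumption; wire it to your pipeline however your CI passes artifacts):

```python
import json

def risk_gate(register_json_path: str) -> int:
    """CI gate: return nonzero unless the Risk Register clears deployment.
    Expects the dict produced by RiskManagementSystem.assess_deployment_readiness()."""
    with open(register_json_path, encoding="utf-8") as f:
        readiness = json.load(f)
    if not readiness.get("deployment_cleared", False):
        print(f"BLOCKED: {readiness.get('blocking_risks')} blocking risk(s), "
              f"{readiness.get('undisclosed_high_risks')} undisclosed HIGH risk(s)")
        return 1  # fail the pipeline
    print(f"CLEARED: register v{readiness.get('register_version')}")
    return 0
```

In a shell-based pipeline the nonzero exit status is what stops the deployment step.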

Mistake 2: Generic risk descriptions. "Risk of bias" is not an Art.9(3)-compliant risk identification. A compliant identification names the mechanism (training data proxy variable), the protected characteristic affected (ethnicity), the output impacted (shortlist inclusion), and the population affected (applicants in postal code X). Specificity is what enables Step 2 analysis.

Mistake 3: Setting threshold without rationale. You define the acceptable risk threshold. Regulators will not accept "we set the threshold at 45" (out of a maximum multiplicative score of 225). They will ask: why 45? The answer should reference: (a) the fundamental rights stakes (employment decisions warrant a stricter threshold than marketing personalization), (b) comparable standards (ISO 31000, ISO/IEC 42001), and (c) your product's specific deployer context.

Mistake 4: Confusing Art.9(5)(a) with Art.9(5)(b). Design elimination under Art.9(5)(a) comes first in the regulation's ordering for a reason: control measures under (b) are for risks that cannot be eliminated by design. Using only control measures when design elimination was feasible exposes you to challenge: why didn't you retrain on debiased data instead of adding a human review layer?

Mistake 5: Not versioning the Risk Register. The living document requirement means regulators can ask for the Risk Register as it existed at deployment date (to compare against the current version). If you overwrite the register with each update, you have destroyed evidence of your original assessment. Version and archive every snapshot.


EU AI Act Art.9 Compliance Checklist (20 Items)

Risk Identification (Art.9(3)) — Items 1-5:

  1. Risks enumerated across all three categories: intended use, reasonably foreseeable misuse, technical
  2. Each risk has a Risk ID and a specific description naming mechanism, affected group, and impacted output
  3. Affected fundamental rights mapped per risk (EU Charter article)
  4. Art.10 data governance outputs (representativeness analysis, bias audits) reviewed as identification inputs
  5. Identification re-run on every mandatory update trigger

Risk Analysis and Evaluation (Art.9(3)-(4)) — Items 6-9:

  6. Every risk scored on severity, likelihood, reversibility, and breadth
  7. Each estimate backed by documented evidence (audits, test results, studies), not intuition
  8. Acceptable risk threshold defined, with written rationale
  9. Scoring methodology applied consistently across all risks

Risk Mitigation (Art.9(5)) — Items 10-14:

  10. Design-level elimination considered first for every above-threshold risk
  11. Control measures documented where design elimination is infeasible
  12. Pre-mitigation and post-mitigation (residual) scores recorded for each mitigated risk
  13. No deployment while any residual risk remains above the unacceptable threshold
  14. Mitigation effectiveness validated through Art.9(7) testing before deployment

Residual Risk Disclosure (Art.9(6)) — Items 15-16:

  15. All HIGH-level residual risks disclosed to deployers via the Art.13 product information
  16. Required deployer controls specified for each disclosed residual risk

Living Document (Art.9(2)) — Items 17-20:

  17. Risk Register versioned, with an immutable snapshot archived at deployment and after every update
  18. Update log records date, trigger, changes, and authorizing person for every update
  19. Update triggers enumerated and wired into CI/CD and release processes
  20. Periodic review scheduled (at minimum annually)


Infrastructure Note: Art.9 and the 10-Year Retention Requirement

Art.18 requires providers to keep technical documentation (which includes the Risk Register and all its historical versions) for 10 years after the high-risk AI system is placed on the market. For a system deployed in August 2026, your risk management documentation must be available until August 2036.
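The retention window is simple date arithmetic, but it is worth pinning in code next to your archival tooling. A sketch (the Feb 29 fallback is the only edge case):

```python
from datetime import date

def retention_until(placed_on_market: date) -> date:
    """Art.18: keep technical documentation, including all Risk Register
    versions, for 10 years after the system is placed on the market."""
    try:
        return placed_on_market.replace(year=placed_on_market.year + 10)
    except ValueError:
        # Placed on Feb 29 of a leap year and the target year is not a leap year
        return placed_on_market.replace(year=placed_on_market.year + 10, day=28)

print(retention_until(date(2026, 8, 2)))  # 2036-08-02
```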

This retention requirement creates a jurisdiction question under Art.18 and the US CLOUD Act: if your documentation is stored on a US-based cloud provider, US law enforcement can compel access to that data without EU legal process. For conformity documentation that may contain sensitive business information, an EU-native hosting infrastructure (with European jurisdiction, no CLOUD Act exposure) is the lower-risk choice. sota.io is an EU-native PaaS built specifically for this use case — deploy your risk documentation infrastructure without CLOUD Act exposure from day one.


Key Takeaways

  1. Art.9 is a system, not a document. The 5-step lifecycle must be operationalized, not just described.
  2. The "living document" obligation has concrete triggers: model updates, use case expansion, monitoring signals, annual review. Enumerate them.
  3. Risk scoring must be evidenced, not estimated. Cite bias audits, test results, peer-reviewed studies.
  4. Residual risk disclosure to deployers (Art.9(6)) is mandatory for all HIGH-level residual risks. It is not optional.
  5. Version your Risk Register. Regulators can ask for the snapshot at deployment date — you must have it.
  6. Art.9 and Art.10 are linked: data governance outputs (bias audit results) feed directly into risk analysis estimates.
  7. 10-year retention (Art.18) means your risk documentation infrastructure must be designed for long-term availability in EU jurisdiction.
