2026-04-23 · 14 min read

EU AI Act Art.40: Post-Market Monitoring — PMM Plans, Continuous Surveillance, and Risk Feedback Loops for High-Risk AI Systems (2026)

Article 40 of the EU AI Act introduces one of the most operationally demanding continuous obligations for providers of high-risk AI systems: the requirement to establish, implement, document, and maintain a post-market monitoring system that actively tracks how deployed systems perform in real-world conditions, identifies emerging risks and safety issues, and feeds findings back into the risk management and conformity assessment processes that were completed before deployment.

The requirement is structurally significant because it reframes AI system compliance from a pre-deployment certification activity into a lifecycle obligation. Passing conformity assessment under Art.43, achieving CE marking, and completing technical documentation under Art.11 are necessary but not sufficient conditions for ongoing legal compliance under the EU AI Act. Art.40 adds a continuous post-deployment surveillance obligation whose outputs can trigger regulatory consequences — most importantly, Art.72 serious incident reporting — and whose systematic failures can expose providers to enforcement action under Art.79.

For software developers and AI engineers, Art.40 translates into an engineering and data collection problem: you need to instrument deployed systems to capture the right signals, establish processes for analysing those signals against performance baselines and safety thresholds, and maintain audit-ready records that demonstrate continuous monitoring was actually performed.

What Art.40 Requires: The PMM System

Art.40(1) establishes the core obligation: providers of high-risk AI systems shall implement a post-market monitoring system in a manner proportionate to the nature of the AI technologies and the risks of the high-risk AI system.

The phrase "proportionate to the nature of the AI technologies and the risks" is not a softening of the requirement — it is an instruction to calibrate the depth and breadth of monitoring to the risk profile established in the Art.9 risk management system. A high-risk AI system used in critical infrastructure that affects a large population requires more intensive and broader monitoring than a high-risk AI system used in a narrow, controlled deployment with limited affected persons. The proportionality principle requires providers to explicitly document why their chosen monitoring approach is appropriate for the specific risk profile, not simply that they are monitoring.

Art.40(2) specifies the data collection obligation: the PMM system shall actively collect, document, and analyse data relevant to the performance of deployed high-risk AI systems throughout their lifetime. This active collection obligation distinguishes Art.40 from passive approaches like waiting for users to report problems. The provider must affirmatively collect data — not merely receive it passively — and the obligation runs throughout the lifetime of the deployed system, not just during an initial deployment period.

The Recitals to the AI Act (particularly Recital 79) clarify that post-market monitoring should cover data on the functioning of the AI system, the results of decisions taken by the AI system, interactions between the AI system and its users, and data enabling the provider to identify potential systemic issues with the AI system's design or training. This recital-level guidance gives providers substantive content for determining what data categories to collect under Art.40(2)'s active collection requirement.
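The recital's four data categories can be mapped directly onto a telemetry taxonomy so that every raw event a deployed system emits lands in exactly one Art.40(2) collection bucket. The following sketch is illustrative: the category and event-type names are assumptions for this example, not terms drawn from the Act.

```python
from enum import Enum

class PMMDataCategory(Enum):
    SYSTEM_FUNCTIONING = "system_functioning"    # runtime health, latency, errors
    DECISION_RESULTS = "decision_results"        # outputs and decisions produced
    USER_INTERACTIONS = "user_interactions"      # operator overrides, appeals
    SYSTEMIC_DESIGN_SIGNALS = "systemic_design"  # drift, bias, training-gap indicators

def classify_event(event_type: str) -> PMMDataCategory:
    """Route a raw telemetry event type into a Recital 79 data category."""
    routing = {
        "inference_error": PMMDataCategory.SYSTEM_FUNCTIONING,
        "prediction_logged": PMMDataCategory.DECISION_RESULTS,
        "operator_override": PMMDataCategory.USER_INTERACTIONS,
        "drift_alarm": PMMDataCategory.SYSTEMIC_DESIGN_SIGNALS,
    }
    return routing[event_type]
```

A routing table like this also makes gaps visible: an event type with no category assignment is a signal that the instrumentation is collecting data the PMM plan has not accounted for.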

The PMM Plan: Structure and Required Content

Art.40(3) requires that the PMM system be based on a post-market monitoring plan, and that this plan form part of the technical documentation required under Art.11 and Annex IV. The plan must be updated where necessary as a result of the PMM activities themselves — creating a self-amending obligation where monitoring findings that affect the plan must be incorporated into it.

The PMM plan must contain, at minimum:

Monitoring objectives and scope: A clear statement of what the PMM system is designed to monitor, which aspects of the AI system's performance and outputs are subject to surveillance, which deployment contexts and user populations are included, and which risks identified in the Art.9 risk management system the monitoring is specifically designed to detect as they manifest in real-world conditions.

Data collection methods and sources: A specification of the technical mechanisms by which monitoring data will be collected — telemetry, logging, user feedback systems, performance dashboards, API monitoring, output sampling — and the sources of that data, including the AI system itself, user interfaces, operator reporting channels, external incident databases, and national market surveillance authority communications.

Data categories and indicators: The specific data categories that will be collected (performance metrics, error rates, output distributions, usage patterns, user feedback, incident reports, near-miss events), the key performance indicators against which collected data will be evaluated, the thresholds that trigger escalation procedures, and the baseline performance profiles against which deviations will be assessed.

Analysis cadence and review processes: The frequency of data analysis (continuous automated analysis for critical safety parameters, periodic human review cycles for trend analysis, triggered reviews upon threshold breaches), the roles and responsibilities for performing analysis, and the decision criteria for escalating findings within the provider's organisation and to regulatory authorities.

Corrective action protocols: The processes triggered when PMM analysis identifies performance degradation, unexpected behaviour, safety incidents, or emerging risks — including internal escalation, root cause analysis, technical remediation (model update, system modification, deployment restriction), customer and deployer notification, and regulatory reporting under Art.72 where applicable.

Record-keeping and documentation obligations: How PMM data will be retained, for how long, in what format, and how access will be managed to ensure audit-readiness for national competent authorities under Art.79.

Plan review and update triggers: The conditions under which the PMM plan itself will be reviewed and updated — including at minimum following serious incidents, following significant changes to the deployment context, following AI system updates that affect the risk profile, and following guidance or enforcement actions by competent authorities.
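The seven content areas above lend themselves to an automated completeness check before a PMM plan version is released into technical documentation. The sketch below assumes a plan stored as a dictionary keyed by section; the section keys are illustrative names for this example.

```python
# Required sections mirror the seven content areas listed above.
REQUIRED_PMM_PLAN_SECTIONS = {
    "objectives_and_scope",
    "data_collection_methods",
    "data_categories_and_indicators",
    "analysis_cadence",
    "corrective_action_protocols",
    "record_keeping",
    "review_and_update_triggers",
}

def missing_plan_sections(plan_document: dict) -> set[str]:
    """Return required sections that are absent or empty in the plan document."""
    return {
        section for section in REQUIRED_PMM_PLAN_SECTIONS
        if not plan_document.get(section)
    }
```

Running this check in a documentation CI pipeline ensures a plan with a missing section never reaches the version that is filed under Art.11 and Annex IV.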

PMM Data: What to Collect and Why

Art.40's active data collection obligation requires providers to make deliberate engineering choices about instrumentation. The following data categories represent the minimum viable set for a proportionate PMM system for most high-risk AI system categories:

Performance metrics: Accuracy, precision, recall, F1 scores, or domain-appropriate performance indicators measured against the performance targets established in the technical documentation and risk management system. Performance drift below baseline thresholds is a primary PMM signal — it suggests that the real-world data distribution has shifted from the training distribution, that the deployment context has changed, or that system degradation has occurred.

Output distribution monitoring: Statistical tracking of the distribution of system outputs over time, designed to detect distribution shift (the system is producing outputs in proportions that differ significantly from its validated performance profile), bias emergence (protected characteristics are becoming predictive of system outputs in ways that were not present in the validated system), or anomalous output clustering (the system is repeatedly making similar errors or unusual decisions in identifiable contexts).

Error and failure tracking: Documentation of individual errors, system failures, incorrect outputs, and near-miss events — instances where the system produced an output that could have caused harm but did not, or where an operator or user intervention prevented a harmful outcome. Near-misses are particularly valuable for identifying emerging risk patterns before they manifest as serious incidents.

Interaction quality signals: Where the AI system operates with user interaction — in employment screening, credit scoring, medical device functions, biometric identification — data on user experience, user feedback, appeals and corrections initiated by users or affected persons, and operator interventions that reversed AI system outputs provide qualitative PMM signals that quantitative performance metrics may not capture.

Deployment context changes: Data identifying changes in the contexts in which the AI system is deployed — new user populations, expanded geographic scope, changed use cases by operators, integration with new downstream systems — that may affect the risk profile in ways not addressed by the original conformity assessment.

Incident data from external sources: Horizon scanning for reports of similar systems from other providers experiencing failures or safety incidents, relevant regulatory guidance, academic literature identifying emerging risks in the AI technology category, and national market surveillance authority publications on AI system issues.
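For output distribution monitoring, one widely used drift statistic is the Population Stability Index (PSI), which compares the binned output distribution validated at conformity assessment against the distribution observed in production. The sketch below is one possible approach; the three-bin example proportions and the 0.2 "significant shift" cut-off are common conventions, not requirements of the Act.

```python
import math

def population_stability_index(expected: list[float], observed: list[float]) -> float:
    """PSI between two binned output distributions, given as proportions per bin."""
    eps = 1e-6  # guard against empty bins
    return sum(
        (o - e) * math.log((o + eps) / (e + eps))
        for e, o in zip(expected, observed)
    )

validated = [0.50, 0.30, 0.20]  # output class proportions at conformity assessment
deployed = [0.30, 0.30, 0.40]   # proportions observed in production
psi = population_stability_index(validated, deployed)
shift_detected = psi >= 0.2     # common convention: PSI >= 0.2 indicates significant shift
```

A PSI computed per review cycle gives the PMM plan a concrete, documented indicator for the "output distribution monitoring" category, with a numeric threshold that can be calibrated to the Art.9 risk profile.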

Art.40 × Art.9: The PMM–Risk Management Feedback Loop

The most structurally important relationship in the post-market monitoring framework is the feedback loop between Art.40 PMM activities and the Art.9 risk management system. Art.9 requires providers to establish, implement, document, and maintain a risk management system for high-risk AI systems that runs throughout the entire lifecycle of the system and identifies, estimates, evaluates, and mitigates risks. Art.40 is the mechanism by which the "throughout the entire lifecycle" element of Art.9 remains operational after deployment.

The feedback loop operates as follows: the Art.9 risk management system, completed before deployment, identifies the known and reasonably foreseeable risks of the AI system and the measures taken to address them. The Art.40 PMM system monitors deployed performance to detect whether those known risks are manifesting in ways not fully addressed by the mitigation measures, and whether new risks — not identified during pre-deployment risk assessment — are emerging as the system encounters real-world conditions. When PMM findings reveal either category of risk, they must be fed back into the Art.9 risk management system, which must be updated, and the resulting technical documentation changes must be reflected in updated conformity assessment records.

This feedback loop is not optional for providers who implement changes following PMM findings. An Art.40 PMM finding that leads to a model update, architecture change, or deployment restriction is a change to the AI system that must be processed through the Art.9 risk management system. If the change is significant enough to affect the conformity assessment conclusions, it may require a new or supplementary conformity assessment under Art.43. Providers who treat PMM as an operational activity separate from compliance documentation will find themselves with a gap between their deployed system and their CE-marked technical documentation — a gap that national market surveillance authorities under Art.79 are specifically empowered to identify and act upon.

The practical implication is that the Art.40 PMM plan must include explicit provisions for escalating PMM findings that have risk management implications to the persons responsible for the Art.9 risk management system, and must document the decisions taken about whether PMM findings require technical documentation updates.
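One way to make those escalation decisions auditable is to record each one as a structured object rather than an email thread. The sketch below assumes a minimal decision record; the field names are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class RiskEscalationDecision:
    finding_id: str
    new_or_unaddressed_risk: bool  # triggers Art.9 risk management system review
    system_change_planned: bool    # e.g. model update, deployment restriction
    tech_doc_update_required: bool
    rationale: str

def requires_art9_review(decision: RiskEscalationDecision) -> bool:
    """Any new/unaddressed risk or PMM-triggered system change must flow into Art.9."""
    return decision.new_or_unaddressed_risk or decision.system_change_planned
```

Keeping these records alongside the PMM data means an authority reviewing the feedback loop can see not only the findings but the documented decision taken on each one.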

Art.40 × Art.72: The PMM-to-Incident-Reporting Pipeline

Art.72 of the EU AI Act requires providers of high-risk AI systems to report to market surveillance authorities any serious incident they become aware of. Art.72(1) defines serious incidents to include malfunctions or failures of the AI system that result in death, serious injury, significant damage to property, or serious impacts on fundamental rights, as well as situations where the AI system's operation has created a serious risk to health, safety, or fundamental rights that has not resulted in harm but could do so.

Art.40's PMM system is the primary mechanism by which providers become aware of serious incidents within the meaning of Art.72. The "become aware of" language in Art.72(1) creates a due diligence standard — a provider that has no PMM system, or whose PMM system fails to collect the data that would reveal a serious incident, cannot rely on ignorance as a defence when the incident subsequently becomes known. Competent authorities reviewing a serious incident will examine whether the provider's PMM system was adequate to detect the incident and, if it was not, whether the PMM system design itself represents a failure of Art.40 compliance.

The Art.40 PMM plan must therefore include a defined pipeline from PMM data analysis to Art.72 reporting. This pipeline must specify: the data signals that would indicate a serious incident or serious risk, the threshold criteria that trigger escalation to the Art.72 reporting obligation, the internal roles responsible for making the Art.72 reporting determination, and the timeline for reporting (Art.72(3) requires reporting without undue delay — and in any case within fifteen days of becoming aware of an event).

The timeline creates particular engineering obligations for the PMM system. Fifteen days is a short window for a provider who must detect an incident through PMM data, attribute it to the AI system, assess its severity against the Art.72 threshold, prepare the incident report, and submit it to the relevant market surveillance authority. PMM systems that rely solely on periodic batch analysis — reviewing data weekly or monthly — may systematically fail to identify serious incidents within the fifteen-day reporting window. Providers should design their PMM data collection and analysis pipelines with the Art.72 reporting deadline explicitly in mind, using automated alerting for critical safety signals that could indicate serious incidents rather than relying on periodic human review cycles alone.
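The cadence argument can be made quantitative: in the worst case an incident occurs just after a batch analysis run and is only detected at the next one, so the detection lag equals the batch interval. A minimal feasibility check, assuming the fifteen-day window described above:

```python
import datetime

ART72_WINDOW_DAYS = 15  # Art.72(3) reporting window as described above

def days_left_to_report(incident_time: datetime.datetime,
                        analysis_time: datetime.datetime) -> int:
    """Days remaining in the Art.72 window once analysis detects the incident."""
    deadline = incident_time + datetime.timedelta(days=ART72_WINDOW_DAYS)
    return (deadline - analysis_time).days

def cadence_leaves_margin(batch_interval_days: int,
                          internal_processing_days: int) -> bool:
    """Worst case: incident occurs just after a batch run, detected one interval later."""
    worst_case_detection_lag = batch_interval_days
    return worst_case_detection_lag + internal_processing_days <= ART72_WINDOW_DAYS
```

With a monthly batch review and five days of internal processing, the worst case is thirty-five days and the window is missed; with daily automated alerting the same processing overhead leaves ample margin. This is the arithmetic behind treating the deadline as an engineering constraint.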

Art.40 × Art.17: PMM Within the Quality Management System

Art.17 of the EU AI Act requires providers of high-risk AI systems to implement a quality management system. Art.17(1)(g) explicitly includes "the post-market monitoring system" as one of the elements that the QMS must cover. This means Art.40's PMM system is not a standalone obligation — it must be integrated into the provider's Art.17 QMS, which means it is subject to QMS governance, documentation standards, internal audit requirements, and management review processes.

For providers with existing ISO 9001 or sector-specific quality management systems (ISO 13485 for medical devices, EN 9100 for aerospace, IEC 62443 for industrial automation), the Art.40 PMM requirements must be mapped into existing QMS structures. The PMM plan becomes a QMS document subject to QMS document control. PMM data collection and analysis procedures become QMS procedures with version control and approval workflows. PMM findings that require corrective action become QMS corrective action records. QMS internal audits must cover PMM system compliance.

For providers who do not have existing QMS structures (common in software-native AI companies not previously subject to EU product safety regulation), Art.17 requires establishing a full QMS with Art.40 PMM as one of its components. This is not a light-touch exercise — it requires documented procedures, roles and responsibilities, record-keeping systems, and internal audit capabilities that many software organisations are not accustomed to maintaining.

PMM for GPAI Model Providers Under Art.75

Article 75 of the EU AI Act establishes monitoring obligations specifically for providers of general-purpose AI models. While Art.75 is the primary framework for GPAI model PMM, Art.40 applies to providers who deploy high-risk AI systems built on top of GPAI models. The interaction creates a two-tier monitoring obligation in the GPAI supply chain:

The GPAI model provider must fulfil Art.75 obligations, which include monitoring for misuse, systemic risks, and incidents attributable to the model. The downstream provider who deploys a high-risk AI system using a GPAI model as a component must fulfil Art.40 obligations for the deployed system as a whole — including monitoring how the GPAI component performs in the specific high-risk deployment context, which may differ substantially from the GPAI model's general performance profile.

In practice, this means high-risk AI system providers using GPAI model APIs (such as foundation model APIs for AI-assisted medical diagnosis, employment screening, or credit decisioning) must instrument both their system-level performance and their use of the underlying model, and must have contractual arrangements with the GPAI model provider to receive incident notifications and model change information that could affect Art.40 PMM obligations.
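A minimal sketch of that two-tier instrumentation: the downstream provider logs both the GPAI model call and the system-level decision built on it, so the Art.40 record covers the model as used in this specific deployment context. The `call_model` and `decide` callables are placeholders for the real model client and downstream decision logic, not any particular vendor's API.

```python
import datetime

def monitored_decision(call_model, decide, prompt: str, audit_log: list) -> str:
    """Run a GPAI-backed decision and append model-tier and system-tier PMM events."""
    model_output = call_model(prompt)  # GPAI component behaviour
    decision = decide(model_output)    # system-level high-risk output
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    audit_log.append({
        "ts": now,
        "tier": "model",
        "prompt_len": len(prompt),
        "output_len": len(model_output),
    })
    audit_log.append({
        "ts": now,
        "tier": "system",
        "decision": decision,
    })
    return decision
```

Separating the tiers in the log is what later allows a PMM finding to be attributed either to the GPAI component (triggering the contractual notification channel) or to the downstream system logic.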

Art.40 × Art.79: PMM Records and Market Surveillance Authority Access

Art.79 empowers national market surveillance authorities to conduct market surveillance activities including requesting access to the technical documentation, including PMM records, of high-risk AI system providers. PMM records are a primary target for MSA review because they demonstrate whether the provider's continuous compliance obligation under Art.40 has been met in practice — not just at the point of conformity assessment.

The record-keeping requirements implicit in Art.40 (documented PMM plan, data collection records, analysis findings, corrective action records, plan update history) must be maintained in a form accessible to MSAs. Art.79 does not specify a retention period for PMM records, but the Art.18 technical documentation retention period (ten years from the date the high-risk AI system is placed on the market or put into service) applies to all technical documentation components, including the PMM plan and its supporting records. Providers must plan for a ten-year retention obligation when designing PMM data storage architectures.
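The ten-year horizon is simple to compute but easy to get wrong at the edges, so it is worth encoding once in the storage layer. A small sketch, assuming the Art.18-derived retention period discussed above:

```python
import datetime

RETENTION_YEARS = 10  # Art.18 technical documentation retention period

def pmm_record_retention_until(placed_on_market: datetime.date) -> datetime.date:
    """Earliest date on which PMM records for this system may be deleted."""
    try:
        return placed_on_market.replace(year=placed_on_market.year + RETENTION_YEARS)
    except ValueError:
        # 29 February placed-on-market dates: the target year may not be a leap year
        return placed_on_market.replace(
            year=placed_on_market.year + RETENTION_YEARS, day=28
        )
```

Tagging every PMM record with this date at write time lets storage lifecycle policies enforce retention automatically rather than relying on periodic manual review.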

Python Implementation: PostMarketMonitoringTracker

The following Python class provides a structured framework for managing Art.40 PMM obligations programmatically, covering monitoring thresholds, incident detection, and reporting pipeline management:

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
import datetime


class PMMLevel(Enum):
    ROUTINE = "routine"          # Normal operation, no threshold breach
    ELEVATED = "elevated"        # Performance degradation, increased monitoring
    ALERT = "alert"              # Threshold breach, investigation required
    INCIDENT = "incident"        # Serious incident — Art.72 reporting triggered


class IncidentSeverity(Enum):
    SERIOUS = "serious"          # Art.72 reporting mandatory (≤15 days)
    SIGNIFICANT = "significant"  # Internal investigation, Art.9 update required
    MINOR = "minor"              # Logged, reviewed in periodic cycle


@dataclass
class PerformanceBaseline:
    metric_name: str
    baseline_value: float
    alert_threshold_pct: float     # % adverse deviation from baseline triggering ALERT
    incident_threshold_pct: float  # % adverse deviation triggering INCIDENT classification
    measurement_unit: str
    higher_is_better: bool = True  # False for error-rate metrics, where an increase is adverse


@dataclass
class PMMMonitoringRecord:
    timestamp: datetime.datetime
    metric_name: str
    observed_value: float
    baseline_value: float
    deviation_pct: float
    pmm_level: PMMLevel
    notes: str = ""
    corrective_action_required: bool = False
    art72_report_required: bool = False


@dataclass
class PMMPlan:
    system_id: str
    system_name: str
    risk_category: str                # Art.9 risk classification
    monitoring_start_date: datetime.date
    performance_baselines: list[PerformanceBaseline] = field(default_factory=list)
    monitoring_records: list[PMMMonitoringRecord] = field(default_factory=list)
    plan_version: str = "1.0"
    last_reviewed: Optional[datetime.date] = None
    last_updated_reason: str = ""

    def add_baseline(self, baseline: PerformanceBaseline) -> None:
        """Register a performance baseline metric for continuous monitoring."""
        self.performance_baselines.append(baseline)

    def record_measurement(
        self,
        metric_name: str,
        observed_value: float,
        notes: str = ""
    ) -> PMMMonitoringRecord:
        """Record a PMM measurement and classify the PMM level."""
        baseline = next(
            (b for b in self.performance_baselines if b.metric_name == metric_name),
            None
        )
        if baseline is None:
            raise ValueError(f"No baseline registered for metric: {metric_name}")

        # Adverse deviation is expressed as a positive percentage in both
        # directions: a drop in an accuracy-style metric, or a rise in an
        # error-rate metric such as the false negative rate.
        if baseline.higher_is_better:
            deviation_pct = ((baseline.baseline_value - observed_value)
                             / baseline.baseline_value) * 100
        else:
            deviation_pct = ((observed_value - baseline.baseline_value)
                             / baseline.baseline_value) * 100

        if deviation_pct >= baseline.incident_threshold_pct:
            pmm_level = PMMLevel.INCIDENT
            art72_required = True
            corrective_required = True
        elif deviation_pct >= baseline.alert_threshold_pct:
            pmm_level = PMMLevel.ALERT
            art72_required = False
            corrective_required = True
        elif deviation_pct > 0:
            pmm_level = PMMLevel.ELEVATED
            art72_required = False
            corrective_required = False
        else:
            pmm_level = PMMLevel.ROUTINE
            art72_required = False
            corrective_required = False

        record = PMMMonitoringRecord(
            timestamp=datetime.datetime.now(datetime.timezone.utc),
            metric_name=metric_name,
            observed_value=observed_value,
            baseline_value=baseline.baseline_value,
            deviation_pct=round(deviation_pct, 2),
            pmm_level=pmm_level,
            notes=notes,
            corrective_action_required=corrective_required,
            art72_report_required=art72_required,
        )
        self.monitoring_records.append(record)
        return record

    def get_open_incidents(self) -> list[PMMMonitoringRecord]:
        """Return all records classified as INCIDENT — each requires Art.72 assessment."""
        return [r for r in self.monitoring_records if r.pmm_level == PMMLevel.INCIDENT]

    def get_art72_deadline(self, incident_record: PMMMonitoringRecord) -> datetime.date:
        """Calculate Art.72 reporting deadline: 15 days from incident detection."""
        return (incident_record.timestamp + datetime.timedelta(days=15)).date()

    def plan_status_report(self) -> dict:
        """Generate current PMM plan status for technical documentation."""
        incidents = self.get_open_incidents()
        return {
            "system_id": self.system_id,
            "plan_version": self.plan_version,
            "monitoring_records_total": len(self.monitoring_records),
            "open_incidents": len(incidents),
            "art72_reporting_required": any(r.art72_report_required for r in incidents),
            "oldest_unresolved_art72_deadline": (
                min(self.get_art72_deadline(r) for r in incidents).isoformat()
                if incidents else None
            ),
            "baselines_monitored": len(self.performance_baselines),
            "last_reviewed": self.last_reviewed.isoformat() if self.last_reviewed else None,
        }


# Example usage — medical AI triage system (Annex III category)
pmm = PMMPlan(
    system_id="MED-AI-TRIAGE-001",
    system_name="Emergency Triage Decision Support System",
    risk_category="Annex III 5(a) — AI for medical purposes",
    monitoring_start_date=datetime.date(2026, 3, 1),
)

pmm.add_baseline(PerformanceBaseline(
    metric_name="triage_accuracy",
    baseline_value=0.94,
    alert_threshold_pct=3.0,    # Alert below 91.2% accuracy
    incident_threshold_pct=7.0, # Art.72 trigger below 87.4% accuracy
    measurement_unit="proportion_correct",
))

pmm.add_baseline(PerformanceBaseline(
    metric_name="false_negative_rate",
    baseline_value=0.03,
    alert_threshold_pct=15.0,    # Alert if FNR rises >15% above baseline (>0.0345)
    incident_threshold_pct=30.0, # Incident if FNR rises >30% above baseline (>0.039)
    measurement_unit="proportion_missed_critical",
    higher_is_better=False,      # An increase in FNR is the adverse direction
))

result = pmm.record_measurement("triage_accuracy", 0.865, "April weekly review")
print(f"PMM Level: {result.pmm_level.value}")
print(f"Art.72 report required: {result.art72_report_required}")
if result.art72_report_required:
    deadline = pmm.get_art72_deadline(result)
    print(f"Art.72 reporting deadline: {deadline}")

report = pmm.plan_status_report()
print(report)

Art.40 Compliance Checklist: 26-Item PMM Implementation Guide

PMM System Architecture (Items 1–6)

  1. PMM system established as a documented, standing organisational process (not an ad hoc activity) proportionate to the Art.9 risk profile of the deployed system.
  2. PMM plan created, version-controlled, and incorporated into technical documentation under Art.11 and Annex IV as a required component.
  3. PMM plan covers all deployment contexts of the high-risk AI system, including any expansions of geographic scope, user population, or use case since initial conformity assessment.
  4. PMM data collection mechanisms technically implemented — telemetry, logging, user feedback channels, operator reporting — with continuous automated collection for safety-critical metrics.
  5. PMM system integrated into Art.17 QMS document control, with PMM plan subject to QMS approval, versioning, and internal audit coverage.
  6. Roles and responsibilities for PMM activities formally defined and documented: data collection ownership, analysis responsibility, threshold breach escalation authority, Art.72 reporting decision-maker.

Data Collection and Analysis (Items 7–13)

  7. Performance baseline metrics defined for all safety-relevant AI system capabilities, with documented alert and incident thresholds calibrated to Art.9 risk assessment conclusions.
  8. Active collection processes implemented for performance metrics, output distributions, error and failure events, and near-miss incidents.
  9. User interaction data (appeals, corrections, negative feedback, harm reports from affected persons) systematically captured through operator-facilitated channels where the system operates with user interaction.
  10. Deployment context change monitoring implemented — processes to detect when operators are using the system in contexts beyond the conformity-assessed scope.
  11. External monitoring in place: horizon scanning for similar system incidents, regulatory guidance, academic literature identifying emerging risks in the AI technology category.
  12. PMM analysis cadence defined: automated alerting for critical metrics, periodic human review cycles for trend analysis, triggered reviews upon threshold breaches.
  13. Statistical methods for detecting distribution shift, performance drift, and bias emergence documented and applied consistently to PMM data.

Risk Management Integration (Items 14–17)

  14. PMM findings escalation pathway to Art.9 risk management system formally documented — PMM findings that reveal new or unaddressed risks trigger Art.9 system review and update.
  15. Process defined for assessing whether PMM-triggered Art.9 updates require supplementary conformity assessment under Art.43.
  16. PMM plan update process triggered by Art.9 risk management system updates — changes to risk profile or mitigation measures reflected in updated PMM plan within defined timeframe.
  17. Technical documentation update workflow triggered by PMM-sourced changes to system design, architecture, or deployment restrictions.

Art.72 Incident Reporting Pipeline (Items 18–22)

  18. Incident severity classification criteria defined and documented against Art.72 thresholds (death, serious injury, significant property damage, serious fundamental rights impact, serious risk to health/safety without harm).
  19. Automated alerting implemented for PMM data signals that could indicate Art.72-qualifying serious incidents, with alert thresholds calibrated to ensure detection within the 15-day reporting window.
  20. Art.72 reporting decision tree documented: who makes the determination, what evidence is required, what reporting authority is notified (MSA in each member state where the system is deployed).
  21. Art.72 report template prepared and approved, covering all information requirements — system identification, incident description, timeline, affected persons, causal analysis, corrective measures.
  22. Internal escalation timeline defined to ensure Art.72 reports are submitted within 15 days of incident detection — accounting for internal review, legal sign-off, and regulatory submission processes.

Records and Market Surveillance Readiness (Items 23–26)

  23. PMM records retention system implemented: ten years retention for all PMM plans, data analysis records, incident records, corrective action records, and plan update history.
  24. PMM records organised for MSA access under Art.79 — clear document structure enabling rapid production of PMM records upon MSA request.
  25. GPAI supply chain PMM coverage confirmed where applicable: contractual arrangements with GPAI model providers for incident notifications and model change alerts that affect the high-risk AI system's PMM obligations.
  26. Annual PMM plan review scheduled: documented review assessing whether data collection methods, baselines, thresholds, and analysis processes remain appropriate given actual deployment experience.

Key Takeaways

Art.40 is the provision that transforms EU AI Act compliance from a certification-point obligation into a continuous lifecycle obligation. The conformity assessment procedures under Art.43, the technical documentation under Art.11, and the CE marking declaration under Art.48 are all backward-looking at the time of deployment — they record what the provider did before putting the system on the market. Art.40 creates the forward-looking obligation: actively monitoring what the system does after deployment, detecting deviations from validated performance, and reporting serious incidents to authorities.

The engineering consequence is that high-risk AI system providers cannot treat regulatory compliance as a project that ends at deployment. PMM instrumentation, data pipelines, alerting systems, analysis processes, and incident reporting workflows must be built, maintained, and operated as production engineering obligations. A deployed high-risk AI system with no functioning PMM system is a non-compliant system regardless of how well its pre-deployment conformity assessment was conducted.

The Art.40 × Art.72 pipeline is the most time-critical element. Fifteen days from becoming aware of a serious incident to regulatory reporting is a short window. Providers who discover serious incidents through manual review of incident tickets rather than automated PMM alerting will consistently struggle to meet this deadline. The PMM system design must treat the 15-day Art.72 deadline as a hard engineering constraint, not an administrative target.


This analysis covers EU AI Act Art.40 post-market monitoring requirements as of April 2026. The AI Act's Chapter V (Arts.40–49) addressing market surveillance and oversight is in force from August 2026 for high-risk AI systems. Providers should verify current EASA, ESAs, and national MSA guidance as implementation proceeds.