2026-04-24 · 12 min read · sota.io team

EU AI Act Art.72: Post-Market Monitoring — Provider Obligations, Data Collection Plans, and Serious Incident Correlation (2026)

EU AI Act Article 72 establishes a continuous post-deployment safety feedback loop for high-risk AI systems. Unlike a one-time conformity assessment, Art.72 imposes an ongoing operational obligation: providers must systematically collect, analyse, and act on performance data throughout the entire lifetime of their deployed high-risk AI system. The post-market monitoring system is the AI Act's primary instrument for detecting whether a system that satisfied the pre-market conformity requirements continues to perform safely and in conformity with the Regulation once exposed to real-world operational conditions.

For developers and compliance teams, Art.72 has direct architectural implications. The monitoring system must be embedded in the quality management system (QMS) required by Art.9, feed into the serious incident reporting pipeline required by Art.65, and be documented as part of the technical documentation required by Art.11. This is not a post-launch dashboard — it is a compliance-critical system that NCAs can demand access to at any time under Art.58 and Art.64.

The Commission's guidance and the harmonised standards being developed under the AI Act treat post-market monitoring as a fundamental element of the risk management system, not a standalone obligation. Systems that fail to establish compliant monitoring face the same penalty exposure as systems that bypass conformity assessment entirely.


Art.72 in the Post-Deployment Obligations Architecture

Art.72 sits within Chapter VIII of the AI Act, which governs post-deployment obligations. The Chapter VIII framework links monitoring data directly to enforcement:

Article | Obligation | Relation to Art.72
Art.9 | Quality management system | Post-market monitoring plan is a mandatory QMS component
Art.11 | Technical documentation | Monitoring methodology must be documented in Annex IV documentation
Art.65 | Serious incident reporting | Monitoring data triggers Art.65 reporting obligations
Art.58 | NCA investigative powers | NCAs may request access to monitoring data and system logs
Art.64 | Access to data and documentation | NCAs have a right to access monitoring records
Art.72 | Post-market monitoring plan and operation | This guide
Art.73 | Reporting obligations for providers of high-risk AI systems | Builds on Art.72 monitoring output

Art.72(1): The Provider Obligation and QMS Integration

Art.72(1) establishes the core obligation: providers of high-risk AI systems shall establish and document a post-market monitoring system in a manner appropriate to the nature of the AI technologies and the risks of the high-risk AI system. That system must be part of the provider's QMS (Art.9).

Scope. The obligation applies to all high-risk AI systems listed in Annex III or covered by Annex I (Union harmonisation legislation). The obligation applies throughout the operational lifetime of the system — from first deployment through to decommissioning.

QMS integration. The post-market monitoring plan is not a separate document; it must be embedded within the QMS procedures required by Art.9(1). The QMS must include procedures for collecting and analysing post-deployment data, escalating findings to the risk management process, and updating the technical documentation when the monitoring data reveals a change in the system's risk profile.

Proportionality. Art.72(1) acknowledges that the monitoring methodology must be appropriate to the nature of the AI technologies used and the specific risks of the system. A computer vision system deployed in a medical device context requires a more intensive monitoring regime than a document classification tool used in human resources screening. Providers must document the proportionality rationale in the QMS.

SME considerations. Small and medium enterprises face the same Art.72(1) obligation as large providers, but the proportionality principle permits a lighter-weight monitoring methodology where the risk profile justifies it. SMEs may use simplified data collection tools provided they can demonstrate systematic coverage of the required monitoring dimensions.


Art.72(2): Data Collection Methodology and Analysis

Art.72(2) specifies what the post-market monitoring plan must cover in terms of data collection and analysis. The plan must enable the provider to actively and systematically collect, document, and analyse relevant data on the performance of the system throughout its lifetime, whether provided by deployers or gathered from other sources, and to evaluate the system's continuous compliance with the requirements applicable to high-risk AI systems.

Data types. In practice, the monitoring data will typically include accuracy and drift metrics against the declared performance baselines, input data distribution statistics, human oversight intervention and override events, system error logs, and incident or malfunction reports received from deployers.

Analysis cadence. The plan must specify how frequently the collected data is analysed and by whom. The analysis must feed into the risk management process under Art.9(2)(b) and trigger updates to the technical documentation when the risk profile changes.

Deployer feedback loop. Art.26(1)(g) requires deployers of high-risk AI systems to inform providers of any serious incidents or malfunctions. Providers must design their monitoring architecture to receive, integrate, and act on this deployer-sourced data. The monitoring plan must document the channel and protocol for receiving deployer reports.
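As a minimal sketch of such a channel (all names and the report schema are hypothetical, not prescribed by the Act), a deployer feedback intake can normalise incoming Art.26(1)(g) reports into monitoring-event records, flagging deployer-suspected serious incidents for immediate correlation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DeployerReport:
    """Incoming Art.26(1)(g) report from a deployer (hypothetical schema)."""
    deployer_id: str
    received_at: datetime
    description: str
    suspected_serious: bool  # the deployer's own classification


def ingest_deployer_report(report: DeployerReport) -> dict:
    """Normalise a deployer report into a monitoring-event record.

    Deployer-flagged serious incidents are marked for immediate
    Art.3(49) correlation; everything else enters routine analysis.
    """
    return {
        "event_type": "deployer_report",
        "deployer_id": report.deployer_id,
        "timestamp": report.received_at.isoformat(),
        "description": report.description,
        "requires_correlation": report.suspected_serious,
    }
```

The point of the normalisation step is auditability: every deployer report becomes a timestamped record in the same event store as the provider's own telemetry, so NCA access requests cover both sources uniformly.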


Art.72(3): Serious Incident Correlation and the Art.65 Threshold

Art.72(3) establishes the link between post-market monitoring data and the serious incident reporting obligation under Art.65. Providers must ensure that their monitoring system includes a procedure for identifying candidate events in the monitoring data, correlating them against the Art.3(49) serious incident definition, and escalating events that meet the threshold into the Art.65 reporting pipeline.

Art.3(49) serious incident definition. A serious incident is any incident or malfunction of a high-risk AI system that directly or indirectly causes: (a) the death of a person; (b) serious damage to a person's health; (c) serious damage to property or the environment; or (d) a breach of fundamental rights. It also covers the unintended sharing of confidential information or a failure that results in a discriminatory outcome in a context listed in Annex III categories 1, 2, 3, 5(a), or 6.

Near-miss escalation. The monitoring plan should include a near-miss protocol: events that did not meet the Art.3(49) threshold but came close must be documented, analysed for systemic root causes, and fed back into the risk management system. Near-misses do not trigger Art.65 reporting but are relevant to the ongoing conformity assessment and may inform the provider's obligation to notify the NCA of significant changes under Art.65(8).

Correlation algorithm considerations. Providers using automated monitoring systems must ensure that the correlation logic does not introduce classification biases that systematically under-classify events near the serious incident threshold. The monitoring plan must document the algorithm parameters and the human review process for borderline events.
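One way to audit for this under-classification risk (a sketch under an assumed event shape, where human reviewers record a final verdict per event) is to measure how often human review upgrades borderline-band events that the automated scoring left below the threshold:

```python
def borderline_upgrade_rate(events: list[dict], lo: float = 0.60, hi: float = 0.75) -> float:
    """Fraction of borderline-band events (automated severity in [lo, hi))
    that human review later upgraded to serious incidents.

    A persistently high rate suggests the automated severity scoring
    systematically under-classifies events near the Art.3(49) threshold
    and should trigger recalibration of the correlation parameters.
    """
    borderline = [e for e in events if lo <= e["severity"] < hi]
    if not borderline:
        return 0.0
    upgraded = [e for e in borderline if e.get("human_verdict") == "serious"]
    return len(upgraded) / len(borderline)
```

Tracking this rate over time, rather than inspecting single events, is what makes the bias systematic rather than anecdotal, which is the concern Art.72(3) monitoring documentation needs to address.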


Art.72(4): Annex I Sector-Specific Monitoring Obligations

Art.72(4) addresses the relationship between the AI Act monitoring obligation and Union harmonisation legislation listed in Annex I (sector-specific legislation). Where the post-market monitoring is established and carried out under Union harmonisation legislation applicable to the same high-risk AI system, the obligations under that sector legislation apply — the provider does not need to duplicate a separate AI Act monitoring plan where the sector plan already covers the required monitoring dimensions.

Annex I legislation examples. High-risk AI systems covered by both the AI Act and Annex I sector legislation include medical devices under the MDR, in vitro diagnostic devices under the IVDR, machinery under the Machinery Regulation, and radio equipment under the RED.

Minimum AI Act monitoring floor. Even where Art.72(4) permits reliance on sector-specific monitoring, providers must ensure that the sector monitoring system covers the AI Act-specific monitoring dimensions: input data distribution shifts, human oversight intervention rates, and AI-specific failure modes that may not be captured by hardware-focused sector frameworks. Where gaps exist, the provider must supplement the sector monitoring with AI-specific monitoring procedures.
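For the input-distribution-shift dimension specifically, one common and simple technique (a sketch; the Act does not mandate a particular statistic, and the 0.2 threshold is an industry rule of thumb, not a legal one) is the Population Stability Index over binned feature proportions:

```python
import math


def population_stability_index(expected: list[float], observed: list[float]) -> float:
    """Population Stability Index over pre-binned proportions.

    `expected` holds the input-feature bin proportions from the conformity
    assessment baseline; `observed` holds the same bins in production.
    A common rule of thumb treats PSI > 0.2 as a material shift worth
    escalating to the risk management process.
    """
    eps = 1e-6  # guard against empty bins before taking the log
    psi = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, eps), max(o, eps)
        psi += (o - e) * math.log(o / e)
    return psi
```

Run per monitored input feature at the analysis cadence defined in the plan; a sustained PSI above the documented threshold is exactly the kind of finding the supplement to a hardware-focused sector plan needs to capture.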

Documentation requirement. Providers relying on Art.72(4) must document in their technical documentation (Annex IV) precisely which sector monitoring obligations are being relied on and how they satisfy each Art.72 monitoring dimension.


Art.72(5): Post-Market Monitoring Plan Content Requirements

Art.72(5) specifies the minimum content that the post-market monitoring plan must contain. The plan must include:

System performance indicators. Quantitative and qualitative indicators that describe the intended performance of the high-risk AI system in normal operational conditions, including accuracy thresholds, response time bounds, and output quality metrics.

Data collection procedures. A description of the technical and organisational mechanisms used to collect monitoring data, including the data sources (system logs, user feedback, deployer reports, third-party audit reports), the data collection cadence, and the data retention policy.

Serious incident correlation procedure. The methodology for correlating monitoring events against the Art.3(49) serious incident definition, including the human review process for borderline events.

Escalation and corrective action procedures. A documented procedure for escalating monitoring findings to the risk management team, initiating corrective actions (including system updates, operational restrictions, or temporary withdrawal), and updating the technical documentation.

Monitoring timeline. The planned schedule for monitoring data collection, analysis, and review, including interim review triggers (e.g., a threshold number of monitoring events, a change in the deployment context, a significant update to the system).

Deployer reporting channel. A documented mechanism for receiving, recording, and acting on serious incident and malfunction reports from deployers under Art.26(1)(g).
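The interim review triggers named above can be collapsed into a single scheduled check. As a sketch (the trigger set and the 90-day default interval are illustrative choices, not Art.72(5) requirements):

```python
from datetime import datetime


def interim_review_due(
    event_count: int,
    event_threshold: int,
    context_changed: bool,
    significant_update: bool,
    last_review: datetime,
    now: datetime,
    max_interval_days: int = 90,
) -> bool:
    """True when any interim review trigger from the monitoring plan fires:
    accumulated event volume, a change in the deployment context,
    a significant system update, or the scheduled interval elapsing.
    """
    if event_count >= event_threshold:
        return True
    if context_changed or significant_update:
        return True
    return (now - last_review).days >= max_interval_days
```

Whatever trigger set is chosen, the QMS documentation should record the same parameters the function takes, so the implemented check and the documented plan cannot drift apart.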


Art.72(6): Regulatory Interface and NCA Access

Art.72(6) establishes the regulatory interface between the provider's post-market monitoring system and the national competent authority. The NCA may require access to the post-market monitoring data and records under Art.58 and Art.64. This access right operates in parallel with the serious incident investigation process under Art.65 and the market surveillance information exchange under Art.66: monitoring records may be requested in the course of any of these procedures.

Monitoring record retention. Providers should treat post-market monitoring records as subject to the same ten-year retention requirement that applies to conformity assessment records. The monitoring data is the primary evidence base for demonstrating continued compliance throughout the system's operational lifetime.
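Computing the retention horizon is trivial but worth getting right at year boundaries. A minimal sketch (the ten-year figure mirrors the conformity-assessment retention period discussed above):

```python
from datetime import date


def retention_expiry(market_placement: date, years: int = 10) -> date:
    """Date until which monitoring records must be retained: ten years
    from market placement, mirroring conformity-assessment retention.
    A 29 February placement falls back to 28 February in non-leap years.
    """
    try:
        return market_placement.replace(year=market_placement.year + years)
    except ValueError:  # 29 Feb mapped into a non-leap target year
        return market_placement.replace(year=market_placement.year + years, day=28)
```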

Commission access. Under Art.64(4), the AI Office may also request access to post-market monitoring data for GPAI model providers, to the extent that the GPAI model is incorporated into a high-risk AI system that is subject to Art.72 monitoring obligations.


CLOUD Act Implications for Post-Market Monitoring Data

Post-market monitoring data presents a significant CLOUD Act exposure for EU-based providers who store monitoring infrastructure on US cloud services.

Risk profile. Monitoring data typically includes operational system logs capturing real-world inputs and outputs, serious incident records and their correlation assessments, deployer incident reports, and evaluation datasets that may contain personal or demographic information.

All of these are potentially subject to US production orders under the CLOUD Act (18 U.S.C. § 2713), which requires US providers to disclose data regardless of where it is stored. A serious incident investigation that involves EU NCA access to monitoring data could run in parallel with a US government production order targeting the same monitoring infrastructure.

CLOUD Act vs. GDPR conflict. Art.72 monitoring data will typically include personal data from the operational context of the high-risk AI system (e.g., demographic data used to evaluate bias in an employment screening system). Disclosure under a CLOUD Act production order may conflict with GDPR transfer restrictions (Chapter V). EU providers face an unresolved legal conflict between US disclosure obligations and GDPR transfer restrictions.

Risk mitigation. The most direct mitigation is to store post-market monitoring data, particularly serious incident records and personally identifiable monitoring data, on EU-based infrastructure that is not subject to CLOUD Act jurisdiction. Providers using EU-incorporated cloud providers (such as European PaaS platforms) for their monitoring infrastructure materially reduce the risk that the data falls within the scope of US production orders. Where US cloud infrastructure is used, providers should consider technical encryption controls that prevent provider-side decryption, combined with contractual clauses requiring notification of any MLAT or production-order request.
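A complementary, stdlib-only technique is to reduce the personal-data surface of monitoring records before they touch non-EU infrastructure. The sketch below (field names are hypothetical, and this is a mitigation illustration, not a substitute for a full transfer assessment) pseudonymises direct identifiers with a keyed HMAC, where the key is held only on EU-jurisdiction systems:

```python
import hashlib
import hmac

PSEUDONYM_FIELDS = {"subject_id", "deployer_user"}  # hypothetical identifier fields


def pseudonymise_record(record: dict, key: bytes) -> dict:
    """Replace direct identifiers with keyed HMAC-SHA256 pseudonyms.

    The key never leaves EU-jurisdiction infrastructure, so records
    stored elsewhere cannot be re-linked to individuals without it.
    Pseudonyms are deterministic, preserving joinability for analysis.
    """
    out = dict(record)
    for field_name in PSEUDONYM_FIELDS & record.keys():
        digest = hmac.new(key, str(record[field_name]).encode(), hashlib.sha256)
        out[field_name] = digest.hexdigest()[:16]
    return out
```

Because the HMAC is deterministic under a fixed key, the same individual maps to the same pseudonym across records, so bias and incident analysis on the pseudonymised store remains possible.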


Python PostMarketMonitor Implementation

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional


class MonitoringEventType(Enum):
    ACCURACY_DEGRADATION = "accuracy_degradation"
    DISTRIBUTION_SHIFT = "distribution_shift"
    HUMAN_OVERRIDE = "human_override"
    NEAR_MISS = "near_miss"
    DEPLOYER_REPORT = "deployer_report"
    SYSTEM_ERROR = "system_error"
    SERIOUS_INCIDENT_CANDIDATE = "serious_incident_candidate"


class SeriousIncidentCategory(Enum):
    DEATH = "death"
    HEALTH_DAMAGE = "serious_health_damage"
    PROPERTY_DAMAGE = "serious_property_damage"
    FUNDAMENTAL_RIGHTS_BREACH = "fundamental_rights_breach"
    DISCRIMINATION_ANNEX_III = "discrimination_annex_iii"
    CONFIDENTIALITY_BREACH = "confidentiality_breach"
    NOT_SERIOUS = "not_serious"


@dataclass
class MonitoringEvent:
    event_id: str
    event_type: MonitoringEventType
    timestamp: datetime
    description: str
    severity_score: float  # 0.0-1.0
    deployer_reported: bool = False
    source_system: str = ""
    raw_data: dict = field(default_factory=dict)


@dataclass
class SeriousIncidentAssessment:
    event_id: str
    art3_49_category: SeriousIncidentCategory
    threshold_met: bool
    borderline: bool  # requires human review
    assessment_timestamp: datetime
    reviewer: Optional[str] = None
    art65_notification_required: bool = False
    nca_notification_deadline: Optional[datetime] = None  # Art.65(1) notification window


class PostMarketMonitor:
    """
    Art.72 post-market monitoring system for high-risk AI systems.
    Integrates with QMS (Art.9), technical documentation (Art.11),
    and serious incident reporting (Art.65).
    """

    SERIOUS_INCIDENT_THRESHOLD = 0.75  # severity_score above which human review triggers
    BORDERLINE_BAND = 0.60  # severity_score between 0.60-0.75 = near-miss / borderline

    def __init__(self, system_id: str, annex_iii_category: str, is_annex_i: bool = False):
        self.system_id = system_id
        self.annex_iii_category = annex_iii_category
        self.is_annex_i = is_annex_i  # Art.72(4): sector legislation applies
        self.events: list[MonitoringEvent] = []
        self.assessments: list[SeriousIncidentAssessment] = []

    def record_event(self, event: MonitoringEvent) -> Optional[SeriousIncidentAssessment]:
        """Record a monitoring event and trigger serious incident correlation if needed."""
        self.events.append(event)

        if event.severity_score >= self.BORDERLINE_BAND:
            return self._correlate_serious_incident(event)
        return None

    def _correlate_serious_incident(self, event: MonitoringEvent) -> SeriousIncidentAssessment:
        """Art.72(3): Correlate event against Art.3(49) serious incident definition."""
        threshold_met = event.severity_score >= self.SERIOUS_INCIDENT_THRESHOLD
        borderline = self.BORDERLINE_BAND <= event.severity_score < self.SERIOUS_INCIDENT_THRESHOLD

        # Illustrative mapping from event type to Art.3(49) category; borderline
        # events still pass through human review before final classification
        category_map = {
            MonitoringEventType.SERIOUS_INCIDENT_CANDIDATE: SeriousIncidentCategory.HEALTH_DAMAGE,
            MonitoringEventType.NEAR_MISS: SeriousIncidentCategory.NOT_SERIOUS,
            MonitoringEventType.DEPLOYER_REPORT: SeriousIncidentCategory.FUNDAMENTAL_RIGHTS_BREACH,
        }
        category = category_map.get(event.event_type, SeriousIncidentCategory.NOT_SERIOUS)

        # Art.65(1): serious incidents must be notified within 15 working days;
        # timedelta(days=15) is a calendar-day approximation of that deadline
        notification_deadline = None
        if threshold_met:
            notification_deadline = event.timestamp + timedelta(days=15)

        assessment = SeriousIncidentAssessment(
            event_id=event.event_id,
            art3_49_category=category,
            threshold_met=threshold_met,
            borderline=borderline,
            assessment_timestamp=datetime.now(),
            art65_notification_required=threshold_met,
            nca_notification_deadline=notification_deadline,
        )
        self.assessments.append(assessment)
        return assessment

    def pending_art65_notifications(self) -> list[SeriousIncidentAssessment]:
        """Return assessments requiring Art.65 NCA notification."""
        return [a for a in self.assessments if a.art65_notification_required and a.reviewer is None]

    def overdue_notifications(self) -> list[SeriousIncidentAssessment]:
        """Return Art.65 notifications past their 15-day deadline."""
        now = datetime.now()
        return [
            a for a in self.assessments
            if a.art65_notification_required
            and a.nca_notification_deadline is not None
            and a.nca_notification_deadline < now
            and a.reviewer is None
        ]

    def near_miss_events(self) -> list[MonitoringEvent]:
        """Return events in the near-miss band for root-cause analysis."""
        return [
            e for e in self.events
            if self.BORDERLINE_BAND <= e.severity_score < self.SERIOUS_INCIDENT_THRESHOLD
        ]

    def monitoring_summary(self) -> dict:
        """Art.72(5): Structured monitoring plan summary for technical documentation."""
        return {
            "system_id": self.system_id,
            "annex_iii_category": self.annex_iii_category,
            "annex_i_applies": self.is_annex_i,
            "total_events": len(self.events),
            "serious_incident_candidates": len([a for a in self.assessments if a.threshold_met]),
            "near_misses": len(self.near_miss_events()),
            "pending_art65_notifications": len(self.pending_art65_notifications()),
            "overdue_notifications": len(self.overdue_notifications()),
            "human_override_events": len([e for e in self.events if e.event_type == MonitoringEventType.HUMAN_OVERRIDE]),
            "deployer_reports": len([e for e in self.events if e.deployer_reported]),
            "generated_at": datetime.now().isoformat(),
        }

Art.72 in the Series: Chapter VIII Post-Deployment Obligations

Article | Topic | Status
Art.57 | National Competent Authorities | Guide
Art.58 | NCA Investigative Powers | Guide
Art.59 | European AI Board | Guide
Art.60 | EU AI Database | Guide
Art.61 | Scientific Panel | Guide
Art.62 | AI Office Enforcement Powers | Guide
Art.63 | Advisory Forum | Guide
Art.64 | Access to Data and Documentation | Guide
Art.65 | Serious Incident Reporting | Guide
Art.66 | Market Surveillance Information Exchange | Guide
Art.67 | Union Safeguard Procedure | Guide
Art.68 | AI Regulatory Sandboxes | Guide
Art.69 | Codes of Conduct | Guide
Art.70 | Penalties | Guide
Art.71 | Exercise of the Delegation | Guide
Art.72 | Post-Market Monitoring | This guide
Art.73 | Obligations of Deployers | Guide

10-Item Art.72 Compliance Checklist

Use this checklist to verify your Art.72 post-market monitoring posture:

  1. QMS integration — Post-market monitoring plan is embedded in the Art.9 QMS documentation, not maintained as a standalone document.
  2. Plan content completeness — Plan covers all Art.72(5) minimum requirements: performance indicators, data collection procedures, serious incident correlation methodology, escalation procedures, monitoring timeline, and deployer reporting channel.
  3. Deployer feedback channel — A documented channel exists for receiving Art.26(1)(g) deployer incident and malfunction reports, with a defined response SLA.
  4. Art.3(49) correlation algorithm — A written procedure exists for correlating monitoring events against the serious incident definition, with documented parameters and human review triggers for borderline events.
  5. Near-miss documentation — Near-miss events (below the Art.3(49) threshold but within the borderline band) are documented, root-cause analysed, and fed back into the Art.9 risk management process.
  6. 15-day Art.65 notification tracking — A system exists to track the 15 working-day Art.65 notification deadline from the moment a serious incident candidate is identified.
  7. Annex I alignment — If the system is also subject to Annex I sector legislation (MDR, IVDR, Machinery Regulation, RED), the monitoring plan documents which sector obligations are relied on and where AI Act-specific gaps are supplemented.
  8. Record retention — Monitoring records are retained for the same period as conformity assessment records (minimum 10 years from market placement) and are accessible to NCAs on request.
  9. CLOUD Act risk assessment — Monitoring data storage infrastructure has been assessed for CLOUD Act exposure, and personally-identifiable monitoring data is stored on EU-jurisdiction infrastructure where feasible.
  10. Technical documentation update trigger — A procedure exists for updating the Annex IV technical documentation when monitoring data reveals a change in the system's risk profile or a new use-case pattern not covered in the original conformity assessment.