2026-04-16

EU AI Act Art.72: Post-Market Monitoring Plan for High-Risk AI Systems — Developer Guide (2026)

EU AI Act Article 72 establishes the post-market monitoring (PMM) system and monitoring plan obligation for providers of high-risk AI systems within the market surveillance framework of Chapter IX. Where Art.30 (Chapter III, Section 4) defines the provider's general PMM obligation as part of the core requirements for placing a high-risk AI system on the EU market, Art.72 governs the systematic monitoring architecture from the market surveillance perspective: what the PMM plan must document, how monitoring findings link to continued compliance evaluation against Chapter III Section 2 requirements (Art.9–15), and how the system must feed detected vigilance events into Art.73 incident reporting.

The practical difference is structural. Art.30 asks: "Does this provider have a PMM system?" Art.72 asks: "Does that system continuously evaluate whether the AI still meets Art.9–15 requirements in production?" Art.72 frames post-market monitoring not merely as incident detection but as an ongoing conformity verification loop — the deployed system must continuously demonstrate that it continues to satisfy risk management (Art.9), data quality (Art.10), technical documentation currency (Art.11), logging accuracy (Art.12), transparency (Art.13), human oversight (Art.14), and robustness (Art.15).

This guide covers Art.72(1)–(4) in full, the Art.72 monitoring plan mandatory content (Annex IV Section 6), the proportionality framework, the Art.72 × Art.9 feedback loop, Art.72 × Art.73 vigilance trigger architecture, the cross-provider risk pattern discovery mechanism, CLOUD Act jurisdiction risk for PMM data, Python implementation for PostMarketMonitoringPlan and VigilanceEventClassifier, and the 40-item Art.72 compliance checklist.

Art.72 became applicable on 2 August 2026 as part of the Chapter IX post-market and market surveillance framework. All providers with high-risk AI systems on the EU market from that date must have an operational PMM system and a documented plan meeting Art.72 requirements.


Art.72 vs Art.30: The Developer's Distinction

| Dimension | Art.30 (Chapter III) | Art.72 (Chapter IX) |
| --- | --- | --- |
| Chapter context | Provider obligations (market placement requirements) | Market surveillance framework |
| Primary question | "Is there a PMM system?" | "Does the PMM system continuously verify conformity?" |
| Conformity link | General performance data collection | Explicit evaluation against Art.9–15 requirements |
| Risk proportionality | Referenced | Explicit proportionality framework |
| Cross-provider discovery | Not addressed | Art.72(4): cross-provider risk pattern reporting |
| Plan document | PMM plan in Annex IV | PMM plan structure and mandatory content |
| Vigilance trigger | PMM → Art.73 (general) | PMM → Art.73 vigilance system (specific trigger conditions) |
| Enforcement basis | Art.99(4): €15M / 3% global turnover | Art.99(4) + Art.74 MSA powers |

Both articles are required. Art.30 establishes the obligation; Art.72 defines the systematic architecture that makes that obligation operational. A provider can comply with Art.30 by having a PMM system. They comply with Art.72 by ensuring that system specifically evaluates Chapter III Section 2 conformity in production.


Art.72 at a Glance

| Provision | Content | Developer Obligation |
| --- | --- | --- |
| Art.72(1) | Providers establish and document a PMM system proportionate to the nature of AI technology and risks | Design PMM system at system architecture level, document in QMS (Art.17) |
| Art.72(2) | PMM system actively collects, documents, and analyses data on performance; evaluates continued compliance with Art.9–15 | Operational data pipeline from deployment environments; conformity dashboard |
| Art.72(3) | Providers establish a PMM plan as part of technical documentation (Annex IV) | PMM plan as mandatory Annex IV Section 6 document |
| Art.72(4) | Market surveillance authorities informed when multiple providers' systems show similar risks; providers must correct and report | Cross-system risk signals → Art.73 reports + MSA notification |

Art.72(1): Proportionate PMM System Obligation

Art.72(1) requires every provider of a high-risk AI system to establish and maintain a post-market monitoring system. Two requirements are explicit:

  1. Documentation: The system must be documented — not merely operational. Documentation links to the technical documentation obligation (Annex IV) and the QMS (Art.17). An undocumented PMM process fails Art.72(1) even if monitoring activities occur informally.

  2. Proportionality: The system must be proportionate to the nature of the AI technology and the risks of the high-risk AI system. This creates a calibrated obligation: the PMM architecture for an Annex III category 6 (law enforcement) biometric identification system requires more intensive monitoring than for a category 3 (education and vocational training) system, even though both are formally high-risk.

The Proportionality Framework

Art.72(1) proportionality operates across four dimensions:

| Dimension | Low-Risk Calibration | High-Risk Calibration |
| --- | --- | --- |
| Monitoring frequency | Quarterly performance analysis | Real-time drift detection + daily summary |
| Data sources | Deployer summary reports (Art.30(5)) | Live telemetry + automated anomaly detection |
| Conformity re-evaluation | Annual Art.9 review | Event-triggered review + rolling 90-day assessment |
| Art.73 vigilance | Quarterly incident screening | Automated real-time incident detection pipeline |

Providers must document their proportionality rationale — why the chosen monitoring intensity is appropriate for their system's Annex III category, risk level, and deployment context. Market surveillance authorities reviewing Art.72 compliance will look for this documentation.
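One way this calibration can be encoded is a simple mapping from Annex III category and Art.9 residual risk level to a monitoring intensity. The tiers and the category-to-intensity mapping below are illustrative assumptions, not values fixed by the Act:

```python
from enum import Enum


class Intensity(Enum):
    STANDARD = "standard"    # quarterly analysis, deployer reports
    ENHANCED = "enhanced"    # monthly analysis, automated anomaly detection
    INTENSIVE = "intensive"  # real-time telemetry, daily summary


# Illustrative assumption: treat law enforcement, migration, and justice
# categories as warranting intensive monitoring regardless of residual risk.
SENSITIVE_CATEGORIES = {
    "6_law_enforcement",
    "7_migration_asylum_border",
    "8_justice_democratic_processes",
}


def calibrate_intensity(annex_iii_category: str, residual_risk: str) -> Intensity:
    """Choose a monitoring intensity from category and Art.9 residual risk level."""
    if annex_iii_category in SENSITIVE_CATEGORIES or residual_risk == "high":
        return Intensity.INTENSIVE
    if residual_risk == "medium":
        return Intensity.ENHANCED
    return Intensity.STANDARD
```

The chosen intensity and the inputs that produced it would then be written into the proportionality rationale, giving the MSA a traceable calibration decision.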

Documentation in the QMS

Art.72(1) documentation does not stand alone — it is part of the Quality Management System (Art.17). The QMS must include procedures for post-market monitoring. Art.17(1)(h) explicitly requires QMS documentation to cover the PMM system. Providers building a standalone PMM process without QMS integration satisfy neither Art.17 nor Art.72(1).

The PMM system documentation should specify:

  1. The data sources monitored and their collection frequency
  2. The analysis methods and thresholds applied to that data
  3. The roles responsible for review, escalation, and sign-off
  4. How monitoring findings feed the Art.9 risk management update cycle


Art.72(2): Continued Compliance Evaluation

Art.72(2) is the article's most operationally significant provision. It requires that the PMM system shall:

  1. Actively collect data on the performance of the high-risk AI system throughout its lifetime
  2. Document and analyse that data
  3. Evaluate continued compliance with the requirements set out in Chapter III, Section 2

The third obligation — evaluating continued compliance with Art.9–15 — transforms post-market monitoring from generic incident detection into a rolling conformity audit. The system deployed on 2 August 2026 must still satisfy Art.9–15 on 2 August 2027 and 2 August 2028. Drift, context change, or deployment expansion can all invalidate initial conformity.

The Art.72(2) Conformity Evaluation Matrix

| Art.9–15 Requirement | PMM Data Point | Compliance Failure Indicator |
| --- | --- | --- |
| Art.9: Risk management | Observed harm rates vs. residual risk estimates | Harm rate exceeds residual risk threshold |
| Art.10: Data governance | Deployer-reported data quality degradation | Data drift → distribution shift in input data |
| Art.11: Technical documentation | Change log completeness | Undocumented modifications to system architecture |
| Art.12: Logging | Log completeness rates from deployers | Logging gaps → undetectable Art.73 events |
| Art.13: Transparency | User complaint analysis re: system explainability | Transparency gaps in new deployment contexts |
| Art.14: Human oversight | Oversight failure incidents from deployers | Human oversight bypass rate exceeds threshold |
| Art.15: Robustness/cybersecurity | Adversarial probe results; security incident rate | Robustness degradation → increased vulnerability |

Art.72(2) compliance means having a documented methodology for collecting these data points and a process for evaluating whether they indicate continued conformity. A provider who only monitors for serious incidents (Art.73) but does not evaluate Art.9–15 conformity drift fails Art.72(2).
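A minimal sketch of how such a matrix can drive an automated conformity pass. The metric names and threshold values here are illustrative assumptions, not figures from the Act:

```python
# Each monitored article maps to one observable metric and a failure threshold.
# Values are illustrative; a real plan would document its own thresholds.
THRESHOLDS: dict[str, tuple[str, float]] = {
    "Art.9": ("harm_rate", 0.01),               # observed harm vs residual risk estimate
    "Art.10": ("input_drift_score", 0.30),      # distribution shift in input data
    "Art.12": ("log_gap_rate", 0.05),           # share of events with missing logs
    "Art.14": ("oversight_bypass_rate", 0.02),  # human oversight bypassed
    "Art.15": ("adversarial_success_rate", 0.10),
}


def failing_articles(observed: dict[str, float]) -> list[str]:
    """Return the articles whose monitored metric breaches its threshold."""
    failures = []
    for article, (metric, threshold) in THRESHOLDS.items():
        if observed.get(metric, 0.0) > threshold:
            failures.append(article)
    return sorted(failures)
```

A non-empty result would feed the corrective action protocol (Art.20) and, where relevant, an Art.9 update.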


Art.72(3): The Post-Market Monitoring Plan

Art.72(3) requires providers to establish a post-market monitoring plan as part of the technical documentation required by Annex IV. The plan is not a separate document produced post-launch — it is a design-time artifact that must be drafted before market placement and updated as the system evolves.

Mandatory Plan Content

The PMM plan must cover at minimum:

| Plan Section | Content | Regulatory Basis |
| --- | --- | --- |
| System identification | System name, version, Annex III category, EUAIDB registration number (Art.71) | Annex IV Section 1 |
| Monitoring objectives | What Art.9–15 requirements the plan actively monitors for continued compliance | Art.72(2) |
| Data collection methods | Sources, frequency, format, retention period for performance data | Art.72(2) |
| Deployer cooperation | How deployer reports (Art.30(5)) feed into PMM analysis | Art.30(5) × Art.72(3) |
| Analysis methodology | Thresholds, statistical methods, automated vs. manual review | Art.72(2) |
| Vigilance event criteria | What monitoring findings trigger Art.73 serious incident reporting | Art.72(4) × Art.73 |
| Art.9 feedback procedure | How PMM findings update the risk management system | Art.9(9) × Art.72(2) |
| Corrective action protocol | What happens when monitoring finds noncompliance | Art.20 × Art.72 |
| Plan version control | When and how the plan is updated (system change, significant incident, market expansion) | Art.11 × Art.72(3) |
| Responsible parties | Who owns each PMM function and who has authority to trigger escalation | Art.17 QMS integration |

The Annex IV Connection

Annex IV specifies the required content of technical documentation. Section 6 of Annex IV is specifically reserved for post-market monitoring. The PMM plan must be in Annex IV Section 6 — not in a separate operational document. When a market surveillance authority requests technical documentation under Art.21, the PMM plan must be immediately producible.

Critical implication: the PMM plan is updated whenever the system changes. If a substantial modification (Art.3(23)) triggers a new conformity assessment (Art.43(4)) and new technical documentation, the PMM plan must be updated to reflect the modified system's new risk profile and monitoring requirements.
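A small sketch of that review gate: system changes are classified, and the classification decides whether the plan, and possibly the conformity assessment, must be refreshed before redeployment. The change-category names are illustrative assumptions:

```python
# Illustrative change categories that require a PMM plan update before
# redeployment; not an exhaustive list from the Act.
PLAN_UPDATE_TRIGGERS = {
    "substantial_modification",
    "new_deployment_context",
    "serious_incident",
    "market_expansion",
}


def review_actions(change: str) -> list[str]:
    """Return the documentation actions a system change requires."""
    actions = []
    if change in PLAN_UPDATE_TRIGGERS:
        actions.append("update_pmm_plan")
    if change == "substantial_modification":
        # A substantial modification also re-opens the conformity assessment.
        actions.append("new_conformity_assessment")
    return actions
```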


Art.72(4): Vigilance Events and Cross-Provider Risk Signals

Art.72(4) addresses two distinct scenarios:

Scenario A: Vigilance Events Triggering Art.73

When PMM activities detect what could be a serious incident under Art.3(49), the PMM system must immediately trigger the Art.73 reporting workflow. Art.72(4) creates the bridge between the monitoring system (Art.72) and the incident reporting obligation (Art.73).

The trigger is two-stage:

  1. Detection: PMM data indicates an event meeting the Art.3(49) definition (death, serious health harm, fundamental rights violation, or major infrastructure disruption)
  2. Escalation: Art.73 reporting clock starts from the moment the provider becomes aware — and PMM system detection constitutes awareness

Providers whose PMM system has automated incident detection must therefore ensure the Art.73 reporting clock starts from automated detection time, not from when a human reviews the automated alert. If the system detects a serious incident on Tuesday but counsel first reviews the alert on Thursday, the awareness date is Tuesday — unless the Tuesday signal was genuinely not yet specific enough to constitute awareness of a serious incident.
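Under Art.73, the report is due no later than 15 days after awareness, 10 days where a death is involved, and 2 days for a widespread infringement or a serious and irreversible disruption of critical infrastructure. A minimal sketch of the awareness-clock computation (the function name and category keys are illustrative):

```python
from datetime import datetime, timedelta

# The deadline runs from automated detection, not from later human review.
# Day counts per the Art.73 deadline tiers described above.
ART73_DEADLINE_DAYS = {
    "default": 15,
    "death": 10,
    "widespread_infringement": 2,
    "critical_infrastructure": 2,
}


def art73_deadline(detected_at: datetime, incident_kind: str) -> datetime:
    """Compute the Art.73 reporting deadline from the automated detection time."""
    return detected_at + timedelta(days=ART73_DEADLINE_DAYS[incident_kind])
```

In the Tuesday/Thursday scenario above, `detected_at` is the Tuesday timestamp; the Thursday human review does not move the deadline.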

Best practice: define a vigilance event severity threshold in the PMM plan that distinguishes:

  1. Routine anomalies, handled within the normal PMM analysis cycle
  2. Potential serious incidents, escalated to pre-Art.73 legal review
  3. Confirmed serious incidents, for which the Art.73 clock starts immediately

Scenario B: Cross-Provider Risk Pattern Discovery

Art.72(4) also addresses a less commonly discussed scenario: when a market surveillance authority discovers that multiple providers' AI systems — performing similar functions — are exhibiting similar risks. In this case:

  1. The authority notifies the affected providers of the shared risk pattern
  2. Each provider must assess whether the pattern applies to its own system
  3. Where it does, the provider must take corrective action (Art.20) and, where Art.3(49) criteria are met, report under Art.73

For developers, this creates a cross-system monitoring risk: your system may face regulatory action based on risk patterns discovered from competitors' products in the same category. Maintaining a PMM system that provides robust evidence of your system's performance — distinct from category-wide patterns — is the only defence.


Art.72 × Art.9: The Risk Management Feedback Loop

Art.9(9) requires the risk management system to be updated throughout the high-risk AI system's lifecycle. Art.72(2) requires PMM to evaluate continued conformity with Art.9 requirements. These two obligations create a mandatory feedback loop:

Art.9 Risk Assessment
       ↓
System Deployed → Art.72 PMM Active
       ↓
PMM Data Analysis
       ↓
  ┌────────────────────────────────────────────────┐
  │ Conformity finding:                            │
  │ • Risk profile unchanged → document + continue │
  │ • New risk identified → trigger Art.9 update   │
  │ • Art.9–15 drift → corrective action (Art.20)  │
  └────────────────────────────────────────────────┘
       ↓
Art.9 Updated → Technical Documentation Updated (Annex IV)
       ↓
If substantial: New conformity assessment (Art.43) + EUAIDB update (Art.71)

This loop has a critical timing implication: Art.9 updates triggered by PMM findings must be documented before the next PMM reporting cycle. A gap between PMM finding and Art.9 update is evidence of noncompliance with both Art.72(2) and Art.9(9).

PMM Trigger Conditions for Art.9 Update

| PMM Finding | Required Response | Timeline |
| --- | --- | --- |
| Harm rate exceeds residual risk estimate by >10% | Art.9 risk re-assessment mandatory | Within 30 days of detection |
| New deployment context (sector, geography, population) | Art.9 update for new context | Before deployment in new context |
| Adversarial attack succeeds in production | Art.9 update + Art.15 review | Within 15 days |
| Deployer reports systematic oversight failure | Art.9 update + Art.14 review | Within 30 days |
| Substantial modification of AI system | Full Art.9 update | Before modification deployment |
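The trigger table above can be translated directly into a lookup that the PMM pipeline consults on every finding. The finding identifiers are illustrative assumptions; the actions and response windows mirror the table:

```python
# Maps a PMM finding type to (required action, response window in days).
# A window of 0 means "before deployment of the change".
ART9_TRIGGERS: dict[str, tuple[str, int]] = {
    "harm_rate_exceeds_residual": ("Art.9 risk re-assessment", 30),
    "new_deployment_context": ("Art.9 update for new context", 0),
    "adversarial_success": ("Art.9 update + Art.15 review", 15),
    "oversight_failure": ("Art.9 update + Art.14 review", 30),
    "substantial_modification": ("Full Art.9 update", 0),
}


def required_response(finding: str) -> tuple[str, int]:
    """Return (required action, response window in days) for a PMM finding."""
    return ART9_TRIGGERS[finding]
```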

Art.72 × Art.30: Deployer Data Contribution

Art.30(5) requires deployers to "monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers about serious incidents." This deployer obligation is the primary external data input for the provider's Art.72 PMM system.

The Art.72 PMM plan must specify how deployer reports are:

  1. Received — what channel, what format, what SLA for provider acknowledgment
  2. Categorized — routine performance data vs. anomaly vs. potential serious incident
  3. Integrated into the PMM analysis pipeline
  4. Acted upon — feedback to the deployer confirming receipt + action taken

Providers who design their PMM system without a formal deployer reporting intake process fail both Art.72(3) (PMM plan must cover deployer cooperation) and the spirit of Art.30(5) (deployer obligation to report is only meaningful if the provider has infrastructure to receive and act on reports).

Deployer Cooperation Agreement

Best practice: include a PMM cooperation clause in the deployer contract specifying:

  1. The reporting channel and structured format for Art.30(5) reports
  2. Reporting timelines for routine data and for suspected serious incidents
  3. The minimum data fields each report must contain
  4. The provider's acknowledgment SLA and feedback obligations

This clause has dual regulatory benefit: it provides the deployer with documentation that they have fulfilled Art.30(5), and it ensures the provider receives data required under Art.72(2).
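A sketch of the intake categorization step described above, routing each incoming deployer report as routine data, anomaly, or potential serious incident. The field names are assumptions for illustration:

```python
# Route a deployer report into the PMM analysis pipeline (Art.72(3)
# deployer cooperation section). Field names are illustrative.
def categorize_report(report: dict) -> str:
    """Categorize an incoming Art.30(5) deployer report."""
    if report.get("suspected_serious_incident"):
        return "potential_serious_incident"  # escalate toward Art.73 review
    if report.get("metric_anomaly"):
        return "anomaly"                     # human review queue
    return "routine"                         # periodic aggregate analysis
```

The returned category would drive both the acknowledgment sent back to the deployer and the internal escalation path.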


Art.72 × Art.17: QMS Integration

Art.17 requires providers to implement a Quality Management System. Art.17(1)(h) explicitly lists the PMM system as a QMS component. Art.72 defines what that PMM system must do.

The QMS-PMM integration means:

  1. PMM procedures are QMS procedures: The Art.72 monitoring workflows must be documented as QMS procedures, not as separate operational runbooks. QMS audits include PMM audit.
  2. PMM findings are QMS inputs: Nonconformities detected by PMM feed into the QMS corrective action process (Art.17).
  3. PMM plan is QMS-controlled: Updates to the PMM plan require QMS-level change management — version control, approval process, distribution to affected parties.
  4. PMM records are QMS records: Performance data, analysis reports, and escalation decisions are QMS records subject to the 10-year retention requirement (Art.18(1)).

Providers building PMM infrastructure outside the QMS framework create a documentation gap that market surveillance authorities will identify during audits. The Art.74(3) right of access to technical documentation includes QMS records — a PMM system that exists but is not QMS-integrated provides weaker compliance evidence.
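A small sketch of the retention-window check for PMM records kept as QMS records, assuming the 10-year horizon discussed above:

```python
from datetime import date

RETENTION_YEARS = 10  # retention horizon discussed above


def in_retention_window(record_date: date, today: date) -> bool:
    """True while the record must still be kept at the authorities' disposal."""
    try:
        cutoff = record_date.replace(year=record_date.year + RETENTION_YEARS)
    except ValueError:
        # Feb 29 source date in a non-leap target year
        cutoff = record_date.replace(year=record_date.year + RETENTION_YEARS, day=28)
    return today < cutoff
```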


Art.72 × Art.21: Market Surveillance Access to PMM Data

Art.21 requires providers to cooperate with competent authorities and supply information required for compliance assessment. This includes PMM data and reports.

For Art.72, this means:

  1. PMM plans, analysis reports, and underlying performance data must be producible upon reasoned request
  2. Records must be retrievable within the deadline the authority sets, which favours structured, indexed archives
  3. The evidence must demonstrate not just that monitoring occurred, but that Art.9–15 conformity was actually evaluated


CLOUD Act × Art.72: PMM Data Jurisdiction Risk

PMM systems collect and store continuous performance data from deployed AI systems. Where that data is stored matters for CLOUD Act analysis.

PMM Data Types and Jurisdiction Risk

| PMM Data Type | CLOUD Act Risk on US Infrastructure | EU-Native PMM |
| --- | --- | --- |
| Deployer performance reports | Medium — US authorities can compel provider's stored records | Not CLOUD Act reachable |
| Real-time telemetry from AI system | High — continuous stream of operational data | Not CLOUD Act reachable |
| Art.73 pre-report incident evidence | Very High — exactly what enforcement investigations want | Not CLOUD Act reachable |
| Art.9 risk assessment updates triggered by PMM | High — documents provider knowledge of risk evolution | Not CLOUD Act reachable |
| PMM plan documents (Annex IV Section 6) | Medium — design documentation | Not CLOUD Act reachable |

The Dual Compellability Risk

A PMM system hosted on US cloud infrastructure creates a dual compellability scenario: EU market surveillance authorities can request PMM data under Art.21; US authorities can compel the same data under CLOUD Act. The provider is simultaneously subject to two legal regimes for the same data. If those regimes produce conflicting demands — EU confidentiality obligations vs. US disclosure orders — the provider has no legal exit.

EU-native infrastructure (where the cloud provider is not subject to US ECS-provider status) eliminates this dual compellability risk. For PMM data specifically — which includes continuous documentation of how an AI system performs in production — EU-native hosting means single-jurisdiction access: only EU competent authorities under Art.21/Art.74.

This is particularly significant for Art.73 pre-incident evidence. PMM systems that detect a potential serious incident generate exactly the records that enforcement authorities seek. EU-native storage ensures those records are only accessible under the Art.70 confidentiality framework.


Python Implementation

1. PostMarketMonitoringPlan

from dataclasses import dataclass, field
from datetime import date, datetime
from enum import Enum
from typing import Optional
import json


class AnnexIIICategory(Enum):
    CAT_1_BIOMETRIC = "1_biometric_categorisation"
    CAT_2_CRITICAL_INFRA = "2_critical_infrastructure"
    CAT_3_EDUCATION = "3_education_training"
    CAT_4_EMPLOYMENT = "4_employment_worker_management"
    CAT_5_ESSENTIAL_SERVICES = "5_essential_services"
    CAT_6_LAW_ENFORCEMENT = "6_law_enforcement"
    CAT_7_MIGRATION = "7_migration_asylum_border"
    CAT_8_JUSTICE = "8_justice_democratic_processes"


class MonitoringIntensity(Enum):
    STANDARD = "standard"       # Quarterly analysis, deployer reports
    ENHANCED = "enhanced"       # Monthly analysis, automated anomaly detection
    INTENSIVE = "intensive"     # Real-time telemetry, daily summary, 24h on-call


@dataclass
class PMM_DataSource:
    """A single data source feeding the PMM system."""
    source_id: str
    source_type: str           # "deployer_report", "telemetry", "user_feedback", "regulator"
    collection_frequency: str  # "real_time", "daily", "weekly", "monthly", "quarterly"
    format: str                # "json_structured", "api_webhook", "form_submission", "email"
    retention_years: int = 10  # Default matches Art.18(1) 10-year requirement


@dataclass
class ConformityCheck:
    """Maps an Art.9-15 requirement to a PMM observable."""
    article: str              # "Art.9", "Art.10", etc.
    requirement_summary: str
    pmm_data_source: str      # source_id from PMM_DataSource
    metric: str               # What is measured
    threshold: str            # What constitutes a conformity failure
    escalation_action: str    # What happens when threshold is breached


@dataclass
class PostMarketMonitoringPlan:
    """
    Post-Market Monitoring Plan as required by EU AI Act Art.72(3).
    Must be documented in Annex IV Section 6 of technical documentation.
    """
    # System identification (Annex IV Section 1)
    system_name: str
    system_version: str
    annex_iii_category: AnnexIIICategory
    euaidb_registration_number: Optional[str]  # Art.71 EUAIDB number
    provider_name: str
    plan_version: str
    plan_date: date

    # Art.72(1): Proportionality basis
    monitoring_intensity: MonitoringIntensity
    proportionality_rationale: str  # Must document WHY this intensity is appropriate

    # Art.72(2): Data collection and conformity evaluation
    data_sources: list[PMM_DataSource] = field(default_factory=list)
    conformity_checks: list[ConformityCheck] = field(default_factory=list)

    # Art.72(3): Plan mandatory sections
    deployer_reporting_channel: str = ""    # How deployers submit Art.30(5) reports
    deployer_reporting_sla_days: int = 5    # Provider response SLA
    deployer_report_format: str = "json"    # Structured preferred

    # Art.72(4): Vigilance triggers
    art73_trigger_criteria: list[str] = field(default_factory=list)
    art9_update_triggers: list[str] = field(default_factory=list)

    # Art.20: Corrective action protocol
    corrective_action_escalation: str = ""  # Who is notified and within what time

    # Plan lifecycle
    review_triggers: list[str] = field(default_factory=list)  # When plan is updated

    def validate(self) -> list[str]:
        """Returns list of compliance gaps. Empty list = plan is Art.72(3) compliant."""
        gaps = []

        if not self.proportionality_rationale:
            gaps.append("Art.72(1): No proportionality rationale documented")

        if not self.data_sources:
            gaps.append("Art.72(2): No data sources defined — cannot collect performance data")

        # Check all Art.9-15 requirements are covered
        covered_articles = {c.article for c in self.conformity_checks}
        required_articles = {"Art.9", "Art.10", "Art.11", "Art.12", "Art.13", "Art.14", "Art.15"}
        missing = required_articles - covered_articles
        if missing:
            gaps.append(f"Art.72(2): No conformity check for: {', '.join(sorted(missing))}")

        if not self.deployer_reporting_channel:
            gaps.append("Art.72(3): No deployer reporting channel defined — Art.30(5) cooperation impossible")

        if not self.art73_trigger_criteria:
            gaps.append("Art.72(4): No Art.73 vigilance trigger criteria defined")

        if not self.art9_update_triggers:
            gaps.append("Art.72 × Art.9: No Art.9 update trigger conditions defined")

        if not self.euaidb_registration_number:
            gaps.append("Art.71: No EUAIDB registration number — required before plan goes operational")

        if not self.corrective_action_escalation:
            gaps.append("Art.20: No corrective action escalation protocol")

        if not self.review_triggers:
            gaps.append("Art.72(3): No plan review triggers — plan may become stale after system changes")

        return gaps

    def is_ready_for_market_placement(self) -> bool:
        """Returns True if plan is ready for Annex IV Section 6 inclusion."""
        return len(self.validate()) == 0

    def to_annex_iv_section_6(self) -> str:
        """Serializes plan to the format required for Annex IV Section 6."""
        gaps = self.validate()
        if gaps:
            raise ValueError(
                f"Plan has {len(gaps)} compliance gap(s) — cannot produce Annex IV Section 6:\n"
                + "\n".join(f"  - {g}" for g in gaps)
            )
        return json.dumps({
            "annex_iv_section": 6,
            "system": f"{self.system_name} v{self.system_version}",
            "annex_iii_category": self.annex_iii_category.value,
            "euaidb_registration": self.euaidb_registration_number,
            "monitoring_intensity": self.monitoring_intensity.value,
            "proportionality_rationale": self.proportionality_rationale,
            "data_sources": len(self.data_sources),
            "conformity_checks": len(self.conformity_checks),
            "articles_monitored": sorted({c.article for c in self.conformity_checks}),
            "deployer_channel": self.deployer_reporting_channel,
            "art73_triggers": len(self.art73_trigger_criteria),
            "plan_version": self.plan_version,
            "plan_date": str(self.plan_date),
        }, indent=2)

2. VigilanceEventClassifier

from dataclasses import dataclass
from enum import Enum
from datetime import datetime
from typing import Optional


class VigilanceSeverity(Enum):
    GREEN = "green"    # Normal range — continue monitoring
    YELLOW = "yellow"  # Anomaly — human review within 24h
    ORANGE = "orange"  # Potential harm — pre-Art.73 review, counsel within 4h
    RED = "red"        # Serious incident criteria met — Art.73 clock starts NOW


@dataclass
class MonitoringObservation:
    """A single data point from the PMM system."""
    observation_id: str
    timestamp: datetime
    source_id: str             # Links to PMM_DataSource.source_id
    article_monitored: str     # "Art.9", "Art.12", etc.
    metric_name: str
    metric_value: float
    threshold_value: float
    deployer_id: Optional[str] = None
    raw_data: Optional[str] = None


@dataclass
class VigilanceEvent:
    """
    A classified monitoring event. If severity is RED, Art.73 clock has started.
    Awareness datetime = observation.timestamp for automated detection.
    """
    event_id: str
    observation: MonitoringObservation
    severity: VigilanceSeverity
    classification_datetime: datetime
    classification_rationale: str
    art73_clock_started: bool
    art73_awareness_datetime: Optional[datetime]   # = observation.timestamp if RED
    required_action: str
    escalation_deadline: Optional[datetime]        # Absolute datetime for RED events

    def days_remaining_for_art73_report(self) -> Optional[float]:
        """
        Returns days remaining to file the Art.73 report.
        Returns None if event is not RED (no Art.73 obligation).
        Art.73 deadlines: 15 calendar days by default, 10 days where a death
        is involved, 2 days for a widespread infringement or a serious and
        irreversible disruption of critical infrastructure.
        Uses the 15-day default; caller must apply the shorter deadlines.
        """
        if not self.art73_clock_started or self.art73_awareness_datetime is None:
            return None
        elapsed = (datetime.now() - self.art73_awareness_datetime).total_seconds() / 86400
        return max(0.0, 15.0 - elapsed)  # 15-day calendar default

    def is_overdue_for_2_day_reporting(self) -> Optional[bool]:
        """
        Returns True if more than 2 days have passed since awareness: the
        deadline for RED events involving a widespread infringement or a
        serious and irreversible disruption of critical infrastructure.
        Caller must confirm the event falls under that category.
        Returns None if the Art.73 clock has not started.
        """
        if not self.art73_clock_started or self.art73_awareness_datetime is None:
            return None
        elapsed_hours = (datetime.now() - self.art73_awareness_datetime).total_seconds() / 3600
        return elapsed_hours > 48.0  # Conservative: uses calendar hours, not working hours


class VigilanceEventClassifier:
    """
    Classifies monitoring observations as vigilance events.
    Implements the Art.72(4) trigger architecture with four severity levels.
    """

    def __init__(self, pmm_plan: "PostMarketMonitoringPlan"):
        self.plan = pmm_plan
        self._events: list[VigilanceEvent] = []

    def classify(self, obs: MonitoringObservation) -> VigilanceEvent:
        """Classify a monitoring observation and record it."""
        severity, rationale = self._assess_severity(obs)
        art73_clock = severity == VigilanceSeverity.RED
        awareness_dt = obs.timestamp if art73_clock else None

        escalation_deadline = None
        if art73_clock and awareness_dt:
            from datetime import timedelta
            # 15-day default; the 10-day (death) or 2-day (widespread infringement /
            # critical infrastructure) deadlines apply where those criteria are met
            escalation_deadline = awareness_dt + timedelta(days=15)

        required_action = {
            VigilanceSeverity.GREEN: "Continue normal monitoring — no escalation",
            VigilanceSeverity.YELLOW: "Assign to PMM analyst — human review within 24h",
            VigilanceSeverity.ORANGE: "Escalate to legal counsel — pre-Art.73 review within 4h",
            VigilanceSeverity.RED: "FILE Art.73 report — clock started at observation timestamp",
        }[severity]

        event = VigilanceEvent(
            event_id=f"VE-{obs.observation_id}",
            observation=obs,
            severity=severity,
            classification_datetime=datetime.now(),
            classification_rationale=rationale,
            art73_clock_started=art73_clock,
            art73_awareness_datetime=awareness_dt,
            required_action=required_action,
            escalation_deadline=escalation_deadline,
        )
        self._events.append(event)
        return event

    def _assess_severity(
        self, obs: MonitoringObservation
    ) -> tuple[VigilanceSeverity, str]:
        """Determine severity based on threshold exceedance and article monitored."""
        exceedance_ratio = (
            (obs.metric_value - obs.threshold_value) / max(obs.threshold_value, 0.001)
            if obs.metric_value > obs.threshold_value else 0.0
        )

        # Check for direct Art.73 serious incident criteria
        if obs.article_monitored in ("Art.9", "Art.14") and exceedance_ratio > 1.0:
            return (
                VigilanceSeverity.RED,
                f"{obs.article_monitored} metric {obs.metric_name} exceeded threshold by "
                f"{exceedance_ratio:.0%} — serious incident criteria likely met. "
                "Art.73 clock started."
            )

        if exceedance_ratio > 0.5:
            return (
                VigilanceSeverity.ORANGE,
                f"{obs.article_monitored} metric {obs.metric_name} exceeded threshold by "
                f"{exceedance_ratio:.0%} — pre-Art.73 legal review required."
            )

        if exceedance_ratio > 0.1:
            return (
                VigilanceSeverity.YELLOW,
                f"{obs.article_monitored} metric {obs.metric_name} exceeded threshold by "
                f"{exceedance_ratio:.0%} — anomaly detected, human review required."
            )

        return (
            VigilanceSeverity.GREEN,
            f"{obs.article_monitored} metric {obs.metric_name} within normal range "
            f"({obs.metric_value:.3f} vs threshold {obs.threshold_value:.3f})."
        )

    def get_red_events(self) -> list[VigilanceEvent]:
        """Return all RED events — each requires Art.73 preliminary report."""
        return [e for e in self._events if e.severity == VigilanceSeverity.RED]

    def get_overdue_art73_events(self) -> list[VigilanceEvent]:
        """Return RED events past their 15-day (or 2-day) reporting deadline."""
        return [
            e for e in self.get_red_events()
            if e.days_remaining_for_art73_report() is not None
            and e.days_remaining_for_art73_report() <= 0
        ]

    def art9_update_required(self) -> bool:
        """Returns True if any event indicates Art.9 risk profile update is needed."""
        art9_events = [
            e for e in self._events
            if e.observation.article_monitored == "Art.9"
            and e.severity in (VigilanceSeverity.ORANGE, VigilanceSeverity.RED)
        ]
        return len(art9_events) > 0

Compliance Checklist (40 Items)

Section 1: PMM System Establishment (Art.72(1)) — 8 Items

Section 2: Continued Compliance Evaluation (Art.72(2)) — 8 Items

Section 3: PMM Plan (Art.72(3)) — 10 Items

Section 4: Vigilance Triggers and Art.73 Integration (Art.72(4)) — 8 Items

Section 5: Data and Infrastructure (Art.72 × CLOUD Act) — 6 Items


Cross-Article Intersection Matrix

| Art.72 Provision | Connected Article | Intersection | Action Required |
| --- | --- | --- | --- |
| Art.72(1) PMM obligation | Art.17 QMS | PMM system is a QMS component | Integrate PMM into QMS procedures |
| Art.72(1) proportionality | Art.9 risk classification | Higher risk → more intensive PMM | Calibrate PMM to Art.9 residual risk level |
| Art.72(2) continued compliance | Art.9–15 requirements | PMM evaluates all 7 requirements | Create conformity check for each article |
| Art.72(2) data collection | Art.30(5) deployer reports | Deployer data is primary PMM input | Design deployer reporting intake pipeline |
| Art.72(3) PMM plan | Annex IV Section 6 | PMM plan is technical documentation | Draft before market placement, maintain in Annex IV |
| Art.72(3) PMM plan | Art.43/Art.48 | PMM plan reviewed in conformity assessment | Include PMM plan review in Art.43 scope |
| Art.72(4) vigilance trigger | Art.73 | PMM detection → Art.73 clock | Automated detection = awareness start |
| Art.72(4) vigilance trigger | Art.20 corrective action | PMM nonconformity → Art.20 obligation | Corrective action protocol in PMM plan |
| Art.72(4) cross-provider | Art.74 MSA powers | MSAs share cross-provider risk signals | Process for responding to MSA Art.72(4) notifications |
| Art.72 general | Art.21 cooperation | PMM data producible to MSA | Archive PMM reports with instant retrieval |
| Art.72 general | Art.71 EUAIDB | EUAIDB number required in PMM plan | Register in EUAIDB before PMM goes operational |
| Art.72 general | CLOUD Act | PMM data on US cloud = dual compellability | EU-native infrastructure for PMM data |

Registration Timeline

| Milestone | Art.72 PMM Requirement |
| --- | --- |
| System design phase | PMM plan drafted — proportionality rationale, data sources, conformity checks |
| Pre-market conformity assessment (Art.43) | PMM plan reviewed as Annex IV Section 6 |
| EUAIDB registration (Art.71) | Registration number added to PMM plan |
| Market placement | PMM system operational — data collection active from day 1 |
| First deployer onboarding | Deployer reporting channel activated — Art.30(5) cooperation commences |
| 30 days post-placement | First PMM analysis cycle — Art.9–15 conformity check |
| Any serious incident detected | Art.73 clock started — PMM switches to incident documentation mode |
| Any Art.9–15 drift detected | Corrective action (Art.20) triggered + Art.9 update initiated |
| Substantial modification (Art.43(4)) | PMM plan updated before redeployment |

What Developers Should Do Now

Before market placement:

  1. Draft the PMM plan now — treat it as a design artifact, not a post-launch document. Include in Annex IV Section 6.
  2. Register in EUAIDB (Art.71) and add registration number to the PMM plan. Without it, the plan is incomplete.
  3. Calibrate monitoring intensity to your Annex III category and Art.9 residual risk level. Document the proportionality rationale.

At market placement:

  4. Activate all PMM data pipelines from day 1. A PMM system that starts 30 days post-launch has a 30-day compliance gap.
  5. Brief deployers on Art.30(5) obligations and provide them with your structured reporting channel.

Ongoing:

  6. Run a quarterly Art.9–15 conformity review using PMM data — even if no incidents are detected.
  7. Treat every Orange-tier PMM event as a pre-Art.73 event. Engage legal counsel within 4 hours.
  8. Update the PMM plan when the AI system changes, when a new deployer context is added, or when an Art.73 event occurs.

