EU AI Act Art.72: Post-Market Monitoring Plan for High-Risk AI Systems — Developer Guide (2026)
EU AI Act Article 72 establishes the post-market monitoring (PMM) system and monitoring plan obligation for providers of high-risk AI systems within the market surveillance framework of Chapter IX. Art.30 (Chapter III, Section 4) defines the provider's general PMM obligation as part of the core requirements for placing a high-risk AI system on the EU market. Art.72 governs the systematic monitoring architecture from the market surveillance perspective: what the PMM plan must document, how monitoring findings link to continued compliance evaluation against Chapter III Section 2 requirements (Art.9–15), and how the system must feed detected vigilance events into Art.73 incident reporting.
The practical difference is structural. Art.30 asks: "Does this provider have a PMM system?" Art.72 asks: "Does that system continuously evaluate whether the AI still meets Art.9–15 requirements in production?" Art.72 frames post-market monitoring not merely as incident detection but as an ongoing conformity verification loop — the deployed system must continuously demonstrate that it continues to satisfy risk management (Art.9), data quality (Art.10), technical documentation currency (Art.11), logging accuracy (Art.12), transparency (Art.13), human oversight (Art.14), and robustness (Art.15).
This guide covers Art.72(1)–(4) in full, the Art.72 monitoring plan mandatory content (Annex IV Section 6), the proportionality framework, the Art.72 × Art.9 feedback loop, Art.72 × Art.73 vigilance trigger architecture, the cross-provider risk pattern discovery mechanism, CLOUD Act jurisdiction risk for PMM data, Python implementation for PostMarketMonitoringPlan and VigilanceEventClassifier, and the 40-item Art.72 compliance checklist.
Art.72 became applicable on 2 August 2026 as part of the Chapter IX post-market and market surveillance framework. All providers with high-risk AI systems on the EU market from that date must have an operational PMM system and a documented plan meeting Art.72 requirements.
Art.72 vs Art.30: The Developer's Distinction
| Dimension | Art.30 (Chapter III) | Art.72 (Chapter IX) |
|---|---|---|
| Chapter context | Provider obligations (market placement requirements) | Market surveillance framework |
| Primary question | "Is there a PMM system?" | "Does the PMM system continuously verify conformity?" |
| Conformity link | General performance data collection | Explicit evaluation against Art.9–15 requirements |
| Risk proportionality | Referenced | Explicit proportionality framework |
| Cross-provider discovery | Not addressed | Art.72(4): cross-provider risk pattern reporting |
| Plan document | PMM plan in Annex IV | PMM plan structure and mandatory content |
| Vigilance trigger | PMM → Art.73 (general) | PMM → Art.73 vigilance system (specific trigger conditions) |
| Enforcement basis | Art.99(4): €15M / 3% global turnover | Art.99(4) + Art.74 MSA powers |
Both articles are required. Art.30 establishes the obligation; Art.72 defines the systematic architecture that makes that obligation operational. A provider can comply with Art.30 by having a PMM system. They comply with Art.72 by ensuring that system specifically evaluates Chapter III Section 2 conformity in production.
Art.72 at a Glance
| Provision | Content | Developer Obligation |
|---|---|---|
| Art.72(1) | Providers establish and document a PMM system proportionate to the nature of AI technology and risks | Design PMM system at system architecture level, document in QMS (Art.17) |
| Art.72(2) | PMM system actively collects, documents, and analyses data on performance; evaluates continued compliance with Art.9–15 | Operational data pipeline from deployment environments; conformity dashboard |
| Art.72(3) | Providers establish a PMM plan as part of technical documentation (Annex IV) | PMM plan as mandatory Annex IV Section 6 document |
| Art.72(4) | Market surveillance authorities informed when multiple providers' systems show similar risks; providers must correct and report | Cross-system risk signals → Art.73 reports + MSA notification |
Art.72(1): Proportionate PMM System Obligation
Art.72(1) requires every provider of a high-risk AI system to establish and maintain a post-market monitoring system. Two requirements are explicit:
- Documentation: The system must be documented — not merely operational. Documentation links to the technical documentation obligation (Annex IV) and the QMS (Art.17). An undocumented PMM process fails Art.72(1) even if monitoring activities occur informally.
- Proportionality: The system must be proportionate to the nature of the AI technology and the risks of the high-risk AI system. This creates a calibrated obligation: the PMM architecture for an Annex III category 6 (law enforcement) biometric identification system requires more intensive monitoring than for a category 3 (education and vocational training) system, even though both are formally high-risk.
The Proportionality Framework
Art.72(1) proportionality operates across four dimensions:
| Dimension | Low-Risk Calibration | High-Risk Calibration |
|---|---|---|
| Monitoring frequency | Quarterly performance analysis | Real-time drift detection + daily summary |
| Data sources | Deployer summary reports (Art.30(5)) | Live telemetry + automated anomaly detection |
| Conformity re-evaluation | Annual Art.9 review | Event-triggered review + rolling 90-day assessment |
| Art.73 vigilance | Quarterly incident screening | Automated real-time incident detection pipeline |
Providers must document their proportionality rationale — why the chosen monitoring intensity is appropriate for their system's Annex III category, risk level, and deployment context. Market surveillance authorities reviewing Art.72 compliance will look for this documentation.
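The calibration logic implied by the table above can be sketched as a simple lookup. This is a hypothetical helper, not anything prescribed by the Act — the category strings mirror the enum used later in this guide, and the scale threshold is an illustrative placeholder each provider would set in its own proportionality rationale:

```python
from enum import Enum

class Intensity(Enum):
    STANDARD = "standard"    # quarterly analysis, deployer reports
    ENHANCED = "enhanced"    # monthly analysis, automated anomaly detection
    INTENSIVE = "intensive"  # real-time telemetry, daily summary

# Hypothetical baseline: Annex III category → monitoring intensity.
# The Act does not fix these values; Art.72(1) only requires the choice
# to be proportionate and documented.
BASELINE = {
    "6_law_enforcement": Intensity.INTENSIVE,
    "1_biometric_categorisation": Intensity.INTENSIVE,
    "4_employment_worker_management": Intensity.ENHANCED,
    "3_education_training": Intensity.STANDARD,
}

def calibrate(category: str, deployment_scale: int) -> Intensity:
    """Escalate the baseline intensity for large-scale deployments."""
    base = BASELINE.get(category, Intensity.ENHANCED)
    if base is Intensity.STANDARD and deployment_scale > 100_000:
        return Intensity.ENHANCED  # scale raises risk even in lower-risk categories
    return base
```

The point of encoding the calibration is that the lookup table itself becomes the documented proportionality rationale an MSA can inspect.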
Documentation in the QMS
Art.72(1) documentation does not stand alone — it is part of the Quality Management System (Art.17). The QMS must include procedures for post-market monitoring. Art.17(1)(j) explicitly requires QMS documentation to cover the PMM system. Providers building a standalone PMM process without QMS integration satisfy neither Art.17 nor Art.72(1).
The PMM system documentation should specify:
- Roles and responsibilities for PMM operations
- Data sources (deployer reports, telemetry, user feedback, regulator communications)
- Analysis methods and thresholds for escalation
- Feedback channels to Art.9 risk management and Art.73 incident reporting
- Version control for the PMM plan when the system or monitoring approach changes
Art.72(2): Continued Compliance Evaluation
Art.72(2) is the article's most operationally significant provision. It requires that the PMM system:
- Actively collect data on the performance of the high-risk AI system throughout its lifetime
- Document and analyse that data
- Evaluate continued compliance with the requirements set out in Chapter III, Section 2
The third obligation — evaluating continued compliance with Art.9–15 — transforms post-market monitoring from generic incident detection into a rolling conformity audit. The system deployed on 2 August 2026 must still satisfy Art.9–15 on 2 August 2027 and 2 August 2028. Drift, context change, or deployment expansion can all invalidate initial conformity.
The Art.72(2) Conformity Evaluation Matrix
| Art.9–15 Requirement | PMM Data Point | Compliance Failure Indicator |
|---|---|---|
| Art.9: Risk management | Observed harm rates vs. residual risk estimates | Harm rate exceeds residual risk threshold |
| Art.10: Data governance | Deployer-reported data quality degradation | Data drift → distribution shift in input data |
| Art.11: Technical documentation | Change log completeness | Undocumented modifications to system architecture |
| Art.12: Logging | Log completeness rates from deployers | Logging gaps → undetectable Art.73 events |
| Art.13: Transparency | User complaint analysis re: system explainability | Transparency gaps in new deployment contexts |
| Art.14: Human oversight | Oversight failure incidents from deployers | Human oversight bypass rate exceeds threshold |
| Art.15: Robustness/cybersecurity | Adversarial probe results; security incident rate | Robustness degradation → increased vulnerability |
Art.72(2) compliance means having a documented methodology for collecting these data points and a process for evaluating whether they indicate continued conformity. A provider who only monitors for serious incidents (Art.73) but does not evaluate Art.9–15 conformity drift fails Art.72(2).
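A minimal sketch of the rolling evaluation the matrix implies — one observable per article, checked against the threshold documented in the PMM plan. The metric names and threshold values here are illustrative assumptions, not taken from the Act:

```python
# Hypothetical rolling conformity check for a subset of Art.9-15.
# Each entry: article → (observed metric name, documented threshold).
THRESHOLDS = {
    "Art.9":  ("harm_rate", 0.002),        # observed harms per decision (upper bound)
    "Art.12": ("log_completeness", 0.99),  # share of decisions fully logged (lower bound)
    "Art.14": ("oversight_bypass_rate", 0.01),
}

def evaluate(observed: dict[str, float]) -> list[str]:
    """Return the articles whose production metrics indicate conformity drift."""
    failing = []
    for article, (metric, limit) in THRESHOLDS.items():
        value = observed.get(metric)
        if value is None:
            failing.append(article)  # missing data is itself an Art.72(2) gap
        elif metric == "log_completeness":
            if value < limit:        # completeness must stay above the limit
                failing.append(article)
        elif value > limit:          # rates must stay below the limit
            failing.append(article)
    return failing
```

Note that absent data fails the check: a provider who cannot observe a requirement cannot claim continued conformity with it.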
Art.72(3): The Post-Market Monitoring Plan
Art.72(3) requires providers to establish a post-market monitoring plan as part of the technical documentation required by Annex IV. The plan is not a separate document produced post-launch — it is a design-time artifact that must be drafted before market placement and updated as the system evolves.
Mandatory Plan Content
The PMM plan must cover at minimum:
| Plan Section | Content | Regulatory Basis |
|---|---|---|
| System identification | System name, version, Annex III category, EUAIDB registration number (Art.71) | Annex IV Section 1 |
| Monitoring objectives | What Art.9–15 requirements the plan actively monitors for continued compliance | Art.72(2) |
| Data collection methods | Sources, frequency, format, retention period for performance data | Art.72(2) |
| Deployer cooperation | How deployer reports (Art.30(5)) feed into PMM analysis | Art.30(5) × Art.72(3) |
| Analysis methodology | Thresholds, statistical methods, automated vs. manual review | Art.72(2) |
| Vigilance event criteria | What monitoring findings trigger Art.73 serious incident reporting | Art.72(4) × Art.73 |
| Art.9 feedback procedure | How PMM findings update the risk management system | Art.9(9) × Art.72(2) |
| Corrective action protocol | What happens when monitoring finds noncompliance | Art.20 × Art.72 |
| Plan version control | When and how the plan is updated (system change, significant incident, market expansion) | Art.11 × Art.72(3) |
| Responsible parties | Who owns each PMM function and who has authority to trigger escalation | Art.17 QMS integration |
The Annex IV Connection
Annex IV specifies the required content of technical documentation. Section 6 of Annex IV is specifically reserved for post-market monitoring. The PMM plan must be in Annex IV Section 6 — not in a separate operational document. When a market surveillance authority requests technical documentation under Art.21, the PMM plan must be immediately producible.
Critical implication: the PMM plan is updated whenever the system changes. If a substantial modification under Art.6(3) triggers a new conformity assessment (Art.43) and new technical documentation, the PMM plan must be updated to reflect the modified system's new risk profile and monitoring requirements.
Art.72(4): Vigilance Events and Cross-Provider Risk Signals
Art.72(4) addresses two distinct scenarios:
Scenario A: Vigilance Events Triggering Art.73
When PMM activities detect what could be a serious incident under Art.3(49), the PMM system must immediately trigger the Art.73 reporting workflow. Art.72(4) creates the bridge between the monitoring system (Art.72) and the incident reporting obligation (Art.73).
The trigger is two-stage:
- Detection: PMM data indicates an event meeting the Art.3(49) definition (death, serious health harm, fundamental rights violation, or major infrastructure disruption)
- Escalation: Art.73 reporting clock starts from the moment the provider becomes aware — and PMM system detection constitutes awareness
Providers whose PMM system has automated incident detection must therefore ensure the Art.73 reporting clock starts from the automated detection time, not from when a human reviews the automated alert. If the PMM system detects a serious incident on Tuesday but counsel reviews the alert on Thursday, the awareness date is Tuesday — unless the Tuesday signal was not yet specific enough to constitute "awareness" of a serious incident, a position the provider must be able to document and defend.
Best practice: define a vigilance event severity threshold in the PMM plan that distinguishes:
- Green: Performance data within normal range — no escalation
- Yellow: Anomaly detected — enhanced monitoring, human review within 24h
- Orange: Potential harm signal — pre-Art.73 review, legal counsel within 4h
- Red: Confirmed serious incident criteria met — Art.73 clock starts
Scenario B: Cross-Provider Risk Pattern Discovery
Art.72(4) also addresses a less commonly discussed scenario: when a market surveillance authority discovers that multiple providers' AI systems — performing similar functions — are exhibiting similar risks. In this case:
- The MSA notifies other MSAs and the European Commission (and the AI Office for GPAI)
- The Commission may request additional PMM data from affected providers
- Individual providers may receive corrective action orders even if their specific system has not yet caused an incident
For developers, this creates a cross-system monitoring risk: your system may face regulatory action based on risk patterns discovered from competitors' products in the same category. Maintaining a PMM system that provides robust evidence of your system's performance — distinct from category-wide patterns — is the only defence.
Art.72 × Art.9: The Risk Management Feedback Loop
Art.9(9) requires the risk management system to be updated throughout the high-risk AI system's lifecycle. Art.72(2) requires PMM to evaluate continued conformity with Art.9 requirements. These two obligations create a mandatory feedback loop:
Art.9 Risk Assessment
↓
System Deployed → Art.72 PMM Active
↓
PMM Data Analysis
↓
Conformity finding:
• Risk profile unchanged → document + continue
• New risk identified → trigger Art.9 update
• Art.9–15 drift → corrective action (Art.20)
↓
Art.9 Updated → Technical Documentation Updated (Annex IV)
↓
If substantial: New conformity assessment (Art.43) + EUAIDB update (Art.71)
This loop has a critical timing implication: Art.9 updates triggered by PMM findings must be documented before the next PMM reporting cycle. A gap between PMM finding and Art.9 update is evidence of noncompliance with both Art.72(2) and Art.9(9).
PMM Trigger Conditions for Art.9 Update
| PMM Finding | Required Response | Timeline |
|---|---|---|
| Harm rate exceeds residual risk estimate by >10% | Art.9 risk re-assessment mandatory | Within 30 days of detection |
| New deployment context (sector, geography, population) | Art.9 update for new context | Before deployment in new context |
| Adversarial attack succeeds in production | Art.9 update + Art.15 review | Within 15 days |
| Deployer reports systematic oversight failure | Art.9 update + Art.14 review | Within 30 days |
| Substantial modification of AI system | Full Art.9 update | Before modification deployment |
Art.72 × Art.30: Deployer Data Contribution
Art.30(5) requires deployers to "monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers about serious incidents." This deployer obligation is the primary external data input for the provider's Art.72 PMM system.
The Art.72 PMM plan must specify how deployer reports are:
- Received — what channel, what format, what SLA for provider acknowledgment
- Categorized — routine performance data vs. anomaly vs. potential serious incident
- Integrated into the PMM analysis pipeline
- Acted upon — feedback to the deployer confirming receipt + action taken
Providers who design their PMM system without a formal deployer reporting intake process fail both Art.72(3) (PMM plan must cover deployer cooperation) and the spirit of Art.30(5) (deployer obligation to report is only meaningful if the provider has infrastructure to receive and act on reports).
Deployer Cooperation Agreement
Best practice: include a PMM cooperation clause in the deployer contract specifying:
- Obligation to report anomalies within 5 business days (serious incidents under Art.30(5) have no explicit timeline, but 5 days is a reasonable good-faith standard)
- Format of performance reports (structured JSON vs. freeform description — structured is preferable for automated PMM analysis)
- What constitutes a "malfunction" triggering immediate notification vs. routine reporting
- Provider's obligation to respond to deployer reports within X days
This clause has dual regulatory benefit: it provides the deployer with documentation that they have fulfilled Art.30(5), and it ensures the provider receives data required under Art.72(2).
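A structured intake makes the categorization step mechanical. The sketch below assumes a hypothetical JSON schema — field names like `suspected_serious_incident` would in practice be fixed in the deployer cooperation clause, not taken from the Act:

```python
import json
from datetime import datetime

# Hypothetical intake for structured deployer reports (Art.30(5) → Art.72 pipeline).
def categorize_report(payload: str) -> str:
    """Route a deployer report: 'serious_incident', 'anomaly', or 'routine'."""
    report = json.loads(payload)
    if report.get("suspected_serious_incident"):
        return "serious_incident"  # escalate to pre-Art.73 review immediately
    if report.get("metric_out_of_range"):
        return "anomaly"           # enhanced monitoring + human review
    return "routine"               # feeds quarterly performance analysis

# Example payload in the assumed schema
example = json.dumps({
    "deployer_id": "DEP-0042",
    "reported_at": datetime(2026, 9, 3, 14, 0).isoformat(),
    "metric_out_of_range": True,
    "suspected_serious_incident": False,
})
```

Structured intake also produces the acknowledgment trail both sides need: the deployer's proof of having reported, and the provider's proof of having received and categorized.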
Art.72 × Art.17: QMS Integration
Art.17 requires providers to implement a Quality Management System. Art.17(1)(j) explicitly lists the PMM system as a QMS component. Art.72 defines what that PMM system must do.
The QMS-PMM integration means:
- PMM procedures are QMS procedures: The Art.72 monitoring workflows must be documented as QMS procedures, not as separate operational runbooks. QMS audits include PMM audit.
- PMM findings are QMS inputs: Nonconformities detected by PMM feed into the QMS corrective action process (Art.17(1)(h)).
- PMM plan is QMS-controlled: Updates to the PMM plan require QMS-level change management — version control, approval process, distribution to affected parties.
- PMM records are QMS records: Performance data, analysis reports, and escalation decisions are QMS records subject to the 10-year retention requirement (Art.18(1), covering the Annex IV technical documentation).
Providers building PMM infrastructure outside the QMS framework create a documentation gap that market surveillance authorities will identify during audits. The Art.74(3) right of access to technical documentation includes QMS records — a PMM system that exists but is not QMS-integrated provides weaker compliance evidence.
Art.72 × Art.21: Market Surveillance Access to PMM Data
Art.21 requires providers to cooperate with competent authorities and supply information required for compliance assessment. This includes PMM data and reports.
For Art.72, this means:
- PMM reports must be immediately producible: When an MSA requests PMM data, the provider must produce it without delay. A PMM system that produces data but does not archive reports fails Art.21 cooperation requirements.
- PMM data retention: Art.72 does not specify a retention period, but Art.18(1) requires technical documentation to be kept for 10 years after the high-risk AI system was placed on the market. PMM data — as part of Annex IV Section 6 — is subject to this 10-year standard.
- Real-time access in investigations: Under Art.74(8), market surveillance authorities may request real-time access to AI systems' automatically generated logs (Art.12). In practice, PMM telemetry and logs may need to be producible at MSA request.
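The retention window is a simple date computation worth making explicit in the archiving layer. A minimal sketch of the 10-year rule as applied to PMM records (the `replace`-based year arithmetic is a simplification that ignores 29 February edge cases):

```python
from datetime import date

def retention_expiry(last_placed_on_market: date) -> date:
    """End of the 10-year retention window for technical documentation,
    counted from when the system was last placed on the market."""
    return last_placed_on_market.replace(year=last_placed_on_market.year + 10)

def must_retain(last_placed: date, today: date) -> bool:
    """A PMM record must remain producible on MSA request until the window closes."""
    return today <= retention_expiry(last_placed)
```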
CLOUD Act × Art.72: PMM Data Jurisdiction Risk
PMM systems collect and store continuous performance data from deployed AI systems. Where that data is stored matters for CLOUD Act analysis.
PMM Data Types and Jurisdiction Risk
| PMM Data Type | CLOUD Act Risk on US Infrastructure | EU-Native PMM |
|---|---|---|
| Deployer performance reports | Medium — US authorities can compel provider's stored records | Not CLOUD Act reachable |
| Real-time telemetry from AI system | High — continuous stream of operational data | Not CLOUD Act reachable |
| Art.73 pre-report incident evidence | Very High — exactly what enforcement investigations want | Not CLOUD Act reachable |
| Art.9 risk assessment updates triggered by PMM | High — documents provider knowledge of risk evolution | Not CLOUD Act reachable |
| PMM plan documents (Annex IV Section 6) | Medium — design documentation | Not CLOUD Act reachable |
The Dual Compellability Risk
A PMM system hosted on US cloud infrastructure creates a dual compellability scenario: EU market surveillance authorities can request PMM data under Art.21; US authorities can compel the same data under CLOUD Act. The provider is simultaneously subject to two legal regimes for the same data. If those regimes produce conflicting demands — EU confidentiality obligations vs. US disclosure orders — the provider has no legal exit.
EU-native infrastructure (where the cloud provider is not subject to US ECS-provider status) eliminates this dual compellability risk. For PMM data specifically — which includes continuous documentation of how an AI system performs in production — EU-native hosting means single-jurisdiction access: only EU competent authorities under Art.21/Art.74.
This is particularly significant for Art.73 pre-incident evidence. PMM systems that detect a potential serious incident generate exactly the records that enforcement authorities seek. EU-native storage ensures those records are only accessible under the Art.70 confidentiality framework.
Python Implementation
1. PostMarketMonitoringPlan
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional
import json


class AnnexIIICategory(Enum):
    CAT_1_BIOMETRIC = "1_biometric_categorisation"
    CAT_2_CRITICAL_INFRA = "2_critical_infrastructure"
    CAT_3_EDUCATION = "3_education_training"
    CAT_4_EMPLOYMENT = "4_employment_worker_management"
    CAT_5_ESSENTIAL_SERVICES = "5_essential_services"
    CAT_6_LAW_ENFORCEMENT = "6_law_enforcement"
    CAT_7_MIGRATION = "7_migration_asylum_border"
    CAT_8_JUSTICE = "8_justice_democratic_processes"


class MonitoringIntensity(Enum):
    STANDARD = "standard"    # Quarterly analysis, deployer reports
    ENHANCED = "enhanced"    # Monthly analysis, automated anomaly detection
    INTENSIVE = "intensive"  # Real-time telemetry, daily summary, 24h on-call


@dataclass
class PMM_DataSource:
    """A single data source feeding the PMM system."""
    source_id: str
    source_type: str           # "deployer_report", "telemetry", "user_feedback", "regulator"
    collection_frequency: str  # "real_time", "daily", "weekly", "monthly", "quarterly"
    format: str                # "json_structured", "api_webhook", "form_submission", "email"
    retention_years: int = 10  # Default matches the Art.18(1) 10-year requirement


@dataclass
class ConformityCheck:
    """Maps an Art.9-15 requirement to a PMM observable."""
    article: str            # "Art.9", "Art.10", etc.
    requirement_summary: str
    pmm_data_source: str    # source_id from PMM_DataSource
    metric: str             # What is measured
    threshold: str          # What constitutes a conformity failure
    escalation_action: str  # What happens when threshold is breached


@dataclass
class PostMarketMonitoringPlan:
    """
    Post-Market Monitoring Plan as required by EU AI Act Art.72(3).
    Must be documented in Annex IV Section 6 of technical documentation.
    """
    # System identification (Annex IV Section 1)
    system_name: str
    system_version: str
    annex_iii_category: AnnexIIICategory
    euaidb_registration_number: Optional[str]  # Art.71 EUAIDB number
    provider_name: str
    plan_version: str
    plan_date: date

    # Art.72(1): Proportionality basis
    monitoring_intensity: MonitoringIntensity
    proportionality_rationale: str  # Must document WHY this intensity is appropriate

    # Art.72(2): Data collection and conformity evaluation
    data_sources: list[PMM_DataSource] = field(default_factory=list)
    conformity_checks: list[ConformityCheck] = field(default_factory=list)

    # Art.72(3): Plan mandatory sections
    deployer_reporting_channel: str = ""  # How deployers submit Art.30(5) reports
    deployer_reporting_sla_days: int = 5  # Provider response SLA
    deployer_report_format: str = "json"  # Structured preferred

    # Art.72(4): Vigilance triggers
    art73_trigger_criteria: list[str] = field(default_factory=list)
    art9_update_triggers: list[str] = field(default_factory=list)

    # Art.20: Corrective action protocol
    corrective_action_escalation: str = ""  # Who is notified and within what time

    # Plan lifecycle
    review_triggers: list[str] = field(default_factory=list)  # When plan is updated

    def validate(self) -> list[str]:
        """Returns list of compliance gaps. Empty list = plan is Art.72(3) compliant."""
        gaps = []
        if not self.proportionality_rationale:
            gaps.append("Art.72(1): No proportionality rationale documented")
        if not self.data_sources:
            gaps.append("Art.72(2): No data sources defined — cannot collect performance data")
        # Check all Art.9-15 requirements are covered
        covered_articles = {c.article for c in self.conformity_checks}
        required_articles = {"Art.9", "Art.10", "Art.11", "Art.12", "Art.13", "Art.14", "Art.15"}
        missing = required_articles - covered_articles
        if missing:
            gaps.append(f"Art.72(2): No conformity check for: {', '.join(sorted(missing))}")
        if not self.deployer_reporting_channel:
            gaps.append("Art.72(3): No deployer reporting channel defined — Art.30(5) cooperation impossible")
        if not self.art73_trigger_criteria:
            gaps.append("Art.72(4): No Art.73 vigilance trigger criteria defined")
        if not self.art9_update_triggers:
            gaps.append("Art.72 × Art.9: No Art.9 update trigger conditions defined")
        if not self.euaidb_registration_number:
            gaps.append("Art.71: No EUAIDB registration number — required before plan goes operational")
        if not self.corrective_action_escalation:
            gaps.append("Art.20: No corrective action escalation protocol")
        if not self.review_triggers:
            gaps.append("Art.72(3): No plan review triggers — plan may become stale after system changes")
        return gaps

    def is_ready_for_market_placement(self) -> bool:
        """Returns True if plan is ready for Annex IV Section 6 inclusion."""
        return len(self.validate()) == 0

    def to_annex_iv_section_6(self) -> str:
        """Serializes plan to the format required for Annex IV Section 6."""
        gaps = self.validate()
        if gaps:
            raise ValueError(
                f"Plan has {len(gaps)} compliance gap(s) — cannot produce Annex IV Section 6:\n"
                + "\n".join(f"  - {g}" for g in gaps)
            )
        return json.dumps({
            "annex_iv_section": 6,
            "system": f"{self.system_name} v{self.system_version}",
            "annex_iii_category": self.annex_iii_category.value,
            "euaidb_registration": self.euaidb_registration_number,
            "monitoring_intensity": self.monitoring_intensity.value,
            "proportionality_rationale": self.proportionality_rationale,
            "data_sources": len(self.data_sources),
            "conformity_checks": len(self.conformity_checks),
            "articles_monitored": sorted({c.article for c in self.conformity_checks}),
            "deployer_channel": self.deployer_reporting_channel,
            "art73_triggers": len(self.art73_trigger_criteria),
            "plan_version": self.plan_version,
            "plan_date": str(self.plan_date),
        }, indent=2)
2. VigilanceEventClassifier
from dataclasses import dataclass
from enum import Enum
from datetime import datetime
from typing import Optional
class VigilanceSeverity(Enum):
GREEN = "green" # Normal range — continue monitoring
YELLOW = "yellow" # Anomaly — human review within 24h
ORANGE = "orange" # Potential harm — pre-Art.73 review, counsel within 4h
RED = "red" # Serious incident criteria met — Art.73 clock starts NOW
@dataclass
class MonitoringObservation:
"""A single data point from the PMM system."""
observation_id: str
timestamp: datetime
source_id: str # Links to PMM_DataSource.source_id
article_monitored: str # "Art.9", "Art.12", etc.
metric_name: str
metric_value: float
threshold_value: float
deployer_id: Optional[str] = None
raw_data: Optional[str] = None
@dataclass
class VigilanceEvent:
"""
A classified monitoring event. If severity is RED, Art.73 clock has started.
Awareness datetime = observation.timestamp for automated detection.
"""
event_id: str
observation: MonitoringObservation
severity: VigilanceSeverity
classification_datetime: datetime
classification_rationale: str
art73_clock_started: bool
art73_awareness_datetime: Optional[datetime] # = observation.timestamp if RED
required_action: str
escalation_deadline: Optional[datetime] # Absolute datetime for RED events
def days_remaining_for_art73_report(self) -> Optional[float]:
"""
Returns days remaining to file Art.73 preliminary report.
Returns None if event is not RED (no Art.73 obligation).
Art.73 requires 2 working days (death/health) or 15 calendar days (other).
Uses 15-day calendar default; caller must check for 2-day cases.
"""
if not self.art73_clock_started or self.art73_awareness_datetime is None:
return None
elapsed = (datetime.now() - self.art73_awareness_datetime).total_seconds() / 86400
return max(0.0, 15.0 - elapsed) # 15-day calendar default
def is_overdue_for_2_day_reporting(self) -> Optional[bool]:
"""
Returns True if this RED event involves death/health/safety and 2 working days have passed.
Returns None if not applicable.
"""
if not self.art73_clock_started or self.art73_awareness_datetime is None:
return None
elapsed_hours = (datetime.now() - self.art73_awareness_datetime).total_seconds() / 3600
return elapsed_hours > 48.0 # Conservative: uses calendar hours not working hours
class VigilanceEventClassifier:
"""
Classifies monitoring observations as vigilance events.
Implements the Art.72(4) trigger architecture with four severity levels.
"""
def __init__(self, pmm_plan: "PostMarketMonitoringPlan"):
self.plan = pmm_plan
self._events: list[VigilanceEvent] = []
def classify(self, obs: MonitoringObservation) -> VigilanceEvent:
"""Classify a monitoring observation and record it."""
severity, rationale = self._assess_severity(obs)
art73_clock = severity == VigilanceSeverity.RED
awareness_dt = obs.timestamp if art73_clock else None
escalation_deadline = None
if art73_clock and awareness_dt:
from datetime import timedelta
# 15-day default; 2-day applies if provider determines death/health risk
escalation_deadline = datetime(
awareness_dt.year, awareness_dt.month, awareness_dt.day
) + timedelta(days=15)
required_action = {
VigilanceSeverity.GREEN: "Continue normal monitoring — no escalation",
VigilanceSeverity.YELLOW: "Assign to PMM analyst — human review within 24h",
VigilanceSeverity.ORANGE: "Escalate to legal counsel — pre-Art.73 review within 4h",
VigilanceSeverity.RED: "FILE Art.73 report — clock started at observation timestamp",
}[severity]
        event = VigilanceEvent(
            event_id=f"VE-{obs.observation_id}",
            observation=obs,
            severity=severity,
            classification_datetime=datetime.now(),
            classification_rationale=rationale,
            art73_clock_started=art73_clock,
            art73_awareness_datetime=awareness_dt,
            required_action=required_action,
            escalation_deadline=escalation_deadline,
        )
        self._events.append(event)
        return event

    def _assess_severity(
        self, obs: MonitoringObservation
    ) -> tuple[VigilanceSeverity, str]:
        """Determine severity based on threshold exceedance and article monitored."""
        exceedance_ratio = (
            (obs.metric_value - obs.threshold_value) / max(obs.threshold_value, 0.001)
            if obs.metric_value > obs.threshold_value
            else 0.0
        )
        # Check for direct Art.73 serious incident criteria
        if obs.article_monitored in ("Art.9", "Art.14") and exceedance_ratio > 1.0:
            return (
                VigilanceSeverity.RED,
                f"{obs.article_monitored} metric {obs.metric_name} exceeded threshold by "
                f"{exceedance_ratio:.0%} — serious incident criteria likely met. "
                "Art.73 clock started.",
            )
        if exceedance_ratio > 0.5:
            return (
                VigilanceSeverity.ORANGE,
                f"{obs.article_monitored} metric {obs.metric_name} exceeded threshold by "
                f"{exceedance_ratio:.0%} — pre-Art.73 legal review required.",
            )
        if exceedance_ratio > 0.1:
            return (
                VigilanceSeverity.YELLOW,
                f"{obs.article_monitored} metric {obs.metric_name} exceeded threshold by "
                f"{exceedance_ratio:.0%} — anomaly detected, human review required.",
            )
        return (
            VigilanceSeverity.GREEN,
            f"{obs.article_monitored} metric {obs.metric_name} within normal range "
            f"({obs.metric_value:.3f} vs threshold {obs.threshold_value:.3f}).",
        )

    def get_red_events(self) -> list[VigilanceEvent]:
        """Return all RED events — each requires Art.73 preliminary report."""
        return [e for e in self._events if e.severity == VigilanceSeverity.RED]

    def get_overdue_art73_events(self) -> list[VigilanceEvent]:
        """Return RED events past their 15-day (or 2-day) reporting deadline."""
        return [
            e
            for e in self.get_red_events()
            if e.days_remaining_for_art73_report() is not None
            and e.days_remaining_for_art73_report() <= 0
        ]

    def art9_update_required(self) -> bool:
        """Returns True if any event indicates Art.9 risk profile update is needed."""
        art9_events = [
            e
            for e in self._events
            if e.observation.article_monitored == "Art.9"
            and e.severity in (VigilanceSeverity.ORANGE, VigilanceSeverity.RED)
        ]
        return len(art9_events) > 0
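For a quick sanity check of the tier boundaries, the thresholding logic above can be restated as a standalone function. This is a minimal sketch: `SimpleObservation` and the `Severity` enum are illustrative stand-ins for the guide's `MonitoringObservation` and `VigilanceSeverity` types, mirroring only the fields the classifier actually reads.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    GREEN = "green"
    YELLOW = "yellow"
    ORANGE = "orange"
    RED = "red"


@dataclass
class SimpleObservation:
    """Illustrative stand-in: only the fields _assess_severity reads."""
    article_monitored: str
    metric_value: float
    threshold_value: float


def tier(obs: SimpleObservation) -> Severity:
    """Restates the exceedance-ratio tiers used by _assess_severity."""
    if obs.metric_value <= obs.threshold_value:
        return Severity.GREEN
    ratio = (obs.metric_value - obs.threshold_value) / max(obs.threshold_value, 0.001)
    # >100% exceedance on an Art.9 or Art.14 metric maps to RED (Art.73 territory)
    if obs.article_monitored in ("Art.9", "Art.14") and ratio > 1.0:
        return Severity.RED
    if ratio > 0.5:
        return Severity.ORANGE
    if ratio > 0.1:
        return Severity.YELLOW
    return Severity.GREEN
```

For example, a harm-rate metric of 0.22 against an Art.9 residual risk threshold of 0.10 is a 120% exceedance and lands in RED; the same exceedance on an Art.15 robustness metric stops at ORANGE, because only Art.9/Art.14 breaches map directly to the serious incident tier.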
Compliance Checklist (40 Items)
Section 1: PMM System Establishment (Art.72(1)) — 8 Items
- 1. PMM system established and documented before market placement
- 2. PMM system proportionality rationale documented (why monitoring intensity is appropriate for this system's risk)
- 3. PMM system documented as QMS procedure (Art.17(1)(j) integration)
- 4. Roles and responsibilities for PMM operations defined and assigned
- 5. PMM system version-controlled — changes tracked with rationale
- 6. PMM system includes all four data source types (deployer, telemetry, user, regulator)
- 7. PMM system operational from day 1 of market placement (not retrofitted)
- 8. PMM system architecture reviewed and approved by legal counsel before go-live
Section 2: Continued Compliance Evaluation (Art.72(2)) — 8 Items
- 9. PMM system actively collects data (not passive — requires active data pipelines)
- 10. Art.9 (risk management) monitored via PMM — harm rate vs. residual risk threshold tracked
- 11. Art.10 (data governance) monitored — data drift detection for input data distribution
- 12. Art.11 (technical documentation) monitored — undocumented changes flagged
- 13. Art.12 (logging) monitored — log completeness rates from deployer environments
- 14. Art.13 (transparency) monitored — user explainability complaints tracked
- 15. Art.14 (human oversight) monitored — oversight failure/bypass rate tracked
- 16. Art.15 (robustness) monitored — adversarial probe results and security incident rate
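Items 10–16 require one monitored signal per Chapter III Section 2 article. A mechanical coverage check over the monitoring configuration catches gaps before go-live. This is a sketch: the metric names are illustrative examples drawn from the checklist wording, not a normative mapping.

```python
# Hypothetical monitoring configuration: one PMM metric per Art.9-15
# requirement, following checklist items 10-16 (names are illustrative).
MONITORED_METRICS: dict[str, str] = {
    "Art.9": "harm_rate_vs_residual_risk_threshold",
    "Art.10": "input_data_drift_score",
    "Art.11": "undocumented_change_count",
    "Art.12": "log_completeness_rate",
    "Art.13": "explainability_complaint_rate",
    "Art.14": "oversight_bypass_rate",
    "Art.15": "adversarial_probe_failure_rate",
}

# Art.72(2) continued-compliance scope: all seven Art.9-15 requirements.
REQUIRED_ARTICLES = tuple(f"Art.{n}" for n in range(9, 16))


def coverage_gaps(config: dict[str, str]) -> list[str]:
    """Return the Art.9-15 articles with no PMM metric configured."""
    return [a for a in REQUIRED_ARTICLES if a not in config]
```

An empty result means every Section 2 requirement has at least one active PMM signal; anything else is a documented nonconformity in the PMM plan itself.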
Section 3: PMM Plan (Art.72(3)) — 10 Items
- 17. PMM plan documented in Annex IV Section 6 (not a separate standalone document)
- 18. PMM plan includes all 10 mandatory content sections (see table above)
- 19. PMM plan version-controlled with QMS change management process
- 20. PMM plan includes EUAIDB registration number (Art.71 link)
- 21. Deployer reporting channel defined in PMM plan (format + SLA + intake process)
- 22. Deployer cooperation clause included in deployer contracts (Art.30(5) operationalization)
- 23. PMM plan review triggers documented (system change, substantial modification, Art.73 event)
- 24. PMM plan immediately producible on Art.21 MSA request (no retrieval delay)
- 25. PMM data retained for 10 years (matches Art.18(2) technical documentation requirement)
- 26. PMM plan updated after every substantial modification (Art.6(3)) before redeployment
Section 4: Vigilance Triggers and Art.73 Integration (Art.72(4)) — 8 Items
- 27. Vigilance event severity tiers defined (Green/Yellow/Orange/Red or equivalent)
- 28. Red tier criteria explicitly mapped to Art.3(49) serious incident definition
- 29. Art.73 clock starts from automated PMM detection (not from human review of alert)
- 30. 2-working-day vs. 15-calendar-day determination logic implemented in PMM system
- 31. Art.9 update trigger conditions defined — PMM finding → Art.9 review obligation
- 32. Corrective action protocol (Art.20) documented with escalation timelines
- 33. Cross-provider risk monitoring: process for responding to MSA Art.72(4) notifications
- 34. Pre-Art.73 Orange tier: legal counsel notification within 4h
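Item 30's deadline determination can be sketched as a pure function over the awareness timestamp. Assumptions: only the two deadline classes named in this checklist (2 working days vs. 15 calendar days) are modeled, and "working days" is a naive Mon–Fri rule with no holiday calendar, which a production system would need to replace.

```python
from datetime import datetime, timedelta


def add_working_days(start: datetime, days: int) -> datetime:
    """Advance by Mon-Fri working days, skipping weekends (no holiday calendar)."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 = Mon-Fri
            remaining -= 1
    return current


def art73_deadline(awareness: datetime, widespread_or_critical: bool) -> datetime:
    """Per items 29-30: 2 working days for widespread-infringement /
    critical-infrastructure events, else the general 15-calendar-day deadline.
    The clock starts at automated PMM detection, not at human alert review."""
    if widespread_or_critical:
        return add_working_days(awareness, 2)
    return awareness + timedelta(days=15)
```

For example, automated detection on Friday 7 August 2026 yields a 2-working-day deadline of Tuesday 11 August, while the general deadline falls on 22 August.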
Section 5: Data and Infrastructure (Art.72 × CLOUD Act) — 6 Items
- 35. PMM data storage jurisdiction documented (EU vs. US vs. multi-region)
- 36. CLOUD Act risk assessment completed for PMM data — dual compellability risk mapped
- 37. EU-native infrastructure evaluated for PMM data storage (single-regime access)
- 38. Art.73 pre-incident evidence (Orange/Red events) stored on EU-native infrastructure if possible
- 39. Art.21 response protocol documented — PMM data producible to MSA without delay
- 40. QMS records retention process includes PMM data (10-year retention enforced)
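Items 35–38 can also be enforced mechanically: tag each PMM record with its storage jurisdiction and flag any pre-Art.73 evidence (Orange/Red tier) sitting on infrastructure a US production order could reach. This is a sketch; the record fields and jurisdiction labels (`eu-native`, `us-cloud`) are illustrative, not a standard taxonomy.

```python
from dataclasses import dataclass


@dataclass
class PMMRecord:
    """Illustrative PMM evidence record with a storage-jurisdiction tag."""
    record_id: str
    severity: str      # "green" | "yellow" | "orange" | "red"
    jurisdiction: str  # e.g. "eu-native", "us-cloud", "multi-region"


def cloud_act_exposed(records: list[PMMRecord]) -> list[str]:
    """IDs of Orange/Red evidence stored outside EU-native infrastructure
    (checklist item 38: keep pre-Art.73 evidence single-regime if possible)."""
    return [
        r.record_id
        for r in records
        if r.severity in ("orange", "red") and r.jurisdiction != "eu-native"
    ]
```

Running this check as part of the quarterly PMM review gives the dual-compellability risk map item 36 asks for, rather than a one-off assessment that drifts out of date.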
Cross-Article Intersection Matrix
| Art.72 Provision | Connected Article | Intersection | Action Required |
|---|---|---|---|
| Art.72(1) PMM obligation | Art.17 QMS | PMM system is a QMS component | Integrate PMM into QMS procedures |
| Art.72(1) proportionality | Art.9 risk classification | Higher risk → more intensive PMM | Calibrate PMM to Art.9 residual risk level |
| Art.72(2) continued compliance | Art.9–15 requirements | PMM evaluates all 7 requirements | Create conformity check for each article |
| Art.72(2) data collection | Art.30(5) deployer reports | Deployer data is primary PMM input | Design deployer reporting intake pipeline |
| Art.72(3) PMM plan | Annex IV Section 6 | PMM plan is technical documentation | Draft before market placement, maintain in Annex IV |
| Art.72(3) PMM plan | Art.43/Art.48 | PMM plan reviewed in conformity assessment | Include PMM plan review in Art.43 scope |
| Art.72(4) vigilance trigger | Art.73 | PMM detection → Art.73 clock | Automated detection = awareness start |
| Art.72(4) vigilance trigger | Art.20 corrective action | PMM nonconformity → Art.20 obligation | Corrective action protocol in PMM plan |
| Art.72(4) cross-provider | Art.74 MSA powers | MSAs share cross-provider risk signals | Process for responding to MSA Art.72(4) notifications |
| Art.72 general | Art.21 cooperation | PMM data producible to MSA | Archive PMM reports with instant retrieval |
| Art.72 general | Art.71 EUAIDB | EUAIDB number required in PMM plan | Register in EUAIDB before PMM goes operational |
| Art.72 general | CLOUD Act | PMM data on US cloud = dual compellability | EU-native infrastructure for PMM data |
Registration Timeline
| Milestone | Art.72 PMM Requirement |
|---|---|
| System design phase | PMM plan drafted — proportionality rationale, data sources, conformity checks |
| Pre-market conformity assessment (Art.43) | PMM plan reviewed as Annex IV Section 6 |
| EUAIDB registration (Art.71) | Registration number added to PMM plan |
| Market placement | PMM system operational — data collection active from day 1 |
| First deployer onboarding | Deployer reporting channel activated — Art.30(5) cooperation commences |
| 30 days post-placement | First PMM analysis cycle — Art.9–15 conformity check |
| Any serious incident detected | Art.73 clock started — PMM switches to incident documentation mode |
| Any Art.9–15 drift detected | Corrective action (Art.20) triggered + Art.9 update initiated |
| Substantial modification (Art.6(3)) | PMM plan updated before redeployment |
What Developers Should Do Now
Before market placement:
1. Draft the PMM plan now — treat it as a design artifact, not a post-launch document. Include in Annex IV Section 6.
2. Register in EUAIDB (Art.71) and add the registration number to the PMM plan. Without it, the plan is incomplete.
3. Calibrate monitoring intensity to your Annex III category and Art.9 residual risk level. Document the proportionality rationale.
At market placement:
4. Activate all PMM data pipelines from day 1. A PMM system that starts 30 days post-launch has a 30-day compliance gap.
5. Brief deployers on Art.30(5) obligations and provide them with your structured reporting channel.
Ongoing:
6. Run a quarterly Art.9–15 conformity review using PMM data — even if no incidents are detected.
7. Treat every Orange-tier PMM event as a pre-Art.73 event. Engage legal counsel within 4 hours.
8. Update the PMM plan when the AI system changes, when a new deployer context is added, or when an Art.73 event occurs.
See Also
- EU AI Act Art.74: Market Surveillance Authority Powers & Developer Obligations — Developer Guide — Art.74(2) empowers MSAs to request PMM data directly from providers; your Art.72 plan is the primary document they will demand
- EU AI Act Art.30: Post-Market Monitoring System for High-Risk AI — Developer Guide
- EU AI Act Art.73: Serious Incident Reporting for High-Risk AI Systems — Developer Guide
- EU AI Act Art.71: EU Database for High-Risk AI Systems (EUAIDB) — Developer Guide
- EU AI Act Art.9: Risk Management System for High-Risk AI — Formal Verification Guide