EU AI Act Art.30 Post-Market Monitoring for High-Risk AI: Developer Guide (2026)
EU AI Act Article 30 establishes the post-market monitoring (PMM) obligation for providers of high-risk AI systems. Once a high-risk AI system is placed on the market or put into service, Art.30 requires providers to actively collect, analyse, and act on operational performance data for the system's entire commercial lifecycle. PMM is not an optional quality assurance exercise — it is a legal obligation with direct links to Art.73 serious incident reporting, Art.9 risk management updates, and Art.17 quality management system (QMS) integration.
The practical consequence for developers: every high-risk AI system you build and distribute requires a Post-Market Monitoring Plan (PMMP) as part of the Annex IV technical documentation, plus an operational infrastructure to collect and analyse performance data from deployers. Art.30 compliance cannot be retrofitted after complaints arrive — it must be architected before market placement.
Fines for violation of Art.30 obligations fall under Art.99(4): up to €15 million or 3 % of global annual turnover, whichever is higher. For SMEs and start-ups, the lower of those two amounts applies as the cap (Art.99(6)). Market surveillance authorities under Art.74 can also order temporary withdrawal from service while a provider remedies PMM deficiencies. Art.30 compliance is therefore a precondition for sustained EU market access, not merely a box-ticking exercise.
This guide covers Art.30(1)–(5) obligations in full, the Art.30 × Art.9/12/73/17 intersection matrix, deployer cooperation obligations, the CLOUD Act jurisdiction risk for PMM data stored on US infrastructure, Python implementation for PostMarketMonitoringSystem, IncidentDetector, and PMM_PlanRecord, and the 40-item Art.30 compliance checklist.
Art.30 in the High-Risk AI Compliance Architecture
Art.30 is the lifecycle-extension article. All the pre-market obligations (Art.9 risk management, Art.10 data governance, Art.11 technical documentation, Art.17 QMS, Art.19 conformity assessment) focus on getting the AI system compliant before deployment. Art.30 extends the compliance obligation into production and creates a feedback loop back to Art.9:
| Phase | Key Articles | Art.30 Role |
|---|---|---|
| Design & development | Art.9, Art.10, Art.11 | PMM plan drafted as part of Annex IV |
| Conformity assessment | Art.19, Art.43, Art.48, Art.49 | PMMP reviewed as part of conformity documentation |
| Market placement / put into service | Art.16, Art.22 | PMM system activated |
| Operational lifecycle | Art.30 | Active data collection, analysis, Art.9 updates |
| Serious incident | Art.20, Art.73 | PMM triggers incident detection → Art.73 report within 15/10/2 days |
| Significant change / withdrawal | Art.23, Art.20(3) | PMM findings trigger provider response |
Art.30 compliance therefore spans the entire post-deployment phase and feeds back into pre-market obligations when findings require risk management updates (Art.9), corrective actions (Art.20), or re-conformity assessment (Art.19).
Art.30(1): Establishing the Post-Market Monitoring System
Art.30(1) imposes a direct obligation on providers of high-risk AI systems: they must establish and document a post-market monitoring system that is appropriate for the risk profile of the AI system and that actively gathers, documents, and analyses relevant data from deployers and other sources.
What "Appropriate to the Risk Profile" Means
Art.30(1) does not mandate a one-size-fits-all monitoring system. The PMM system must be scaled to:
| Risk Dimension | Low-Risk Profile Example | High-Risk Profile Example |
|---|---|---|
| Deployment scale | < 100 deployers, niche sector | > 10,000 deployers, critical infrastructure |
| User interaction | Occasional use by professionals | Continuous 24/7 use in safety-critical decisions |
| Output consequences | Advisory outputs reviewed by humans | Autonomous decisions with immediate legal or safety impact |
| Affected population | Industry specialists | General public, vulnerable groups |
| Annex III category | Education/training tools (category 3) | Biometric identification, law enforcement, medical devices |
For biometric identification systems (Annex III category 1), critical infrastructure AI (category 2), and law enforcement AI (category 6), the monitoring system must be substantially more comprehensive than for employment screening tools (category 4) or credit scoring systems (category 5).
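How "appropriate to the risk profile" might translate into monitoring intensity can be sketched as a simple scoring function. The dimensions follow the table above, but the weights, thresholds, and tier labels are illustrative assumptions, not anything prescribed by Art.30:

```python
def pmm_intensity(deployer_count: int, continuous_use: bool,
                  autonomous_outputs: bool, general_public: bool,
                  annex_iii_category: int) -> str:
    """Illustrative mapping from risk-profile dimensions to a PMM tier."""
    score = 0
    score += 2 if deployer_count > 10_000 else (1 if deployer_count > 100 else 0)
    score += 1 if continuous_use else 0
    score += 2 if autonomous_outputs else 0
    score += 1 if general_public else 0
    # Biometric ID (1), critical infrastructure (2) and law enforcement (6)
    # warrant the most comprehensive monitoring, per the table above.
    score += 2 if annex_iii_category in (1, 2, 6) else 0
    if score >= 6:
        return "comprehensive"  # real-time telemetry, daily analysis
    if score >= 3:
        return "enhanced"       # automated telemetry, weekly analysis
    return "baseline"           # scheduled deployer reports
```

A provider would document the chosen tier, and the reasoning behind it, in the PMMP itself.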
Minimum PMM System Requirements
Regardless of risk profile, Art.30(1) implies a minimum set of technical capabilities that providers must implement:
- Data ingestion pipeline: mechanisms to receive operational data from deployers (via API webhook, log forwarding, or structured report submission)
- Performance tracking: ongoing measurement of accuracy, precision, recall, and other KPIs specified in the PMMP
- Drift detection: monitoring for statistical drift between training distribution and production distribution
- Incident flag registry: structured capture of incidents, near-misses, and anomalous outputs reported by deployers
- Data retention: PMM data must be retained for 10 years under Art.18(1) (technical documentation), creating a storage infrastructure requirement
- Audit access: PMM data must be accessible to market surveillance authorities under Art.21(2) without undue delay
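The drift-detection capability in the list above is commonly implemented with a population stability index (PSI) comparing production inputs against the training baseline. A minimal pure-Python sketch; the 0.2 alert threshold and equal-width binning are conventional choices, not Art.30 requirements:

```python
import math

def population_stability_index(expected: list[float], actual: list[float],
                               bins: int = 10) -> float:
    """PSI between a baseline (training) sample and a production sample.
    A PSI above ~0.2 is a common (illustrative) drift-alert trigger."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket_shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # small floor avoids log(0) for empty buckets
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An identical distribution yields a PSI near zero; a fully shifted one yields a large value that should route into the incident flag registry.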
Art.30(2): Active Data Collection from Deployers
Art.30(2) specifies that the PMM system must actively collect data on the operational performance of the high-risk AI system. "Active" is the operative word — Art.30 does not allow providers to wait passively for deployer feedback or incident reports. The provider must implement structured data collection.
Types of Operational Data Art.30(2) Requires
| Data Category | Description | Collection Method |
|---|---|---|
| Performance metrics | Accuracy, error rates, false positive/negative rates in production | Automated telemetry from deployer integration |
| User feedback | End-user complaints, corrections, overrides of AI outputs | Structured feedback forms, deployer-submitted reports |
| Edge case outputs | Outputs that fall outside expected performance ranges | Automated outlier detection in deployer pipelines |
| Demographic performance | Disaggregated performance across gender, age, ethnicity (for bias monitoring under Art.9(7)) | Deployer-submitted demographic performance data |
| Environmental drift | Changes in input data distribution over time | Statistical drift detection pipelines |
| Incident signals | Near-misses, anomalous behaviours, close calls before serious incident threshold | Structured incident flag submissions from deployers |
Provider vs Deployer Responsibility for Data Collection
Art.30 creates a layered data collection architecture. Providers are responsible for establishing the system and collecting data; deployers are obligated to cooperate under Art.30(5). The practical split:
| What Provider Does | What Deployer Does |
|---|---|
| Provides API endpoints or log-submission tooling for performance data | Integrates data submission into their operational pipeline |
| Defines what data to submit and in what format (as part of Instructions for Use, Art.13) | Submits structured reports on the schedule specified by provider |
| Analyses aggregated data across all deployments | Reports incidents and anomalies as they occur |
| Initiates Art.9 risk updates when findings warrant | Flags when performance falls below thresholds specified in IFU |
The provider cannot delegate the PMM analysis obligation to deployers: the analysis, conclusions, and risk management updates remain with the provider. Deployers are data contributors, not analysis owners.
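Because the provider defines the submission format (via the IFU), it can enforce that format at ingestion with a validator. A sketch; the field names and required structure below are illustrative assumptions, not a format prescribed by the Act:

```python
# Hypothetical required fields for a deployer PMM submission.
REQUIRED_FIELDS = {
    "deployer_id": str,
    "reporting_period": str,
    "metrics": dict,   # e.g. {"accuracy": 0.94, "override_rate": 0.03}
    "incidents": list, # may be empty, but must be present
}

def validate_deployer_report(report: dict) -> list[str]:
    """Return a list of validation errors; an empty list means acceptable."""
    errors = []
    for field_name, expected_type in REQUIRED_FIELDS.items():
        if field_name not in report:
            errors.append(f"missing required field: {field_name}")
        elif not isinstance(report[field_name], expected_type):
            errors.append(f"{field_name}: expected {expected_type.__name__}")
    # metric values must be numeric so they can feed KPI threshold checks
    metrics = report.get("metrics", {})
    if isinstance(metrics, dict):
        for name, value in metrics.items():
            if not isinstance(value, (int, float)):
                errors.append(f"metric {name}: value must be numeric")
    return errors
```

Rejecting malformed submissions early keeps the downstream analysis pipeline, which remains the provider's obligation, working on clean data.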
Art.30(3): The Post-Market Monitoring Plan (PMMP)
Art.30(3) requires that the post-market monitoring plan is documented and forms part of the Annex IV technical documentation. This means the PMMP must exist before the conformity assessment under Art.43, before the declaration of conformity under Art.48, and before CE marking under Art.49.
Mandatory PMMP Content
The PMMP must document:
| PMMP Element | What to Include | Why Required |
|---|---|---|
| Monitoring objectives | Which performance metrics are monitored and why | Art.9(9) risk management integration |
| KPI thresholds | Quantitative thresholds that trigger corrective action | Art.20 corrective action trigger definition |
| Data collection schedule | Frequency of data collection from deployers | Art.30(2) active collection requirement |
| Deployer reporting obligations | What deployers must submit and when | Art.30(5) deployer cooperation specification |
| Serious incident criteria | Definition of what constitutes a serious incident under Art.73 | Art.73 reporting trigger |
| Data retention policy | How long PMM data is retained and where | Art.18(1) 10-year retention requirement |
| Escalation procedures | Who is responsible for PMM analysis and escalation decisions | Art.17 QMS integration |
| Update triggers | Events that trigger PMMP revision (major drift, new deployer category, significant change) | Lifecycle management |
The PMMP is a living document. Each time a major finding triggers Art.9 risk management updates, the PMMP must also be reviewed and updated to reflect any change in monitoring scope or thresholds.
PMMP as Part of Annex IV Documentation
Annex IV of the EU AI Act lists the content of technical documentation for high-risk AI systems. Section 8 of Annex IV explicitly requires the technical documentation to include "a post-market monitoring plan." This means:
- The PMMP is not a separate document submitted independently — it is embedded in the Annex IV technical documentation
- It must be available for review by notified bodies during conformity assessment (Art.43)
- It must be made available to market surveillance authorities under Art.21 upon request
- Any material change to the PMMP may trigger re-assessment under Art.23 (significant changes)
Art.30(4): Sector-Specific Alignment
Art.30(4) establishes that for AI systems that are safety components of products regulated by sector-specific EU legislation (medical devices under MDR/IVDR, machinery under the Machinery Regulation, aviation under EASA rules), the PMM system must align with the post-market surveillance requirements of the applicable sectoral law.
Dual PMM Regimes for Safety Component AI
| Sector | Relevant Legislation | Key PMM Requirement | AI Act Art.30 Addition |
|---|---|---|---|
| Medical AI (diagnostic, treatment) | MDR (2017/745), IVDR (2017/746) | PMCF plan, trend reports, PSUR | Expanded to capture AI-specific metrics (model drift, performance disaggregation) |
| Autonomous vehicles / ADAS | UNECE Regulation 155/157, Regulation (EU) 2022/1426 | Incident reporting to national approval authority | AI Act Art.73 serious incident reporting in parallel |
| Aviation AI (ATC, flight control) | EASA Regulation 2018/1139 | Safety occurrence reporting | EU AI Act PMM data must be reconcilable with EASA safety data |
| Industrial machinery AI | Machinery Regulation 2023/1230 | Economic operator incident reporting | AI Act PMM layered on top of Machinery Regulation reporting |
The alignment obligation in Art.30(4) does not create a single unified PMM system — it creates an obligation not to contradict the sector-specific PMM requirements. Where sector law is more stringent than Art.30, the sector law prevails. Where sector law is silent on AI-specific issues (model drift, distributional shift), Art.30 adds obligations that the sector law does not address.
For providers shipping AI into multiple sectors, the PMMP must explicitly map each deployment context to the applicable sector-specific requirements and document how the Art.30 PMM complements (rather than conflicts with) the sectoral PMM obligations.
Art.30(5): Deployer Cooperation Obligation
Art.30(5) creates an explicit obligation for deployers of high-risk AI systems to cooperate with providers in the PMM data collection process. This is a significant provision because it makes the deployer an active participant in the provider's compliance process, not merely a passive user.
What Deployer Cooperation Means in Practice
| Cooperation Obligation | Deployer Must | Provider Must Specify In IFU |
|---|---|---|
| Performance data submission | Submit structured performance reports on schedule | What data, what format, what schedule |
| Incident reporting to provider | Report serious incidents and near-misses immediately | Definition of reportable incident, reporting channel |
| Anomaly flagging | Flag unexpected outputs, unusual behaviour | What counts as anomaly, escalation threshold |
| Access to logs | Provide access to Art.12 automatically generated logs | Log format, retention period, access method |
| Cooperation with investigations | Allow provider access to deployment environment for investigation | Investigation rights scope |
Art.30(5) creates contractual and regulatory obligations simultaneously. Providers should include PMM cooperation obligations in deployer agreements (commercial contracts), because breach of the cooperation obligation by a deployer does not relieve the provider of their Art.30 PMM obligation. The provider must still maintain a functional PMM system even if a particular deployer fails to cooperate — which means fallback data collection methods are needed.
Instructions for Use (IFU) as the PMM Bridge
Art.13 requires providers to produce Instructions for Use (IFU) that enable deployers to understand and operate the AI system appropriately. Art.30(5) turns the IFU into the PMM cooperation specification — the IFU must include:
- What performance data the deployer must submit
- What constitutes a reportable serious incident under Art.73
- The reporting schedule and submission mechanisms
- Deployer's Art.12 log retention obligations (minimum 6 months under Art.26(6))
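The four IFU elements above can be assembled into a machine-readable PMM-cooperation section so the same specification drives both the deployer-facing document and the ingestion pipeline. A minimal sketch; the structure and field names are assumptions:

```python
def ifu_pmm_section(metrics: list[str], incident_criteria: list[str],
                    schedule: str, submission_endpoint: str,
                    log_retention_months: int = 6) -> dict:
    """Assemble the PMM-cooperation section of the IFU (illustrative shape)."""
    return {
        "performance_data_to_submit": metrics,
        "serious_incident_definitions": incident_criteria,  # Art.73 triggers
        "reporting_schedule": schedule,
        "submission_mechanism": submission_endpoint,
        "deployer_log_retention_months": log_retention_months,  # Art.12 logs
    }
```

Generating the section programmatically also makes it easy to diff IFU versions when the PMMP is updated.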
Art.30 × Art.9: Risk Management Integration
Art.30 does not operate in isolation from the pre-market risk management system — it feeds directly back into Art.9. The closed-loop relationship:
Art.9 Risk Assessment (pre-market)
↓ identifies risks
Art.30 PMM System (post-market)
↓ detects new/evolved risks
Art.9 Risk Management Update
↓ may trigger
Art.23 Significant Change Assessment
↓ may trigger
Art.19 Re-conformity Assessment
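The chain above is gated: each step arises only if the previous one is triggered. A small helper makes that explicit; the boolean gates are an illustrative simplification of what is in practice a documented assessment at each stage:

```python
# Labels are the articles named in the flow above.
ESCALATION_CHAIN = [
    "Art.9 risk management update",
    "Art.23 significant change assessment",
    "Art.19 re-conformity assessment",
]

def escalation_steps(finding_requires_risk_update: bool,
                     is_significant_change: bool,
                     invalidates_conformity: bool) -> list[str]:
    """Return the follow-on obligations triggered by a PMM finding."""
    steps = []
    if finding_requires_risk_update:
        steps.append(ESCALATION_CHAIN[0])
        if is_significant_change:
            steps.append(ESCALATION_CHAIN[1])
            if invalidates_conformity:
                steps.append(ESCALATION_CHAIN[2])
    return steps
```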
Art.30 PMM Findings That Trigger Art.9 Updates
| PMM Finding | Art.9 Response Required | Art.30 Action |
|---|---|---|
| Model drift exceeds threshold | Update risk assessment to reflect reduced reliability | Update PMMP KPI thresholds |
| New bias pattern detected across demographic group | Update Art.9(7) bias risk assessment | Expand demographic monitoring scope |
| Unexpected failure mode in production not in training | Add new risk to Art.9 risk register | Extend PMM monitoring scope to capture the new failure mode |
| Deployer misuse pattern identified | Update foreseeable misuse scenarios in Art.9 | Add misuse prevention to IFU |
| Performance degradation in specific deployment context | Context-specific risk assessment | Narrow permitted deployment contexts in IFU |
The key developer implication: Art.30 PMM findings create a legal obligation to update Art.9 documentation. Teams that treat PMM as "monitoring for bugs" rather than "monitoring for regulatory triggers" will find themselves non-compliant when MSAs request Art.9 documentation that does not reflect production experience.
Art.30 × Art.12: Logging as PMM Data Source
Art.12 requires providers of high-risk AI systems to build in logging capabilities that automatically record AI system operation. Art.12 logs are the primary technical data source for the Art.30 PMM system.
What Art.12 Logs Must Capture
| Log Category | Art.12 Requirement | Art.30 PMM Use |
|---|---|---|
| Input data | Log the data used by the AI system for each decision | Performance drift detection against training distribution |
| Output data | Log the AI system's output for each decision | Accuracy tracking, anomaly detection |
| Decision identifiers | Unique identifier per decision | Traceability for incident investigation |
| Context variables | Operating conditions (time, environment, user context) | Context-specific performance analysis |
| Human oversight actions | Cases where human overrides or corrects AI output | Override rate tracking as performance metric |
Art.12 logs retained for 6 months minimum by deployers (Art.26(6)) must feed into the provider's Art.30 PMM system. The provider's data collection architecture must account for:
- Secure log transfer: Art.12 logs may contain personal data — transfer must be GDPR-compliant (data minimisation, pseudonymisation, purpose limitation)
- Aggregation without re-identification: providers typically need aggregated statistics, not individual decision logs
- Log format standardisation: providers must specify the Art.12 log format in the IFU so all deployers produce compatible PMM input data
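One way to satisfy data minimisation when moving Art.12 logs into the PMM pipeline is to aggregate and pseudonymise at the deployer side before transfer, so no individual decision record or raw identifier leaves the deployment. A sketch assuming a simple log-record shape (the `confidence` and `human_override` fields are hypothetical):

```python
import hashlib
import statistics

def aggregate_logs_for_pmm(logs: list[dict], deployer_id: str) -> dict:
    """Reduce per-decision Art.12 logs to aggregate PMM statistics.
    The provider receives only aggregates plus a one-way pseudonym."""
    scores = [rec["confidence"] for rec in logs]
    overrides = sum(1 for rec in logs if rec.get("human_override"))
    return {
        # one-way pseudonym: the provider never sees the raw deployer ID
        "deployer_pseudonym": hashlib.sha256(deployer_id.encode()).hexdigest()[:16],
        "decision_count": len(logs),
        "mean_confidence": statistics.mean(scores),
        "override_rate": overrides / len(logs),
    }
```

Individual decision logs stay with the deployer under its Art.12 retention obligation; only the aggregates cross into the provider's PMM store.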
Art.30 × Art.73: Serious Incident Reporting Trigger
Art.73 creates a mandatory serious incident reporting obligation that is directly triggered by Art.30 PMM findings. A "serious incident" under Art.73 includes any incident where the AI system has caused or contributed to:
- Death of a person
- Serious harm to health (hospitalisation, permanent injury)
- Serious damage to critical infrastructure
- Breach of fundamental rights obligations
- Serious damage to property or the environment
PMM → Art.73 Trigger Timeline
| PMM Finding | Art.73 Response | Reporting Timeline |
|---|---|---|
| Death of a person detected | Notify national MSA immediately on establishing the causal link | 10 days from provider awareness at the latest |
| Serious and irreversible disruption of critical infrastructure | Notify national MSA and ICSMS | 2 days from provider awareness |
| Serious harm to health or breach of fundamental rights obligations | Notify national MSA | 15 days from provider awareness |
| Non-serious harm / degraded performance | No Art.73 notification; document in PMM | — |
The tightest Art.73 notification windows are exceptionally short. This requires the PMM system to have real-time incident detection capabilities — a batch-processing PMM that analyses weekly log snapshots will fail to meet the Art.73 timeline if a serious incident occurs between batches.
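Deadline tracking for these windows can be sketched as a working-day calculator, matching the "working days" framing used in this guide. Public holidays are ignored here; a real implementation would use the relevant Member State's holiday calendar:

```python
from datetime import date, timedelta

def working_day_deadline(awareness: date, working_days: int) -> date:
    """Add N working days (Mon-Fri) to the provider-awareness date."""
    d = awareness
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d
```

Awareness late on a Friday illustrates why the calculation matters: a 2-day window then lands on the following Tuesday, not Sunday.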
Art.73 Notification Content Requirements
The serious incident report must include:
- Nature of the incident and its consequences
- AI system identifier (Annex IV reference ID, if applicable EU database entry)
- Deployer identity (if not the same as provider)
- Corrective actions already taken
- Preliminary root cause assessment
- If the incident is ongoing: real-time updates as investigation progresses
Providers whose Art.30 PMM systems are inadequate will fail to detect serious incidents in time to meet Art.73 timelines — creating a compound violation (Art.30 + Art.73) that dramatically increases enforcement exposure.
Art.30 × Art.17: QMS Integration
Art.17 requires providers of high-risk AI systems to implement a Quality Management System (QMS). Art.30 PMM is a core component of the Art.17 QMS — specifically, the QMS must include:
- Post-market monitoring procedures (explicitly required in Art.17(1)(h))
- Incident reporting procedures that connect PMM findings to Art.73 notifications
- Processes for updating technical documentation when PMM findings are significant
- Audit trails that demonstrate PMM system was operational throughout the product lifecycle
For developers implementing Art.17-compliant QMS frameworks using ISO/IEC 42001:2023 (AI Management Systems), the PMM requirements map directly to:
| ISO/IEC 42001 Control | Art.30 PMM Mapping |
|---|---|
| 8.6 (AI system monitoring) | Art.30(1) PMM system establishment |
| 8.7 (AI system operation) | Art.30(2) active data collection |
| 9.1 (Monitoring, measurement, analysis) | Art.30(3) PMMP documentation |
| 10.2 (Nonconformity and corrective action) | Art.30 → Art.9 update loop |
CLOUD Act Jurisdiction Risk for PMM Data
Every PMM data collection system that runs on US cloud infrastructure (AWS, Azure, GCP) or uses US-based SaaS tools creates a CLOUD Act risk for EU providers. The CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 18 U.S.C. § 2713) allows US law enforcement to compel US cloud providers to produce data stored anywhere in the world — including PMM data collected from EU-deployed high-risk AI systems.
What PMM Data Is at Risk
| PMM Data Category | Sensitivity | CLOUD Act Risk |
|---|---|---|
| Aggregate performance statistics | Low | Low commercial sensitivity but reveals deployment scale |
| Incident reports and near-miss logs | High | Contains details of AI system failures in production |
| Deployer identity data | Medium | Reveals customer list — commercial intelligence |
| Art.12 logs with input/output data | Very high | May contain personal data of EU users (GDPR collision) |
| Art.9 risk management updates | High | Reveals known vulnerabilities in the AI system |
PMM data stored on US cloud infrastructure is subject to CLOUD Act compellability even if stored in an EU-region data centre operated by a US company. The CLOUD Act does not contain a "stored in Europe" exemption — it applies based on who controls the data, not where the data is physically stored.
EU AI Act Art.30 × CLOUD Act Double Compellability
When a serious incident occurs, the EU AI Act requires the provider to cooperate with EU market surveillance authorities under Art.21. If PMM data is stored on US cloud infrastructure, the same data may simultaneously be compellable by:
- EU MSA under Art.21 — MSA requests incident reports and PMM logs
- US law enforcement under CLOUD Act — compelled from the US cloud provider
Even where the provider fully complies with both Art.30 (PMM system) and Art.21 (MSA cooperation) using EU-accessible PMM data, the US government can simultaneously access the same data through the cloud provider, bypassing the EU investigation entirely.
EU-Native Infrastructure as Compliance Architecture
Providers building high-risk AI systems for EU markets can eliminate CLOUD Act exposure for PMM data by using EU-incorporated infrastructure that is not subject to US jurisdiction:
- EU-native PaaS (e.g., sota.io): Operated under EU corporate law, not subject to CLOUD Act compellability
- PMM data stays within EU jurisdiction: Art.21 MSA access is the only compellability mechanism
- GDPR-compliant by architecture: No US data transfer for PMM data containing personal data from EU users
- Single audit trail: Art.30 PMM data and Art.73 incident reports are produced under EU law with no US parallel-access risk
Providers using US infrastructure face the additional operational burden of maintaining dual CLOUD Act / EU MSA response protocols — a significant compliance overhead that EU-native infrastructure eliminates by design.
Python Implementation
PMM_PlanRecord: Documentation Model for the PMMP
from dataclasses import dataclass
from datetime import date
from typing import Optional
from enum import Enum
class AnnexIIICategory(Enum):
BIOMETRIC_ID = "1_biometric_identification"
CRITICAL_INFRA = "2_critical_infrastructure"
EDUCATION = "3_education_training"
EMPLOYMENT = "4_employment"
ESSENTIAL_SERVICES = "5_essential_services"
LAW_ENFORCEMENT = "6_law_enforcement"
MIGRATION_ASYLUM = "7_migration_asylum"
JUSTICE = "8_justice_democratic"
@dataclass
class PMM_KPI:
metric_name: str
baseline_value: float
alert_threshold: float # triggers Art.20 corrective action review
serious_incident_threshold: Optional[float] # triggers Art.73 reporting
measurement_frequency: str # e.g. "daily", "weekly", "per_decision"
collection_method: str # e.g. "automated_telemetry", "deployer_report"
@dataclass
class PMM_PlanRecord:
"""Post-Market Monitoring Plan as required by EU AI Act Art.30(3) / Annex IV Section 8."""
ai_system_id: str
system_name: str
annex_iii_categories: list[AnnexIIICategory]
pmm_plan_version: str
plan_date: date
# Art.30(2): Data collection specification
kpis: list[PMM_KPI]
data_collection_schedule: str # e.g. "weekly deployer reports + daily automated telemetry"
deployer_reporting_obligations: str # what deployers must submit per Art.30(5)
# Art.30(3): Documentation
pmmp_location_in_annex_iv: str # section reference in technical documentation
# Art.30(4): Sector alignment
sector_specific_legislation: list[str] # e.g. ["MDR 2017/745", "Machinery Regulation 2023/1230"]
sector_pmm_alignment_notes: str
# Art.73 integration
serious_incident_criteria: list[str] # definitions of reportable incidents
art73_reporting_timeline_days: int = 2 # shortest Art.73 window (2 days) as conservative default
# Retention
data_retention_years: int = 10 # Art.18(1) minimum
def to_annex_iv_section(self) -> dict:
return {
"section": "8_post_market_monitoring_plan",
"ai_system_id": self.ai_system_id,
"version": self.pmm_plan_version,
"plan_date": str(self.plan_date),
"annex_iii_categories": [c.value for c in self.annex_iii_categories],
"kpi_count": len(self.kpis),
"serious_incident_criteria": self.serious_incident_criteria,
"art73_notification_window_days": self.art73_reporting_timeline_days,
"data_retention_years": self.data_retention_years,
"sector_alignment": self.sector_specific_legislation,
}
PostMarketMonitoringSystem: PMM Data Collection and Analysis
import hashlib
from datetime import datetime, timedelta
@dataclass
class PMM_DataPoint:
deployer_id: str # pseudonymised deployer identifier
timestamp: datetime
metric_name: str
metric_value: float
context: dict # deployment context variables
@dataclass
class PMM_IncidentFlag:
flag_id: str
deployer_id: str
timestamp: datetime
description: str
severity: str # "near_miss", "non_serious", "potentially_serious", "serious"
art73_reportable: bool
investigation_status: str # "open", "under_investigation", "resolved", "reported_to_msa"
class PostMarketMonitoringSystem:
"""Art.30 PMM system for high-risk AI providers."""
def __init__(self, plan: PMM_PlanRecord):
self.plan = plan
self.data_points: list[PMM_DataPoint] = []
self.incident_flags: list[PMM_IncidentFlag] = []
self.kpi_baselines = {kpi.metric_name: kpi for kpi in plan.kpis}
def ingest_deployer_report(
self,
raw_deployer_id: str,
report_data: dict,
report_timestamp: datetime
) -> list[str]:
"""
Process structured deployer PMM report.
Returns list of triggered alerts (Art.20 or Art.73 level).
"""
deployer_id = self._pseudonymise(raw_deployer_id)
alerts = []
for metric_name, metric_value in report_data.get("metrics", {}).items():
point = PMM_DataPoint(
deployer_id=deployer_id,
timestamp=report_timestamp,
metric_name=metric_name,
metric_value=float(metric_value),
context=report_data.get("context", {}),
)
self.data_points.append(point)
if metric_name in self.kpi_baselines:
kpi = self.kpi_baselines[metric_name]
if float(metric_value) < kpi.alert_threshold:
alerts.append(f"ART20_ALERT:{metric_name}={metric_value:.3f} below threshold {kpi.alert_threshold}")
if kpi.serious_incident_threshold and float(metric_value) < kpi.serious_incident_threshold:
alerts.append(f"ART73_CANDIDATE:{metric_name}={metric_value:.3f} below serious threshold {kpi.serious_incident_threshold}")
for raw_incident in report_data.get("incidents", []):
self._process_incident_flag(deployer_id, raw_incident, report_timestamp, alerts)
return alerts
def _process_incident_flag(
self, deployer_id: str, incident_data: dict, timestamp: datetime, alerts: list
):
flag_id = hashlib.sha256(f"{deployer_id}{timestamp}{incident_data.get('description', '')}".encode()).hexdigest()[:12]
severity = self._assess_severity(incident_data)
art73_reportable = severity == "serious"
flag = PMM_IncidentFlag(
flag_id=flag_id,
deployer_id=deployer_id,
timestamp=timestamp,
description=incident_data.get("description", ""),
severity=severity,
art73_reportable=art73_reportable,
investigation_status="open",
)
self.incident_flags.append(flag)
if art73_reportable:
deadline = timestamp + timedelta(days=self.plan.art73_reporting_timeline_days)
alerts.append(f"ART73_REPORT_REQUIRED:flag_id={flag_id},deadline={deadline.date()}")
def _assess_severity(self, incident_data: dict) -> str:
description = incident_data.get("description", "").lower()
if any(kw in description for kw in ["death", "fatal", "serious injury", "hospitalisation"]):
return "serious"
if any(kw in description for kw in ["injury", "harm", "fundamental rights", "infrastructure"]):
return "potentially_serious"
if any(kw in description for kw in ["near miss", "close call", "anomaly", "unexpected"]):
return "near_miss"
return "non_serious"
def _pseudonymise(self, raw_id: str) -> str:
return hashlib.sha256(raw_id.encode()).hexdigest()[:16]
def art9_update_report(self) -> dict:
"""Generate report of PMM findings that require Art.9 risk management updates."""
open_art73 = [f for f in self.incident_flags if f.art73_reportable and f.investigation_status == "open"]
breaches = [p for p in self.data_points if p.metric_name in self.kpi_baselines
and p.metric_value < self.kpi_baselines[p.metric_name].alert_threshold]
return {
"ai_system_id": self.plan.ai_system_id,
"report_generated": datetime.utcnow().isoformat(),
"total_data_points": len(self.data_points),
"total_incident_flags": len(self.incident_flags),
"art73_reportable_open": len(open_art73),
"art73_flag_ids": [f.flag_id for f in open_art73],
"kpi_threshold_breaches": len(breaches),
"art9_update_required": len(open_art73) > 0 or len(breaches) > 3,
"recommended_action": (
"IMMEDIATE: File Art.73 report and initiate Art.9 review"
if len(open_art73) > 0
else "SCHEDULED: Review KPI breaches in next Art.9 cycle"
if len(breaches) > 0
else "OK: No Art.9 updates triggered"
),
}
IncidentDetector: Real-Time Serious Incident Detection for Art.73
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional
@dataclass
class SeriousIncidentReport:
"""Draft Art.73 serious incident report for MSA submission."""
incident_id: str
ai_system_id: str
ai_system_name: str
deployer_id: str # pseudonymised
incident_timestamp: datetime
provider_awareness_timestamp: datetime
art73_reporting_deadline: datetime
incident_description: str
consequences: list[str]
preliminary_root_cause: str
corrective_actions_taken: list[str]
investigation_status: str
report_submitted: bool = False
submission_timestamp: Optional[datetime] = None
class IncidentDetector:
"""Real-time serious incident detection feeding Art.73 reporting pipeline."""
SERIOUS_INCIDENT_KEYWORDS = [
"death", "fatal", "killed", "hospitalised", "hospitalized",
"serious injury", "serious harm", "fundamental rights breach",
"critical infrastructure damage", "mass surveillance",
]
NEAR_MISS_KEYWORDS = [
"near miss", "close call", "potential harm", "significant error",
"system failure", "unexpected output", "bias detected",
]
def __init__(self, ai_system_id: str, ai_system_name: str, reporting_days: int = 2):
self.ai_system_id = ai_system_id
self.ai_system_name = ai_system_name
self.reporting_days = reporting_days
self.pending_reports: list[SeriousIncidentReport] = []
self.overdue_reports: list[SeriousIncidentReport] = []
    def evaluate_incident(
        self,
        incident_description: str,
        deployer_id: str,
        incident_timestamp: datetime,
    ) -> Optional[SeriousIncidentReport]:
        """
        Evaluate whether an incident qualifies as an Art.73 serious incident.
        Returns a SeriousIncidentReport if reportable, None otherwise.
        """
        description_lower = incident_description.lower()
        is_serious = any(kw in description_lower for kw in self.SERIOUS_INCIDENT_KEYWORDS)
        if not is_serious:
            return None
        awareness = datetime.utcnow()
        deadline = awareness + timedelta(days=self.reporting_days)
        report = SeriousIncidentReport(
            incident_id=f"ART73-{self.ai_system_id}-{awareness.strftime('%Y%m%d%H%M%S')}",
            ai_system_id=self.ai_system_id,
            ai_system_name=self.ai_system_name,
            deployer_id=deployer_id,
            incident_timestamp=incident_timestamp,
            provider_awareness_timestamp=awareness,
            art73_reporting_deadline=deadline,
            incident_description=incident_description,
            consequences=[],
            preliminary_root_cause="Under investigation",
            corrective_actions_taken=[],
            investigation_status="open",
        )
        self.pending_reports.append(report)
        return report
    def check_reporting_deadlines(self) -> list[dict]:
        """Check which Art.73 reports are approaching or past deadline."""
        now = datetime.utcnow()
        deadline_alerts = []
        for report in self.pending_reports:
            if report.report_submitted:
                continue
            hours_remaining = (report.art73_reporting_deadline - now).total_seconds() / 3600
            status = (
                "OVERDUE" if hours_remaining < 0
                else "CRITICAL" if hours_remaining < 8
                else "WARNING" if hours_remaining < 24
                else "OK"
            )
            if status != "OK":
                deadline_alerts.append({
                    "incident_id": report.incident_id,
                    "deadline": report.art73_reporting_deadline.isoformat(),
                    "hours_remaining": round(hours_remaining, 1),
                    "status": status,
                    "action": (
                        "FILE ART.73 REPORT IMMEDIATELY"
                        if status in ("OVERDUE", "CRITICAL")
                        else "File Art.73 report"
                    ),
                })
            if status == "OVERDUE" and report not in self.overdue_reports:
                self.overdue_reports.append(report)
        return deadline_alerts
    def compliance_summary(self) -> dict:
        # Refresh deadline status first so overdue counts reflect the current clock.
        alerts = self.check_reporting_deadlines()
        return {
            "ai_system_id": self.ai_system_id,
            "pending_art73_reports": len([r for r in self.pending_reports if not r.report_submitted]),
            "submitted_art73_reports": len([r for r in self.pending_reports if r.report_submitted]),
            "overdue_reports": len(self.overdue_reports),
            "overdue_incident_ids": [r.incident_id for r in self.overdue_reports],
            "reporting_deadline_days": self.reporting_days,
            "deadline_alerts": alerts,
        }
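The single `reporting_days` default above collapses Art.73's tiered deadlines into the strictest one. The tiers (15 days generally, 10 days for death, 2 days for widespread infringement or critical-infrastructure disruption) can instead be encoded explicitly — a minimal sketch, where the category keys are illustrative labels of this guide rather than terms from the Act's text:

```python
from datetime import datetime, timedelta, timezone

# Illustrative mapping of incident categories to Art.73 notification deadlines
# (days after provider awareness). Category names are this guide's own labels.
ART73_DEADLINE_DAYS = {
    "death": 10,
    "widespread_infringement": 2,
    "critical_infrastructure": 2,
    "other_serious_incident": 15,
}

def art73_deadline(category: str, awareness: datetime) -> datetime:
    """Latest permissible Art.73 report timestamp for a given incident category."""
    days = ART73_DEADLINE_DAYS.get(category, 15)  # unknown categories fall back to the general 15-day tier
    return awareness + timedelta(days=days)
```

Wiring this into `IncidentDetector` would mean classifying each incident at `evaluate_incident` time instead of applying one blanket `reporting_days` value.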
Art.30 Compliance Checklist
Practical 40-item checklist for high-risk AI providers implementing Art.30 post-market monitoring:
PMM System Establishment (Art.30(1))
- 1. Established a post-market monitoring system appropriate to the risk profile of the high-risk AI system
- 2. Documented the PMM system architecture in the Annex IV technical documentation
- 3. Assigned organisational responsibility for PMM system operation (Art.17 QMS integration)
- 4. Implemented data ingestion pipeline for receiving deployer performance data
- 5. Implemented automated anomaly detection for real-time performance monitoring
- 6. Implemented 10-year data retention for all PMM data (Art.18(1))
- 7. Implemented audit access to PMM data for market surveillance authorities (Art.21)
- 8. Validated PMM system operation before first market placement
Active Data Collection (Art.30(2))
- 9. Defined structured deployer performance report format in Instructions for Use
- 10. Specified data collection schedule (frequency and submission mechanism) in IFU
- 11. Implemented deployer report ingestion with automated KPI extraction
- 12. Implemented drift detection comparing production data distribution to training distribution
- 13. Implemented demographic performance disaggregation monitoring (Art.9(7) bias detection)
- 14. Implemented edge case / outlier output capture system
- 15. Implemented Art.12 log aggregation pipeline from deployers
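Item 12's drift detection can be as simple as a Population Stability Index (PSI) check over a monitored score or feature. A dependency-free sketch — the bin count and the 0.1/0.25 thresholds are conventional industry choices, not values mandated by Art.30:

```python
import math

def population_stability_index(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """PSI between a training-time baseline ('expected') and production data ('actual').

    Conventional reading: < 0.1 stable, 0.1-0.25 drifting, > 0.25 significant drift.
    """
    lo, hi = min(expected), max(expected)
    span = (hi - lo) or 1.0

    def bin_fractions(data: list[float]) -> list[float]:
        counts = [0] * bins
        for x in data:
            idx = int((x - lo) / span * bins)
            counts[min(max(idx, 0), bins - 1)] += 1  # clamp out-of-range production values
        return [max(c / len(data), 1e-6) for c in counts]  # floor avoids log(0)

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]       # uniform training-time scores
shifted = [0.5 + i / 200 for i in range(100)]  # production scores drifted upward
assert population_stability_index(baseline, baseline) < 0.1
assert population_stability_index(baseline, shifted) > 0.25  # would raise a PMM drift alert
```

A PSI breach is a PMM finding, not automatically an Art.73 incident — it typically feeds the Art.9 risk-management update triggers (items 37-39).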
PMM Plan Documentation (Art.30(3))
- 16. Drafted Post-Market Monitoring Plan (PMMP) as a section in Annex IV technical documentation
- 17. Defined all KPIs with baseline values and alert thresholds in PMMP
- 18. Defined serious incident criteria aligned with Art.73 definitions
- 19. Documented deployer reporting obligations in PMMP
- 20. Documented Art.73 notification timelines (15 days generally; 10 days for death; 2 days for widespread infringement or critical-infrastructure disruption) in PMMP
- 21. Documented Art.9 update triggers based on PMM findings in PMMP
- 22. Documented PMMP update procedure for significant changes (Art.23)
- 23. PMMP reviewed as part of conformity assessment (Art.43)
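Item 17's KPI definitions lend themselves to a declarative structure that the PMMP document and the monitoring code can share, so thresholds cannot silently diverge between the two. A hypothetical sketch — the KPI names, baselines, and thresholds are illustrative, not prescribed by Art.30:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PMMKpi:
    """One PMMP KPI: baseline from conformity-assessment testing, alert threshold for production."""
    name: str
    baseline: float
    alert_threshold: float
    direction: str  # "min" = alert when value falls below threshold, "max" = alert when above

    def breached(self, observed: float) -> bool:
        return observed < self.alert_threshold if self.direction == "min" else observed > self.alert_threshold

# Illustrative PMMP KPI register for a hypothetical classifier.
PMMP_KPIS = [
    PMMKpi("accuracy", baseline=0.94, alert_threshold=0.90, direction="min"),
    PMMKpi("false_positive_rate", baseline=0.03, alert_threshold=0.05, direction="max"),
    PMMKpi("demographic_parity_gap", baseline=0.02, alert_threshold=0.05, direction="max"),
]

def kpi_alerts(observed: dict[str, float]) -> list[str]:
    """Return names of KPIs breaching their PMMP alert threshold."""
    return [k.name for k in PMMP_KPIS if k.name in observed and k.breached(observed[k.name])]
```

As with drift detection, a KPI breach feeds the Art.9 update triggers; only findings meeting the serious-incident criteria (item 18) escalate to the Art.73 pipeline.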
Sector-Specific Alignment (Art.30(4))
- 24. Identified all sector-specific EU legislation applicable to the AI system's deployment context
- 25. Documented alignment between Art.30 PMMP and sector-specific PMM requirements
- 26. Identified where sector law is more stringent than Art.30 and applied higher standard
- 27. Implemented PMM data structure compatible with sector-specific reporting formats
Deployer Cooperation (Art.30(5))
- 28. Included PMM cooperation obligations in deployer commercial agreements
- 29. Specified deployer cooperation requirements in Instructions for Use (Art.13)
- 30. Established deployer incident reporting channel with documented submission process
- 31. Implemented fallback data collection mechanisms for non-cooperating deployers
- 32. Documented consequences of deployer non-cooperation in IFU
Art.73 Integration
- 33. Implemented real-time incident detection with automated Art.73 threshold assessment
- 34. Implemented Art.73 reporting deadline tracking (tiered 15/10/2-day countdown per incident category)
- 35. Prepared Art.73 report template aligned with MSA expected format
- 36. Tested end-to-end incident → detection → Art.73 report pipeline
Art.9 Feedback Loop
- 37. Implemented PMM → Art.9 update trigger mechanism
- 38. Documented process for updating Art.9 risk register based on PMM findings
- 39. Implemented process for reviewing PMMP following each Art.9 update
Infrastructure
- 40. Assessed PMM data storage infrastructure for CLOUD Act jurisdiction risk and implemented EU-native storage for PMM data containing EU user data
See Also
- EU AI Act Art.29 Obligations for Providers of General-Purpose AI Models: Developer Guide — GPAI provider obligations including technical documentation and downstream provider access
- EU AI Act Art.26 Obligations for Deployers: Developer Guide — Art.26 deployer cooperation obligations that feed Art.30 PMM data collection
- EU AI Act Art.27 Fundamental Rights Impact Assessment (FRIA): Developer Guide — Art.27 FRIA documentation that integrates with Art.30 PMM demographic performance tracking
- EU AI Act Art.22 EU Database of High-Risk AI Systems: Developer Guide — Art.22 registration obligations triggered by serious incidents detected in Art.30 PMM
- EU NIS2 + AI Act Critical Infrastructure Double Compliance Guide — How NIS2 Art.21 ICT risk management and EU AI Act Art.30 PMM create dual monitoring obligations for critical infrastructure AI