2026-04-16 · 12 min read

EU AI Act Art.30 Post-Market Monitoring for High-Risk AI: Developer Guide (2026)

EU AI Act Article 30 establishes the post-market monitoring (PMM) obligation for providers of high-risk AI systems. Once a high-risk AI system is placed on the market or put into service, Art.30 requires providers to actively collect, analyse, and act on operational performance data for the system's entire commercial lifecycle. PMM is not an optional quality assurance exercise — it is a legal obligation with direct links to Art.73 serious incident reporting, Art.9 risk management updates, and Art.17 quality management system (QMS) integration.

The practical consequence for developers: every high-risk AI system you build and distribute requires a Post-Market Monitoring Plan (PMMP) as part of the Annex IV technical documentation, plus an operational infrastructure to collect and analyse performance data from deployers. Art.30 compliance cannot be retrofitted after complaints arrive — it must be architected before market placement.

Fines for violation of Art.30 obligations fall under Art.99(3): €15 million or 3 % of global annual turnover (whichever is higher) for providers. For SMEs, the fine cap is €7.5 million. Market surveillance authorities under Art.74 can also order temporary withdrawal from service while a provider remedies PMM deficiencies. Art.30 compliance is therefore a precondition for sustained EU market access, not merely a box-ticking exercise.

This guide covers Art.30(1)–(5) obligations in full, the Art.30 × Art.9/12/73/17 intersection matrix, deployer cooperation obligations, the CLOUD Act jurisdiction risk for PMM data stored on US infrastructure, Python implementation for PostMarketMonitoringSystem, IncidentDetector, and PMM_PlanRecord, and the 40-item Art.30 compliance checklist.


Art.30 in the High-Risk AI Compliance Architecture

Art.30 is the lifecycle-extension article. All the pre-market obligations (Art.9 risk management, Art.10 data governance, Art.11 technical documentation, Art.17 QMS, Art.19 conformity assessment) focus on getting the AI system compliant before deployment. Art.30 extends the compliance obligation into production and creates a feedback loop back to Art.9:

| Phase | Key Articles | Art.30 Role |
|---|---|---|
| Design & development | Art.9, Art.10, Art.11 | PMM plan drafted as part of Annex IV |
| Conformity assessment | Art.19, Art.43, Art.48, Art.49 | PMMP reviewed as part of conformity documentation |
| Market placement / put into service | Art.16, Art.22 | PMM system activated |
| Operational lifecycle | Art.30 | Active data collection, analysis, Art.9 updates |
| Serious incident | Art.20, Art.73 | PMM triggers incident detection → 15-day / 2-day report |
| Significant change / withdrawal | Art.23, Art.20(3) | PMM findings trigger provider response |

Art.30 compliance therefore spans the entire post-deployment phase and feeds back into pre-market obligations when findings require risk management updates (Art.9), corrective actions (Art.20), or re-conformity assessment (Art.19).


Art.30(1): Establishing the Post-Market Monitoring System

Art.30(1) imposes a direct obligation on providers of high-risk AI systems: they must establish and document a post-market monitoring system that is appropriate for the risk profile of the AI system and that actively gathers, documents, and analyses relevant data from deployers and other sources.

What "Appropriate to the Risk Profile" Means

Art.30(1) does not mandate a one-size-fits-all monitoring system. The PMM system must be scaled to:

| Risk Dimension | Low-Risk Profile Example | High-Risk Profile Example |
|---|---|---|
| Deployment scale | < 100 deployers, niche sector | > 10,000 deployers, critical infrastructure |
| User interaction | Occasional use by professionals | Continuous 24/7 use in safety-critical decisions |
| Output consequences | Advisory outputs reviewed by humans | Autonomous decisions with immediate legal or safety impact |
| Affected population | Industry specialists | General public, vulnerable groups |
| Annex III category | Education/training tools (Art.6(2)) | Biometric identification, law enforcement, medical devices |

For biometric identification systems (Annex III category 1), critical infrastructure AI (category 2), and law enforcement AI (category 6), the monitoring system must be substantially more comprehensive than for employment screening tools (category 4) or credit scoring systems (category 5).
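The proportionality requirement can be encoded directly in the PMM configuration. A minimal sketch follows; the cadence values, the `pmm_review_cadence` helper, and the category strings are illustrative assumptions — Art.30(1) requires proportionality but prescribes no specific review frequencies:

```python
# Illustrative only: Art.30(1) does not prescribe review frequencies;
# these thresholds are assumptions a provider would tune in its PMMP.
HIGH_INTENSITY = {
    "1_biometric_identification",
    "2_critical_infrastructure",
    "6_law_enforcement",
}

def pmm_review_cadence(annex_iii_categories: set[str], deployer_count: int) -> str:
    """Pick a PMM analysis cadence proportionate to the risk profile:
    category sensitivity first, then deployment scale."""
    if annex_iii_categories & HIGH_INTENSITY:
        return "continuous"   # real-time telemetry plus daily review
    if deployer_count > 10_000:
        return "daily"
    if deployer_count > 100:
        return "weekly"
    return "monthly"
```

A niche employment-screening tool with 50 deployers would land on a monthly cycle under this sketch, while any law-enforcement deployment forces continuous monitoring regardless of scale.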

Minimum PMM System Requirements

Regardless of risk profile, Art.30(1) implies a minimum set of technical capabilities that providers must implement:

  1. Data ingestion pipeline: mechanisms to receive operational data from deployers (via API webhook, log forwarding, or structured report submission)
  2. Performance tracking: ongoing measurement of accuracy, precision, recall, and other KPIs specified in the PMMP
  3. Drift detection: monitoring for statistical drift between training distribution and production distribution
  4. Incident flag registry: structured capture of incidents, near-misses, and anomalous outputs reported by deployers
  5. Data retention: PMM data must be retained for 10 years under Art.18(1) (technical documentation), creating a storage infrastructure requirement
  6. Audit access: PMM data must be accessible to market surveillance authorities under Art.21(2) without undue delay
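Of the capabilities above, drift detection (item 3) is the most algorithmically involved. A minimal sketch using the Population Stability Index is shown below; PSI is one common drift metric among several, and the 0.2 threshold mentioned in the docstring is an informal industry convention, not anything Art.30 mandates:

```python
import math
from collections import Counter

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a training-time reference sample
    and a production sample. PSI > 0.2 is a common informal threshold for
    'significant drift' — the actual trigger belongs in the PMMP."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # degenerate all-equal input -> one bin

    def histogram(values: list[float]) -> list[float]:
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        total = len(values)
        # floor at a tiny probability so empty bins don't produce log(0)
        return [max(counts.get(b, 0) / total, 1e-6) for b in range(bins)]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In a live PMM system the `expected` sample would come from the training distribution recorded at conformity assessment time and `actual` from the Art.12 production logs.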

Art.30(2): Active Data Collection from Deployers

Art.30(2) specifies that the PMM system must actively collect data on the operational performance of the high-risk AI system. "Active" is the operative word — Art.30 does not allow providers to wait passively for deployer feedback or incident reports. The provider must implement structured data collection.

Types of Operational Data Art.30(2) Requires

| Data Category | Description | Collection Method |
|---|---|---|
| Performance metrics | Accuracy, error rates, false positive/negative rates in production | Automated telemetry from deployer integration |
| User feedback | End-user complaints, corrections, overrides of AI outputs | Structured feedback forms, deployer-submitted reports |
| Edge case outputs | Outputs that fall outside expected performance ranges | Automated outlier detection in deployer pipelines |
| Demographic performance | Disaggregated performance across gender, age, ethnicity (for bias monitoring under Art.9(7)) | Deployer-submitted demographic performance data |
| Environmental drift | Changes in input data distribution over time | Statistical drift detection pipelines |
| Incident signals | Near-misses, anomalous behaviours, close calls before serious incident threshold | Structured incident flag submissions from deployers |

Provider vs Deployer Responsibility for Data Collection

Art.30 creates a layered data collection architecture. Providers are responsible for establishing the system and collecting data; deployers are obligated to cooperate under Art.30(5). The practical split:

| What Provider Does | What Deployer Does |
|---|---|
| Provides API endpoints or log-submission tooling for performance data | Integrates data submission into their operational pipeline |
| Defines what data to submit and in what format (as part of Instructions for Use, Art.13) | Submits structured reports on the schedule specified by provider |
| Analyses aggregated data across all deployments | Reports incidents and anomalies as they occur |
| Initiates Art.9 risk updates when findings warrant | Flags when performance falls below thresholds specified in IFU |

The provider cannot delegate the PMM analysis obligation to deployers — the analysis, conclusions, and risk management updates remain with the provider. Deployers are data contributors, not analysis owners.


Art.30(3): The Post-Market Monitoring Plan (PMMP)

Art.30(3) requires that the post-market monitoring plan be documented and form part of the Annex IV technical documentation. This means the PMMP must exist before the conformity assessment under Art.43, before the declaration of conformity under Art.48, and before CE marking under Art.49.

Mandatory PMMP Content

The PMMP must document:

| PMMP Element | What to Include | Why Required |
|---|---|---|
| Monitoring objectives | Which performance metrics are monitored and why | Art.9(9) risk management integration |
| KPI thresholds | Quantitative thresholds that trigger corrective action | Art.20 corrective action trigger definition |
| Data collection schedule | Frequency of data collection from deployers | Art.30(2) active collection requirement |
| Deployer reporting obligations | What deployers must submit and when | Art.30(5) deployer cooperation specification |
| Serious incident criteria | Definition of what constitutes a serious incident under Art.73 | Art.73 reporting trigger |
| Data retention policy | How long PMM data is retained and where | Art.18(1) 10-year retention requirement |
| Escalation procedures | Who is responsible for PMM analysis and escalation decisions | Art.17 QMS integration |
| Update triggers | Events that trigger PMMP revision (major drift, new deployer category, significant change) | Lifecycle management |

The PMMP is a living document. Each time a major finding triggers Art.9 risk management updates, the PMMP must also be reviewed and updated to reflect any change in monitoring scope or thresholds.

PMMP as Part of Annex IV Documentation

Annex IV of the EU AI Act lists the content of technical documentation for high-risk AI systems. Section 8 of Annex IV explicitly requires the technical documentation to include "a post-market monitoring plan." In practice, this means the PMMP is subject to the same version control, conformity assessment review (Art.43), and 10-year retention (Art.18(1)) as the rest of the technical documentation: a missing or outdated PMMP is a technical documentation deficiency, not merely a process gap.


Art.30(4): Sector-Specific Alignment

Art.30(4) establishes that for AI systems that are safety components of products regulated by sector-specific EU legislation (medical devices under MDR/IVDR, machinery under the Machinery Regulation, aviation under EASA rules), the PMM system must align with the post-market surveillance requirements of the applicable sectoral law.

Dual PMM Regimes for Safety Component AI

| Sector | Relevant Legislation | Key PMM Requirement | AI Act Art.30 Addition |
|---|---|---|---|
| Medical AI (diagnostic, treatment) | MDR (2017/745), IVDR (2017/746) | PMCF plan, trend reports, PSUR | Expanded to capture AI-specific metrics (model drift, performance disaggregation) |
| Autonomous vehicles / ADAS | UNECE Regulation 155/157, Regulation (EU) 2022/1426 | Incident reporting to national approval authority | AI Act Art.73 serious incident reporting in parallel |
| Aviation AI (ATC, flight control) | EASA Regulation 2018/1139 | Safety occurrence reporting | EU AI Act PMM data must be reconcilable with EASA safety data |
| Industrial machinery AI | Machinery Regulation 2023/1230 | Economic operator incident reporting | AI Act PMM layered on top of Machinery Regulation reporting |

The alignment obligation in Art.30(4) does not create a single unified PMM system — it creates an obligation to not contradict the sector-specific PMM requirements. Where sector law is more stringent than Art.30, the sector law prevails. Where sector law is silent on AI-specific issues (model drift, distributional shift), Art.30 adds obligations that the sector law does not address.

For providers shipping AI into multiple sectors, the PMMP must explicitly map each deployment context to the applicable sector-specific requirements and document how the Art.30 PMM complements (rather than conflicts with) the sectoral PMM obligations.
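That deployment-context-to-sector mapping can live as configuration inside the PMMP tooling. A minimal sketch, where the context keys, regime strings, and the `sectoral_obligations` helper are illustrative examples rather than an exhaustive or authoritative register:

```python
# Illustrative mapping of deployment context to the sectoral PMM regimes
# from the table above. Keys and values are examples a provider would
# maintain per product line, not a definitive legal register.
SECTOR_PMM_MAP: dict[str, list[str]] = {
    "medical_diagnostic": ["MDR 2017/745 (PMCF plan, PSUR)"],
    "in_vitro_diagnostic": ["IVDR 2017/746"],
    "automotive_adas": ["UNECE R155/R157", "Regulation (EU) 2022/1426"],
    "aviation": ["EASA Basic Regulation 2018/1139 (occurrence reporting)"],
    "industrial_machinery": ["Machinery Regulation 2023/1230"],
}

def sectoral_obligations(contexts: list[str]) -> dict[str, list[str]]:
    """Per deployment context, list the sector-specific PMM regimes the
    PMMP must document alignment with (Art.30(4)). Contexts without a
    sectoral regime fall back to the Art.30 baseline only."""
    return {c: SECTOR_PMM_MAP.get(c, ["Art.30 baseline only"]) for c in contexts}
```

Running this per release candidate makes the Art.30(4) alignment section of the PMMP reproducible instead of hand-maintained prose.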


Art.30(5): Deployer Cooperation Obligation

Art.30(5) creates an explicit obligation for deployers of high-risk AI systems to cooperate with providers in the PMM data collection process. This is a significant provision because it makes the deployer an active participant in the provider's compliance process, not merely a passive user.

What Deployer Cooperation Means in Practice

| Cooperation Obligation | Deployer Must | Provider Must Specify in IFU |
|---|---|---|
| Performance data submission | Submit structured performance reports on schedule | What data, what format, what schedule |
| Incident reporting to provider | Report serious incidents and near-misses immediately | Definition of reportable incident, reporting channel |
| Anomaly flagging | Flag unexpected outputs, unusual behaviour | What counts as anomaly, escalation threshold |
| Access to logs | Provide access to Art.12 automatically generated logs | Log format, retention period, access method |
| Cooperation with investigations | Allow provider access to deployment environment for investigation | Investigation rights scope |

Art.30(5) creates contractual and regulatory obligations simultaneously. Providers should include PMM cooperation obligations in deployer agreements (commercial contracts), because breach of the cooperation obligation by a deployer does not relieve the provider of their Art.30 PMM obligation. The provider must still maintain a functional PMM system even if a particular deployer fails to cooperate — which means fallback data collection methods are needed.

Instructions for Use (IFU) as the PMM Bridge

Art.13 requires providers to produce Instructions for Use (IFU) that enable deployers to understand and operate the AI system appropriately. Art.30(5) turns the IFU into the PMM cooperation specification: the IFU must state what data deployers submit, in what format and on what schedule, which incidents and anomalies are reportable, and through which channel (the right-hand column of the cooperation table above).


Art.30 × Art.9: Risk Management Integration

Art.30 does not operate in isolation from the pre-market risk management system — it feeds directly back into Art.9. The closed-loop relationship:

Art.9 Risk Assessment (pre-market)
         ↓ identifies risks
Art.30 PMM System (post-market)
         ↓ detects new/evolved risks
Art.9 Risk Management Update
         ↓ may trigger
Art.23 Significant Change Assessment
         ↓ may trigger
Art.19 Re-conformity Assessment

Art.30 PMM Findings That Trigger Art.9 Updates

| PMM Finding | Art.9 Response Required | Art.30 Action |
|---|---|---|
| Model drift exceeds threshold | Update risk assessment to reflect reduced reliability | Update PMMP KPI thresholds |
| New bias pattern detected across demographic group | Update Art.9(7) bias risk assessment | Expand demographic monitoring scope |
| Unexpected failure mode in production not in training | Add new risk to Art.9 risk register | Update Art.9 residual risk documentation |
| Deployer misuse pattern identified | Update foreseeable misuse scenarios in Art.9 | Add misuse prevention to IFU |
| Performance degradation in specific deployment context | Context-specific risk assessment | Narrow permitted deployment contexts in IFU |

The key developer implication: Art.30 PMM findings create a legal obligation to update Art.9 documentation. Teams that treat PMM as "monitoring for bugs" rather than "monitoring for regulatory triggers" will find themselves non-compliant when MSAs request Art.9 documentation that does not reflect production experience.


Art.30 × Art.12: Logging as PMM Data Source

Art.12 requires providers of high-risk AI systems to build in logging capabilities that automatically record AI system operation. Art.12 logs are the primary technical data source for the Art.30 PMM system.

What Art.12 Logs Must Capture

| Log Category | Art.12 Requirement | Art.30 PMM Use |
|---|---|---|
| Input data | Log the data used by the AI system for each decision | Performance drift detection against training distribution |
| Output data | Log the AI system's output for each decision | Accuracy tracking, anomaly detection |
| Decision identifiers | Unique identifier per decision | Traceability for incident investigation |
| Context variables | Operating conditions (time, environment, user context) | Context-specific performance analysis |
| Human oversight actions | Cases where human overrides or corrects AI output | Override rate tracking as performance metric |

Art.12 logs retained for 6 months minimum by deployers (Art.26(7)) must feed into the provider's Art.30 PMM system. The provider's data collection architecture must account for:

  1. Secure log transfer: Art.12 logs may contain personal data — transfer must be GDPR-compliant (data minimisation, pseudonymisation, purpose limitation)
  2. Aggregation without re-identification: providers typically need aggregated statistics, not individual decision logs
  3. Log format standardisation: providers must specify the Art.12 log format in the IFU so all deployers produce compatible PMM input data
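A standardised log record of the kind point 3 describes might be specified in the IFU as follows. The field set is an illustrative sketch, not the Art.12 text; note that the raw input is replaced by a digest so personal data never leaves the deployer in this channel:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Art12LogRecord:
    """Illustrative standardised log record a provider's IFU might specify
    for deployer PMM submissions. The schema is an example sketch."""
    decision_id: str      # unique identifier per decision (traceability)
    timestamp_utc: str    # ISO 8601, UTC
    input_digest: str     # hash of the input payload, never the raw input
    output_label: str     # the AI system's output for this decision
    confidence: float
    human_override: bool  # human oversight action recorded against output

    @classmethod
    def from_decision(cls, decision_id: str, raw_input: bytes,
                      output_label: str, confidence: float,
                      human_override: bool = False) -> "Art12LogRecord":
        return cls(
            decision_id=decision_id,
            timestamp_utc=datetime.now(timezone.utc).isoformat(),
            input_digest=hashlib.sha256(raw_input).hexdigest(),
            output_label=output_label,
            confidence=confidence,
            human_override=human_override,
        )

    def to_json(self) -> str:
        return json.dumps(asdict(self))
```

Shipping this schema in the IFU means every deployer produces PMM input the provider's pipeline can ingest without per-customer adapters.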

Art.30 × Art.73: Serious Incident Reporting Trigger

Art.73 creates a mandatory serious incident reporting obligation that is directly triggered by Art.30 PMM findings. A "serious incident" under Art.73 includes any incident where the AI system has directly or indirectly caused or contributed to:

  1. the death of a person, or serious harm to a person's health
  2. a serious and irreversible disruption of the management or operation of critical infrastructure
  3. an infringement of obligations under Union law intended to protect fundamental rights
  4. serious harm to property or the environment

PMM → Art.73 Trigger Timeline

| PMM Finding | Art.73 Response | Reporting Timeline |
|---|---|---|
| Death or serious harm to persons detected | Provider must notify national MSA immediately | 2 working days from provider awareness |
| Serious infrastructure damage | Notify national MSA and RAPEX/ICSMS | 2 working days |
| Breach of fundamental rights obligation | Notify national MSA | 15 working days from provider awareness |
| Non-serious harm / degraded performance | No Art.73 notification; document in PMM | n/a |

The 2-working-day notification deadline for death/serious harm incidents is exceptionally tight. This requires the PMM system to have real-time incident detection capabilities — a batch-processing PMM that analyses weekly log snapshots will fail to meet the Art.73 timeline if a serious incident occurs between batches.
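The "working days" clock is easy to get wrong with a plain calendar `timedelta`. A minimal sketch of a weekend-aware deadline calculation follows; the `working_day_deadline` helper is an assumption for illustration, and a production version would also exclude public holidays in the relevant Member State:

```python
from datetime import datetime, timedelta

def working_day_deadline(awareness: datetime, working_days: int = 2) -> datetime:
    """Deadline `working_days` working days after provider awareness.
    Skips weekends only; Member State public holidays would also need
    to be excluded in a production implementation."""
    deadline = awareness
    remaining = working_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return deadline
```

Awareness late on a Thursday pushes a 2-working-day deadline to the following Monday, a detail a naive `awareness + timedelta(days=2)` gets wrong in both directions depending on the weekday.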

Art.73 Notification Content Requirements

The serious incident report must include at minimum:

  1. identification of the AI system and the provider
  2. a description of the incident, its timing, and its consequences
  3. the preliminary root-cause analysis, where available
  4. any corrective actions already taken or planned
  5. the current status of the provider's investigation

Providers whose Art.30 PMM systems are inadequate will fail to detect serious incidents in time to meet Art.73 timelines — creating a compound violation (Art.30 + Art.73) that dramatically increases enforcement exposure.


Art.30 × Art.17: QMS Integration

Art.17 requires providers of high-risk AI systems to implement a Quality Management System (QMS). Art.30 PMM is a core component of the Art.17 QMS — specifically, the QMS must include:

  1. procedures for setting up, operating, and maintaining the Art.30 PMM system
  2. procedures for reporting serious incidents under Art.73
  3. escalation and corrective action workflows triggered by PMM findings (Art.20)
  4. document control covering the PMMP and its revision history


For developers implementing Art.17-compliant QMS frameworks using ISO/IEC 42001:2023 (AI Management Systems), the PMM requirements map directly to:

| ISO/IEC 42001 Control | Art.30 PMM Mapping |
|---|---|
| 8.6 (AI system monitoring) | Art.30(1) PMM system establishment |
| 8.7 (AI system operation) | Art.30(2) active data collection |
| 9.1 (Monitoring, measurement, analysis) | Art.30(3) PMMP documentation |
| 10.2 (Nonconformity and corrective action) | Art.30 → Art.9 update loop |

CLOUD Act Jurisdiction Risk for PMM Data

Every PMM data collection system that runs on US cloud infrastructure (AWS, Azure, GCP) or uses US-based SaaS tools creates a CLOUD Act risk for EU providers. The CLOUD Act (Clarifying Lawful Overseas Use of Data Act, codified in relevant part at 18 U.S.C. § 2713) allows US law enforcement to compel US cloud providers to produce data stored anywhere in the world — including PMM data collected from EU-deployed high-risk AI systems.

What PMM Data Is at Risk

| PMM Data Category | Sensitivity | CLOUD Act Risk |
|---|---|---|
| Aggregate performance statistics | Low | Low commercial sensitivity but reveals deployment scale |
| Incident reports and near-miss logs | High | Contains details of AI system failures in production |
| Deployer identity data | Medium | Reveals customer list — commercial intelligence |
| Art.12 logs with input/output data | Very high | May contain personal data of EU users (GDPR collision) |
| Art.9 risk management updates | High | Reveals known vulnerabilities in the AI system |

PMM data stored on US cloud infrastructure is subject to CLOUD Act compellability even if stored in an EU-region data centre operated by a US company. The CLOUD Act does not contain a "stored in Europe" exemption — it applies based on who controls the data, not where the data is physically stored.

EU AI Act Art.30 × CLOUD Act Double Compellability

When a serious incident occurs, the EU AI Act requires the provider to cooperate with EU market surveillance authorities under Art.21. If PMM data is stored on US cloud infrastructure, the same data may simultaneously be compellable by:

  1. EU MSA under Art.21 — MSA requests incident reports and PMM logs
  2. US law enforcement under CLOUD Act — compelled from the US cloud provider

This creates a situation where the AI provider has complied with both Art.30 (PMM system) and Art.21 (MSA cooperation) using EU-accessible PMM data — but the US government can simultaneously access the same data through the cloud provider, bypassing the EU investigation entirely.

EU-Native Infrastructure as Compliance Architecture

Providers building high-risk AI systems for EU markets can eliminate CLOUD Act exposure for PMM data by using EU-incorporated infrastructure that is not subject to US jurisdiction: providers with no US parent company, US establishment, or other US nexus through which a CLOUD Act order could be served. Because the Act turns on who controls the data rather than where it sits, the corporate structure of the infrastructure provider is the decisive factor.

Providers using US infrastructure face the additional operational burden of maintaining dual CLOUD Act / EU MSA response protocols — a significant compliance overhead that EU-native infrastructure eliminates by design.


Python Implementation

PMM_PlanRecord: Documentation Model for the PMMP

from dataclasses import dataclass, field
from datetime import date
from typing import Optional
from enum import Enum


class AnnexIIICategory(Enum):
    BIOMETRIC_ID = "1_biometric_identification"
    CRITICAL_INFRA = "2_critical_infrastructure"
    EDUCATION = "3_education_training"
    EMPLOYMENT = "4_employment"
    ESSENTIAL_SERVICES = "5_essential_services"
    LAW_ENFORCEMENT = "6_law_enforcement"
    MIGRATION_ASYLUM = "7_migration_asylum"
    JUSTICE = "8_justice_democratic"


@dataclass
class PMM_KPI:
    metric_name: str
    baseline_value: float
    alert_threshold: float  # triggers Art.20 corrective action review
    serious_incident_threshold: Optional[float]  # triggers Art.73 reporting
    measurement_frequency: str  # e.g. "daily", "weekly", "per_decision"
    collection_method: str  # e.g. "automated_telemetry", "deployer_report"


@dataclass
class PMM_PlanRecord:
    """Post-Market Monitoring Plan as required by EU AI Act Art.30(3) / Annex IV Section 8."""
    
    ai_system_id: str
    system_name: str
    annex_iii_categories: list[AnnexIIICategory]
    pmm_plan_version: str
    plan_date: date
    
    # Art.30(2): Data collection specification
    kpis: list[PMM_KPI]
    data_collection_schedule: str  # e.g. "weekly deployer reports + daily automated telemetry"
    deployer_reporting_obligations: str  # what deployers must submit per Art.30(5)
    
    # Art.30(3): Documentation
    pmmp_location_in_annex_iv: str  # section reference in technical documentation
    
    # Art.30(4): Sector alignment
    sector_specific_legislation: list[str]  # e.g. ["MDR 2017/745", "Machinery Regulation 2023/1230"]
    sector_pmm_alignment_notes: str
    
    # Art.73 integration
    serious_incident_criteria: list[str]  # definitions of reportable incidents
    art73_reporting_timeline_days: int = 2  # 2 working days for death/serious harm
    
    # Retention
    data_retention_years: int = 10  # Art.18(1) minimum

    def to_annex_iv_section(self) -> dict:
        return {
            "section": "8_post_market_monitoring_plan",
            "ai_system_id": self.ai_system_id,
            "version": self.pmm_plan_version,
            "plan_date": str(self.plan_date),
            "annex_iii_categories": [c.value for c in self.annex_iii_categories],
            "kpi_count": len(self.kpis),
            "serious_incident_criteria": self.serious_incident_criteria,
            "art73_notification_window_days": self.art73_reporting_timeline_days,
            "data_retention_years": self.data_retention_years,
            "sector_alignment": self.sector_specific_legislation,
        }

PostMarketMonitoringSystem: PMM Data Collection and Analysis

import hashlib
from datetime import datetime, timedelta


@dataclass
class PMM_DataPoint:
    deployer_id: str  # pseudonymised deployer identifier
    timestamp: datetime
    metric_name: str
    metric_value: float
    context: dict  # deployment context variables


@dataclass
class PMM_IncidentFlag:
    flag_id: str
    deployer_id: str
    timestamp: datetime
    description: str
    severity: str  # "near_miss", "non_serious", "potentially_serious", "serious"
    art73_reportable: bool
    investigation_status: str  # "open", "under_investigation", "resolved", "reported_to_msa"


class PostMarketMonitoringSystem:
    """Art.30 PMM system for high-risk AI providers."""

    def __init__(self, plan: PMM_PlanRecord):
        self.plan = plan
        self.data_points: list[PMM_DataPoint] = []
        self.incident_flags: list[PMM_IncidentFlag] = []
        self.kpi_baselines = {kpi.metric_name: kpi for kpi in plan.kpis}

    def ingest_deployer_report(
        self,
        raw_deployer_id: str,
        report_data: dict,
        report_timestamp: datetime
    ) -> list[str]:
        """
        Process structured deployer PMM report.
        Returns list of triggered alerts (Art.20 or Art.73 level).
        """
        deployer_id = self._pseudonymise(raw_deployer_id)
        alerts = []

        for metric_name, metric_value in report_data.get("metrics", {}).items():
            point = PMM_DataPoint(
                deployer_id=deployer_id,
                timestamp=report_timestamp,
                metric_name=metric_name,
                metric_value=float(metric_value),
                context=report_data.get("context", {}),
            )
            self.data_points.append(point)

            if metric_name in self.kpi_baselines:
                kpi = self.kpi_baselines[metric_name]
                if float(metric_value) < kpi.alert_threshold:
                    alerts.append(f"ART20_ALERT:{metric_name}={metric_value:.3f} below threshold {kpi.alert_threshold}")
                if kpi.serious_incident_threshold and float(metric_value) < kpi.serious_incident_threshold:
                    alerts.append(f"ART73_CANDIDATE:{metric_name}={metric_value:.3f} below serious threshold {kpi.serious_incident_threshold}")

        for raw_incident in report_data.get("incidents", []):
            self._process_incident_flag(deployer_id, raw_incident, report_timestamp, alerts)

        return alerts

    def _process_incident_flag(
        self, deployer_id: str, incident_data: dict, timestamp: datetime, alerts: list
    ):
        flag_id = hashlib.sha256(f"{deployer_id}{timestamp}{incident_data.get('description', '')}".encode()).hexdigest()[:12]
        severity = self._assess_severity(incident_data)
        art73_reportable = severity == "serious"

        flag = PMM_IncidentFlag(
            flag_id=flag_id,
            deployer_id=deployer_id,
            timestamp=timestamp,
            description=incident_data.get("description", ""),
            severity=severity,
            art73_reportable=art73_reportable,
            investigation_status="open",
        )
        self.incident_flags.append(flag)

        if art73_reportable:
            deadline = timestamp + timedelta(days=self.plan.art73_reporting_timeline_days)
            alerts.append(f"ART73_REPORT_REQUIRED:flag_id={flag_id},deadline={deadline.date()}")

    def _assess_severity(self, incident_data: dict) -> str:
        description = incident_data.get("description", "").lower()
        if any(kw in description for kw in ["death", "fatal", "serious injury", "hospitalisation"]):
            return "serious"
        if any(kw in description for kw in ["injury", "harm", "fundamental rights", "infrastructure"]):
            return "potentially_serious"
        if any(kw in description for kw in ["near miss", "close call", "anomaly", "unexpected"]):
            return "near_miss"
        return "non_serious"

    def _pseudonymise(self, raw_id: str) -> str:
        return hashlib.sha256(raw_id.encode()).hexdigest()[:16]

    def art9_update_report(self) -> dict:
        """Generate report of PMM findings that require Art.9 risk management updates."""
        open_art73 = [f for f in self.incident_flags if f.art73_reportable and f.investigation_status == "open"]
        breaches = [p for p in self.data_points if p.metric_name in self.kpi_baselines
                    and p.metric_value < self.kpi_baselines[p.metric_name].alert_threshold]

        return {
            "ai_system_id": self.plan.ai_system_id,
            "report_generated": datetime.utcnow().isoformat(),
            "total_data_points": len(self.data_points),
            "total_incident_flags": len(self.incident_flags),
            "art73_reportable_open": len(open_art73),
            "art73_flag_ids": [f.flag_id for f in open_art73],
            "kpi_threshold_breaches": len(breaches),
            "art9_update_required": len(open_art73) > 0 or len(breaches) > 3,
            "recommended_action": (
                "IMMEDIATE: File Art.73 report and initiate Art.9 review"
                if len(open_art73) > 0
                else "SCHEDULED: Review KPI breaches in next Art.9 cycle"
                if len(breaches) > 0
                else "OK: No Art.9 updates triggered"
            ),
        }

IncidentDetector: Real-Time Serious Incident Detection for Art.73

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class SeriousIncidentReport:
    """Draft Art.73 serious incident report for MSA submission."""
    incident_id: str
    ai_system_id: str
    ai_system_name: str
    deployer_id: str  # pseudonymised
    incident_timestamp: datetime
    provider_awareness_timestamp: datetime
    art73_reporting_deadline: datetime
    incident_description: str
    consequences: list[str]
    preliminary_root_cause: str
    corrective_actions_taken: list[str]
    investigation_status: str
    report_submitted: bool = False
    submission_timestamp: Optional[datetime] = None


class IncidentDetector:
    """Real-time serious incident detection feeding Art.73 reporting pipeline."""

    SERIOUS_INCIDENT_KEYWORDS = [
        "death", "fatal", "killed", "hospitalised", "hospitalised",
        "serious injury", "serious harm", "fundamental rights breach",
        "critical infrastructure damage", "mass surveillance",
    ]

    NEAR_MISS_KEYWORDS = [
        "near miss", "close call", "potential harm", "significant error",
        "system failure", "unexpected output", "bias detected",
    ]

    def __init__(self, ai_system_id: str, ai_system_name: str, reporting_days: int = 2):
        self.ai_system_id = ai_system_id
        self.ai_system_name = ai_system_name
        self.reporting_days = reporting_days
        self.pending_reports: list[SeriousIncidentReport] = []
        self.overdue_reports: list[SeriousIncidentReport] = []

    def evaluate_incident(
        self,
        incident_description: str,
        deployer_id: str,
        incident_timestamp: datetime,
    ) -> Optional[SeriousIncidentReport]:
        """
        Evaluate whether an incident qualifies as Art.73 serious incident.
        Returns SeriousIncidentReport if reportable, None otherwise.
        """
        description_lower = incident_description.lower()
        is_serious = any(kw in description_lower for kw in self.SERIOUS_INCIDENT_KEYWORDS)

        if not is_serious:
            return None

        awareness = datetime.utcnow()
        deadline = awareness + timedelta(days=self.reporting_days)

        report = SeriousIncidentReport(
            incident_id=f"ART73-{self.ai_system_id}-{awareness.strftime('%Y%m%d%H%M%S')}",
            ai_system_id=self.ai_system_id,
            ai_system_name=self.ai_system_name,
            deployer_id=deployer_id,
            incident_timestamp=incident_timestamp,
            provider_awareness_timestamp=awareness,
            art73_reporting_deadline=deadline,
            incident_description=incident_description,
            consequences=[],
            preliminary_root_cause="Under investigation",
            corrective_actions_taken=[],
            investigation_status="open",
        )
        self.pending_reports.append(report)
        return report

    def check_reporting_deadlines(self) -> list[dict]:
        """Check which Art.73 reports are approaching or past deadline."""
        now = datetime.utcnow()
        deadline_alerts = []

        for report in self.pending_reports:
            if report.report_submitted:
                continue
            hours_remaining = (report.art73_reporting_deadline - now).total_seconds() / 3600
            status = (
                "OVERDUE" if hours_remaining < 0
                else "CRITICAL" if hours_remaining < 8
                else "WARNING" if hours_remaining < 24
                else "OK"
            )
            if status != "OK":
                deadline_alerts.append({
                    "incident_id": report.incident_id,
                    "deadline": report.art73_reporting_deadline.isoformat(),
                    "hours_remaining": round(hours_remaining, 1),
                    "status": status,
                    "action": "FILE ART.73 REPORT IMMEDIATELY" if status == "CRITICAL" else "File Art.73 report",
                })
                if status == "OVERDUE" and report not in self.overdue_reports:
                    self.overdue_reports.append(report)

        return deadline_alerts

    def compliance_summary(self) -> dict:
        return {
            "ai_system_id": self.ai_system_id,
            "pending_art73_reports": len([r for r in self.pending_reports if not r.report_submitted]),
            "submitted_art73_reports": len([r for r in self.pending_reports if r.report_submitted]),
            "overdue_reports": len(self.overdue_reports),
            "overdue_incident_ids": [r.incident_id for r in self.overdue_reports],
            "reporting_deadline_days": self.reporting_days,
            "deadline_alerts": self.check_reporting_deadlines(),
        }

Art.30 Compliance Checklist

Practical 40-item checklist for high-risk AI providers implementing Art.30 post-market monitoring:

PMM System Establishment (Art.30(1))

Active Data Collection (Art.30(2))

PMM Plan Documentation (Art.30(3))

Sector-Specific Alignment (Art.30(4))

Deployer Cooperation (Art.30(5))

Art.73 Integration

Art.9 Feedback Loop

Infrastructure

