2026-04-16 · 12 min read

EU AI Act Art.73: Serious Incident Reporting for High-Risk AI Systems — Developer Guide (2026)

EU AI Act Article 73 establishes a mandatory incident reporting obligation for providers of high-risk AI systems on the EU market. When a high-risk AI system causes or contributes to a serious incident — an incident or malfunction that leads, or might lead, to death, serious harm to health, fundamental rights infringements, serious and irreversible disruption of critical infrastructure, or serious damage to property or the environment — the provider must report it to the national market surveillance authority within 2 working days (death or health/safety risk) or 15 calendar days (other serious harm).

This is not an abstract compliance checkbox. Art.73 creates a real-time regulatory reporting obligation that requires operational infrastructure: incident detection systems, classification workflows, reporting pipelines, and legal counsel readiness. The same incident that triggers Art.73 (high-risk AI) may also trigger Art.53(1)(b) (GPAI models with systemic risk) — creating dual notification to two different authorities under two different timelines.

This guide covers Art.73 in full: the serious incident definition, provider and deployer obligations, reporting timelines, the Art.73 × Art.53(1)(b) dual reporting framework for GPAI components, CLOUD Act jurisdiction risk for incident records, and Python implementation for a production-grade incident reporting system.

Art.73 becomes applicable on 2 August 2026, the general date of application for Annex III high-risk AI systems (the transitional provisions in Art.111 extend the timeline for some categories). The post-market monitoring framework (Arts. 72-74) applies to all high-risk AI systems placed on the EU market from that date.


Why Art.73 Matters for Developers

Most AI Act developer guides focus on pre-deployment compliance (Art.9 risk management, Art.10 training data, Art.12 logging). Art.73 is different — it is a live operational obligation that activates when something goes wrong after deployment.

For developers and AI providers, Art.73 means:

  1. You need incident detection infrastructure — you cannot report what you cannot detect. Art.12 logging and Art.72 post-market monitoring feed Art.73 reporting
  2. The clock starts at awareness, not investigation completion — "became aware" triggers the timeline; 2 or 15 days is short
  3. Deployers can trigger your obligation — if a deployer reports an incident to you under Art.73(3), your Art.73(1) clock starts
  4. A GPAI component doubles your reporting burden — Art.73 (national MSA) + Art.53(1)(b) (AI Office) simultaneously
  5. Failure to report has significant penalties — non-compliance with post-market monitoring and incident reporting provisions falls under Art.99(5): fines up to €15M or 3% of global annual turnover
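As a back-of-envelope illustration of the penalty ceiling cited above (the function name and the "whichever is higher" reading are ours; this is a sketch, not legal advice):

```python
def art99_max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine described above: EUR 15M or 3% of global
    annual turnover, whichever is higher (illustrative only)."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

# For a provider with EUR 2B turnover, the 3% component dominates:
print(art99_max_fine_eur(2_000_000_000))  # 60000000.0
# Below EUR 500M turnover, the EUR 15M floor dominates:
print(art99_max_fine_eur(100_000_000))    # 15000000.0
```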

Art.73 at a Glance

| Provision | Subject | Key Developer Obligation |
|---|---|---|
| Art.73(1) | Provider reporting obligation | Report serious incident to national MSA where it occurred |
| Art.73(2) | Reporting timeline | 2 working days (death/health risk), 15 calendar days (other) |
| Art.73(3) | Deployer obligation | Notify provider + relevant national authority immediately |
| Art.73(4) | MSA cascade | MSA notifies Commission + other Member States |
| Art.73(5) | Provider follow-up | Submit complete information + corrective actions to MSA |
| Art.73(6) | Critical infrastructure | Additional sectoral authority reporting requirement |
| Art.53(1)(b) | GPAI systemic risk parallel | Commission/AI Office notification for GPAI incidents (separate timeline) |

What Is a "Serious Incident"? — Art.3(49)

The EU AI Act defines "serious incident" in Art.3(49):

An incident or a malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following: (a) the death of a person or serious harm to a person's health; (b) a serious and irreversible disruption of the management and operation of critical infrastructure; (c) the infringement of obligations under Union law intended to protect fundamental rights; (d) serious damage to property or the environment.

Unpacking the Definition for Developers

Four harm categories — each with distinct developer implications:

Category (a): Death or serious health harm. This is the highest-urgency trigger: it covers fatalities as well as serious harm to a person's health.

Developer implication: Medical AI, triage systems, drug interaction checkers, radiation dosage assistants, autonomous surgical guidance — any high-risk AI system touching health outcomes (Annex III category 5(d) covers emergency healthcare patient triage; much medical AI is high-risk via the MDR route) must have incident classification logic for health outcomes.

Category (b): Critical infrastructure disruption. This covers irreversible disruption of energy, water, transport, financial, or digital infrastructure systems. "Serious and irreversible" is the qualifying threshold — transient outages likely do not qualify.

Developer implication: AI systems managing SCADA operations, grid balancing, traffic management, or financial market stability algorithms have specific incident categorization obligations under both Art.73 and NIS2 (dual reporting — see our NIS2 × AI Act guide).

Category (c): Fundamental rights infringement. This is the broadest category. Union law protecting fundamental rights includes GDPR, the Charter of Fundamental Rights, anti-discrimination directives, and consumer protection legislation. An AI decision that violates GDPR automated decision rights (Art.22) or discriminates on prohibited grounds can qualify as a serious incident.

Developer implication: High-risk AI systems in Annex III categories 1 (biometrics), 2 (critical infrastructure), 4 (employment), and 6 (law enforcement) have elevated exposure here.

Category (d): Serious damage to property or environment. Major material damage or environmental harm caused by, or contributed to by, the AI system's malfunction or erroneous output.

The "Might Have Led" Standard

Note the standard: Art.3(49) includes incidents that "might have led" to the listed harms — not only those that actually did. Near-miss incidents where serious harm was narrowly avoided are within scope. A drone navigation AI that nearly caused a collision but corrected course is a potential Art.73 incident even if no harm materialized.

This broadens the reporting obligation significantly: you are not reporting only post-harm events, but also near-misses where the causal chain was present but harm did not fully materialize.
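The scope rule reduces to a simple predicate — a minimal sketch (the function name and boolean inputs are ours):

```python
def within_art3_49_scope(harm_materialized: bool, might_have_led_to_harm: bool) -> bool:
    """Art.3(49) covers both actual harm and near-misses where the causal
    chain toward one of the listed harms was present but harm was avoided."""
    return harm_materialized or might_have_led_to_harm

# The drone that nearly caused a collision but corrected course:
print(within_art3_49_scope(harm_materialized=False, might_have_led_to_harm=True))  # True
```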


Art.73(1): Provider Reporting Obligation

Who Must Report

Art.73(1) applies to providers of high-risk AI systems placed on the Union market — "provider" meaning the natural or legal person that develops an AI system, or has one developed, and places it on the market under its own name or trademark (Art.3(3)).

If you develop a high-risk AI system and license it to customers who deploy it, you are the provider. If a serious incident occurs in a customer deployment, the Art.73 notification obligation rests primarily with you.

What to Report

The incident report to the national MSA must include:

  1. Identity of the provider and the AI system (with EU database registration number under Art.71)
  2. Nature of the serious incident — the harm category under Art.3(49)
  3. Time and location of the incident
  4. Causal analysis — the link between the AI system and the harm (or the reasonable likelihood of such a link)
  5. Affected persons and systems — number and nature of those impacted
  6. Corrective actions taken or planned — immediate containment and longer-term remediation

Art.73(1) specifies the reporting obligation activates when the provider "has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link."

This is a dual trigger: either an established causal link or a reasonable likelihood of one is sufficient — certainty is not required before the clock starts.

The "reasonable likelihood" standard means you cannot defer reporting until investigation is complete. If a patient died and your AI-assisted diagnostic tool was involved in the care pathway, a reasonable likelihood exists immediately — you must report within the applicable timeline while investigation continues. You can submit updated and corrected reports as investigation progresses.
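The trigger logic can be sketched as a small decision function (the enum and function names are illustrative, not from the Act):

```python
from enum import Enum


class ReportAction(Enum):
    SUBMIT_NOW = "submit_initial_report_now"  # may be preliminary, updated later
    MONITOR = "no_art73_trigger_yet"


def art73_trigger(causal_link_established: bool,
                  reasonable_likelihood: bool,
                  investigation_complete: bool) -> ReportAction:
    """Art.73(1) dual trigger: an established causal link OR a reasonable
    likelihood of one starts the clock. Investigation status is irrelevant
    to the trigger — preliminary reports can be corrected as facts emerge."""
    if causal_link_established or reasonable_likelihood:
        return ReportAction.SUBMIT_NOW
    return ReportAction.MONITOR

# Patient death with the AI in the care pathway, investigation still ongoing:
print(art73_trigger(causal_link_established=False, reasonable_likelihood=True,
                    investigation_complete=False))  # ReportAction.SUBMIT_NOW
```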


Art.73(2): Reporting Timelines — The 2-Day and 15-Day Rules

Art.73(2) establishes two reporting timelines:

| Harm Type | Reporting Deadline | Clock Start |
|---|---|---|
| Death of a person or risk to health/safety | 2 working days | Date of becoming aware |
| Other serious incidents (property, environment, fundamental rights, infrastructure) | 15 calendar days | Date of becoming aware |

The 2-Working-Day Rule

This is the most urgent obligation in the EU AI Act. If a serious incident involves the death of a person or a risk to a person's health or safety, the provider must file the initial report within 2 working days of becoming aware.

"Working days" excludes weekends and public holidays in the Member State where the incident occurred. Practically, this means your incident response team needs to be able to initiate regulatory notification within hours of incident detection — not days. Legal review, report drafting, and submission workflows must be pre-built, not constructed ad hoc after an incident.

What "becoming aware" means: awareness is triggered at the organizational level — the clock starts when any person within the provider's organization with relevant responsibility learns of the incident, whether through a support ticket, a monitoring alert, or a deployer notification under Art.73(3).

You cannot delay the clock by keeping awareness confined to non-reporting personnel.
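Deadline computation must account for weekends and the public holidays of the Member State where the incident occurred. A sketch, with an illustrative holiday set (a production system should use the official calendar of the relevant Member State, e.g. via a holidays library):

```python
from datetime import date, timedelta

# Illustrative German public holidays — verify against the official calendar.
DE_HOLIDAYS_2026 = {date(2026, 5, 1), date(2026, 5, 14)}


def two_working_day_deadline(awareness: date, holidays: set[date]) -> date:
    """Art.73(2): 2 working days from awareness, skipping weekends and
    public holidays of the Member State of occurrence."""
    deadline, working_days_added = awareness, 0
    while working_days_added < 2:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5 and deadline not in holidays:  # Mon=0 .. Fri=4
            working_days_added += 1
    return deadline

# Awareness on Thursday 30 April 2026: Friday 1 May is a public holiday,
# so the two working days are Mon 4 May and Tue 5 May.
print(two_working_day_deadline(date(2026, 4, 30), DE_HOLIDAYS_2026))  # 2026-05-05
```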

The 15-Calendar-Day Rule

For serious incidents not involving death or health/safety risk — primarily fundamental rights violations, property damage, environmental harm, and critical infrastructure disruption — the reporting window is 15 calendar days from awareness.

15 days is still operationally tight. Initial investigations, root cause identification, and initial corrective actions must be underway within this window. The initial report can be preliminary; Art.73(5) allows submission of follow-up reports with additional information and final corrective action plans.

MSA May Shorten Deadlines

National market surveillance authorities may — and some are expected to — set shorter deadlines than Art.73(2) for specific sectors or incident categories. Healthcare regulators in Germany (BfArM), France (ANSM), and Ireland (HPRA) have existing incident reporting frameworks under MDR/IVDR that may be integrated or extended to AI Act requirements, potentially with shorter timelines for medical device-adjacent AI systems.


Art.73(3): Deployer Obligation — Notifying Providers

Deployers who use a high-risk AI system (as defined in Art.3(4)) and "have reason to believe a serious incident has occurred" must:

  1. Immediately notify the provider of the AI system
  2. Notify the relevant market surveillance authority in the Member State where they operate

The deployer notification to the provider triggers the provider's Art.73(1) clock. This creates a direct operational dependency: if you build a high-risk AI system and license it to enterprise customers, those customers may initiate your Art.73 reporting obligation. You need an intake channel that treats deployer incident notices as clock-starting events and routes them into triage immediately.
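One building block is an intake record that timestamps the deployer notification at the moment of receipt, since that receipt starts the provider's clock — a minimal sketch (all names are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DeployerNotification:
    """Art.73(3) deployer notice — receipt starts the provider's clock."""
    deployer_name: str
    ai_system_id: str
    description: str
    received_at: datetime


def ingest_deployer_notification(deployer_name: str, ai_system_id: str,
                                 description: str) -> DeployerNotification:
    """Record awareness at receipt time, not at triage completion:
    the Art.73 deadline runs from this timestamp."""
    return DeployerNotification(
        deployer_name=deployer_name,
        ai_system_id=ai_system_id,
        description=description,
        received_at=datetime.now(timezone.utc),
    )
```

The resulting record feeds directly into Step 2 (triage) of the runbook below.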


Art.73(4-5): The Reporting Cascade

Art.73(4): MSA to Commission + Member States

Upon receiving an Art.73 incident report, the national MSA is required to:

  1. Immediately notify the European Commission
  2. Notify other Member States through the RAPEX/ICSMS information exchange system
  3. Inform relevant public authorities monitoring fundamental rights obligations

This cascade means that an incident reported in Germany reaches the Irish MSA, the Italian MSA, and the Commission within days. If your AI system is deployed across multiple EU member states, a single incident creates multi-jurisdictional regulatory awareness simultaneously.

Art.73(5): Provider Follow-Up Reporting

After the initial (possibly preliminary) report, the provider must submit to the MSA the complete incident information: the finalized causal analysis, root cause findings, and the full corrective action plan with implementation evidence.

There is no fixed deadline for the follow-up report in Art.73(5), but "without undue delay" applies — regulators expect follow-up within weeks, not months.


Art.73(6): Critical Infrastructure — Sectoral Authority Reporting

For high-risk AI systems used in critical infrastructure (Annex III, category 2), Art.73(6) adds an additional layer: providers must also report to the competent authority designated under sector-specific Union legislation.

In practice, the same incident must be reported both to the AI Act market surveillance authority and to the sectoral regulator (for example, a national energy or financial supervision authority).

The NIS2 Directive creates a parallel incident reporting pathway for essential and important entities, with a 24-hour early warning and a 72-hour full notification to the national CSIRT. An AI incident in critical infrastructure may therefore simultaneously trigger an Art.73 report to the MSA, an Art.73(6) report to the sectoral authority, and the NIS2 notifications to the CSIRT.

These are not substitutable — each serves a different regulatory purpose and goes to a different authority.
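The parallel deadlines can be computed from a single awareness timestamp — a sketch using the timelines described above (calendar-day simplification for the AI Act track; function and key names are ours):

```python
from datetime import datetime, timedelta


def parallel_notification_deadlines(awareness: datetime) -> dict[str, datetime]:
    """Parallel, non-substitutable reporting tracks for a critical
    infrastructure AI incident, per the timelines described above."""
    return {
        "nis2_early_warning_csirt": awareness + timedelta(hours=24),
        "nis2_full_notification_csirt": awareness + timedelta(hours=72),
        "ai_act_art73_msa": awareness + timedelta(days=15),
        "ai_act_art73_6_sectoral": awareness + timedelta(days=15),
    }
```

Note the ordering consequence: the NIS2 early warning is due long before the AI Act report, so NIS2 triage effectively sets the operational tempo for the whole incident.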


Dual Reporting: Art.73 × Art.53(1)(b) for GPAI Components

This is the highest-complexity scenario in EU AI Act incident reporting: a high-risk AI system that uses a GPAI model with systemic risk as a core component.

When Both Apply

If your high-risk AI product integrates a GPAI model that:

  1. Has been classified as having systemic risk (Art.51) — typically models trained with >10^25 FLOPs
  2. Contributed to or caused the serious incident through its behavior or outputs

...then two parallel reporting obligations activate:

| Obligation | Authority | Timeline | Who Triggers |
|---|---|---|---|
| Art.73(1) EU AI Act | National MSA of incident location | 2 days (health/death) / 15 days (other) | Provider of the high-risk AI system |
| Art.53(1)(b) EU AI Act | European Commission / AI Office | "Without undue delay" | Provider of the GPAI model |

The Two Different Providers

In a typical GPAI integration scenario there are two providers: you, the system integrator, are the provider of the high-risk AI system and carry the Art.73 obligation; the upstream model developer is the provider of the GPAI model and carries the Art.53(1)(b) obligation.

Your Art.73 report to the national MSA must include information about the GPAI component — model ID, version, the specific capability involved. The national MSA will likely coordinate with the AI Office if a GPAI component is implicated.

If You Are Both Provider and GPAI Operator

If you build and deploy your own GPAI-based high-risk AI system — for example, you fine-tuned a base model past the 10^25 FLOPs threshold and deployed it in a medical diagnosis context — you hold both roles: provider of the high-risk AI system (Art.73) and provider of the GPAI model (Art.53(1)(b)).

A single incident in this configuration generates:

  1. Art.73 report → national MSA (2 or 15 days)
  2. Art.53(1)(b) report → AI Office/Commission (without undue delay)
  3. Potentially Art.73(6) report → sectoral authority (if critical infrastructure)

Practical implication: Your incident response runbook must have parallel tracks for national MSA notification and AI Office notification, with coordinated messaging to ensure consistency.
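The track selection in the runbook can be reduced to a small pure function — a sketch (the boolean inputs and track labels are ours):

```python
def required_report_tracks(is_high_risk: bool,
                           gpai_systemic_risk_involved: bool,
                           critical_infrastructure: bool) -> list[str]:
    """Enumerate the parallel reporting tracks a single incident opens,
    per the framework described above (labels illustrative)."""
    tracks = []
    if is_high_risk:
        tracks.append("Art.73 -> national MSA")
        if critical_infrastructure:
            tracks.append("Art.73(6) -> sectoral authority")
    if gpai_systemic_risk_involved:
        tracks.append("Art.53(1)(b) -> AI Office/Commission")
    return tracks

print(required_report_tracks(is_high_risk=True,
                             gpai_systemic_risk_involved=True,
                             critical_infrastructure=False))
# ['Art.73 -> national MSA', 'Art.53(1)(b) -> AI Office/Commission']
```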


Art.73 Incident Response Runbook — 7-Step Developer Guide

Step 1: Detection (T+0)

Your Art.12 logging infrastructure generates the signal; typical incident sources are monitoring alerts, anomalous log patterns, user complaints, deployer notifications under Art.73(3), and post-market monitoring findings.

Immediate action: Create timestamped incident ticket. Assign incident commander. The Art.73 clock starts now.

Step 2: Triage — Serious Incident Classification (T+0 to T+2h)

Classify the event against the four Art.3(49) harm categories — (a) death/health, (b) critical infrastructure, (c) fundamental rights, (d) property/environment — and the "might have led" near-miss standard.

Determine the timeline bucket: 2 working days if death or a health/safety risk is involved, otherwise 15 calendar days.

Notify internal stakeholders: legal counsel, the compliance officer, and the incident commander.

Step 3: Authority Identification (T+2h to T+4h)

Identify the national MSA of the Member State where the incident occurred (Art.73(1)). If the incident occurred in multiple Member States simultaneously, notify the primary state first; the cross-border cascade follows through Commission notification (Art.73(4)).

Step 4: Initial Report Preparation (T+4h to T+24h)

Draft the initial Art.73 report covering the six elements listed earlier: provider and system identity (with the Art.71 EU database ID), harm category, time and location, causal assessment, affected persons and systems, and corrective actions taken or planned.

Submit preliminary report if 2-day deadline applies and investigation is not yet complete. You can and should submit supplementary reports as investigation progresses.

Step 5: GPAI Component Check (T+4h to T+24h — parallel)

Determine whether a GPAI model with systemic risk is involved; if it is, the parallel Art.53(1)(b) notification to the AI Office must be initiated and kept consistent with your MSA report.

Step 6: MSA Submission (before deadline)

Submit through the national MSA's designated reporting channel (portal, email, or registered post — each MSA defines its own mechanism; verify it for each target market), including the report elements prepared in Step 4 and supporting extracts from your Art.12 logs.

Step 7: Follow-Up Reporting and Root Cause Analysis (T+15d to T+30d)

Complete and submit the Art.73(5) follow-up report: final root cause analysis, the complete corrective action plan, and evidence that corrective actions have been implemented and verified.

Document the full incident in your Art.72 post-market monitoring system with a closed-loop finding: incident → report → corrective action → verification.
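The closed loop can be enforced as a tiny state machine so no incident is archived before verification — a sketch (enum values and transition table are ours):

```python
from enum import Enum


class IncidentLoopStage(Enum):
    DETECTED = "detected"
    REPORTED = "reported"
    CORRECTIVE_ACTION = "corrective_action"
    VERIFIED = "verified"


# Allowed forward transitions: incident -> report -> corrective action -> verification.
_NEXT_STAGE = {
    IncidentLoopStage.DETECTED: IncidentLoopStage.REPORTED,
    IncidentLoopStage.REPORTED: IncidentLoopStage.CORRECTIVE_ACTION,
    IncidentLoopStage.CORRECTIVE_ACTION: IncidentLoopStage.VERIFIED,
}


def advance(stage: IncidentLoopStage) -> IncidentLoopStage:
    """Move one step along the closed loop; refuse to advance past verification."""
    if stage not in _NEXT_STAGE:
        raise ValueError("Loop already closed: incident is verified.")
    return _NEXT_STAGE[stage]
```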


CLOUD Act × Art.73: The Dual Jurisdiction Problem

Art.73 incident reports and all post-incident records created for EU regulatory purposes are subject to Art.70 confidentiality protections when submitted to the AI Office. However, if your incident response infrastructure — investigation logs, corrective action records, internal post-mortem documents — runs on US cloud infrastructure, those records carry dual-compellability risk:

| Record Type | Art.73/EU Access | CLOUD Act Exposure | Risk Level |
|---|---|---|---|
| Art.12 audit logs (pre-incident) | National MSA (Art.73/Art.69) | YES — if on US infra | HIGH |
| Internal incident investigation records | National MSA investigation | YES — if on US infra | HIGH |
| Art.73 draft reports + legal memos | Attorney-client privilege (limited) | YES — if on US email | MEDIUM |
| Art.53(1)(b) GPAI incident reports | AI Office (Art.70 confidentiality) | YES — if on US storage | HIGH |
| Post-mortem documentation | Art.73(5) MSA submission | YES — if on US infra | HIGH |
| Root cause analysis | Art.72 PMM records → MSA | YES — if on US infra | HIGH |
| Corrective action records | Art.73(5) + Art.72 evidence | YES — if on US infra | MEDIUM |

The Compellability Gap

Art.70 EU confidentiality applies to documentation submitted to the AI Office or national MSAs. It does not apply to the same documentation stored on your own infrastructure. A US DOJ national security investigation can compel your AWS S3 bucket containing the full incident investigation record under the CLOUD Act — without EU judicial process — while the same documents enjoy EU confidentiality protections once transmitted to regulators.

This creates a practical paradox: the more thorough your incident investigation (better for EU regulators), the more material US authorities can potentially access via CLOUD Act.

EU-Native Infrastructure as Single-Regime Defense

Storing all incident-related records on EU-native infrastructure eliminates the parallel CLOUD Act exposure.

With EU-native hosting, Art.73 incident records are subject exclusively to EU legal process. There is no parallel US compellability track because there is no US infrastructure to compel.


Python Implementation

SeriousIncidentReport

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional


class IncidentHarmCategory(Enum):
    """Art.3(49) serious incident harm categories."""
    DEATH_OR_HEALTH_RISK = "death_or_health_risk"        # 2-working-day deadline
    CRITICAL_INFRASTRUCTURE = "critical_infrastructure"  # 15-calendar-day deadline
    FUNDAMENTAL_RIGHTS = "fundamental_rights"            # 15-calendar-day deadline
    PROPERTY_ENVIRONMENT = "property_environment"        # 15-calendar-day deadline
    NEAR_MISS = "near_miss"                              # "might have led" — assess carefully


class ReportStatus(Enum):
    DRAFT = "draft"
    PRELIMINARY_SUBMITTED = "preliminary_submitted"  # initial report before full investigation
    FOLLOW_UP_SUBMITTED = "follow_up_submitted"      # Art.73(5) complete report
    CLOSED = "closed"


class ReportingAuthority(Enum):
    NATIONAL_MSA = "national_msa"            # Art.73(1) — all high-risk AI
    AI_OFFICE_COMMISSION = "ai_office"       # Art.53(1)(b) — GPAI systemic risk
    SECTORAL_AUTHORITY = "sectoral_authority"  # Art.73(6) — critical infrastructure
    NIS2_CSIRT = "nis2_csirt"               # NIS2 Art.23 — parallel obligation


@dataclass
class SeriousIncidentReport:
    """
    EU AI Act Art.73 serious incident report for high-risk AI systems.
    Tracks both initial notification and Art.73(5) follow-up obligations.
    """

    # Incident identity
    incident_id: str
    ai_system_eu_db_id: str                   # Art.71 EU database registration ID
    ai_system_name: str
    provider_name: str
    provider_eu_representative: Optional[str] = None  # if non-EU provider

    # Incident details
    incident_datetime: datetime = field(default_factory=datetime.utcnow)
    awareness_datetime: datetime = field(default_factory=datetime.utcnow)  # clock start
    harm_category: IncidentHarmCategory = IncidentHarmCategory.NEAR_MISS
    member_state_of_occurrence: str = ""      # ISO 3166-1 alpha-2: "DE", "FR", "IE"
    incident_description: str = ""

    # Causal assessment
    causal_link_established: bool = False
    reasonable_likelihood_of_link: bool = False  # sufficient to trigger reporting
    causal_assessment_narrative: str = ""

    # Harm scope
    persons_affected: int = 0
    person_deceased: bool = False
    health_safety_risk: bool = False           # triggers 2-working-day rule

    # GPAI component
    gpai_component_involved: bool = False
    gpai_model_id: Optional[str] = None
    gpai_systemic_risk_classified: bool = False  # Art.51 classification

    # Corrective actions
    immediate_actions_taken: list[str] = field(default_factory=list)
    system_taken_offline: bool = False
    planned_corrective_actions: list[str] = field(default_factory=list)
    root_cause_summary: str = ""

    # Reporting status
    status: ReportStatus = ReportStatus.DRAFT
    preliminary_report_submitted_at: Optional[datetime] = None
    follow_up_report_submitted_at: Optional[datetime] = None

    # Working days calculation for 2-day rule
    _working_days_exclude_weekends: bool = True

    @property
    def requires_2_day_report(self) -> bool:
        """Art.73(2): 2-working-day deadline for death or health/safety risk."""
        return (
            self.harm_category == IncidentHarmCategory.DEATH_OR_HEALTH_RISK
            or self.person_deceased
            or self.health_safety_risk
        )

    @property
    def reporting_deadline(self) -> datetime:
        """Calculate Art.73(2) reporting deadline from awareness datetime."""
        if self.requires_2_day_report:
            # 2 working days — simplified: add 2 days, extend over weekends
            deadline = self.awareness_datetime
            working_days_added = 0
            while working_days_added < 2:
                deadline += timedelta(days=1)
                if deadline.weekday() < 5:  # Monday=0, Friday=4
                    working_days_added += 1
            return deadline
        else:
            # 15 calendar days
            return self.awareness_datetime + timedelta(days=15)

    @property
    def hours_remaining(self) -> float:
        return (self.reporting_deadline - datetime.utcnow()).total_seconds() / 3600

    @property
    def is_overdue(self) -> bool:
        return datetime.utcnow() > self.reporting_deadline

    @property
    def requires_gpai_parallel_report(self) -> bool:
        """Art.53(1)(b): parallel AI Office notification if GPAI systemic risk component."""
        return self.gpai_component_involved and self.gpai_systemic_risk_classified

    @property
    def reporting_authorities(self) -> list[ReportingAuthority]:
        """Determine all applicable reporting authorities for this incident."""
        authorities = [ReportingAuthority.NATIONAL_MSA]  # always required

        if self.requires_gpai_parallel_report:
            authorities.append(ReportingAuthority.AI_OFFICE_COMMISSION)

        # Art.73(6): sectoral authority reporting applies to critical
        # infrastructure incidents (Annex III, category 2)
        if self.harm_category == IncidentHarmCategory.CRITICAL_INFRASTRUCTURE:
            authorities.append(ReportingAuthority.SECTORAL_AUTHORITY)

        return authorities

    def get_report_summary(self) -> dict:
        return {
            "incident_id": self.incident_id,
            "ai_system": self.ai_system_name,
            "eu_db_id": self.ai_system_eu_db_id,
            "harm_category": self.harm_category.value,
            "member_state": self.member_state_of_occurrence,
            "deadline": self.reporting_deadline.isoformat(),
            "hours_remaining": round(self.hours_remaining, 1),
            "is_overdue": self.is_overdue,
            "requires_2_day_report": self.requires_2_day_report,
            "requires_gpai_parallel_report": self.requires_gpai_parallel_report,
            "reporting_authorities": [a.value for a in self.reporting_authorities],
            "causal_link": self.causal_link_established or self.reasonable_likelihood_of_link,
            "status": self.status.value,
        }

HighRiskAIIncidentReporter

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
import uuid

# Assumes IncidentHarmCategory, ReportStatus, and SeriousIncidentReport from
# the previous section are defined in the same module.


@dataclass
class IncidentTriage:
    """
    Art.3(49) triage result: classifies whether an event qualifies as a
    serious incident and determines the applicable reporting timeline.
    """
    event_description: str
    is_serious_incident: bool
    harm_category: Optional[IncidentHarmCategory]
    triage_rationale: str
    triaged_at: datetime = field(default_factory=datetime.utcnow)
    triaged_by: str = ""  # compliance officer name/id


class HighRiskAIIncidentReporter:
    """
    Manages the full Art.73 incident reporting lifecycle for a high-risk AI system.
    Implements the 7-step response runbook in code.
    """

    def __init__(
        self,
        ai_system_name: str,
        eu_db_id: str,
        provider_name: str,
        gpai_component: Optional[str] = None,
        gpai_systemic_risk: bool = False,
    ):
        self.ai_system_name = ai_system_name
        self.eu_db_id = eu_db_id
        self.provider_name = provider_name
        self.gpai_component = gpai_component
        self.gpai_systemic_risk = gpai_systemic_risk
        self.active_incidents: dict[str, SeriousIncidentReport] = {}

    def triage_event(
        self,
        event_description: str,
        has_death: bool = False,
        has_health_risk: bool = False,
        has_infrastructure_disruption: bool = False,
        has_fundamental_rights_violation: bool = False,
        has_property_damage: bool = False,
        is_near_miss: bool = False,
        triaged_by: str = "compliance_system",
    ) -> IncidentTriage:
        """
        Step 2 of the runbook: triage classification.
        Returns IncidentTriage with is_serious_incident determination.
        """
        if has_death or has_health_risk:
            harm_cat = IncidentHarmCategory.DEATH_OR_HEALTH_RISK
            rationale = "Death or health/safety risk — Art.3(49)(a) triggered. 2-working-day deadline."
            return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)

        if has_infrastructure_disruption:
            harm_cat = IncidentHarmCategory.CRITICAL_INFRASTRUCTURE
            rationale = "Critical infrastructure disruption — Art.3(49)(b) triggered. 15-calendar-day deadline."
            return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)

        if has_fundamental_rights_violation:
            harm_cat = IncidentHarmCategory.FUNDAMENTAL_RIGHTS
            rationale = "Fundamental rights violation — Art.3(49)(c) triggered. 15-calendar-day deadline."
            return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)

        if has_property_damage:
            harm_cat = IncidentHarmCategory.PROPERTY_ENVIRONMENT
            rationale = "Property or environmental damage — Art.3(49)(d) triggered. 15-calendar-day deadline."
            return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)

        if is_near_miss:
            harm_cat = IncidentHarmCategory.NEAR_MISS
            rationale = "Near-miss event — Art.3(49) 'might have led' scope. Assess for causal chain. May require reporting."
            return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)

        return IncidentTriage(
            event_description, False, None,
            "No Art.3(49) harm category applicable. Document decision; standard post-market monitoring applies.",
            triaged_by=triaged_by,
        )

    def create_incident(
        self,
        triage: IncidentTriage,
        member_state: str,
        causal_link_established: bool = False,
        reasonable_likelihood: bool = True,
        person_deceased: bool = False,
    ) -> SeriousIncidentReport:
        """Step 3-4: Create the incident record and start the reporting clock."""
        if not triage.is_serious_incident:
            raise ValueError("Cannot create Art.73 incident from non-serious event triage.")

        is_health_category = (
            triage.harm_category == IncidentHarmCategory.DEATH_OR_HEALTH_RISK
        )
        incident = SeriousIncidentReport(
            incident_id=str(uuid.uuid4()),
            ai_system_eu_db_id=self.eu_db_id,
            ai_system_name=self.ai_system_name,
            provider_name=self.provider_name,
            harm_category=triage.harm_category,
            member_state_of_occurrence=member_state,
            causal_link_established=causal_link_established,
            reasonable_likelihood_of_link=reasonable_likelihood,
            # A death must be recorded explicitly — a health/safety risk
            # alone does not imply a fatality.
            person_deceased=person_deceased,
            health_safety_risk=is_health_category,
            gpai_component_involved=bool(self.gpai_component),
            gpai_model_id=self.gpai_component,
            gpai_systemic_risk_classified=self.gpai_systemic_risk,
        )

        self.active_incidents[incident.incident_id] = incident
        return incident

    def submit_preliminary_report(
        self,
        incident_id: str,
        immediate_actions: list[str],
        system_taken_offline: bool = False,
    ) -> dict:
        """Step 4-5: Mark preliminary report as submitted to national MSA."""
        incident = self.active_incidents[incident_id]
        incident.immediate_actions_taken = immediate_actions
        incident.system_taken_offline = system_taken_offline
        incident.status = ReportStatus.PRELIMINARY_SUBMITTED
        incident.preliminary_report_submitted_at = datetime.utcnow()

        result = {
            "report_id": incident_id,
            "submitted_at": incident.preliminary_report_submitted_at.isoformat(),
            "reporting_authorities": [a.value for a in incident.reporting_authorities],
            "follow_up_required": True,
            "follow_up_guidance": "Submit Art.73(5) complete report with root cause analysis and corrective actions.",
        }

        if incident.requires_gpai_parallel_report:
            result["gpai_parallel_report"] = {
                "authority": "EU AI Office / Commission",
                "basis": "Art.53(1)(b) GPAI model systemic risk",
                "gpai_model": self.gpai_component,
                "note": "Submit to AI Office separately — different authority, different timeline ('without undue delay').",
            }

        return result

    def compliance_status_report(self) -> list[dict]:
        """Return status of all active incidents with deadline tracking."""
        return [
            {
                **incident.get_report_summary(),
                "action_required": (
                    "SUBMIT IMMEDIATELY — OVERDUE" if incident.is_overdue
                    else f"SUBMIT WITHIN {incident.hours_remaining:.0f}h" if incident.hours_remaining < 24
                    else "ON TRACK"
                ),
            }
            for incident in self.active_incidents.values()
            if incident.status != ReportStatus.CLOSED
        ]

40-Item Compliance Checklist: Art.73 Serious Incident Reporting

Section 1: Incident Scope & Classification — Items 1-8

Section 2: Reporting Infrastructure — Items 9-16

Section 3: Timeline Management — Items 17-24

Section 4: GPAI Component Dual Reporting — Items 25-32

Section 5: CLOUD Act & Infrastructure Jurisdiction — Items 33-40


See Also