EU AI Act Art.73: Serious Incident Reporting for High-Risk AI Systems — Developer Guide (2026)
EU AI Act Article 73 establishes a mandatory incident reporting obligation for providers of high-risk AI systems operating in the EU market. When a high-risk AI system causes or contributes to a serious incident — defined as an unintended malfunction linked to death, serious health harm, fundamental rights violations, or major infrastructure disruption — the provider must report to national market surveillance authorities within 2 working days (death or health/safety risk) or 15 calendar days (other serious harm).
This is not an abstract compliance checkbox. Art.73 creates a real-time regulatory reporting obligation that requires operational infrastructure: incident detection systems, classification workflows, reporting pipelines, and legal counsel readiness. The same incident that triggers Art.73 (high-risk AI) may also trigger Art.53(1)(b) (GPAI models with systemic risk) — creating dual notification to two different authorities under two different timelines.
This guide covers Art.73 in full: the serious incident definition, provider and deployer obligations, reporting timelines, the Art.73 × Art.53(1)(b) dual reporting framework for GPAI components, CLOUD Act jurisdiction risk for incident records, and Python implementation for a production-grade incident reporting system.
Art.73 became applicable on 2 August 2026 (high-risk AI systems in Annex III from the date of application; the general Annex III application period under Art.111 extends some categories). The post-market monitoring framework (Chapter VIII, Art.72-74) applies to all high-risk AI systems placed on the EU market from that date.
Why Art.73 Matters for Developers
Most AI Act developer guides focus on pre-deployment compliance (Art.9 risk management, Art.10 training data, Art.12 logging). Art.73 is different — it is a live operational obligation that activates when something goes wrong after deployment.
For developers and AI providers, Art.73 means:
- You need incident detection infrastructure — you cannot report what you cannot detect. Art.12 logging + Art.30 post-market monitoring feed Art.73 reporting
- The clock starts at awareness, not investigation completion — "became aware" triggers the timeline; 2 or 15 days is short
- Deployers can trigger your obligation — if a deployer reports an incident to you under Art.73(3), your Art.73(1) clock starts
- A GPAI component doubles your reporting burden — Art.73 (national MSA) + Art.53(1)(b) (AI Office) simultaneously
- Failure to report has significant penalties — non-compliance with post-market monitoring and incident reporting provisions falls under Art.99(5): fines up to €15M or 3% of global annual turnover
Art.73 at a Glance
| Provision | Subject | Key Developer Obligation |
|---|---|---|
| Art.73(1) | Provider reporting obligation | Report serious incident to national MSA where it occurred |
| Art.73(2) | Reporting timeline | 2 working days (death/health risk), 15 calendar days (other) |
| Art.73(3) | Deployer obligation | Notify provider + relevant national authority immediately |
| Art.73(4) | MSA cascade | MSA notifies Commission + other Member States |
| Art.73(5) | Provider follow-up | Submit complete information + corrective actions to MSA |
| Art.73(6) | Critical infrastructure | Additional sectoral authority reporting requirement |
| Art.53(1)(b) | GPAI systemic risk parallel | Commission/AI Office notification for GPAI incidents (separate timeline) |
What Is a "Serious Incident"? — Art.3(49)
The EU AI Act defines "serious incident" in Art.3(49):
An incident or a malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following: (a) the death of a person or serious harm to a person's health; (b) a serious and irreversible disruption of the management and operation of critical infrastructure; (c) the infringement of obligations under Union law intended to protect fundamental rights; (d) serious damage to property or the environment.
Unpacking the Definition for Developers
Four harm categories — each with distinct developer implications:
Category (a): Death or serious health harm This is the highest-urgency trigger. "Serious harm to health" includes:
- Permanent or long-term injury requiring medical attention
- Disability or incapacity resulting from AI system behavior
- Conditions requiring emergency intervention or hospitalization
Developer implication: Medical AI, triage systems, drug interaction checkers, radiation dosage assistants, autonomous surgical guidance — any high-risk AI system in Annex III category 5 (health and safety) must have incident classification logic for health outcomes.
Category (b): Critical infrastructure disruption This covers irreversible disruption of energy, water, transport, financial, or digital infrastructure systems. "Serious and irreversible" is the qualifying threshold — transient outages likely do not qualify.
Developer implication: AI systems managing SCADA operations, grid balancing, traffic management, or financial market stability algorithms have specific incident categorization obligations under both Art.73 and NIS2 (dual reporting — see our NIS2 × AI Act guide).
Category (c): Fundamental rights infringement This is the broadest category. Union law protecting fundamental rights includes GDPR, the Charter of Fundamental Rights, anti-discrimination directives, and consumer protection legislation. An AI decision that violates GDPR automated decision rights (Art.22) or discriminates on prohibited grounds can qualify as a serious incident.
Developer implication: High-risk AI systems in Annex III categories 1 (biometrics), 2 (critical infrastructure), 6 (employment), and 8 (law enforcement) have elevated exposure here.
Category (d): Serious damage to property or environment Major material damage or environmental harm caused by or contributed to by the AI system's malfunction or erroneous output.
The "Might Have Led" Standard
Note the standard: Art.3(49) includes incidents that "might have led" to the listed harms — not only those that actually did. Near-miss incidents where serious harm was narrowly avoided are within scope. A drone navigation AI that nearly caused a collision but corrected course is a potential Art.73 incident even if no harm materialized.
This broadens the reporting obligation significantly: you are not reporting only post-harm events, but also near-misses where the causal chain was present but harm did not fully materialize.
Art.73(1): Provider Reporting Obligation
Who Must Report
Art.73(1) applies to providers of high-risk AI systems placed on the Union market. The definition of "provider" (Art.3(3)) means:
- The natural or legal person who develops or has an AI system developed and places it on the market or puts it into service under their own name or trademark
- This includes importers and authorised representatives who place systems on the EU market
- It does NOT directly apply to deployers (users of AI systems) — deployers have their own obligation under Art.73(3)
If you develop a high-risk AI system and license it to customers who deploy it, you are the provider. If a serious incident occurs in a customer deployment, the Art.73 notification obligation rests primarily with you.
What to Report
The incident report to the national MSA must include:
- Identity of the provider and the AI system (with EU database registration number under Art.71)
- Nature of the serious incident — the harm category under Art.3(49)
- Time and location of the incident
- Causal analysis — the link between the AI system and the harm (or the reasonable likelihood of such a link)
- Affected persons and systems — number and nature of those impacted
- Corrective actions taken or planned — immediate containment and longer-term remediation
The Causal Link Trigger
Art.73(1) specifies the reporting obligation activates when the provider "has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link."
This is a dual trigger:
- Established causal link: You have evidence that the AI system's behavior contributed to the harm
- Reasonable likelihood: You have sufficient reason to believe causation is probable even before full investigation
The "reasonable likelihood" standard means you cannot defer reporting until investigation is complete. If a patient died and your AI-assisted diagnostic tool was involved in the care pathway, a reasonable likelihood exists immediately — you must report within the applicable timeline while investigation continues. You can submit updated and corrected reports as investigation progresses.
Art.73(2): Reporting Timelines — The 2-Day and 15-Day Rules
Art.73(2) establishes two reporting timelines:
| Harm Type | Reporting Deadline | Clock Start |
|---|---|---|
| Death of a person or risk to health/safety | 2 working days | Date of becoming aware |
| Other serious incidents (property, environment, fundamental rights, infrastructure) | 15 calendar days | Date of becoming aware |
The 2-Working-Day Rule
This is the most urgent obligation in the EU AI Act. If a serious incident involves:
- The death of a natural person, or
- A risk to the health or safety of persons (even without confirmed death)
...the provider must file the initial report within 2 working days of becoming aware.
"Working days" excludes weekends and public holidays in the Member State where the incident occurred. Practically, this means your incident response team needs to be able to initiate regulatory notification within hours of incident detection — not days. Legal review, report drafting, and submission workflows must be pre-built, not constructed ad hoc after an incident.
What "becoming aware" means: Awareness is triggered at the organizational level — when any person within the provider's organization with relevant responsibility becomes aware, the clock starts. This includes:
- Internal incident monitoring systems flagging an event
- Customer (deployer) notification under Art.73(3)
- Media reports about incidents linked to your system
- Government authority notification
You cannot delay the clock by keeping awareness confined to non-reporting personnel.
The 15-Calendar-Day Rule
For serious incidents not involving death or health/safety risk — primarily fundamental rights violations, property damage, environmental harm, and critical infrastructure disruption — the reporting window is 15 calendar days from awareness.
15 days is still operationally tight. Initial investigations, root cause identification, and initial corrective actions must be underway within this window. The initial report can be preliminary; Art.73(5) allows submission of follow-up reports with additional information and final corrective action plans.
MSA May Shorten Deadlines
National market surveillance authorities may — and some are expected to — set shorter deadlines than Art.73(2) for specific sectors or incident categories. Healthcare regulators in Germany (BfArM), France (ANSM), and Ireland (HPRA) have existing incident reporting frameworks under MDR/IVDR that may be integrated or extended to AI Act requirements, potentially with shorter timelines for medical device-adjacent AI systems.
Art.73(3): Deployer Obligation — Notifying Providers
Deployers who use a high-risk AI system (as defined in Art.3(4)) and "have reason to believe a serious incident has occurred" must:
- Immediately notify the provider of the AI system
- Notify the relevant market surveillance authority in the Member State where they operate
The deployer notification to the provider triggers the provider's Art.73(1) clock. This creates a direct operational dependency: if you build a high-risk AI system and license it to enterprise customers, those customers may initiate your Art.73 reporting obligation. You need:
- API or portal for deployer incident submissions — structured, timestamped, with acknowledgment
- Escalation path from customer success to compliance — deployer reports must reach your Art.73 reporting team immediately
- Contract terms confirming Art.73(3) compliance — deployers should be contractually required to notify you immediately (not within standard SLA windows)
Art.73(4-5): The Reporting Cascade
Art.73(4): MSA to Commission + Member States
Upon receiving an Art.73 incident report, the national MSA is required to:
- Immediately notify the European Commission
- Notify other Member States through the RAPEX/ICSMS information exchange system
- Inform relevant public authorities monitoring fundamental rights obligations
This cascade means that an incident reported in Germany reaches the Irish MSA, the Italian MSA, and the Commission within days. If your AI system is deployed across multiple EU member states, a single incident creates multi-jurisdictional regulatory awareness simultaneously.
Art.73(5): Provider Follow-Up Reporting
After the initial (possibly preliminary) report, the provider must submit to the MSA:
- Complete incident information — updated causal analysis, full scope of harm
- Corrective actions taken — immediate containment, system patches, model rollback
- Planned preventive measures — systematic fixes to prevent recurrence
- Root cause analysis — underlying technical or process failure
There is no fixed deadline for the follow-up report in Art.73(5), but "without undue delay" applies — regulators expect follow-up within weeks, not months.
Art.73(6): Critical Infrastructure — Sectoral Authority Reporting
For high-risk AI systems used in critical infrastructure (Annex III, category 2), Art.73(6) adds an additional layer: providers must also report to the competent authority designated under sector-specific Union legislation.
This means:
- Energy sector AI: Report under Art.73 to MSA + to ACER (Agency for the Cooperation of Energy Regulators) or national energy regulator
- Financial sector AI: Report under Art.73 to MSA + to ECB/national prudential authority + DORA notification obligations (DORA Art.17)
- Water/transport/health: Report to MSA + relevant sectoral competent authority
The NIS2 Directive creates a parallel incident reporting pathway for essential and important entities with 24h early warning + 72h full notification to CSIRTs. An AI incident in critical infrastructure may simultaneously trigger:
- Art.73 EU AI Act (MSA, 2-day/15-day)
- NIS2 Art.23 (CSIRT, 24h/72h)
- DORA Art.17 if financial sector (48h/1-month)
These are not substitutable — each serves a different regulatory purpose and goes to a different authority.
Dual Reporting: Art.73 × Art.53(1)(b) for GPAI Components
This is the highest-complexity scenario in EU AI Act incident reporting: a high-risk AI system that uses a GPAI model with systemic risk as a core component.
When Both Apply
If your high-risk AI product integrates a GPAI model that:
- Has been classified as having systemic risk (Art.51) — typically models trained with >10^25 FLOPs
- That GPAI model's behavior contributed to or caused the serious incident
...then two parallel reporting obligations activate:
| Obligation | Authority | Timeline | Who Triggers |
|---|---|---|---|
| Art.73(1) EU AI Act | National MSA of incident location | 2 days (health/death) / 15 days (other) | Provider of the high-risk AI system |
| Art.53(1)(b) EU AI Act | European Commission / AI Office | "Without undue delay" | Provider of the GPAI model |
The Two Different Providers
In a typical GPAI integration scenario:
- You (deployer of GPAI API, provider of the high-risk AI system) have Art.73 obligations
- The GPAI model provider (OpenAI, Anthropic, Google, Meta/Llama fine-tune deployer) has Art.53(1)(b) obligations
Your Art.73 report to the national MSA must include information about the GPAI component — model ID, version, the specific capability involved. The national MSA will likely coordinate with the AI Office if a GPAI component is implicated.
If You Are Both Provider and GPAI Operator
If you build and deploy your own GPAI-based high-risk AI system — for example, you fine-tuned a base model past the 10^25 FLOPs threshold and deployed it in a medical diagnosis context — you have obligations under:
- Art.52: GPAI general obligations (technical documentation, model card)
- Art.53: GPAI systemic risk obligations (adversarial testing, incident reporting to Commission)
- Art.73: High-risk AI incident reporting (to national MSA)
A single incident in this configuration generates:
- Art.73 report → national MSA (2 or 15 days)
- Art.53(1)(b) report → AI Office/Commission (without undue delay)
- Potentially Art.73(6) report → sectoral authority (if critical infrastructure)
Practical implication: Your incident response runbook must have parallel tracks for national MSA notification and AI Office notification, with coordinated messaging to ensure consistency.
Art.73 Incident Response Runbook — 7-Step Developer Guide
Step 1: Detection (T+0)
Your Art.12 logging infrastructure generates the signal. Incident sources:
- Automated anomaly detection on model outputs
- Deployer/customer incident report (Art.73(3) trigger)
- End-user complaint escalation
- Media/public report
Immediate action: Create timestamped incident ticket. Assign incident commander. The Art.73 clock starts now.
Step 2: Triage — Serious Incident Classification (T+0 to T+2h)
Classify against Art.3(49):
- Does the AI system output have a causal link (or reasonable likelihood) to: death, serious health harm, critical infrastructure disruption, fundamental rights violation, serious property/environmental damage?
- YES → Art.73 applies, advance to Step 3
- NO → Document triage decision with rationale; standard post-market monitoring applies
Determine timeline bucket:
- Death or health/safety risk → 2-working-day clock
- All other serious incidents → 15-calendar-day clock
Step 3: Legal & Regulatory Notification (T+2h to T+4h)
Notify:
- In-house legal counsel or external DPA/AI Act advisor
- Compliance officer / DPO (if GDPR-adjacent incident)
- C-suite if death or major health event
Identify the national MSA of the member state where the incident occurred (Art.73(1)). If incident occurred in multiple member states simultaneously, notify the primary state first; cross-border cascade follows through Commission notification (Art.73(4)).
Step 4: Initial Report Preparation (T+4h to T+24h)
Draft initial Art.73 report:
- Provider identity and AI system registration (Art.71 EU database ID)
- Incident description and harm category
- Timeline of events (detection → harm)
- Initial causal assessment (reasonable likelihood is sufficient)
- Immediate corrective actions taken (model offline, output blocked, etc.)
Submit preliminary report if 2-day deadline applies and investigation is not yet complete. You can and should submit supplementary reports as investigation progresses.
Step 5: GPAI Component Check (T+4h to T+24h — parallel)
Determine if a GPAI model with systemic risk is involved:
- Is the GPAI model classified under Art.51 (10^25 FLOPs threshold or Commission designation)?
- Did the GPAI model's behavior contribute to the incident?
- YES → Initiate Art.53(1)(b) notification to AI Office (parallel to Art.73 MSA report)
- Also notify the upstream GPAI provider under your contractual and Art.55 entitlements
Step 6: MSA Submission (before deadline)
Submit through the national MSA's designated reporting channel (portal, email, registered post — each MSA has a defined mechanism; verify for each target market). Include:
- The Art.73 report (preliminary if needed)
- System documentation reference (Annex IV technical documentation)
- Corrective action status
Step 7: Follow-Up Reporting and Root Cause Analysis (T+15d to T+30d)
Complete and submit the Art.73(5) follow-up report:
- Full root cause analysis
- Impact assessment (persons affected, harm extent)
- Definitive corrective actions implemented
- Systemic prevention measures
- Post-market monitoring update (Art.72) with new risk scenario
Document the full incident in your Art.30 post-market monitoring system with a closed-loop finding: incident → report → corrective action → verification.
CLOUD Act × Art.73: The Dual Jurisdiction Problem
Art.73 incident reports and all post-incident records created for EU regulatory purposes are subject to Art.70 confidentiality protections when submitted to the AI Office. However, if your incident response infrastructure — investigation logs, corrective action records, internal post-mortem documents — runs on US cloud infrastructure, those records carry dual-compellability risk:
| Record Type | Art.73/EU Access | CLOUD Act Exposure | Risk Level |
|---|---|---|---|
| Art.12 audit logs (pre-incident) | National MSA (Art.73/Art.69) | YES — if on US infra | HIGH |
| Internal incident investigation records | National MSA investigation | YES — if on US infra | HIGH |
| Art.73 draft reports + legal memos | Attorney-client privilege (limited) | YES — if on US email | MEDIUM |
| Art.53(1)(b) GPAI incident reports | AI Office (Art.70 confidentiality) | YES — if on US storage | HIGH |
| Post-mortem documentation | Art.73(5) MSA submission | YES — if on US infra | HIGH |
| Root cause analysis | Art.30 PMM records → MSA | YES — if on US infra | HIGH |
| Corrective action records | Art.73(5) + Art.30 evidence | YES — if on US infra | MEDIUM |
The Compellability Gap
Art.70 EU confidentiality applies to documentation submitted to the AI Office or national MSAs. It does not apply to the same documentation stored on your own infrastructure. A US DOJ national security investigation can compel your AWS S3 bucket containing the full incident investigation record under the CLOUD Act — without EU judicial process — while the same documents enjoy EU confidentiality protections once transmitted to regulators.
This creates a practical paradox: the more thorough your incident investigation (better for EU regulators), the more material US authorities can potentially access via CLOUD Act.
EU-Native Infrastructure as Single-Regime Defense
Storing all incident-related records on EU-native infrastructure eliminates CLOUD Act exposure:
- Art.12 audit logs → EU-hosted log management (no US-law reach)
- Incident investigation workspace → EU-jurisdiction document management and collaboration
- Communication with regulators → EU-hosted secure email
- Post-mortem records → EU-native storage with EU-only legal access
With EU-native hosting, Art.73 incident records are subject exclusively to EU legal process. There is no parallel US compellability track because there is no US infrastructure to compel.
Python Implementation
SeriousIncidentReport
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional
class IncidentHarmCategory(Enum):
"""Art.3(49) serious incident harm categories."""
DEATH_OR_HEALTH_RISK = "death_or_health_risk" # 2-working-day deadline
CRITICAL_INFRASTRUCTURE = "critical_infrastructure" # 15-calendar-day deadline
FUNDAMENTAL_RIGHTS = "fundamental_rights" # 15-calendar-day deadline
PROPERTY_ENVIRONMENT = "property_environment" # 15-calendar-day deadline
NEAR_MISS = "near_miss" # "might have led" — assess carefully
class ReportStatus(Enum):
DRAFT = "draft"
PRELIMINARY_SUBMITTED = "preliminary_submitted" # initial report before full investigation
FOLLOW_UP_SUBMITTED = "follow_up_submitted" # Art.73(5) complete report
CLOSED = "closed"
class ReportingAuthority(Enum):
NATIONAL_MSA = "national_msa" # Art.73(1) — all high-risk AI
AI_OFFICE_COMMISSION = "ai_office" # Art.53(1)(b) — GPAI systemic risk
SECTORAL_AUTHORITY = "sectoral_authority" # Art.73(6) — critical infrastructure
NIS2_CSIRT = "nis2_csirt" # NIS2 Art.23 — parallel obligation
@dataclass
class SeriousIncidentReport:
"""
EU AI Act Art.73 serious incident report for high-risk AI systems.
Tracks both initial notification and Art.73(5) follow-up obligations.
"""
# Incident identity
incident_id: str
ai_system_eu_db_id: str # Art.71 EU database registration ID
ai_system_name: str
provider_name: str
provider_eu_representative: Optional[str] = None # if non-EU provider
# Incident details
incident_datetime: datetime = field(default_factory=datetime.utcnow)
awareness_datetime: datetime = field(default_factory=datetime.utcnow) # clock start
harm_category: IncidentHarmCategory = IncidentHarmCategory.NEAR_MISS
member_state_of_occurrence: str = "" # ISO 3166-1 alpha-2: "DE", "FR", "IE"
incident_description: str = ""
# Causal assessment
causal_link_established: bool = False
reasonable_likelihood_of_link: bool = False # sufficient to trigger reporting
causal_assessment_narrative: str = ""
# Harm scope
persons_affected: int = 0
person_deceased: bool = False
health_safety_risk: bool = False # triggers 2-working-day rule
# GPAI component
gpai_component_involved: bool = False
gpai_model_id: Optional[str] = None
gpai_systemic_risk_classified: bool = False # Art.51 classification
# Corrective actions
immediate_actions_taken: list[str] = field(default_factory=list)
system_taken_offline: bool = False
planned_corrective_actions: list[str] = field(default_factory=list)
root_cause_summary: str = ""
# Reporting status
status: ReportStatus = ReportStatus.DRAFT
preliminary_report_submitted_at: Optional[datetime] = None
follow_up_report_submitted_at: Optional[datetime] = None
# Working days calculation for 2-day rule
_working_days_exclude_weekends: bool = True
@property
def requires_2_day_report(self) -> bool:
"""Art.73(2): 2-working-day deadline for death or health/safety risk."""
return (
self.harm_category == IncidentHarmCategory.DEATH_OR_HEALTH_RISK
or self.person_deceased
or self.health_safety_risk
)
@property
def reporting_deadline(self) -> datetime:
"""Calculate Art.73(2) reporting deadline from awareness datetime."""
if self.requires_2_day_report:
# 2 working days — simplified: add 2 days, extend over weekends
deadline = self.awareness_datetime
working_days_added = 0
while working_days_added < 2:
deadline += timedelta(days=1)
if deadline.weekday() < 5: # Monday=0, Friday=4
working_days_added += 1
return deadline
else:
# 15 calendar days
return self.awareness_datetime + timedelta(days=15)
@property
def hours_remaining(self) -> float:
return (self.reporting_deadline - datetime.utcnow()).total_seconds() / 3600
@property
def is_overdue(self) -> bool:
return datetime.utcnow() > self.reporting_deadline
@property
def requires_gpai_parallel_report(self) -> bool:
"""Art.53(1)(b): parallel AI Office notification if GPAI systemic risk component."""
return self.gpai_component_involved and self.gpai_systemic_risk_classified
@property
def reporting_authorities(self) -> list[ReportingAuthority]:
"""Determine all applicable reporting authorities for this incident."""
authorities = [ReportingAuthority.NATIONAL_MSA] # always required
if self.requires_gpai_parallel_report:
authorities.append(ReportingAuthority.AI_OFFICE_COMMISSION)
# Art.73(6): add sectoral authority if critical infrastructure context
# (caller must set this explicitly)
return authorities
def get_report_summary(self) -> dict:
return {
"incident_id": self.incident_id,
"ai_system": self.ai_system_name,
"eu_db_id": self.ai_system_eu_db_id,
"harm_category": self.harm_category.value,
"member_state": self.member_state_of_occurrence,
"deadline": self.reporting_deadline.isoformat(),
"hours_remaining": round(self.hours_remaining, 1),
"is_overdue": self.is_overdue,
"requires_2_day_report": self.requires_2_day_report,
"requires_gpai_parallel_report": self.requires_gpai_parallel_report,
"reporting_authorities": [a.value for a in self.reporting_authorities],
"causal_link": self.causal_link_established or self.reasonable_likelihood_of_link,
"status": self.status.value,
}
HighRiskAIIncidentReporter
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional
import uuid
@dataclass
class IncidentTriage:
"""
Art.3(49) triage result: classifies whether an event qualifies as a
serious incident and determines the applicable reporting timeline.
"""
event_description: str
is_serious_incident: bool
harm_category: Optional[IncidentHarmCategory]
triage_rationale: str
triaged_at: datetime = field(default_factory=datetime.utcnow)
triaged_by: str = "" # compliance officer name/id
class HighRiskAIIncidentReporter:
"""
Manages the full Art.73 incident reporting lifecycle for a high-risk AI system.
Implements the 7-step response runbook in code.
"""
def __init__(
self,
ai_system_name: str,
eu_db_id: str,
provider_name: str,
gpai_component: Optional[str] = None,
gpai_systemic_risk: bool = False,
):
self.ai_system_name = ai_system_name
self.eu_db_id = eu_db_id
self.provider_name = provider_name
self.gpai_component = gpai_component
self.gpai_systemic_risk = gpai_systemic_risk
self.active_incidents: dict[str, SeriousIncidentReport] = {}
def triage_event(
self,
event_description: str,
has_death: bool = False,
has_health_risk: bool = False,
has_infrastructure_disruption: bool = False,
has_fundamental_rights_violation: bool = False,
has_property_damage: bool = False,
is_near_miss: bool = False,
triaged_by: str = "compliance_system",
) -> IncidentTriage:
"""
Step 2 of the runbook: triage classification.
Returns IncidentTriage with is_serious_incident determination.
"""
if has_death or has_health_risk:
harm_cat = IncidentHarmCategory.DEATH_OR_HEALTH_RISK
rationale = "Death or health/safety risk — Art.3(49)(a) triggered. 2-working-day deadline."
return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)
if has_infrastructure_disruption:
harm_cat = IncidentHarmCategory.CRITICAL_INFRASTRUCTURE
rationale = "Critical infrastructure disruption — Art.3(49)(b) triggered. 15-calendar-day deadline."
return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)
if has_fundamental_rights_violation:
harm_cat = IncidentHarmCategory.FUNDAMENTAL_RIGHTS
rationale = "Fundamental rights violation — Art.3(49)(c) triggered. 15-calendar-day deadline."
return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)
if has_property_damage:
harm_cat = IncidentHarmCategory.PROPERTY_ENVIRONMENT
rationale = "Property or environmental damage — Art.3(49)(d) triggered. 15-calendar-day deadline."
return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)
if is_near_miss:
harm_cat = IncidentHarmCategory.NEAR_MISS
rationale = "Near-miss event — Art.3(49) 'might have led' scope. Assess for causal chain. May require reporting."
return IncidentTriage(event_description, True, harm_cat, rationale, triaged_by=triaged_by)
return IncidentTriage(
event_description, False, None,
"No Art.3(49) harm category applicable. Document decision; standard post-market monitoring applies.",
triaged_by=triaged_by,
)
def create_incident(
self,
triage: IncidentTriage,
member_state: str,
causal_link_established: bool = False,
reasonable_likelihood: bool = True,
) -> SeriousIncidentReport:
"""Step 3-4: Create the incident record and start the reporting clock."""
if not triage.is_serious_incident:
raise ValueError("Cannot create Art.73 incident from non-serious event triage.")
incident = SeriousIncidentReport(
incident_id=str(uuid.uuid4()),
ai_system_eu_db_id=self.eu_db_id,
ai_system_name=self.ai_system_name,
provider_name=self.provider_name,
harm_category=triage.harm_category,
member_state_of_occurrence=member_state,
causal_link_established=causal_link_established,
reasonable_likelihood_of_link=reasonable_likelihood,
person_deceased=(triage.harm_category == IncidentHarmCategory.DEATH_OR_HEALTH_RISK),
health_safety_risk=(triage.harm_category == IncidentHarmCategory.DEATH_OR_HEALTH_RISK),
gpai_component_involved=bool(self.gpai_component),
gpai_model_id=self.gpai_component,
gpai_systemic_risk_classified=self.gpai_systemic_risk,
)
self.active_incidents[incident.incident_id] = incident
return incident
def submit_preliminary_report(
self,
incident_id: str,
immediate_actions: list[str],
system_taken_offline: bool = False,
) -> dict:
"""Step 4-5: Mark preliminary report as submitted to national MSA."""
incident = self.active_incidents[incident_id]
incident.immediate_actions_taken = immediate_actions
incident.system_taken_offline = system_taken_offline
incident.status = ReportStatus.PRELIMINARY_SUBMITTED
incident.preliminary_report_submitted_at = datetime.utcnow()
result = {
"report_id": incident_id,
"submitted_at": incident.preliminary_report_submitted_at.isoformat(),
"reporting_authorities": [a.value for a in incident.reporting_authorities],
"follow_up_required": True,
"follow_up_guidance": "Submit Art.73(5) complete report with root cause analysis and corrective actions.",
}
if incident.requires_gpai_parallel_report:
result["gpai_parallel_report"] = {
"authority": "EU AI Office / Commission",
"basis": "Art.53(1)(b) GPAI model systemic risk",
"gpai_model": self.gpai_component,
"note": "Submit to AI Office separately — different authority, different timeline ('without undue delay').",
}
return result
def compliance_status_report(self) -> list[dict]:
"""Return status of all active incidents with deadline tracking."""
return [
{
**incident.get_report_summary(),
"action_required": (
"SUBMIT IMMEDIATELY — OVERDUE" if incident.is_overdue
else f"SUBMIT WITHIN {incident.hours_remaining:.0f}h" if incident.hours_remaining < 24
else "ON TRACK"
),
}
for incident in self.active_incidents.values()
if incident.status != ReportStatus.CLOSED
]
40-Item Compliance Checklist: Art.73 Serious Incident Reporting
Section 1: Incident Scope & Classification — Items 1-8
- 1. Reviewed Art.3(49) serious incident definition and mapped all four harm categories to your AI system's potential failure modes
- 2. Documented the "might have led" near-miss standard — your incident classification covers scenarios that narrowly avoided harm
- 3. Confirmed which Annex III high-risk category applies to your AI system (determines MSA, sectoral authority, and enforcement context)
- 4. Identified whether your system is used in critical infrastructure (Art.73(6) additional sectoral authority reporting applies)
- 5. Established internal incident classification criteria with worked examples for each Art.3(49) harm category
- 6. Created a "reasonable likelihood" standard for your domain — when is causal connection probable enough to trigger reporting?
- 7. Mapped potential harm categories to your Art.9 risk scenarios — pre-incident planning, not post-incident improvisation
- 8. Verified EU AI database registration (Art.71 ID) — required field in all Art.73 reports
Section 2: Reporting Infrastructure — Items 9-16
- 9. Built Art.12 audit logging with sufficient granularity to establish AI system involvement in any incident
- 10. Identified national MSA for every EU Member State where your high-risk AI system is deployed
- 11. Documented the MSA reporting channel (portal, email, secure submission) for each target Member State
- 12. Established a legal entity / compliance contact authorized to submit Art.73 reports on behalf of the provider
- 13. Built a 24/7 escalation path from incident detection to compliance officer — 2-working-day deadline requires out-of-hours capability
- 14. Created Art.73 preliminary report template — pre-populated fields with mandatory content (Art.73(1) information requirements)
- 15. Built deployer notification portal or API for Art.73(3) — deployers must be able to notify you immediately
- 16. Implemented contract terms requiring deployers to notify you of incidents within hours (not standard SLA windows)
Section 3: Timeline Management — Items 17-24
- 17. Implemented working day calculation for the 2-working-day deadline (excluding weekends + Member State public holidays)
- 18. Created automated deadline tracking from incident awareness timestamp to reporting deadline
- 19. Built escalation alerts: notify compliance team at T+1h, T+12h, T+20h for 2-day incidents
- 20. Established process for preliminary report submission when investigation is incomplete (Art.73(2) does not require complete information)
- 21. Documented follow-up report process under Art.73(5) — updated causal analysis and corrective actions
- 22. Confirmed local MSA deadlines — some Member States set shorter timelines than Art.73(2) for sector-specific AI
- 23. Established Art.73(5) follow-up report timeline (typically 30 days from incident)
- 24. Created root cause analysis template aligned with Art.30 post-market monitoring record format
Section 4: GPAI Component Dual Reporting — Items 25-32
- 25. Identified all GPAI model components integrated into your high-risk AI system
- 26. Confirmed Art.51 systemic risk classification status of each GPAI component (10^25 FLOPs threshold or Commission designation)
- 27. Established Art.53(1)(b) reporting pathway to AI Office/Commission for GPAI component incidents — separate from Art.73 MSA reporting
- 28. Documented the difference between Art.73 timeline (2/15 days) and Art.53(1)(b) timeline ("without undue delay") in your runbook
- 29. Notified upstream GPAI providers (under Art.55 entitlements) of incidents involving their models — they have parallel Art.53(1)(b) obligations
- 30. Mapped NIS2 Art.23 parallel reporting requirements (if critical infrastructure) — 24h early warning vs 72h full report
- 31. Identified DORA Art.17 parallel reporting requirements (if financial sector AI)
- 32. Built coordinated communication workflow ensuring Art.73 (MSA) and Art.53(1)(b) (AI Office) reports are consistent
Section 5: CLOUD Act & Infrastructure Jurisdiction — Items 33-40
- 33. Assessed whether Art.12 audit logs (primary Art.73 evidence) are stored on US cloud infrastructure
- 34. Assessed whether incident investigation records and internal post-mortems are on US cloud storage
- 35. Evaluated CLOUD Act exposure: US DOJ/national security compellability of records on AWS, Azure, GCP
- 36. Documented the Art.70 confidentiality protection for records submitted to MSAs/AI Office — and that it does NOT cover records on your own US infrastructure
- 37. Considered migrating Art.12 logging infrastructure to EU-native storage for single-regime compliance
- 38. Established that incident response communication (legal memos, regulatory correspondence) uses EU-hosted secure email
- 39. Confirmed that post-mortem and root cause analysis records are not stored on US infrastructure
- 40. Documented your infrastructure jurisdiction assessment for each Art.73-relevant record type in your Art.30 post-market monitoring system
See Also
- EU AI Act Art.74: Market Surveillance Authority Powers & Developer Obligations — Developer Guide — Art.74 defines the MSA enforcement powers that respond to Art.73 serious incident reports
- EU AI Act Art.64-70: The EU AI Office & AI Governance Structure — Developer Guide — Art.73 reports received by MSAs cascade to AI Office under Art.73(4)
- EU AI Act Art.53 GPAI Models with Systemic Risk: Adversarial Testing, Incident Reporting & Cybersecurity — Developer Guide — Art.53(1)(b) parallel reporting for GPAI components
- EU AI Act Art.30: Post-Market Monitoring for High-Risk AI Systems — Developer Guide — Art.30 PMM generates the monitoring data that feeds Art.73 incident detection
- EU AI Act Art.12: Logging & Record-Keeping for High-Risk AI Systems Developer Guide — Art.12 logs are the primary evidentiary record for Art.73 causal analysis