EU AI Act Article 19: Serious Incidents Reporting for High-Risk AI (2026)
Article 18 requires you to proactively monitor your deployed high-risk AI system. Article 19 governs what happens when that monitoring — or any other source — detects something seriously wrong.
Article 19 is the provider's incident reporting obligation. When your high-risk AI system causes, or is suspected of causing, a serious incident, you must notify the national market surveillance authority (MSA) of the member state where the incident occurred. The obligation is time-bound, content-specific, and carries direct legal consequences if missed.
This guide explains what Art.19 requires, how to classify incidents under the Art.3(49) definition, the exact reporting timelines (2 vs 15 days), what your report must contain, how multi-jurisdiction incidents work, and how to build incident detection and reporting infrastructure that keeps you compliant without turning every alert into a regulatory filing.
What Article 19 Actually Says
Article 19(1): Providers of high-risk AI systems which are placed on the market or put into service in the Union shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred.
Article 19(2): The report referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider becomes aware of the serious incident.
Article 19(3): Notwithstanding paragraph 2, in the case of a widespread serious incident or a serious incident involving a continuous supply of a product or system, the provider may submit an initial notification within the period referred to in that paragraph, to be followed by a complete report.
Article 19(4): Where an incident involves or could have involved a natural person, the provider shall simultaneously inform the operator or user involved and, where identifiable, the natural person harmed, of the incident and of the measures taken or being taken to address it.
Article 19(5): The obligation referred to in paragraph 1 shall also apply to providers when they become aware of a serious incident involving their high-risk AI system which was placed on the market or put into service in a third country.
Article 19(6): Following notification pursuant to paragraph 1, the provider shall, without undue delay, inform the relevant market surveillance authority of all steps taken to recall or withdraw the high-risk AI system from the market or to mitigate the risks.
The Art.18 → Art.19 Trigger Chain
Art.18 and Art.19 are designed to work together. Art.18 establishes the monitoring infrastructure; Art.19 defines what to do when that infrastructure detects a threshold breach.
[Art.18 Post-Market Monitoring]
│
├── Performance threshold exceeded? ──→ Art.9 risk assessment update
│
└── Serious incident detected? ──────→ Art.19 reporting pipeline
│
┌───────────┼───────────┐
▼ ▼ ▼
Classify Notify Suspend/
incident MSA Recall?
(Art.3(49)) (2d/15d) (Art.19(6))
In practice, your Art.18 monitoring system must have a decision gate that evaluates whether each detected anomaly or adverse outcome meets the Art.3(49) definition of a "serious incident." If it does, the Art.19 clock starts running — from the moment you establish causal likelihood, not from when you first noticed the anomaly. Note the backstop, though: Art.19(2) caps reporting at 15 days after you become aware of the serious incident, regardless of how long the causal investigation takes.
This timing distinction matters for incident response:
| Stage | What happens | Art.19 clock |
|---|---|---|
| Art.18 monitoring detects anomaly | Initial signal — may or may not be serious | Not started |
| Triage: anomaly matches Art.3(49) pattern? | Classification investigation | Not started |
| Provider establishes causal likelihood | "Reasonable likelihood of causal link" | Starts here |
| Provider notifies MSA | Report submitted | Obligation fulfilled |
"Reasonable likelihood" does not require certainty. If the evidence suggests your system probably caused or contributed to the incident, the clock runs.
What Counts as a Serious Incident: Art.3(49)
Article 3(49) defines "serious incident" as any incident or malfunction of a high-risk AI system that directly or indirectly leads to any of the following:
- Death or serious harm to health of a natural person
- Serious and irreversible disruption of the management and operation of critical infrastructure (as defined in Union or national law)
- Infringement of obligations under Union law intended to protect fundamental rights
- Serious harm to property or the environment
Point 3 is often overlooked: a fundamental rights infringement — for example, a biometric categorization system making discriminatory decisions at scale — is a serious incident even without physical harm.
Art.3(49) in Practice: A Classification Table
| System type | Example output | Serious incident? | Provision |
|---|---|---|---|
| Medical diagnosis AI | Missed malignancy → patient treatment delay → serious health harm | Yes | Art.3(49)(a) |
| Credit scoring AI | Discriminatory denial to protected group → infringes Art.21 EUCFR | Yes | Art.3(49)(c) |
| Infrastructure monitoring AI | False negative → grid failure → widespread power outage | Yes | Art.3(49)(b) |
| Recruitment AI | Biased ranking causing economic harm to candidate | Possibly (property/fundamental rights) | Art.3(49)(c)/(d) |
| Document classification AI | Misclassification → document sent to wrong party | Likely no (not serious harm) | — |
| Autonomous vehicle AI | Incorrect obstacle detection → collision → injury | Yes | Art.3(49)(a) |
| Insurance pricing AI | Discriminatory premium calculation for protected group | Yes if systematic | Art.3(49)(c) |
The key qualifier is "serious." Minor inaccuracies that produce inconvenient but non-harmful outcomes do not meet the threshold. But once harm reaches the level of "serious" under any of the four categories, Art.19 applies.
Malfunctions vs Incidents
Art.19 covers serious incidents and malfunctions of the AI system. A malfunction that does not produce harm but has the reasonable potential to cause it — because the system behaved outside its intended operational envelope — may still trigger reporting obligations if it is detected before harm materialises.
This is distinct from mere performance degradation. Accuracy dropping from 94% to 91% is a performance issue handled by Art.18. A malfunction where the system enters an undefined state, produces corrupted outputs, or behaves in ways inconsistent with its technical documentation is a potential Art.19 trigger regardless of whether observable harm occurred.
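This triage distinction can be encoded as a gate in monitoring code. A minimal sketch — `SignalKind` and `classify_signal` are illustrative names, not from the Act or any standard tooling:

```python
from enum import Enum

class SignalKind(Enum):
    NORMAL = "normal"                                    # no action
    PERFORMANCE_DEGRADATION = "performance_degradation"  # Art.18 handling
    MALFUNCTION = "malfunction"                          # potential Art.19 trigger

def classify_signal(
    accuracy: float,
    accuracy_floor: float,
    output_schema_valid: bool,
    within_operational_envelope: bool,
) -> SignalKind:
    """Separate a performance dip (Art.18 territory) from a malfunction:
    corrupted output or behaviour outside the documented operational
    envelope, which is a potential Art.19 trigger even without harm."""
    if not output_schema_valid or not within_operational_envelope:
        return SignalKind.MALFUNCTION
    if accuracy < accuracy_floor:
        return SignalKind.PERFORMANCE_DEGRADATION
    return SignalKind.NORMAL

# Accuracy dropping 94% -> 91% against a 93% floor: performance issue only
assert classify_signal(0.91, 0.93, True, True) is SignalKind.PERFORMANCE_DEGRADATION
# Corrupted output: malfunction regardless of headline accuracy
assert classify_signal(0.94, 0.93, False, True) is SignalKind.MALFUNCTION
```

The point of the pure-function shape is auditability: the inputs that drove each triage outcome can be logged verbatim alongside the decision.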
Reporting Timelines: 2 Working Days vs 15 Calendar Days
Art.19 does not use a single reporting window. The timeline depends on the severity and nature of the incident:
| Incident type | Reporting deadline | Deadline type |
|---|---|---|
| Death or serious health deterioration (unexpected) | 2 working days | From causal likelihood established |
| Serious incident (other) | 15 calendar days | From awareness |
| Widespread incident or continuous supply | Initial notification within deadline + complete report later | Phased |
| Third-country incidents involving EU provider | Same timelines as EU incidents | Art.19(5) |
The 2-working-day window applies when the incident involves death or serious health harm and the outcome was unexpected — meaning it was not anticipated in the risk assessment or technical documentation. If the risk was documented and assessed (and accepted), the 15-day window may apply.
In practice, apply the more conservative deadline when in doubt. Explaining to an MSA why you chose the 15-day window for a death event is a more difficult conversation than filing early.
Calendar vs Working Days
"Calendar days" means weekends and public holidays count. "Working days" excludes them. For a serious incident detected on a Friday afternoon:
- 2 working days: deadline is Tuesday end of business (Monday + Tuesday)
- 15 calendar days: deadline falls on the Saturday 15 days later, since weekends and holidays count
Build your incident response workflow with automated deadline tracking that accounts for the jurisdiction's public holidays. An MSA in France has different public holidays than one in Germany.
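A deadline calculator along these lines is straightforward to sketch — the holiday dates below are illustrative placeholders, not an authoritative member-state calendar:

```python
from datetime import date, timedelta

# Hypothetical per-jurisdiction holiday sets; a real system would source
# these from an authoritative public-holiday calendar per member state.
PUBLIC_HOLIDAYS: dict[str, set[date]] = {
    "DE": {date(2026, 10, 3)},   # German Unity Day (example)
    "FR": {date(2026, 7, 14)},   # Bastille Day (example)
}

def working_day_deadline(start: date, n_days: int, member_state: str) -> date:
    """Advance n working days, skipping weekends and that member
    state's public holidays (the 2-working-day window)."""
    holidays = PUBLIC_HOLIDAYS.get(member_state, set())
    d, added = start, 0
    while added < n_days:
        d += timedelta(days=1)
        if d.weekday() < 5 and d not in holidays:  # Mon=0 .. Fri=4
            added += 1
    return d

def calendar_day_deadline(start: date, n_days: int) -> date:
    """Calendar days: weekends and holidays count (the 15-day window)."""
    return start + timedelta(days=n_days)

friday = date(2026, 3, 6)  # a Friday-afternoon detection
assert working_day_deadline(friday, 2, "DE") == date(2026, 3, 10)  # Tuesday
assert calendar_day_deadline(friday, 15) == date(2026, 3, 21)      # Saturday
# A Monday start in France straddling 14 July skips the holiday:
assert working_day_deadline(date(2026, 7, 13), 2, "FR") == date(2026, 7, 16)
```

Note how the French example lands a day later than the German one would: jurisdiction-specific holidays shift working-day deadlines, which is exactly why the calculator needs a per-member-state calendar.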
Where to Report: National MSAs and Sectoral Authorities
Art.19(1) directs reports to the MSA of the member state where the incident occurred — not where the provider is established, not where the system was placed on the market.
For incidents involving deployers in multiple member states, you must identify the primary incident location and consider whether parallel reporting is needed.
MSA Directory (Key Jurisdictions)
| Member State | MSA for AI | Contact mechanism |
|---|---|---|
| Germany | Bundesnetzagentur (BNetzA) | RAPEX portal + sector MSA |
| France | ANSSI (cybersecurity) + CNIL (data) | Sector-dependent |
| Netherlands | ACM | Digital Single Market contact point |
| Sweden | Konsumentverket | RAPEX |
| EU-level coordination | European AI Office | AI Office notification portal |
For sector-specific high-risk systems (medical devices, aviation, financial services), the relevant sectoral authority also receives the report:
- Medical devices with embedded AI: Also report to national Competent Authority under MDR/IVDR
- Aviation AI: EASA notification parallel to national MSA
- Financial services AI: National financial regulator parallel notification
Art.19 does not replace sectoral reporting — it supplements it.
What Your Report Must Contain
Art.19 does not specify a mandatory form. But MSA guidance and RAPEX precedent establish what a complete Art.19 report should contain:
Art.19 Serious Incident Report — Required Content
1. IDENTIFICATION
- Provider name, address, EU representative (if non-EU provider)
- System name, model version, registration number (Art.71 EUAIDB if applicable)
- Incident reference number (internal)
2. INCIDENT DESCRIPTION
- Date and time of incident
- Location (member state, specific site if relevant)
- Persons involved (number, role, anonymised if necessary)
- Description of harm or malfunction
- Severity classification (Art.3(49)(a)/(b)/(c)/(d))
3. CAUSAL CHAIN
- How the AI system contributed to or caused the incident
- Evidence establishing causal likelihood
- System behaviour at time of incident (logs, outputs)
- Whether incident was anticipated in risk assessment
4. TIMELINE
- When monitoring detected the anomaly
- When causal likelihood was established
- When notification was submitted
- Reporting deadline applied (2d or 15d) and basis
5. IMMEDIATE MEASURES TAKEN
- Whether system suspended or withdrawn
- Interim measures to prevent recurrence
- Notification to deployer/users (Art.19(4))
- Notification to harmed person (if identifiable)
6. FOLLOW-UP PLAN
- Root cause investigation timeline
- Corrective measures planned
- Estimated timeline for system reinstatement (if suspended)
- When complete report will follow (if initial notification)
Immediate Suspension and Recall (Art.19(6))
When the serious incident poses a continuing risk, Art.19(6) requires you to inform the MSA of all steps taken to:
- Recall the high-risk AI system from the market
- Withdraw it from service
- Mitigate the risk without full withdrawal
The suspension decision is yours to make — Art.19 does not mandate suspension for all serious incidents. But the decision must be documented and defensible. An MSA that receives your Art.19 report and finds no suspension decision (or a decision not to suspend) will scrutinise your reasoning.
Suspension Decision Framework
Serious incident confirmed
│
▼
Is the risk that caused the incident still present in the deployed system?
│
Yes ─┤── Is it mitigable by configuration/update without withdrawal?
│ │
│ Yes → Deploy fix, document, notify MSA
│ │
│ No → Suspend/withdraw, notify MSA under Art.19(6)
│
No ──┴── Isolated incident (hardware failure, misuse)?
│
Yes → Document, monitor, no suspension required
│
No → Reassess system-level risk (Art.9 update required)
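The framework above can be captured as a pure decision function (names are illustrative), which makes each outcome easy to log alongside its inputs for the documentation trail Art.19(6) expects:

```python
from enum import Enum

class SuspensionAction(Enum):
    DEPLOY_FIX = "deploy_fix_document_notify_msa"
    SUSPEND = "suspend_withdraw_notify_msa_art19_6"
    MONITOR = "document_monitor_no_suspension"
    REASSESS = "reassess_system_risk_art9_update"

def suspension_decision(
    risk_still_present: bool,
    mitigable_without_withdrawal: bool,
    isolated_incident: bool,
) -> SuspensionAction:
    """Encodes the suspension decision framework. Whatever branch is
    taken, the inputs and outcome must be documented: Art.19(6)
    requires notifying the MSA of the steps taken either way."""
    if risk_still_present:
        return (SuspensionAction.DEPLOY_FIX
                if mitigable_without_withdrawal
                else SuspensionAction.SUSPEND)
    return (SuspensionAction.MONITOR
            if isolated_incident
            else SuspensionAction.REASSESS)

# Persistent, non-mitigable risk: suspend/withdraw under Art.19(6)
assert suspension_decision(True, False, False) is SuspensionAction.SUSPEND
# Isolated hardware failure, risk no longer present: document and monitor
assert suspension_decision(False, False, True) is SuspensionAction.MONITOR
```

Keeping the decision as a pure function means the same inputs always yield the same documented outcome, which is what makes a "decision not to suspend" defensible later.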
Art.19 vs Art.73: Two Different Obligations
Providers working through the EU AI Act often confuse Art.19 and Art.73. They are complementary but distinct:
| Aspect | Art.19 | Art.73 |
|---|---|---|
| Who it applies to | Providers of high-risk AI | Market surveillance authorities |
| What it governs | Provider's duty to report incidents | MSA powers to investigate incidents |
| Location in Act | Chapter III, Section 3 (provider obligations) | Title VIII (post-market surveillance) |
| Triggers | Serious incident as defined in Art.3(49) | MSA-initiated investigation or Art.19 report received |
| Key output | Report to national MSA within deadline | Investigation, enforcement, coordination with other MSAs and the AI Office |
Art.19 is what you do when an incident occurs. Art.73 is what the MSA does with your report — and what they can do to your system during investigation.
For providers who also place GPAI models on the market, Art.53(1)(b) creates a parallel GPAI-specific incident reporting obligation that co-exists with Art.19. A GPAI model embedded in a high-risk application triggers both.
Multi-Jurisdiction Incidents
When your deployed system operates across member states and an incident occurs in multiple jurisdictions simultaneously — or when it is unclear where "the incident occurred" — Art.19 coordination becomes complex.
Principle: Report to the MSA of the member state most directly connected to the harm. For distributed systems:
- If a user in Germany is harmed by a system accessed via a deployer in France: report to Germany (location of harm)
- If harm occurs during data processing on infrastructure in Ireland: report to Ireland
- If an algorithmic decision made in Poland causes harm to a person in the Czech Republic: report to the Czech Republic (harm location)
For truly distributed incidents (e.g., a recommendation algorithm causes harm to users in 12 member states simultaneously), file an initial report with the MSA of your establishment (or principal place of business) and explicitly flag the multi-jurisdiction nature. The European AI Office coordinates MSA collaboration for cross-border cases.
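These routing rules reduce to a small helper — a sketch in which the member-state codes and function name are illustrative:

```python
from typing import Optional

def reporting_member_state(
    harm_location: Optional[str],
    provider_establishment: str,
    multi_jurisdiction: bool,
) -> str:
    """Route the Art.19 report to the member state most directly
    connected to the harm. For truly distributed incidents (or where
    no single harm location can be identified), fall back to the MSA
    of the provider's establishment, flagging the multi-jurisdiction
    nature explicitly in the report itself."""
    if multi_jurisdiction or harm_location is None:
        return provider_establishment
    return harm_location

# German user harmed via a French deployer: report to Germany
assert reporting_member_state("DE", "FR", False) == "DE"
# Harm to users in 12 member states at once: provider-establishment MSA
assert reporting_member_state(None, "NL", True) == "NL"
```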
Third-Country Incidents (Art.19(5))
If your EU-placed high-risk AI system causes a serious incident in a non-EU country, Art.19(5) still applies. The report goes to the MSA of the member state where your system is placed on the market (your principal EU market if multiple). Document:
- The incident location (third country)
- The jurisdictional basis for reporting to the EU MSA
- Any notification to local authorities in the third country
CLOUD Act × Art.19: Incident Data Jurisdiction
Serious incident reports contain sensitive data: logs, outputs, user information, harm descriptions. Under the EU AI Act, this data is generated in the EU and subject to EU data protection law. But if your monitoring infrastructure runs on US-based cloud services, CLOUD Act compellability applies.
The conflict:
- EU GDPR Art.48: US government requests for data must go through MLAT or similar, not direct subpoenas
- US CLOUD Act: DOJ can compel US cloud providers to disclose data regardless of storage location
Practical risk for Art.19 compliance:
| Data element | CLOUD Act risk | Mitigation |
|---|---|---|
| System logs showing incident behaviour | High — operational data on US cloud infra | Incident logging on EU-sovereign infrastructure |
| Harmed person identification data | High — PII GDPR-protected, CLOUD Act conflict | EU-only storage, encryption at rest |
| Internal causal likelihood analysis | Medium — legal privilege may apply | Document privilege basis |
| Report submitted to MSA | Low — submitted to EU authority, not stored on US cloud | Maintain copies in EU-native storage |
The Art.19 report itself goes to an EU authority, but the underlying evidence that supports causal likelihood — the data you rely on to determine the 2-day vs 15-day window — may be subject to CLOUD Act compellability if stored on AWS, Azure, or GCP.
Incident response infrastructure should treat post-incident logs as privileged and store them on EU-sovereign storage. sota.io deploys infrastructure exclusively within EU jurisdiction — no CLOUD Act exposure.
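One lightweight integrity measure for that stored evidence is a hash manifest computed before the data leaves the incident pipeline. A sketch, assuming evidence items are available as bytes (the function and file names are illustrative):

```python
import hashlib
import json

def evidence_manifest(records: dict[str, bytes]) -> str:
    """Produce a tamper-evident manifest (SHA-256 per item) for
    post-incident evidence before it moves to long-term storage.
    The manifest lets you demonstrate later that the logs supporting
    the causal-likelihood determination were not altered."""
    digests = {
        name: hashlib.sha256(blob).hexdigest()
        for name, blob in sorted(records.items())
    }
    return json.dumps(digests, indent=2)

manifest = evidence_manifest({
    "system_log_incident.jsonl": b'{"output": "corrupted"}',
    "causal_analysis.md": b"# Causal likelihood analysis",
})
assert '"system_log_incident.jsonl"' in manifest
```

Store the manifest separately from the evidence (ideally with the submitted report copy); a mismatch at audit time is then detectable without trusting the storage layer.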
Python Tooling: Incident Classification and Reporting
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime, timedelta
from typing import Optional
import json
class IncidentSeverity(Enum):
DEATH_OR_SERIOUS_HEALTH = "death_or_serious_health" # 2 working days
SERIOUS_INCIDENT_OTHER = "serious_incident_other" # 15 calendar days
MALFUNCTION_NO_HARM = "malfunction_no_harm" # Document, no mandatory report
NOT_SERIOUS = "not_serious" # Art.18 handling only
class Art349Category(Enum):
A = "death_or_serious_harm_to_person"
B = "critical_infrastructure_disruption"
C = "fundamental_rights_infringement"
D = "serious_harm_to_property_or_environment"
NONE = "none"
@dataclass
class SeriousIncidentReport:
"""Art.19 EU AI Act Serious Incident Report"""
# Identification
provider_name: str
system_name: str
model_version: str
euaidb_number: Optional[str] = None # Art.71 registration if applicable
# Incident details
incident_date: datetime = field(default_factory=datetime.utcnow)
incident_location_member_state: str = ""
incident_description: str = ""
art349_category: Art349Category = Art349Category.NONE
severity: IncidentSeverity = IncidentSeverity.NOT_SERIOUS
# Timeline tracking
anomaly_detected_at: Optional[datetime] = None
causal_likelihood_established_at: Optional[datetime] = None
notification_submitted_at: Optional[datetime] = None
# Measures
system_suspended: bool = False
deployer_notified: bool = False
harmed_person_notified: bool = False
# Report content
causal_chain_description: str = ""
immediate_measures: list[str] = field(default_factory=list)
follow_up_plan: str = ""
def reporting_deadline(self) -> Optional[datetime]:
"""Calculate Art.19 reporting deadline from causal likelihood established."""
if not self.causal_likelihood_established_at:
return None
base = self.causal_likelihood_established_at
if self.severity == IncidentSeverity.DEATH_OR_SERIOUS_HEALTH:
# 2 working days: skip weekends
deadline = base
days_added = 0
while days_added < 2:
deadline += timedelta(days=1)
if deadline.weekday() < 5: # Monday=0, Friday=4
days_added += 1
return deadline
elif self.severity == IncidentSeverity.SERIOUS_INCIDENT_OTHER:
# 15 calendar days from awareness (use causal_likelihood as proxy)
return base + timedelta(days=15)
return None
def is_overdue(self) -> bool:
deadline = self.reporting_deadline()
if deadline is None:
return False
return datetime.utcnow() > deadline and self.notification_submitted_at is None
def days_until_deadline(self) -> Optional[float]:
deadline = self.reporting_deadline()
if deadline is None:
return None
return (deadline - datetime.utcnow()).total_seconds() / 86400
def to_report_dict(self) -> dict:
return {
"article": "EU AI Act Art.19",
"provider": self.provider_name,
"system": f"{self.system_name} v{self.model_version}",
"euaidb_number": self.euaidb_number,
"incident": {
"date": self.incident_date.isoformat(),
"location": self.incident_location_member_state,
"description": self.incident_description,
"art349_category": self.art349_category.value,
"severity": self.severity.value,
},
"timeline": {
"anomaly_detected": self.anomaly_detected_at.isoformat() if self.anomaly_detected_at else None,
"causal_likelihood_established": self.causal_likelihood_established_at.isoformat() if self.causal_likelihood_established_at else None,
"reporting_deadline": self.reporting_deadline().isoformat() if self.reporting_deadline() else None,
"notification_submitted": self.notification_submitted_at.isoformat() if self.notification_submitted_at else None,
"days_until_deadline": self.days_until_deadline(),
},
"measures": {
"system_suspended": self.system_suspended,
"deployer_notified": self.deployer_notified,
"harmed_person_notified": self.harmed_person_notified,
"immediate_measures": self.immediate_measures,
},
"causal_chain": self.causal_chain_description,
"follow_up_plan": self.follow_up_plan,
}
class IncidentClassifier:
"""Classify anomalies detected by Art.18 monitoring against Art.3(49) thresholds."""
@staticmethod
def classify(
involves_death_or_serious_health_harm: bool,
harm_was_anticipated_in_risk_assessment: bool,
involves_critical_infrastructure: bool,
involves_fundamental_rights_infringement: bool,
involves_serious_property_or_env_harm: bool,
) -> tuple[Art349Category, IncidentSeverity]:
"""
Returns (Art349Category, IncidentSeverity) for reporting decision.
Applies 2-day window for unanticipated death/health harm.
"""
if involves_death_or_serious_health_harm:
category = Art349Category.A
# 2-day window only if unanticipated
severity = (
IncidentSeverity.DEATH_OR_SERIOUS_HEALTH
if not harm_was_anticipated_in_risk_assessment
else IncidentSeverity.SERIOUS_INCIDENT_OTHER
)
return category, severity
if involves_critical_infrastructure:
return Art349Category.B, IncidentSeverity.SERIOUS_INCIDENT_OTHER
if involves_fundamental_rights_infringement:
return Art349Category.C, IncidentSeverity.SERIOUS_INCIDENT_OTHER
if involves_serious_property_or_env_harm:
return Art349Category.D, IncidentSeverity.SERIOUS_INCIDENT_OTHER
return Art349Category.NONE, IncidentSeverity.NOT_SERIOUS
class Art19IncidentPipeline:
"""
End-to-end Art.19 incident response pipeline.
Integrates with Art.18 monitoring output.
"""
def __init__(self, provider_name: str, system_name: str, model_version: str):
self.provider_name = provider_name
self.system_name = system_name
self.model_version = model_version
self.open_incidents: list[SeriousIncidentReport] = []
def ingest_monitoring_alert(
self,
alert_timestamp: datetime,
alert_description: str,
**classification_kwargs,
) -> Optional[SeriousIncidentReport]:
"""
Receives Art.18 monitoring alert, classifies, and opens Art.19 report if needed.
Returns report if Art.19-level, None if below threshold.
"""
category, severity = IncidentClassifier.classify(**classification_kwargs)
if severity == IncidentSeverity.NOT_SERIOUS:
return None # Art.18 handling only — no Art.19 obligation
report = SeriousIncidentReport(
provider_name=self.provider_name,
system_name=self.system_name,
model_version=self.model_version,
anomaly_detected_at=alert_timestamp,
incident_description=alert_description,
art349_category=category,
severity=severity,
)
self.open_incidents.append(report)
return report
def establish_causal_likelihood(
self,
report: SeriousIncidentReport,
causal_description: str,
member_state: str,
) -> None:
"""
Marks causal likelihood established — starts Art.19 clock.
This is the moment you call after investigation confirms the link.
"""
report.causal_likelihood_established_at = datetime.utcnow()
report.causal_chain_description = causal_description
report.incident_location_member_state = member_state
def check_overdue_reports(self) -> list[SeriousIncidentReport]:
"""Returns all open incidents that have passed their Art.19 reporting deadline."""
return [r for r in self.open_incidents if r.is_overdue()]
def generate_msa_notification(self, report: SeriousIncidentReport) -> str:
"""Serialises report to JSON for MSA submission."""
return json.dumps(report.to_report_dict(), indent=2, default=str)
The Art.19 Reporting Decision Tree
Anomaly detected by Art.18 monitoring (or other source)
│
▼
Does outcome match Art.3(49)?
(death/health harm, critical infra, fundamental rights, property/env)
│
No ──┤── Art.18 follow-up only (Art.9 update if threshold crossed)
│
Yes ──▼
Triage: Is there reasonable likelihood of causal link to AI system?
│
No ──┤── Monitor and document. Revisit if new evidence emerges.
│
Yes ──▼
Art.19 clock starts. Determine severity:
┌─────────────────────────────────────────────────────────────┐
│ Death or serious health harm + NOT anticipated in RA? │
│ → 2 working days │
│ │
│ All other serious incidents? │
│ → 15 calendar days │
│ │
│ Widespread or continuous supply incident? │
│ → Initial notification within deadline + complete report │
└─────────────────────────────────────────────────────────────┘
│
▼
Determine reporting authority:
MSA of member state where incident occurred
+ Sectoral authority if applicable (EASA/EBA/competent authority)
│
▼
Suspension decision (Art.19(6)):
Does risk persist? → Suspend/recall + notify MSA of steps taken
│
▼
Notify deployer/users (Art.19(4))
Notify identifiable harmed person (Art.19(4))
│
▼
Submit Art.19 report to MSA
Maintain copy in EU-sovereign storage (CLOUD Act risk mitigation)
30-Item Art.19 Compliance Checklist
Part 1: Detection Infrastructure (Art.18 → Art.19 Link)
- 1. Art.18 monitoring system has defined thresholds that trigger Art.19 triage
- 2. Triage process documented: who classifies, what criteria, what evidence is required
- 3. Art.3(49) classification matrix maintained and updated with each Art.9 review
- 4. Monitoring alerts include enough context to assess Art.3(49) applicability
- 5. Non-AI-caused harm (hardware failure, user error) distinguishable from AI-caused harm in monitoring output
- 6. Malfunction detection (undefined state, corrupted output) distinct from performance degradation in alerting
Part 2: Incident Response Process
- 7. Named incident response owner (provider-side) with Art.19 authority
- 8. Causal likelihood investigation process documented (who decides, what evidence threshold)
- 9. "Causal likelihood established" is a documented, timestamped decision — not informal
- 10. Art.19 clock tracking automated from causal likelihood timestamp
- 11. Working day calculator accounts for member state public holidays
- 12. Escalation protocol for incidents detected outside business hours
Part 3: Report Content Readiness
- 13. Report template maintained with all required fields (identification, incident, causal chain, timeline, measures)
- 14. EUAIDB registration number (Art.71) available for inclusion if applicable
- 15. System logs are structured and accessible within response deadline
- 16. Risk assessment documentation available to confirm whether harm was anticipated
- 17. Model version history maintained (MSA will ask which version caused incident)
- 18. Deployer contact information current (for Art.19(4) notification)
Part 4: Reporting to Authorities
- 19. Contact details for national MSA of each member state where system is deployed
- 20. Contact details for sectoral authority where applicable (EASA/EBA/MDR Competent Authority)
- 21. EU AI Office notification pathway identified for multi-jurisdiction incidents
- 22. Third-country incident reporting pathway documented (Art.19(5))
- 23. Submission mechanism tested (RAPEX portal, email, direct MSA portal)
- 24. Initial notification vs complete report distinction understood for widespread incidents
Part 5: Post-Report Obligations
- 25. Suspension decision framework documented and defensible (Art.19(6))
- 26. Harmed person notification process (Art.19(4)) — anonymisation vs identification decision
- 27. MSA follow-up channel monitored after submission (MSA will request additional information)
- 28. Art.9 risk management system updated following incident (Art.18 → Art.9 feedback loop)
- 29. Incident stored in EU-sovereign storage (CLOUD Act risk mitigation for log data)
- 30. Lessons learned integrated into Art.18 monitoring thresholds for future detection improvement
Common Art.19 Mistakes
1. Mis-starting the 15-day clock. Art.19(2) sets two bounds: report immediately after establishing a causal link between the system and the incident (or the reasonable likelihood of one), and in any event no later than 15 days after becoming aware of the serious incident. Awareness is the hard outer limit. Providers who treat a later causal-likelihood determination as restarting a fresh 15-day window can blow the awareness-based deadline while the investigation is still open.
2. Treating the 2-day window as 2 calendar days. The 2-day window for death and serious health harm is 2 working days — not calendar days. A Friday detection gives you Monday + Tuesday. A Monday detection gives you Wednesday EOB.
3. Reporting to your national MSA instead of the incident MSA. Art.19(1) directs you to the MSA of the member state where the incident occurred. A German provider whose system causes harm in France reports to France's MSA, not BNetzA.
4. Failing to document the suspension decision. Art.19(6) requires notification of measures taken — including the decision not to suspend if you reached that conclusion. "We assessed the risk and determined the incident was isolated; system remained in service while investigation continues" is acceptable if documented. Silence is not.
5. Conflating Art.19 with GDPR breach notification. If a serious incident also involves a personal data breach, you have parallel obligations: Art.19 to the MSA (AI Act) and Art.33 GDPR to the DPA (72-hour window). These are separate notifications to separate authorities. One does not fulfil the other.
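The parallel GDPR and AI Act clocks can be tracked side by side — a minimal sketch in which the dictionary keys are illustrative labels, not official terminology:

```python
from datetime import datetime, timedelta

def parallel_deadlines(awareness: datetime, causal_likelihood: datetime) -> dict:
    """When a serious incident is also a personal data breach, two
    separate clocks run to two separate authorities: GDPR Art.33
    (72 hours to the DPA, from breach awareness) and the AI Act's
    15-day outer bound to the MSA. Neither filing fulfils the other."""
    return {
        "gdpr_art33_dpa_deadline": awareness + timedelta(hours=72),
        "ai_act_msa_outer_bound": awareness + timedelta(days=15),
        # Art.19(2): report "immediately" once this timestamp exists
        "art19_clock_started": causal_likelihood,
    }

d = parallel_deadlines(
    awareness=datetime(2026, 3, 6, 9, 0),
    causal_likelihood=datetime(2026, 3, 7, 12, 0),
)
assert d["gdpr_art33_dpa_deadline"] == datetime(2026, 3, 9, 9, 0)
assert d["ai_act_msa_outer_bound"] == datetime(2026, 3, 21, 9, 0)
```

In this example the GDPR deadline expires nearly two weeks before the AI Act outer bound — a reminder that the DPA notification usually has to go out first.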
Art.19 in the Provider Compliance Stack
Art.19 does not stand alone. It integrates with the full provider obligation chain:
Art.9 Risk Management System
│ identifies acceptable risk thresholds
▼
Art.11 Technical Documentation
│ documents anticipated risks and mitigations
▼
Art.15 Accuracy/Robustness
│ defines operational envelope
▼
Art.18 Post-Market Monitoring
│ detects when operational envelope is breached or harm occurs
▼
Art.19 Serious Incident Reporting ← THIS ARTICLE
│ notifies MSA when Art.18 monitoring detects serious harm
▼
Art.9 Risk Management Update
│ incorporates incident findings into risk assessment
▼
[Updated system, updated documentation, updated monitoring thresholds]
Art.19 is not the end of the compliance cycle — it is the feedback mechanism that keeps Art.9 and Art.18 grounded in real-world evidence from deployed systems.
Key Takeaways
- Art.19 is the provider's mandatory incident reporting obligation — triggered when a high-risk AI system causes or likely causes harm meeting the Art.3(49) definition.
- Two timelines: 2 working days for death/serious health harm (unanticipated); 15 calendar days for other serious incidents. When in doubt, use the shorter window.
- Report to the MSA of the incident location, not your establishment's MSA.
- The Art.19 clock starts when causal likelihood is established, not when the anomaly is first detected.
- Art.18 monitoring is your primary Art.19 detection pipeline — the two obligations are designed to work together.
- Suspension decision must be documented — whether you suspend or choose not to.
- CLOUD Act risk: incident logs and evidence stored on US cloud infrastructure may be compellable by US government under CLOUD Act. Use EU-sovereign storage for post-incident evidence.
- Art.19 ≠ GDPR Art.33 — if the incident involves a data breach, you have parallel reporting obligations to the DPA under GDPR.
- Art.19 feeds back into Art.9 — incidents must update your risk management system, closing the monitoring-reporting-improvement loop.
Next: Article 20 covers the obligation to automatically generate logs — the technical foundation that makes Art.19 reporting possible.