EU AI Act Art.65: Reporting of Serious Incidents and Malfunctioning — High-Risk AI System NCA Notification Obligations (2026)
EU AI Act Article 65 closes the enforcement loop opened by Articles 57–64. The Chapter IX governance framework establishes authorities (Art.57), grants them investigation powers (Art.58), creates coordination bodies (Art.59–63), and enables data access (Art.64) — but all of this reactive machinery depends on regulators knowing that something has gone wrong. Art.65 is the mechanism by which that knowledge reaches them: a mandatory, timeline-bound incident reporting obligation that transforms high-risk AI system malfunctions from internal engineering problems into regulatory events.
The core obligation is direct: when a provider of a high-risk AI system becomes aware that their system has caused a serious incident or has malfunctioned in an unexpected way, they must notify the market surveillance authority of the Member State where the incident occurred. The 15-day notification window is tight by enterprise incident management standards — and for incidents constituting a serious risk to health, safety, or fundamental rights, a 72-hour preliminary notification may be required under Art.19 of Regulation (EU) 2019/1020 (the EU Market Surveillance Regulation), which applies alongside Art.65.
For developers building or deploying high-risk AI systems — and for EU-sovereign infrastructure providers hosting those systems — Art.65 defines what constitutes a reportable incident, who must report, to whom, within what timeframe, and what the cross-border coordination implications are when an incident affects multiple Member States.
Art.65 became applicable on 2 August 2025 as part of the phased entry into force of Regulation (EU) 2024/1689.
Art.65 in the Chapter IX Enforcement Architecture
Art.65 occupies the detection layer of the Chapter IX framework:
| Article | Function | Relationship to Art.65 |
|---|---|---|
| Art.57 | NCA designation — the recipients of Art.65 notifications | NCAs receive Art.65 incident reports and trigger investigation |
| Art.58 | NCA investigation powers — activated by Art.65 notifications | Art.65 reports serve as triggers for Art.58 formal investigations |
| Art.59 | AI Board — coordination body for cross-border incident patterns | AI Board coordinates NCA responses to multi-MS incident patterns |
| Art.60 | EU AI database — incident reports may require EUID updates | Serious incidents may require database entry amendments |
| Art.61 | Scientific Panel — technical evaluation of complex incidents | Scientific Panel may be engaged for technically complex incident investigations |
| Art.62 | AI Office corrective measures — may be triggered by GPAI-linked incidents | AI Office coordinates when incident involves GPAI model in supply chain |
| Art.63 | Advisory Forum — shapes incident reporting guidance | Advisory Forum input to standardisation of incident reporting procedures |
| Art.64 | Data access powers — exercised during post-notification investigation | NCAs use Art.64 powers to access incident-related data after Art.65 notification |
| Art.65 | Incident detection — mandatory notification mechanism | The trigger that activates the entire enforcement chain |
| Art.72 | Post-market monitoring system — the source of incident detection | Art.72 post-market monitoring is the operational system that surfaces Art.65-reportable incidents |
The key dependency: Art.65 is an output of Art.72 post-market monitoring. Providers are required under Art.72 to maintain a post-market monitoring system that actively collects and analyses performance data from deployed high-risk AI systems. When that system surfaces an incident meeting the Art.65 threshold, the Art.65 reporting obligation activates.
Art.65(1): The Core Reporting Obligation
Text: Providers of high-risk AI systems shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred.
Parsing the obligation:
"Serious incident" — Defined in Art.3(49) of the Regulation as:
an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person or serious harm to a person's health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure referred to in Annex I to Directive (EU) 2022/2557 [CER Directive]; (c) the breach of obligations under Union law intended to protect fundamental rights; (d) serious harm to property or the environment.
The Art.3(49) definition establishes four reportable consequence categories:
| Category | Threshold | Scope |
|---|---|---|
| Health harm | Death or serious harm | Individual level |
| Critical infrastructure disruption | Serious and irreversible | Energy, transport, banking, health, digital infrastructure (CER Annex I) |
| Fundamental rights breach | Breach of Union law obligations | GDPR, non-discrimination law, etc. |
| Property or environmental harm | Serious harm | Physical damage, ecological damage |
"Directly or indirectly leads to" — The causation threshold is broad. An incident does not need to directly cause harm; contributing causation (e.g., an AI recommendation that a human acted on, which then caused harm) is sufficient.
"Provider" — The reporting obligation falls on the provider — the legal or natural person who has placed the high-risk AI system on the market or put it into service. This is not the deployer operating the system in production. The deployer obligation runs under Art.65(2).
"Market surveillance authorities of the Member States where that incident occurred" — The reporting authority is the NCA of the Member State where the incident manifested, not where the provider is established. A provider headquartered in Germany whose system causes an incident in France reports to the French NCA.
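The routing rule can be sketched as a small helper: notification follows the incident, not the provider's establishment. This is an illustrative sketch, not an official registry lookup; the function name and return convention are this guide's own.

```python
def reporting_authorities(provider_ms: str, incident_member_states: list[str]) -> list[str]:
    """Return the Member States whose NCAs must be notified under Art.65(1).

    The provider's Member State of establishment (provider_ms) is irrelevant
    to routing: notification goes to the NCA of each Member State where the
    incident occurred.
    """
    # Deduplicate while preserving order; provider_ms is intentionally unused
    # for routing (it matters for supervision, not for incident notification).
    seen: set[str] = set()
    return [ms for ms in incident_member_states if not (ms in seen or seen.add(ms))]
```

A German-established provider whose incident manifests in France and Belgium notifies the French and Belgian NCAs, never the German one by default: `reporting_authorities("DE", ["FR", "BE"])` returns `["FR", "BE"]`.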
Art.65(2): Deployer Notification to Provider
Text: Deployers of high-risk AI systems shall report any serious incident to the provider. In cases where the deployer does not know the identity of the provider, it shall report to the relevant market surveillance authority.
The notification chain:
[Incident occurs in deployment]
↓
[Deployer identifies serious incident]
↓
[Deployer notifies Provider — immediately / as soon as practicable]
↓
[Provider assesses incident against Art.3(49) criteria]
↓
[Provider notifies NCA — within 15 days (Art.65(3))]
↓
[NCA may activate Art.58 investigation + Art.64 data access]
Key implications for deployers:
- Deployers are not the primary NCA reporters — the Art.65(1) obligation belongs to providers. Deployers report to providers. This matters for liability: the NCA looks primarily to the provider, not the deployer, for incident disclosure.
- Exception — unknown provider — if a deployer cannot identify the provider (e.g., the system was acquired from a defunct entity, or is used through multiple intermediaries), the deployer must report directly to the NCA. This prevents the deployer-to-provider chain from becoming a reporting gap.
- Deployer notification to provider does not restart the 15-day clock — the 15-day period runs from when the provider becomes aware of the incident. Deployer notification is the mechanism by which provider awareness is established: delayed deployer notification delays provider awareness, but NCAs may scrutinise whether a deployer's notification delay was reasonable.
- Provider-deployer contract terms — contracts governing high-risk AI system deployments should specify deployer notification obligations, maximum notification timescales, and the information to be provided (incident description, timestamp, system ID, affected users/operations).
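The Art.65(2) routing logic above reduces to a simple branch. A minimal sketch, assuming the deployer tracks provider identity and the Member State of occurrence (the string return convention is hypothetical):

```python
from typing import Optional

def deployer_report_recipient(provider_identity: Optional[str], incident_ms: str) -> str:
    """Art.65(2): the deployer reports a serious incident to the provider;
    only where the provider cannot be identified does the deployer report
    directly to the relevant market surveillance authority.
    """
    if provider_identity:
        return f"provider:{provider_identity}"
    # Unknown-provider fallback: report to the NCA of the incident's Member State.
    return f"nca:{incident_ms}"
```

The fallback branch is what closes the reporting gap: an orphaned system still produces an NCA-bound report.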
Art.65(3): The 15-Day Notification Timeline
Text: The notification referred to in paragraph 1 shall be made without undue delay and in any event within 15 days of the provider becoming aware of the serious incident.
Timeline mechanics:
| Trigger | Clock starts | Deadline |
|---|---|---|
| Provider becomes aware of serious incident | Day 0 | Day 15 |
| "Without undue delay" standard | As soon as facts established | Tightens the 15-day maximum |
"Without undue delay" is the operative constraint. The 15-day period is a maximum, not a target. If a provider becomes aware of a serious incident on Day 1 with sufficient facts to support notification, submitting on Day 14 is likely "undue delay."
Interaction with EU Market Surveillance Regulation (2019/1020) Art.19:
Art.19 of Regulation (EU) 2019/1020 requires economic operators to notify NCAs immediately when a product presents a serious risk. For AI systems that present an imminent serious risk, this may impose a 72-hour preliminary notification obligation running in parallel with the Art.65 15-day timeline:
- 72-hour preliminary notification (Art.19 MSR): Incident discovered → immediate NCA notification of serious risk even before full investigation complete
- 15-day full notification (Art.65 EU AI Act): Complete incident report with technical details, causation assessment, corrective measures taken
Providers should build incident response procedures that trigger parallel MSR Art.19 and EU AI Act Art.65 workflows.
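The two clocks described above both start from provider awareness and run in parallel. A minimal sketch of the deadline arithmetic, assuming the 72-hour figure as an operational ceiling for the MSR Art.19 "immediate" obligation (MSR Art.19 itself says "immediately", so treat 72 hours as a backstop, not a target):

```python
from datetime import datetime, timedelta

def notification_deadlines(aware: datetime, imminent_serious_risk: bool) -> dict:
    """Compute the parallel notification deadlines running from provider
    awareness: the Art.65(3) 15-day ceiling for the full report, and (where
    the incident presents an imminent serious risk) a 72-hour preliminary
    window modelled on MSR Art.19.
    """
    deadlines = {"art65_full_report": aware + timedelta(days=15)}
    if imminent_serious_risk:
        deadlines["msr_art19_preliminary"] = aware + timedelta(hours=72)
    return deadlines
```

For an incident the provider becomes aware of on 1 January, the preliminary MSR notification falls due on 4 January while the full Art.65 report is due by 16 January.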
Serious incidents involving critical infrastructure (Art.3(49)(b)):
If the serious incident involves disruption of critical infrastructure under the CER Directive, additional reporting obligations may arise under NIS2 (for entities classified as essential or important under NIS2 Art.3) or the CER Directive itself. A single incident may generate simultaneous reporting obligations under EU AI Act Art.65, NIS2 Art.23, and CER Directive Art.15.
Art.65(4): Information Content of the Notification
Text: Without prejudice to paragraph 3, after becoming aware of a serious incident, the provider shall notify the national competent authority without undue delay of any of the following: (a) any corrective action taken or planned with regard to the high-risk AI system; (b) any corrective action taken or planned with regard to the specific use of the high-risk AI system in the Member State.
Notification content requirements:
| Element | Required | Notes |
|---|---|---|
| Incident description | Yes (implicit) | Nature, date, location, affected system |
| Serious incident categorisation | Yes | Which Art.3(49) category applies |
| Provider identity and contact | Yes | Include Art.22 authorised representative for third-country providers |
| High-risk AI system identification | Yes | EUID (from EU AI database Art.60) + product reference |
| Corrective actions taken | Yes (Art.65(4)(a)) | Actions already implemented |
| Corrective actions planned | Yes (Art.65(4)(a)) | Timeline and scope of planned remediation |
| Use-case-specific corrective actions | Yes (Art.65(4)(b)) | Member State-specific use restrictions or operational changes |
| Causal analysis | Best practice | Link between system behaviour and incident |
| Affected users/operations | Best practice | Scope of impact |
"Without prejudice to paragraph 3" — Art.65(4) information should be provided within the 15-day window of Art.65(3). If corrective actions are still being determined at the time of initial notification, providers should notify the NCA of this fact and provide a timeline for submitting the Art.65(4) information.
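The content table above can be enforced as a pre-submission completeness check. The field names below are this sketch's own convention, not an official schema — Member State NCAs define their own notification formats:

```python
# Required per Art.65(4) and implicit Art.65(1) content; names are illustrative.
REQUIRED_FIELDS = [
    "incident_description", "incident_category", "provider_identity",
    "system_identification", "corrective_actions_taken",
    "corrective_actions_planned", "use_specific_corrective_actions",
]
# Recommended but not explicitly mandated by the Article text.
BEST_PRACTICE_FIELDS = ["causal_analysis", "affected_users"]

def notification_completeness(payload: dict) -> dict:
    """Check a draft Art.65 notification against the content table above."""
    missing_required = [f for f in REQUIRED_FIELDS if not payload.get(f)]
    missing_best_practice = [f for f in BEST_PRACTICE_FIELDS if not payload.get(f)]
    return {
        "submittable": not missing_required,
        "missing_required": missing_required,
        "missing_best_practice": missing_best_practice,
    }
```

A draft missing corrective-action fields fails the gate, matching the rule that still-undetermined corrective actions must themselves be flagged to the NCA rather than silently omitted.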
Art.65(5): Malfunctioning Reporting to AI Office (GPAI Link)
Text: Where the serious incident referred to in paragraph 1 is related to a general-purpose AI model, the provider of the high-risk AI system shall also notify the provider of the general-purpose AI model of the serious incident. The AI Office shall be notified of the serious incident by the provider of the high-risk AI system where the incident is related to a general-purpose AI model with systemic risk.
The GPAI incident notification chain:
[Serious incident in high-risk AI system using GPAI model]
↓
[High-risk AI system provider → Art.65(1) notification to NCA]
↓
[High-risk AI system provider → Art.65(5) notification to GPAI model provider]
↓ (if GPAI model has systemic risk)
[High-risk AI system provider → Art.65(5) notification to AI Office]
Why this matters for cloud-based AI services:
Modern high-risk AI systems increasingly use foundation model APIs (GPT, Gemini, Claude, Llama-hosted services) as core components. When a serious incident occurs in a high-risk AI system that uses such a model:
- The high-risk AI system provider bears the Art.65(1) NCA reporting obligation regardless of whether the GPAI model contributed
- The high-risk AI system provider must separately notify the GPAI model provider under Art.65(5)
- If the GPAI model is classified as presenting systemic risk (≥10²⁵ FLOPs threshold under Art.51 or Commission designation), the AI Office must also receive notification
CLOUD Act implications: GPAI providers that are US-incorporated entities receive incident notifications that may be subject to US government compelled disclosure under the CLOUD Act. Incident notifications contain sensitive technical details about system failures, security vulnerabilities, and causation chains. For high-risk AI system providers, the CLOUD Act exposure of incident data shared with US-incorporated GPAI providers is a material risk to assess before deployment.
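The Art.65(5) branching above — NCA always, GPAI provider if a GPAI model is involved, AI Office if that model carries systemic risk — can be sketched as a resolver. The CLOUD Act flag is a risk-assessment prompt of this guide's own design, not a legal test:

```python
def gpai_notification_targets(uses_gpai: bool, systemic_risk: bool,
                              gpai_provider_us_incorporated: bool) -> dict:
    """Resolve who the high-risk AI system provider must notify under
    Art.65(1) and Art.65(5), plus a CLOUD Act exposure review flag."""
    targets = ["nca"]  # Art.65(1) applies regardless of GPAI involvement
    if uses_gpai:
        targets.append("gpai_model_provider")      # Art.65(5), first sentence
        if systemic_risk:
            targets.append("ai_office")            # Art.65(5), second sentence
    return {
        "notify": targets,
        "cloud_act_risk_review": uses_gpai and gpai_provider_us_incorporated,
    }
```

A high-risk system built on a systemic-risk model from a US-incorporated provider triggers all three notifications plus the confidentiality review.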
Art.65(6): Continued Investigation and Reporting Obligations
Text: After notifying the market surveillance authority, the provider shall, without undue delay, notify the market surveillance authority of any new relevant information related to the serious incident.
Ongoing reporting after initial notification:
The Art.65(3) 15-day notification is the beginning of the reporting cycle, not the end. After initial notification, providers carry a continuing obligation to report:
- New information about causation that becomes available through investigation
- Scope changes (additional affected users, additional Member States)
- Changes to corrective actions taken or planned
- Results of post-incident technical investigation
- Corrective action completion confirmation
This obligation mirrors the NIS2 "intermediate reports" and "final reports" framework (NIS2 Art.23(4)). Providers building incident response workflows should plan for a 30–90 day continued reporting cycle after initial Art.65(3) notification.
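A planning-level sketch of that continued reporting cadence. The 30/90-day checkpoints mirror the NIS2-style intermediate/final rhythm mentioned above; they are an internal planning convention, not deadlines set by Art.65 itself (which only requires reporting new relevant information "without undue delay"):

```python
from datetime import datetime, timedelta

def follow_up_schedule(initial_notification: datetime) -> list:
    """Art.65(6) internal follow-up cadence: an intermediate report at ~30
    days and a final report at ~90 days after the initial NCA notification.
    New relevant information must still be reported as it arises.
    """
    return [
        ("intermediate_report", initial_notification + timedelta(days=30)),
        ("final_report", initial_notification + timedelta(days=90)),
    ]
```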
Art.65(7)–(8): NCA Powers and Confidentiality
Art.65(7): NCAs receiving Art.65 notifications shall immediately inform other NCAs of the serious incident and of the corrective actions taken or planned, using the RAPEX/ICSMS information exchange systems.
Cross-border coordination mechanics:
RAPEX (the EU Rapid Alert System for dangerous non-food products) is the notification system through which NCAs alert each other to dangerous products. For AI systems, Art.65(7) integrates incident reporting into this existing cross-border alert infrastructure. An incident reported to one NCA propagates to all NCAs via RAPEX.
Implications:
- A serious incident in one Member State may trigger NCA scrutiny in all Member States where the system is deployed
- Corrective actions mandated by one NCA may be adopted by other NCAs
- RAPEX notifications are generally public — serious incidents become public regulatory events across the EU
Art.65(8): NCAs shall take into account the information provided under Art.65(6) and shall, without undue delay, inform the AI Board and the AI Office of the incident and of any corrective actions taken.
The escalation chain:
- NCA receives Art.65 notification → RAPEX notification to other NCAs → AI Board notification → AI Office notification
- For incidents involving GPAI models: AI Office notification may trigger Art.62 corrective measures against the GPAI provider
Art.65 and Art.72 Post-Market Monitoring: The Detection Layer
Art.65 is operationally dependent on Art.72. Providers cannot report incidents they do not detect. Art.72 requires providers to:
- Establish a post-market monitoring system that actively collects and analyses performance data from deployed high-risk AI systems
- Monitor for serious incidents by defining incident detection criteria aligned with Art.3(49) serious incident categories
- Log outputs and anomalies in a way that enables causal investigation when an incident is identified
The Art.72 → Art.65 workflow:
[Art.72 monitoring system collects operational data]
↓
[Anomaly detected → automated alert or manual review]
↓
[Incident classification: does it meet Art.3(49) criteria?]
↓ (yes)
[Art.65 reporting workflow triggered]
↓
[Day 0: Provider "becomes aware" — clock starts]
↓
[Art.65(4) information gathered (corrective actions, causation)]
↓
[Day ≤15: NCA notification submitted]
Providers whose Art.72 monitoring systems have poor anomaly detection or delayed classification will systematically miss the 15-day Art.65 window. NCAs will scrutinise whether Art.72 monitoring was adequate when evaluating whether the 15-day clock was properly started.
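The classification gate between Art.72 monitoring and the Art.65 workflow can be sketched as a set-intersection check against the Art.3(49) categories. The consequence labels are this guide's own shorthand for the four categories:

```python
# Art.3(49) consequence categories (labels are this sketch's shorthand).
ART_3_49_CATEGORIES = {
    "death_or_serious_health_harm",
    "critical_infrastructure_disruption",
    "fundamental_rights_breach",
    "serious_property_or_environmental_harm",
}

def is_art65_reportable(observed_consequences: set) -> bool:
    """Gate between Art.72 anomaly detection and the Art.65 workflow: an
    anomaly becomes reportable once at least one Art.3(49) consequence is
    established as directly or indirectly caused by the system."""
    return bool(observed_consequences & ART_3_49_CATEGORIES)
```

The moment this gate returns True and the provider has the supporting facts is Day 0 — the point at which the 15-day clock starts.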
Python Implementation: SeriousIncidentReport
```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum
from typing import List, Optional


class SeriousIncidentCategory(Enum):
    """Art.3(49) serious incident categories."""
    DEATH_OR_SERIOUS_HEALTH_HARM = "death_or_serious_health_harm"
    CRITICAL_INFRASTRUCTURE_DISRUPTION = "critical_infrastructure_disruption"
    FUNDAMENTAL_RIGHTS_BREACH = "fundamental_rights_breach"
    SERIOUS_PROPERTY_OR_ENVIRONMENTAL_HARM = "serious_property_or_environmental_harm"


class IncidentReportStatus(Enum):
    DETECTED = "detected"
    UNDER_INVESTIGATION = "under_investigation"
    INITIAL_NOTIFICATION_SUBMITTED = "initial_notification_submitted"
    FOLLOW_UP_REPORTING_IN_PROGRESS = "follow_up_reporting_in_progress"
    CLOSED = "closed"


@dataclass
class GPAILink:
    """GPAI model involvement details for Art.65(5) notification."""
    gpai_provider_name: str
    gpai_model_id: str
    has_systemic_risk: bool  # ≥10^25 FLOPs or Commission designation
    gpai_provider_notified: bool = False
    ai_office_notified: bool = False
    gpai_provider_cloud_act_exposure: bool = False  # US-incorporated provider


@dataclass
class CorrectiveAction:
    description: str
    status: str  # "taken" or "planned"
    planned_completion: Optional[datetime] = None
    member_state_specific: bool = False


@dataclass
class SeriousIncidentReport:
    """
    Art.65 EU AI Act serious incident report tracker.

    Art.65(3): Notification to NCA within 15 days of provider becoming aware.
    Art.65(4): Corrective actions in notification.
    Art.65(5): GPAI model provider + AI Office notification if systemic risk.
    Art.65(6): Ongoing reporting obligation after initial notification.
    """
    incident_id: str
    system_euid: str  # EU AI database identifier (Art.60)
    system_provider_name: str
    incident_description: str
    incident_categories: List[SeriousIncidentCategory]
    incident_occurred_date: datetime
    provider_became_aware_date: datetime  # Day 0 for 15-day clock
    member_state_of_occurrence: str  # ISO country code, e.g. "DE", "FR"
    deployer_notified_provider: bool = False
    deployer_notification_date: Optional[datetime] = None
    corrective_actions: List[CorrectiveAction] = field(default_factory=list)
    gpai_link: Optional[GPAILink] = None
    status: IncidentReportStatus = IncidentReportStatus.DETECTED
    nca_notification_date: Optional[datetime] = None
    nca_case_reference: Optional[str] = None
    msr_art19_notification_required: bool = False  # Serious risk → 72h MSR obligation

    @property
    def notification_deadline(self) -> datetime:
        """Art.65(3): 15 days from provider awareness."""
        return self.provider_became_aware_date + timedelta(days=15)

    @property
    def days_remaining(self) -> int:
        """Days until Art.65(3) deadline."""
        remaining = (self.notification_deadline - datetime.now()).days
        return max(remaining, 0)

    @property
    def is_overdue(self) -> bool:
        """True if 15-day deadline has passed without notification."""
        if self.nca_notification_date:
            return False
        return datetime.now() > self.notification_deadline

    @property
    def notification_within_deadline(self) -> Optional[bool]:
        """None if not yet notified; True/False if notified."""
        if not self.nca_notification_date:
            return None
        return self.nca_notification_date <= self.notification_deadline

    @property
    def requires_rapex_propagation(self) -> bool:
        """Art.65(7): NCA will notify other NCAs via RAPEX."""
        return self.nca_notification_date is not None

    def gpai_notification_status(self) -> dict:
        """Art.65(5) compliance check."""
        if not self.gpai_link:
            return {"gpai_involved": False}
        return {
            "gpai_involved": True,
            "gpai_provider_notified": self.gpai_link.gpai_provider_notified,
            "systemic_risk": self.gpai_link.has_systemic_risk,
            "ai_office_notification_required": self.gpai_link.has_systemic_risk,
            "ai_office_notified": self.gpai_link.ai_office_notified,
            "cloud_act_exposure": self.gpai_link.gpai_provider_cloud_act_exposure,
        }

    def compliance_gaps(self) -> List[str]:
        """Returns list of unresolved compliance gaps for this incident."""
        gaps = []
        if self.is_overdue:
            gaps.append("OVERDUE: NCA notification beyond Art.65(3) 15-day deadline")
        if not self.nca_notification_date and self.days_remaining <= 3:
            gaps.append(f"URGENT: {self.days_remaining} days remaining for Art.65(3) notification")
        if self.gpai_link:
            if not self.gpai_link.gpai_provider_notified:
                gaps.append("Art.65(5): GPAI model provider not yet notified")
            if self.gpai_link.has_systemic_risk and not self.gpai_link.ai_office_notified:
                gaps.append("Art.65(5): AI Office not yet notified (systemic risk GPAI model involved)")
        if not self.corrective_actions:
            gaps.append("Art.65(4): No corrective actions documented")
        return gaps

    def summary(self) -> str:
        lines = [
            f"Incident {self.incident_id} — {self.system_euid}",
            f"Provider: {self.system_provider_name}",
            f"MS of occurrence: {self.member_state_of_occurrence}",
            f"Categories: {[c.value for c in self.incident_categories]}",
            f"Provider aware: {self.provider_became_aware_date.date()}",
            f"Notification deadline: {self.notification_deadline.date()} ({self.days_remaining} days)",
            f"Status: {self.status.value}",
        ]
        if self.nca_notification_date:
            lines.append(
                f"NCA notified: {self.nca_notification_date.date()} — "
                f"{'ON TIME' if self.notification_within_deadline else 'LATE'}"
            )
        gaps = self.compliance_gaps()
        if gaps:
            lines.append("COMPLIANCE GAPS:")
            lines.extend(f"  ⚠ {g}" for g in gaps)
        return "\n".join(lines)
```
Art.65 Compliance Readiness Checklist
| # | Item | Who | Timing |
|---|---|---|---|
| 1 | Define "serious incident" classification criteria aligned with Art.3(49) — include concrete examples for your system's use case | Provider | Before market placement |
| 2 | Designate an Art.65 incident response coordinator with authority to initiate NCA notification | Provider | Before market placement |
| 3 | Establish deployer-to-provider notification SLA (e.g., "notify within 24 hours of incident awareness") in deployment contracts | Provider + deployer | Before deployment |
| 4 | Build Art.72 post-market monitoring system with Art.3(49)-aligned alert thresholds | Provider | Before market placement |
| 5 | Identify the NCA contact point for each Member State where the system is deployed — notifications go to the MS where incident occurred | Provider | Before market placement |
| 6 | Assess GPAI model involvement: if using a foundation model API, identify whether the provider is systemic-risk classified and prepare Art.65(5) dual notification workflow | Provider | Before market placement |
| 7 | Assess CLOUD Act exposure of GPAI model provider: if US-incorporated, evaluate incident notification confidentiality risk | Provider | Before market placement |
| 8 | Build parallel MSR Art.19 / EU AI Act Art.65 notification workflow for incidents presenting imminent serious risk (72-hour MSR + 15-day AI Act timelines) | Provider | Before market placement |
| 9 | Assess multi-regulation overlap: if deployed in NIS2 essential/important entity context or CER critical infrastructure context, map Art.65 obligations against NIS2 Art.23 and CER Art.15 | Provider | Before deployment |
| 10 | Conduct annual Art.65 drill: simulate a serious incident, test deployer-to-provider notification, NCA notification submission, GPAI link notification, and ongoing Art.65(6) reporting cycle | Provider + deployer | Annually |
Series Context: Chapter IX Governance and Enforcement Framework
| Article | Coverage | Post |
|---|---|---|
| Art.57 | National Competent Authorities — designation, tasks, independence | Art.57 guide |
| Art.58 | NCA enforcement powers — investigation, access, corrective measures | Art.58 guide |
| Art.59 | AI Board — composition, independence, NCA coordination | Art.59 guide |
| Art.60 | EU AI database — public registry, EUID governance, Commission management | Art.60 guide |
| Art.61 | Scientific Panel — independent experts, model evaluation, AI Office advisory | Art.61 guide |
| Art.62 | AI Office enforcement powers — corrective measures, market withdrawal, emergency action | Art.62 guide |
| Art.63 | Advisory Forum — multi-stakeholder consultation, composition, tasks, CoP input | Art.63 guide |
| Art.64 | Access to data and documentation — market surveillance authority enforcement powers | Art.64 guide |
| Art.65 | Reporting of serious incidents — provider NCA notification obligations | This guide |
| Art.66 | Market surveillance, information exchange, and enforcement coordination | Art.66 guide |
EU AI Act Art.65 analysis based on Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Applicable from 2 August 2025 per Art.113(3). Member State NCAs will establish specific notification formats and submission mechanisms; providers should check the NCA website for each Member State where their system is deployed. This guide reflects the text of the Regulation as enacted and does not constitute legal advice.