2026-04-09·13 min read·sota.io team

EU NIS2 + AI Act: The Double Compliance Burden for Critical Infrastructure Developers

When an energy grid operator deploys an AI system for load forecasting, or a water treatment facility uses machine learning for anomaly detection in SCADA systems, they do not face one regulatory regime — they face two simultaneously.

The NIS2 Directive (2022/2555) imposes cybersecurity risk-management obligations on essential and important entities across 18 critical sectors. The EU AI Act (2024/1689) classifies AI systems used as safety components in the management and operation of critical infrastructure as high-risk under Annex III Category 2. Together, they create a compliance burden that is greater than the sum of its parts: the requirements interact, overlap, and in some areas directly reference each other.

This guide maps the intersection for developers and architects working in or building for critical infrastructure organisations. It covers what each regulation requires, where they overlap, and how your infrastructure decisions affect your compliance posture under both regimes simultaneously.

The Two Regulatory Regimes

NIS2 Directive (2022/2555) — Network and Information Security

The NIS2 Directive entered into force on 16 January 2023 and required EU Member State transposition by 17 October 2024. It replaces the original NIS Directive (2016/1148) and significantly expands both scope and obligations.

Scope — Essential and Important Entities:

NIS2 distinguishes between "essential entities" (stricter supervision, higher maximum fines) and "important entities" (lighter supervision, lower maximum fines):

| Sector | Essential (Art. 3(1)) | Important (Art. 3(2)) |
|---|---|---|
| Energy | Electricity, gas, oil, hydrogen | District heating, EV charging operators |
| Transport | Air, rail, water, road | |
| Banking and financial markets | Credit institutions, trading venues | |
| Health | Hospitals, healthcare (≥250 employees or ≥€50M turnover) | Medical devices (mid-size) |
| Water | Drinking water, wastewater (≥50,000 persons served) | |
| Digital infrastructure | IXPs, DNS providers, TLD registries, cloud providers, CDNs, data centres, trust service providers (TSPs) | Managed security providers |
| ICT service management | MSPs (large) | MSPs (mid-size) |
| Public administration | Central government | Regional government |
| Space | Space infrastructure operators | |

Article 21 — Cybersecurity Risk Management Measures:

This is the core technical obligation. Essential and important entities must implement measures addressing at minimum:

  1. Risk analysis and information system security policies (Art. 21(2)(a))
  2. Incident handling — detection, response, recovery (Art. 21(2)(b))
  3. Business continuity — backup management, disaster recovery, crisis management (Art. 21(2)(c))
  4. Supply chain security — relationships with direct suppliers and service providers (Art. 21(2)(d))
  5. Security in network and information systems acquisition, development and maintenance — vulnerability handling, disclosure (Art. 21(2)(e))
  6. Policies and procedures to assess cybersecurity risk management effectiveness (Art. 21(2)(f))
  7. Basic cyber hygiene practices and cybersecurity training (Art. 21(2)(g))
  8. Cryptography and encryption policies (Art. 21(2)(h))
  9. Human resources security, access control and asset management (Art. 21(2)(i))
  10. Multi-factor authentication, continuous authentication, secured voice/video/text communications, secured emergency communications (Art. 21(2)(j))

Article 23 — Incident Reporting:

NIS2 creates a three-stage reporting obligation for significant incidents:

  1. Early warning to the CSIRT or competent authority within 24 hours of becoming aware of the incident (Art. 23(4)(a))
  2. Incident notification within 72 hours, including an initial assessment of severity and impact (Art. 23(4)(b))
  3. Final report no later than one month after the incident notification (Art. 23(4)(d))

A "significant incident" is one that has caused or is capable of causing severe operational disruption or financial loss, or has caused or is capable of causing significant material or non-material damage to other natural or legal persons.

EU AI Act (2024/1689) — Annex III Category 2: Critical Infrastructure

The EU AI Act classifies AI systems used in the management and operation of critical infrastructure as high-risk under Annex III, Point 2:

"AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity."

The AI Office has clarified in guidance that this covers, among other systems: grid stability and load-balancing AI for electricity, control and anomaly-detection systems for water and gas supply (including SCADA-connected ML), and traffic management systems (with the road traffic wording extended to rail).

What "High-Risk AI" means under the AI Act:

From 2 August 2026, when Annex III becomes fully applicable, providers of Annex III Category 2 AI systems must comply with Articles 9-15; deployers face the separate obligations of Article 26:

| AI Act Article | Obligation |
|---|---|
| Art. 9 | Risk management system: continuous, throughout the lifecycle |
| Art. 10 | Data governance: training/validation/test datasets |
| Art. 11 | Technical documentation: before market placement |
| Art. 12 | Record-keeping: automatic logging of events |
| Art. 13 | Transparency: information to deployers |
| Art. 14 | Human oversight: design for human intervention |
| Art. 15 | Accuracy, robustness, cybersecurity |

The critical distinction: Art. 15 explicitly mandates cybersecurity for high-risk AI systems. This is where NIS2 and the AI Act begin to overlap structurally.

The Intersection Matrix

Where the two regimes overlap, the more demanding requirement governs. Where they address different risks, both apply independently.

| Compliance Area | NIS2 Requirement | AI Act Requirement | Intersection |
|---|---|---|---|
| Risk management | Art. 21(2)(a): ICT risk analysis and security policies | Art. 9: risk management system throughout the AI lifecycle | Shared foundation: one integrated risk register covers both |
| Incident detection | Art. 21(2)(b): incident handling | Art. 12: automatic event logging | AI Act logs feed NIS2 detection systems |
| Incident reporting | Art. 23: 24h/72h/1-month notification to CSIRT/NCA | Art. 73: serious incident reporting to the market surveillance authority | Dual reporting channels: same event, two authorities |
| Supply chain security | Art. 21(2)(d): supplier security requirements | Art. 23-24: importer/distributor obligations; Art. 26: deployer obligations | AI provider = supply chain under NIS2 |
| Cybersecurity measures | Art. 21(2)(h)/(j): encryption, MFA | Art. 15: accuracy, robustness, cybersecurity | Direct cross-reference: NIS2 measures satisfy some Art. 15 requirements |
| Documentation | Art. 21(2)(f): effectiveness assessment | Art. 11: technical documentation | Technical docs can be structured to serve both |
| Access control | Art. 21(2)(i): asset management, access control | Art. 14: human oversight mechanisms | Access control supports both human oversight and NIS2 access policies |
| Training | Art. 21(2)(g): cyber hygiene training | Art. 14(4): training of persons responsible for human oversight | Training programmes can be unified |
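One way to operationalise this matrix is a single control catalogue where each control is tagged once with the provision it serves under both regimes, so the same catalogue can be filtered for either audit. A minimal sketch with illustrative control names:

# Illustrative control catalogue: one entry per control, dual article tags
CONTROL_CATALOGUE = {
    "integrated_risk_register": {"nis2": "Art. 21(2)(a)", "ai_act": "Art. 9"},
    "ai_event_logging":         {"nis2": "Art. 21(2)(b)", "ai_act": "Art. 12"},
    "supplier_assessment":      {"nis2": "Art. 21(2)(d)", "ai_act": "Art. 26"},
    "encryption_and_mfa":       {"nis2": "Art. 21(2)(h)/(j)", "ai_act": "Art. 15"},
    "oversight_training":       {"nis2": "Art. 21(2)(g)", "ai_act": "Art. 14(4)"},
}

def controls_for(regime: str) -> dict:
    """List every control with the article it satisfies under the given regime."""
    return {name: refs[regime] for name, refs in CONTROL_CATALOGUE.items()}

print(controls_for("nis2"))  # one catalogue, two audit views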

Affected Sectors: What Operators Face

Energy Sector

Scenario: An electricity transmission operator uses AI for real-time grid stability prediction and automatic load balancing.

NIS2 exposure: Electricity operators are essential entities under Annex I, Point 1. Full Art. 21 obligations apply. National energy regulators coordinate with NCA (National Competent Authority) for NIS2 supervision.

AI Act exposure: Grid management AI is textbook Annex III Category 2. Provider obligations (Art. 9-15) apply if the operator built or customised the system. Deployer obligations (Art. 26) apply if using a commercial AI product.

Specific complexity: The same AI system that optimises load balancing is both a cybersecurity-relevant asset under NIS2 and a high-risk AI system under the AI Act. A cyberattack that manipulates the AI model (adversarial input to the load forecasting system) is simultaneously:

  1. A significant incident under NIS2 Art. 23, triggering the 24h/72h/1-month reporting cascade to the CSIRT/NCA
  2. A candidate serious incident under AI Act Art. 73, triggering reporting to the market surveillance authority

The operator must notify two different authorities with different timelines and different information requirements.

Water and Wastewater

Scenario: A water utility with ≥50,000 persons served deploys ML-based anomaly detection on SCADA sensor data.

NIS2 exposure: Essential entity under Annex I, Point 6 (drinking water; wastewater falls under Point 7). The 50,000-person threshold specifically targets municipal-scale utilities.

AI Act exposure: SCADA anomaly detection for water infrastructure is explicitly covered by the AI Act's guidance on Annex III Category 2 ("supply of water").

Specific complexity: Water infrastructure is particularly sensitive because:

  1. Real-time intervention may be impossible (treatment takes hours)
  2. SCADA systems often run on operational technology (OT) networks with long lifecycle devices
  3. AI model poisoning could cause treatment failures before detection

NIS2 Art. 21(2)(e) requires vulnerability handling for systems acquired, developed, or maintained — this applies to the AI model itself as a vulnerability surface.
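A minimal sketch of that idea: verify the model artifact against a signed manifest before loading, so a tampered or poisoned model is treated as a security event rather than silently deployed. The path and the source of the expected digest are illustrative:

import hashlib
from pathlib import Path

def verify_model_artifact(model_path: Path, expected_sha256: str) -> None:
    """Refuse to load a model whose hash does not match the signed manifest."""
    digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        # Treat as a potential supply chain / poisoning event: raise for incident review
        raise RuntimeError(f"Model integrity check failed: {model_path}")

# verify_model_artifact(Path("models/anomaly-detector.onnx"), expected_sha256="<digest from signed manifest>")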

Transport

Scenario: A railway operator uses AI for dynamic scheduling and delay prediction, integrated with traffic management systems.

NIS2 exposure: Rail transport operators are essential entities under Annex I, Point 2.

AI Act exposure: Traffic management AI is covered under Annex III Category 2 ("management and operation of road traffic" — extended by guidance to rail).

Specific complexity: ERA (European Union Agency for Railways) already requires safety cases for rail systems. The AI Act's Art. 9 risk management system must align with ERA's Common Safety Methods (CSM-RA). Both require systematic risk assessment but use different frameworks and reporting channels.

Health

Scenario: A hospital with ≥250 employees uses AI for patient triage prioritisation in the emergency department.

NIS2 exposure: Hospitals above the threshold are essential entities under Annex I, Point 5.

AI Act exposure: Triage AI falls under Annex III Category 5 (access to and enjoyment of essential private and public services, which includes emergency healthcare patient triage under point 5(d)), not Category 2. However, hospital AI that interacts with medical devices or hospital information systems may trigger both Category 2 (infrastructure) and Category 5 (access to services).

Note: For health specifically, the Medical Device Regulation (MDR 2017/745) and In Vitro Diagnostic Regulation (IVDR 2017/746) add a third regulatory layer for AI integrated into medical devices.

Implementation Guide: What Developers Must Build

Unified Risk Register

The most significant operational efficiency gain from integrated NIS2 + AI Act compliance is building a single risk register that satisfies both:

RiskRegister:
  system_id: "grid-load-forecaster-v2"
  nis2_classification: "essential_entity"
  ai_act_classification: "high_risk_annex_iii_cat2"
  
  # NIS2 Art. 21(2)(a) — ICT Risk Analysis
  ict_risks:
    - id: "ICT-001"
      description: "Adversarial manipulation of load forecasting inputs"
      likelihood: "medium"
      impact: "critical"  # grid instability
      controls: ["input_validation", "anomaly_detection_layer", "human_override"]
  
  # AI Act Art. 9 — AI Risk Management
  ai_risks:
    - id: "AI-001"
      description: "Model drift causing systematic under-prediction of peak load"
      likelihood: "medium"
      impact: "high"  # frequency deviation
      controls: ["drift_monitoring", "performance_thresholds", "human_review_trigger"]
      
  # Cross-referenced risks (same event, dual classification)
  cross_risks:
    - id: "CROSS-001"
      description: "Cyberattack via adversarial input to AI model"
      nis2_ref: "ICT-001"
      ai_act_ref: "AI-001"
      reporting: ["NCA_24h", "MARKET_SURVEILLANCE_72h"]

Key design principle: Cross-referenced risks have dual reporting paths. One risk management team, two notification workflows.
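A sketch of how the cross_risks entries above could drive both workflows, assuming the register is stored as YAML and parsed with PyYAML; the channel naming convention and dispatch targets are illustrative:

import yaml  # PyYAML

def dispatch_cross_risk(register_path: str, risk_id: str) -> list:
    """Fan one cross-referenced risk out to both notification workflows."""
    with open(register_path) as f:
        register = yaml.safe_load(f)["RiskRegister"]
    risk = next(r for r in register["cross_risks"] if r["id"] == risk_id)
    dispatched = []
    for channel in risk["reporting"]:
        # Channel naming convention from the register above (illustrative)
        regime = "NIS2 (CSIRT/NCA)" if channel.startswith("NCA") else "AI Act (market surveillance)"
        dispatched.append(f"{regime} -> {channel}")
    return dispatched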

Automatic Event Logging (Art. 12 AI Act)

The AI Act requires automatic logging of events in high-risk AI systems. This logging system must be designed to satisfy both Art. 12 requirements and NIS2's incident detection needs:

# Structured event log that satisfies AI Act Art. 12
# and feeds NIS2 incident detection pipeline

import structlog
import hashlib
from datetime import datetime, timezone

log = structlog.get_logger()

def trigger_nis2_incident_review(system_id: str, session_id: str, confidence: float) -> None:
    """Placeholder: wire this to the operator's NIS2 incident-management queue."""
    log.warning("nis2_incident_review", system_id=system_id,
                session_id=session_id, confidence=confidence)

def log_ai_decision(
    system_id: str,
    input_hash: str,      # never log raw inputs containing personal data
    decision: dict,
    confidence: float,
    human_override: bool,
    session_id: str,
):
    """
    AI Act Art. 12: Automatic logging of high-risk AI system events.
    NIS2 Art. 21(2)(b): Feeds incident detection pipeline.
    """
    log.info(
        "ai_decision",
        system_id=system_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_hash=input_hash,          # AI Act: traceability without raw data
        decision_type=decision["type"],
        decision_value=decision["value"],
        confidence=confidence,
        human_override=human_override,  # AI Act Art. 14: human oversight documented
        session_id=session_id,
        # NIS2 fields for SIEM integration
        severity="INFO" if confidence > 0.8 else "WARN",
        alert_threshold=confidence < 0.6,  # triggers NIS2 incident review
    )
    
    # Alert NIS2 incident management if confidence below threshold
    if confidence < 0.6 or human_override:
        trigger_nis2_incident_review(system_id, session_id, confidence)

Critical design decision: Under AI Act Art. 19, logs must be retained for a period appropriate to the intended purpose of the system, and in any case for at least six months. NIS2 Art. 21(2)(b) requires logs to be available for incident response. Align your retention policy to the longer of the two.
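In code, that alignment is a one-liner; the NIS2 value here is an assumed internal policy figure, not a number from the directive:

from datetime import timedelta

AI_ACT_MIN_RETENTION = timedelta(days=183)    # Art. 19: at least six months
NIS2_POLICY_RETENTION = timedelta(days=365)   # assumed internal incident-response policy

LOG_RETENTION = max(AI_ACT_MIN_RETENTION, NIS2_POLICY_RETENTION)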

Supply Chain Security for AI Providers

NIS2 Art. 21(2)(d) requires operators to secure their supply chain — and AI providers are supply chain. An energy operator buying a commercial AI model for grid management must:

  1. Assess the AI provider's security posture (Art. 21(2)(d) NIS2)
  2. Verify the AI system's Art. 11 Technical Documentation (AI Act)
  3. Confirm the provider's Art. 9 risk management covers the use case

The AI Act's Art. 23 explicitly addresses obligations for importers of high-risk AI systems: verify that the provider has conducted the conformity assessment, that the technical documentation is available, and that the system bears the CE marking and is accompanied by the EU declaration of conformity.

Due diligence checklist for AI procurement by critical infrastructure operators:

AI System Procurement Checklist (NIS2 + AI Act):
□ Provider holds ISO 27001 or equivalent (satisfies NIS2 supply chain security)
□ AI system has AI Act Art. 11 technical documentation
□ Conformity assessment completed (AI Act Art. 43)
□ EU Declaration of Conformity signed
□ CE marking present on system documentation
□ EU database registration number confirmed (Art. 49; database established under Art. 71)
□ Provider commits to incident notification within 24h (aligns with NIS2 timeline)
□ Provider incident notification = NIS2-compatible (includes root cause, impact scope)
□ Data processing agreement (GDPR) in place for any personal data in training/inference
□ Provider is EU-incorporated or operates under EU jurisdiction (reduces CLOUD Act risk)
□ SLA includes uptime guarantees compatible with essential service continuity (NIS2 Art. 21(2)(c))
□ Provider specifies EU data residency for model inference logs (AI Act Art. 12 retention)
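To make the checklist enforceable, it can be encoded as required evidence fields and gated in procurement tooling. A sketch with illustrative field names:

REQUIRED_EVIDENCE = [
    "iso27001_certificate", "art11_technical_documentation", "art43_conformity_assessment",
    "eu_declaration_of_conformity", "ce_marking", "eu_database_registration",
    "incident_notification_sla_24h", "gdpr_dpa", "eu_jurisdiction",
    "continuity_sla", "eu_data_residency",
]

def missing_evidence(vendor_record: dict) -> list:
    """Return checklist items the vendor has not yet evidenced."""
    return [item for item in REQUIRED_EVIDENCE if not vendor_record.get(item)]

print(missing_evidence({"ce_marking": True}))  # everything else still outstanding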

Dual Incident Reporting Workflow

The most operationally demanding aspect of the NIS2 + AI Act intersection is dual incident reporting. A significant AI system incident affecting critical infrastructure must be reported to:

  1. NCA/CSIRT under NIS2 (24h early warning, 72h notification, 1mo final report)
  2. Market Surveillance Authority under AI Act Art. 73 (for serious incidents)

These are different authorities with different forms and different definitions of "significant/serious."

Incident Classification Decision Tree:

AI system anomaly detected
    │
    ├── Does it cause/risk: severe operational disruption OR significant financial loss?
    │       YES → NIS2 Significant Incident → 24h Early Warning to CSIRT/NCA
    │
    └── Does it constitute: death OR serious health damage OR significant property damage
              OR disruption to critical infrastructure provisions?
                    YES → AI Act Serious Incident → Art. 73 reporting to Market Surveillance Authority
                    
Both paths active? → Parallel reporting required.
Coordinate messaging: same facts, different forms, different recipients.
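The tree translates directly into code. A sketch; the two predicates are simplifications of the legal definitions and would need legal sign-off before operational use:

def classify_incident(severe_disruption: bool, major_financial_loss: bool,
                      harm_to_persons: bool, infrastructure_disruption: bool) -> list:
    """Return the reporting paths an AI system anomaly triggers."""
    reports = []
    if severe_disruption or major_financial_loss:
        reports.append("NIS2 significant incident: 24h early warning to CSIRT/NCA")
    if harm_to_persons or infrastructure_disruption:
        reports.append("AI Act serious incident: Art. 73 report to market surveillance authority")
    return reports  # two entries -> parallel reporting, coordinate messaging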

Recommended implementation: Designate one person as NIS2 incident coordinator and one as AI Act compliance officer. For SMEs, this may be the same person — but the reporting workflows must be documented separately because regulators have different information requirements.

Infrastructure Jurisdiction: The CLOUD Act Problem

Both NIS2 and the AI Act create obligations that are materially affected by where your infrastructure runs.

NIS2 Art. 21 Supply Chain Security: If your critical infrastructure AI runs on US cloud infrastructure (AWS, Azure, GCP), that infrastructure provider is subject to the US CLOUD Act. A CLOUD Act request to an AWS data centre in Frankfurt does not require the Frankfurt court's approval — it goes directly to AWS's US headquarters. The US government can compel production of data stored in EU AWS infrastructure without notifying the EU operator.

For a critical infrastructure operator, this creates a structural supply chain security risk: a third party (AWS) can be compelled to provide access to systems and data that the operator is obligated to protect under NIS2 Art. 21(2)(a) and (d).

AI Act Art. 12 Record-Keeping: The automatic logs required under Art. 12 must be "available to national competent authorities" for supervision. If those logs are stored on US infrastructure, a parallel CLOUD Act access path exists alongside the EU supervisory channel. The EU market surveillance authority has the right to inspect — but so does the US government, without the authority's knowledge.

AI Act Art. 72 Post-Market Monitoring: For high-risk AI systems, providers must implement continuous post-market monitoring. The monitoring data, including performance metrics, drift indicators, and anomaly logs, must be processable under EU law. If this data passes through US infrastructure, supplementary measures under EDPB Recommendations 01/2020 are required (encryption with EU key management, pseudonymisation before transfer, etc.).
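A minimal sketch of one such supplementary measure: encrypting monitoring records with an EU-held key before they transit non-EU infrastructure, using the cryptography library's Fernet primitive. The key-management arrangement is assumed:

import json
from cryptography.fernet import Fernet

eu_held_key = Fernet.generate_key()  # in practice: retrieved from an EU-controlled KMS
cipher = Fernet(eu_held_key)

record = {"system_id": "grid-load-forecaster-v2", "drift_score": 0.07}
ciphertext = cipher.encrypt(json.dumps(record).encode())  # only ciphertext transits non-EU infrastructure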

Practical implication for critical infrastructure operators: using an EU-native PaaS for AI system deployment eliminates the CLOUD Act supply chain risk.

NIS2 itself recognises this: under Annex I, Point 8 (Digital Infrastructure), cloud computing service providers and data centre service providers are covered entities subject to NIS2 obligations. An EU-native cloud provider that is itself NIS2-compliant is structurally a stronger supply chain choice than a US provider operating EU data centres.

Enforcement Landscape

NIS2 Enforcement

NIS2 establishes maximum fines significantly higher than NIS1:

| Entity Type | Maximum Fine |
|---|---|
| Essential entities | €10 million or 2% of global annual turnover (whichever is higher) |
| Important entities | €7 million or 1.4% of global annual turnover (whichever is higher) |

Key enforcement powers under Art. 32-33 include on-site inspections and off-site supervision, targeted security audits and security scans, binding instructions to remedy deficiencies, and, for essential entities, temporary suspension of certifications or authorisations and temporary prohibition of individuals from exercising managerial functions at CEO or legal representative level.

AI Act Enforcement

For high-risk AI systems (Annex III), the AI Act sets the following maximum fines:

| Violation | Maximum Fine |
|---|---|
| Non-compliance with Art. 9-15 obligations | €15 million or 3% of global annual turnover |
| Prohibited AI practices (Art. 5) | €35 million or 7% of global annual turnover |

Market surveillance authorities have powers under Art. 74, including full access to the technical documentation and to the training, validation and testing datasets used, and, upon reasoned request, access to the source code of the high-risk AI system.

Dual Enforcement Risk

A critical infrastructure operator who deploys a high-risk AI system without proper NIS2 and AI Act compliance faces:

  1. NIS2 fines of up to €10 million or 2% of global annual turnover (as an essential entity)
  2. AI Act fines of up to €15 million or 3% of global annual turnover for non-compliance with Art. 9-15

These are not alternative penalties — they are simultaneous exposure from different regulatory frameworks enforced by different authorities (NCA for NIS2, market surveillance authority for AI Act).

Timeline: When What Applies

| Date | Event |
|---|---|
| 2 August 2024 | AI Act entered into force |
| 17 October 2024 | NIS2 transposition deadline: Member States must have enacted national law |
| 2 February 2025 | AI Act prohibited AI practices (Art. 5) applicable |
| 2 August 2025 | AI Act GPAI model rules (Art. 51-55) applicable |
| 2 August 2026 | AI Act Annex III high-risk obligations fully applicable: the critical deadline for critical infrastructure AI |

NIS2 is already in force across most EU Member States. The AI Act's Annex III obligations run on a 24-month countdown from the Act's entry into force, ending 2 August 2026.

Critical infrastructure organisations deploying AI systems have until August 2026 to achieve full AI Act compliance. NIS2 obligations are current: organisations should already have implemented Art. 21 measures.

What to Do Now

For critical infrastructure operators currently deploying AI:

  1. Inventory all AI systems against Annex III Category 2 criteria — categorise as high-risk AI or not
  2. Assess NIS2 classification — essential or important entity per your sector and size
  3. Build the unified risk register — one register, dual classification (ICT risk + AI risk)
  4. Implement Art. 12 logging — automatic event logs with appropriate retention policy
  5. Audit your AI supply chain — apply the procurement checklist to existing AI provider relationships
  6. Map dual reporting workflows — who reports what to which authority, at what timeline
  7. Review infrastructure jurisdiction — assess CLOUD Act exposure for all systems within scope

For developers building AI systems for critical infrastructure clients:

  1. Identify if you are a Provider — if you built or customised the system, you have Art. 9-15 obligations
  2. Prepare Technical Documentation (Art. 11) — before August 2026
  3. Implement the Risk Management System (Art. 9) — it must be continuous and documented
  4. Include NIS2 incident notification in your SLAs — your clients have 24h/72h windows; you must feed them information in time
  5. Design logging infrastructure (Art. 12) for EU jurisdiction from the start
