2026-04-26 · 15 min read

If you build or operate AI systems for electricity grid management, water treatment process control, gas network operations, road traffic signal management, or rail train control, EU AI Act Annex III Point 2 classifies your systems as high-risk — but only when the AI functions as a safety component. The critical distinction that most compliance guidance misses is between AI systems that directly actuate physical infrastructure decisions (high-risk) and AI systems that provide planning, forecasting, or maintenance analytics without automated control authority (not high-risk under Annex III Point 2). Critical infrastructure operators face an additional complication that no other Annex III category creates: they are simultaneously subject to NIS2 Directive 2022/2555, which defines largely the same infrastructure perimeter with a different regulatory logic, different competent authorities, and different incident reporting obligations. This guide provides the framework to navigate both regimes.

What Annex III Point 2 Actually Covers

Annex III Point 2 of the EU AI Act applies to AI systems "intended to be used as safety components in the management and operation of critical infrastructure, as referred to in Directive (EU) 2022/2557, as well as road traffic management, and the supply of water, gas, heating and electricity."

Three elements define the scope. First, the AI system must function as a safety component — not merely be deployed in a critical infrastructure context. An AI system used for demand forecasting, maintenance scheduling, or asset management analytics in a power station is not a safety component under Annex III Point 2, even though it operates in a critical infrastructure sector. Second, the infrastructure must fall within the CER Directive 2022/2557 definition of critical infrastructure, which the Annex III text incorporates by reference and which supplies the EU-wide list of critical sectors. Third, the Annex III text adds road traffic management and the supply of water, gas, heating and electricity as explicit categories, partly overlapping with CER and partly extending it to heating networks and road traffic systems that the CER Directive does not enumerate in identical terms.

The CER Directive 2022/2557 critical infrastructure sectors that feed into Annex III Point 2:

CER Sector | AI Examples Potentially High-Risk
Energy — electricity | Grid frequency regulation AI, automatic generation control, distribution automation switching
Energy — gas | Pipeline compressor station control AI, pressure safety valve actuation
Energy — hydrogen | Electrolysis process safety management AI
Transport — road | Traffic signal control AI, variable message sign systems, tunnel safety AI
Transport — rail | ETCS/ERTMS train protection AI, interlocking AI, level crossing control
Transport — air | ATC collision avoidance (TCAS successor AI), runway incursion detection
Transport — maritime | Vessel Traffic Service AI with automated intervention, port SMGCS
Water — drinking water | Treatment process control AI (dosing, disinfection), distribution pressure management
Water — wastewater | Wastewater treatment AI with autonomous chemical dosing
Digital infrastructure | DNS resolution AI with automated traffic redirection affecting infrastructure
Space | Ground station AI controlling critical satellite operations
Food | AI systems in food safety inspection with mandatory rejection authority

Road traffic management and heating/gas/electricity supply are additionally covered by the Annex III text directly, which matters because smaller heating networks and local electricity distribution systems may not individually meet CER Directive minimum size thresholds.

The Safety Component Test: High-Risk vs. Not High-Risk

The "safety component" qualifier is the threshold criterion that separates high-risk critical infrastructure AI from the large volume of analytics, optimisation, and maintenance AI that operates in the same sectors without triggering Annex III Point 2. The EU AI Act defines "safety component" in Art. 3(14) as a component that fulfils a safety function for a product or AI system, or whose failure or malfunctioning endangers the health and safety of persons or property — but applying that definition to operational AI requires judgement. Three factors jointly determine whether an AI system is a safety component:

Factor 1 — Automated actuation authority: Does the AI system directly issue commands to physical actuators, switches, valves, signals, or control systems without mandatory human confirmation before the command executes? Grid frequency AI that automatically dispatches generation capacity to maintain 50 Hz meets this test. Energy trading optimisation AI that presents portfolio recommendations to human traders does not.

Factor 2 — Safety function in product liability sense: Does a failure of the AI system create a direct pathway to physical harm, service disruption of essential services, or environmental damage? Water treatment dosing AI that controls chlorination levels has a clear failure-to-harm pathway (contaminated public water supply). Water quality monitoring AI that alerts operators to chlorination anomalies without control authority has a weaker safety-component case, though it can still qualify where the monitoring function is mandated as a safety measure.

Factor 3 — Deployment context: Is the AI system deployed in a context where it performs functions that competent sector regulators (national energy regulator, railway safety authority, water authority) have designated as safety-relevant functions requiring qualified oversight? This is the normative layer: sector-specific safety regulations that require certain functions to be performed by certified systems automatically make those functions "safety components" under EU AI Act Annex III Point 2.

Critical Infrastructure AI System Classification Table

AI System Type | Sector | Safety Component? | Annex III Pt. 2 High-Risk? | Reason
Automatic frequency restoration AI (automatic generation control) | Electricity | Yes | Yes | Directly dispatches MW of generation to maintain grid stability; failure = cascading outage
Distribution automation fault isolation AI (FLISR) | Electricity | Yes | Yes | Automatically isolates faults and restores supply without operator intervention
Energy demand forecasting AI | Electricity | No | No | Advisory planning tool; human operators decide actual dispatch
Electricity trading optimisation AI | Electricity | No | No | Portfolio optimisation; no automated physical actuation
Predictive asset maintenance AI (transformer condition monitoring) | Electricity | No | No | Maintenance scheduling; no automated safety action
Pipeline compressor station control AI (autonomous pressure management) | Gas | Yes | Yes | Controls physical gas flow; failure = pressure exceedance or supply loss
Gas leak detection AI (advisory only) | Gas | No | No | Alerts human operators; does not trigger autonomous shutoffs
Water treatment dosing AI (autonomous chlorination control) | Water | Yes | Yes | Directly controls disinfectant levels; failure = public health risk
Water network pressure management AI (autonomous control) | Water | Yes | Yes | Controls physical distribution pressure; failure = network damage or supply loss
Water quality monitoring AI (alert-only) | Water | No (usually) | No | Advisory; human confirmation before response action
Adaptive traffic signal control AI (direct actuation) | Road | Yes | Yes | Controls physical signal states; failure = collision risk at junctions
Incident detection + variable message sign AI | Road | Yes | Yes | Directly activates warning signals affecting driver behaviour
Traffic volume forecasting AI | Road | No | No | Planning analytics; no direct signal actuation
ETCS/ERTMS train protection AI | Rail | Yes | Yes | Directly applies train brakes; foundational safety system
Interlocking AI (route setting, signal clearing) | Rail | Yes | Yes | Controls signals and points; failure = collision pathway
Railway predictive maintenance AI | Rail | No | No | Maintenance scheduling; no operational safety action
Level crossing automatic activation AI | Rail | Yes | Yes | Controls barriers; failure = collision pathway
Heating network flow control AI (autonomous) | Heating | Yes | Yes | Controls physical heat distribution; district heating safety relevance
District heating demand forecasting | Heating | No | No | Planning tool
ATC TCAS-successor AI (automated resolution advisories) | Aviation | Yes | Yes | Issues resolution advisories; safety-critical collision avoidance
Airport runway incursion AI (automated alert only) | Aviation | Borderline | Requires analysis | Alert-only may be below the threshold, depending on whether the advisory is a mandated action
Port VTS AI with automated radio override | Maritime | Yes | Yes | Can override vessel communications; safety intervention authority
Satellite ground station AI (critical manoeuvre authority) | Space | Yes | Yes | Controls critical space assets; failure = collision or signal loss

The NIS2 Dual-Compliance Challenge

Critical infrastructure operators subject to Annex III Point 2 are almost universally also "essential entities" under NIS2 Directive 2022/2555 (transposed in EU member states since October 2024). The two regulatory frameworks address overlapping infrastructure with different instruments, creating a dual-compliance burden that has no equivalent in any other Annex III category.

NIS2 obligations relevant to critical infrastructure AI operators:

- Cybersecurity risk-management measures for network and information systems, including OT environments (Art. 21)
- Supply chain security covering direct suppliers and service providers, including AI and OT vendors (Art. 21(2)(d))
- Incident reporting to the national CSIRT or competent authority: early warning within 24 hours, incident notification within 72 hours, and a final report within one month (Art. 23)
- Management-body approval and oversight of the cybersecurity risk-management measures (Art. 20)

EU AI Act obligations for the same operator as deployer (Art. 26):

- Verify provider conformity documentation before putting the system into use (Art. 26(1))
- Implement the human oversight measures the provider's instructions require (Art. 26(2))
- Monitor AI system operation and report serious incidents (Art. 26(4)-(5))
- Maintain the operational logs the system generates (Art. 26(6))
The conflict point: NIS2 Art. 23 requires incident reporting to national CSIRTs; EU AI Act Art. 73 requires serious incident reporting to market surveillance authorities. These are different bodies, different timelines, and different scopes. A cyberattack that compromises a grid stability AI system triggers both NIS2 incident reporting (because it affects the availability of an essential service) and EU AI Act serious incident reporting (because the AI system performed in an unexpected manner creating safety risk). Operators must maintain parallel reporting pipelines and ensure the reports are consistent — which requires cross-functional coordination between OT/SCADA security teams (NIS2 responsibility) and AI compliance teams (EU AI Act responsibility).
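The parallel pipelines described above can be sketched as a single incident intake that fans out into both report stubs. The timelines shown are the NIS2 Art. 23 early-warning and notification windows and the EU AI Act Art. 73 general reporting window; the routing function and field names are illustrative assumptions, not prescribed formats:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    description: str
    detected_at: datetime
    affects_service_availability: bool   # NIS2 Art. 23 trigger
    is_ai_serious_incident: bool         # EU AI Act Art. 73 trigger

def route_incident(incident: Incident) -> list[dict]:
    """Fan one incident record out into the report stubs each regime expects."""
    reports = []
    if incident.affects_service_availability:
        reports.append({
            "framework": "NIS2 Art. 23",
            "recipient": "national CSIRT / competent authority",
            "early_warning_due": incident.detected_at + timedelta(hours=24),
            "notification_due": incident.detected_at + timedelta(hours=72),
        })
    if incident.is_ai_serious_incident:
        reports.append({
            "framework": "EU AI Act Art. 73",
            "recipient": "market surveillance authority",
            # General window; shorter windows apply to some incident types
            "report_due": incident.detected_at + timedelta(days=15),
        })
    return reports

# A cyberattack on a grid-stability AI triggers both pipelines:
cyberattack = Incident(
    description="Adversarial input degraded AGC dispatch accuracy",
    detected_at=datetime(2026, 4, 26, 9, 0),
    affects_service_availability=True,
    is_ai_serious_incident=True,
)
```

A shared intake like this makes it easier to keep the two reports consistent, since both are generated from the same underlying incident record.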

The overlap advantage: Art. 21 NIS2 cybersecurity requirements for OT networks align closely with EU AI Act Art. 15(5) requirement for high-risk AI systems to be resilient to errors, faults, and cybersecurity threats including adversarial attacks. The IEC 62443 standard series (industrial automation and control systems cybersecurity) satisfies obligations under both frameworks simultaneously where correctly implemented.
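The IEC 62443 / Art. 15 alignment can be documented as an explicit crosswalk. The mapping below pairs IEC 62443-3-3 foundational requirements (FRs) with EU AI Act themes as an illustrative starting point — it is not an authoritative correspondence, and the gap-check helper is an assumed convenience, not a standard tool:

```python
# Illustrative crosswalk: IEC 62443-3-3 foundational requirements (FRs)
# mapped to EU AI Act themes. A documentation starting point, not an
# authoritative or complete correspondence.
IEC62443_TO_AI_ACT = {
    "FR1 Identification & authentication control": "Art. 15(5) — unauthorised third-party access",
    "FR3 System integrity": "Art. 15(5) — data poisoning, adversarial examples, model evasion",
    "FR6 Timely response to events": "Art. 12 logging / Art. 72 post-market monitoring",
    "FR7 Resource availability": "Art. 15(4) — resilience to errors and faults, fail-safe plans",
}

def unmapped_ai_act_themes(implemented_frs: set[str]) -> list[str]:
    """Return AI Act themes with no implemented IEC 62443 FR claimed against them."""
    return [theme for fr, theme in IEC62443_TO_AI_ACT.items()
            if fr not in implemented_frs]

# An OT programme that has implemented only FR1 and FR3 still has
# AI Act themes without a claimed control:
gaps = unmapped_ai_act_themes({"FR1 Identification & authentication control",
                               "FR3 System integrity"})
```

Recording the crosswalk in code (or any structured form) lets the gap check run automatically whenever the cybersecurity programme or the AI system inventory changes.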

CLOUD Act Exposure in Critical Infrastructure AI

US-headquartered OT platform vendors represent a significant CLOUD Act exposure vector for critical infrastructure operators processing operational data through US-jurisdiction cloud services:

High-risk CLOUD Act exposure vendors:

- Honeywell Forge (US)
- GE Digital / Predix (US)
- AspenTech (US)
- Microsoft Azure, AWS, and Google Cloud when hosting OT telemetry or AI inference workloads (US hyperscalers)

EU-sovereign OT infrastructure alternatives:

- Siemens Xcelerator
- Schneider Electric EcoStruxure
- AVEVA PI (Schneider Electric group)
- ABB Ability
- Atos / Eviden
For critical infrastructure operators subject to EU AI Act Annex III Point 2, routing operational AI inference through US-jurisdiction cloud platforms creates a specific GDPR compatibility question: GDPR Art. 32 requires appropriate technical measures to ensure a level of security appropriate to the risk — and CLOUD Act compelled disclosure of real-time critical infrastructure operational data to US authorities represents a security risk that operators must assess and document under both NIS2 Art. 21 supply chain risk obligations and EU AI Act Art. 9 risk management requirements.

Provider and Deployer Roles in Critical Infrastructure AI

The EU AI Act distinguishes providers (who place high-risk AI on the market) from deployers (who use it under their own authority). In critical infrastructure AI, this split typically follows the OT vendor / infrastructure operator boundary, but with important exceptions:

OT vendors as providers (most common case): Industrial automation vendors (Siemens, ABB, Schneider Electric, Rockwell Automation, Honeywell) who supply AI-enabled SCADA systems, smart grid controllers, or process control AI as products are providers under Art. 3(3). They must satisfy: conformity assessment (Art. 43), technical documentation (Art. 11), CE marking (Art. 48), EU AI Act database registration (Art. 71), Declaration of Conformity (Art. 47), and post-market monitoring (Art. 72).

Infrastructure operators as deployers (standard case): National grid operators (TSOs/DSOs), water utilities, rail infrastructure managers, and road authorities deploying OT vendor AI systems are deployers under Art. 3(4). Deployer obligations: verify provider conformity documentation (Art. 26(1)), implement human oversight measures (Art. 26(2)), monitor AI system operation and report serious incidents to market surveillance authorities (Art. 26(4)-(5)), maintain operational logs (Art. 26(6)).

Infrastructure operators as providers (in-house AI development): Large grid operators (Elia, RTE, National Grid ESO), national rail managers, and major water utilities sometimes develop AI safety components internally for operational control. Internal deployment for own use may qualify for a deployer-like regime under Art. 25(4), but internal-use high-risk AI systems still require conformity assessment equivalent to the provider standard where the AI is used in high-risk contexts. National grid operators developing in-house automatic generation control AI should assume provider-level obligations.

System integrators as providers: Consultancies and SI firms that integrate third-party AI components into bespoke critical infrastructure control systems are providers under Art. 3(3) if they substantially modify the AI system before deployment. This is a significant risk for infrastructure modernisation projects where SI firms combine ML forecasting engines, optimisation algorithms, and SCADA historians into custom grid management AI.
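The role boundaries above can be triaged with a simple screening function. The inputs are simplified and the function is a sketch for internal triage, not legal advice — white-labelling, intended-purpose changes, and the Art. 25(1) "substantial modification" threshold all need case-by-case legal review:

```python
def determine_role(develops_the_system: bool,
                   supplies_or_uses_under_own_name: bool,
                   substantially_modifies: bool) -> str:
    """Rough provider/deployer triage per Art. 3(3), Art. 3(4), and the
    Art. 25(1) re-qualification trigger. Simplified screening aid only."""
    if develops_the_system and supplies_or_uses_under_own_name:
        return "provider (Art. 3(3))"
    if substantially_modifies:
        return "provider by re-qualification (Art. 25(1))"
    return "deployer (Art. 3(4))"

# An SI firm that substantially modifies a vendor forecasting engine before
# commissioning it into a grid control system takes on provider obligations;
# a water utility running a vendor dosing AI as delivered stays a deployer.
si_firm_role = determine_role(False, False, True)
utility_role = determine_role(False, False, False)
```

Running every AI system in the inventory through a triage like this produces a defensible first-pass role register that legal review can then refine.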

Python Example: CriticalInfrastructureAIClassifier

from enum import Enum
from dataclasses import dataclass
from typing import Optional

class AIActRisk(Enum):
    HIGH_RISK = "HIGH_RISK"
    NOT_HIGH_RISK = "NOT_HIGH_RISK"
    REQUIRES_ANALYSIS = "REQUIRES_ANALYSIS"

@dataclass
class CriticalInfraAISystem:
    name: str
    sector: str
    has_automated_actuation: bool
    has_direct_safety_function: bool
    has_sector_safety_certification: bool
    cloud_provider: Optional[str] = None
    cloud_act_exposure: bool = False

class CriticalInfrastructureAIClassifier:

    ANNEX_III_PT2_SECTORS = {
        "electricity", "gas", "water", "road_traffic",
        "rail", "aviation", "maritime", "heating",
        "space", "digital_infrastructure",
    }

    CLOUD_ACT_VENDORS = {
        "honeywell_forge", "ge_digital_predix", "aspentech",
        "microsoft_azure", "aws", "google_cloud",
    }

    EU_SOVEREIGN_OT_VENDORS = {
        "siemens_xcelerator", "abb_ability", "aveva_pi",
        "schneider_ecostruxure", "atos_eviden",
    }

    def classify(self, system: CriticalInfraAISystem) -> dict:
        if system.sector not in self.ANNEX_III_PT2_SECTORS:
            return {
                "classification": AIActRisk.NOT_HIGH_RISK,
                "reason": f"Sector '{system.sector}' not in Annex III Point 2 scope",
                "cloud_act_risk": False,
            }

        safety_component = (
            system.has_automated_actuation
            or system.has_direct_safety_function
            or system.has_sector_safety_certification
        )

        if not safety_component:
            return {
                "classification": AIActRisk.NOT_HIGH_RISK,
                "reason": "No safety component criteria met — advisory/analytics AI",
                "cloud_act_risk": system.cloud_act_exposure,
                "cloud_act_note": self._cloud_note(system),
            }

        # Actuation authority or a direct safety function is each decisive on
        # its own; sector safety certification alone warrants case-by-case review.
        if (system.has_automated_actuation
                or system.has_direct_safety_function):
            classification = AIActRisk.HIGH_RISK
        else:
            classification = AIActRisk.REQUIRES_ANALYSIS

        return {
            "classification": classification,
            "reason": "Safety component in critical infrastructure sector",
            "obligations": [
                "Conformity assessment (Art.43 — internal check or third-party)",
                "Technical documentation (Art.11, Annex IV)",
                "Risk management system (Art.9) with OT failure mode analysis",
                "Cybersecurity measures (Art.15 + IEC 62443)",
                "Human oversight design (Art.14)",
                "Post-market monitoring (Art.72)",
                "EU AI Act database registration (Art.71)",
            ],
            "cloud_act_risk": system.cloud_act_exposure,
            "cloud_act_note": self._cloud_note(system),
        }

    def _cloud_note(self, system: CriticalInfraAISystem) -> str:
        if not system.cloud_provider:
            return "No cloud provider identified"
        if system.cloud_provider in self.CLOUD_ACT_VENDORS:
            return (
                f"CLOUD Act exposure: {system.cloud_provider} is a US entity — "
                "OT telemetry and AI inference data subject to compelled disclosure. "
                "Assess under NIS2 Art.21 supply chain security + EU AI Act Art.9."
            )
        if system.cloud_provider in self.EU_SOVEREIGN_OT_VENDORS:
            return f"EU-sovereign: {system.cloud_provider} — no CLOUD Act exposure"
        return f"Verify CLOUD Act status for provider: {system.cloud_provider}"


# Usage examples
classifier = CriticalInfrastructureAIClassifier()

systems = [
    CriticalInfraAISystem(
        name="Automatic Frequency Restoration AI (AGC)",
        sector="electricity",
        has_automated_actuation=True,
        has_direct_safety_function=True,
        has_sector_safety_certification=True,
        cloud_provider="siemens_xcelerator",
        cloud_act_exposure=False,
    ),
    CriticalInfraAISystem(
        name="Grid Energy Demand Forecasting AI",
        sector="electricity",
        has_automated_actuation=False,
        has_direct_safety_function=False,
        has_sector_safety_certification=False,
        cloud_provider="microsoft_azure",
        cloud_act_exposure=True,
    ),
    CriticalInfraAISystem(
        name="Water Treatment Autonomous Dosing AI",
        sector="water",
        has_automated_actuation=True,
        has_direct_safety_function=True,
        has_sector_safety_certification=True,
        cloud_provider="aws",
        cloud_act_exposure=True,
    ),
    CriticalInfraAISystem(
        name="Adaptive Traffic Signal Control AI",
        sector="road_traffic",
        has_automated_actuation=True,
        has_direct_safety_function=True,
        has_sector_safety_certification=False,
        cloud_provider="google_cloud",
        cloud_act_exposure=True,
    ),
    CriticalInfraAISystem(
        name="ETCS/ERTMS Train Protection AI",
        sector="rail",
        has_automated_actuation=True,
        has_direct_safety_function=True,
        has_sector_safety_certification=True,
        cloud_provider="abb_ability",
        cloud_act_exposure=False,
    ),
    CriticalInfraAISystem(
        name="Railway Predictive Maintenance AI",
        sector="rail",
        has_automated_actuation=False,
        has_direct_safety_function=False,
        has_sector_safety_certification=False,
        cloud_provider="honeywell_forge",
        cloud_act_exposure=True,
    ),
]

for system in systems:
    result = classifier.classify(system)
    print(f"\n{system.name}")
    print(f"  Classification: {result['classification'].value}")
    print(f"  Reason: {result['reason']}")
    if result.get("cloud_act_risk"):
        print(f"  CLOUD Act: {result.get('cloud_act_note', '')}")

# Output:
# Automatic Frequency Restoration AI (AGC)
#   Classification: HIGH_RISK
#   Reason: Safety component in critical infrastructure sector
#
# Grid Energy Demand Forecasting AI
#   Classification: NOT_HIGH_RISK
#   Reason: No safety component criteria met — advisory/analytics AI
#   CLOUD Act: CLOUD Act exposure: microsoft_azure is a US entity — ...
#
# Water Treatment Autonomous Dosing AI
#   Classification: HIGH_RISK
#   Reason: Safety component in critical infrastructure sector
#   CLOUD Act: CLOUD Act exposure: aws is a US entity — ...
#
# Adaptive Traffic Signal Control AI
#   Classification: HIGH_RISK
#   Reason: Safety component in critical infrastructure sector
#   CLOUD Act: CLOUD Act exposure: google_cloud is a US entity — ...
#
# ETCS/ERTMS Train Protection AI
#   Classification: HIGH_RISK
#   Reason: Safety component in critical infrastructure sector
#
# Railway Predictive Maintenance AI
#   Classification: NOT_HIGH_RISK
#   Reason: No safety component criteria met — advisory/analytics AI
#   CLOUD Act: CLOUD Act exposure: honeywell_forge is a US entity — ...

25-Item Compliance Checklist — EU AI Act Annex III Point 2

Scope Determination

  1. Map all AI systems deployed in your critical infrastructure operations against the CER Directive 2022/2557 sector list and the Annex III Point 2 explicit additions (road traffic, water, gas, heating, electricity) — identify every AI system operating in scope sectors
  2. Apply the three-factor safety component test to each identified AI system: automated actuation authority, direct safety function, sector-authority safety certification — document the analysis with evidence for each factor
  3. Identify AI systems that have automated actuation authority but claim advisory-only status — verify technically that human confirmation gates exist before physical commands execute and are not bypassable under normal operating conditions
  4. Check whether your sector regulator (national energy regulator, rail safety authority, water authority, traffic management authority) has published guidance on which AI system types require safety certification — safety-certified systems are automatically safety components
  5. Review AI systems deployed as decision-support tools to determine whether operators in practice always confirm AI recommendations or whether time pressure causes de facto rubber-stamping — if the latter, the system functions as a safety component regardless of its design intent

NIS2 and EU AI Act Dual-Compliance Mapping

  6. Map your NIS2 Art. 21 risk management measures to EU AI Act Art. 9 risk management requirements — identify overlapping obligations that can be satisfied with a single unified OT risk management framework
  7. Map your IEC 62443 cybersecurity programme to EU AI Act Art. 15 cybersecurity requirements — document the mapping explicitly in your AI system technical documentation
  8. Establish parallel incident reporting pipelines: NIS2 Art. 23 (to the national CSIRT) for cybersecurity incidents affecting AI systems, and EU AI Act Art. 73 (to the market surveillance authority) for AI serious incidents with physical safety consequences
  9. Verify which national authority is the competent NIS2 authority for your sector and which is the AI Act market surveillance authority — ensure cross-functional coordination between OT security and AI compliance teams when incidents affect both frameworks
  10. Extend your NIS2 supply chain security assessments (Art. 21(2)(d)) to cover AI system providers — EU AI Act provider conformity documentation should be part of supplier due diligence for AI purchases

Conformity Assessment and Technical Documentation

  11. Determine the applicable conformity assessment procedure: Art. 43(1) internal control (Annex VI) for critical infrastructure AI not covered by Union harmonisation legislation in Annex I, or third-party assessment if the AI is also a safety component under sector-specific product safety legislation (e.g., ETCS/ERTMS under Directive 2016/797)
  12. Prepare technical documentation (Art. 11, Annex IV) with critical infrastructure specific content: SCADA/ICS integration architecture, failure mode analysis specific to physical infrastructure consequences (grid instability, water contamination, collision pathways), and an OT network dependency map
  13. Document the human oversight architecture (Art. 14) with specific attention to time-constrained operational scenarios — SCADA operators responding to grid disturbances in 30-second timeframes have qualitatively different oversight capacity than analysts reviewing recommendations offline
  14. Establish an accuracy and robustness testing methodology (Art. 15) using OT-realistic test environments including degraded sensor input, communication latency, and adversarial input scenarios representative of industrial cyber threats
  15. Configure automated logging (Art. 12) for all safety-relevant AI decisions, integrating with existing SCADA historian infrastructure where compatible — retain logs for the duration required by sector-specific regulations, and at least the EU AI Act six-month minimum retention period (Art. 26(6))
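Item 15's logging requirement can be met by emitting one structured, append-only record per safety-relevant AI decision. The field names below are an illustrative schema for forwarding to a SCADA historian, not a mandated Art. 12 format:

```python
import json
from datetime import datetime, timezone

def decision_log_line(system_id: str, input_snapshot: dict,
                      output_command: dict, operator_override: bool) -> str:
    """Serialise one safety-relevant AI decision as a JSON line suitable for
    append-only storage. Illustrative schema, not a prescribed format."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_snapshot": input_snapshot,     # sensor values the decision used
        "output_command": output_command,     # what the AI instructed
        "operator_override": operator_override,
    }
    return json.dumps(record, sort_keys=True)

line = decision_log_line(
    system_id="dosing-ai-3",
    input_snapshot={"free_chlorine_mg_l": 0.31},
    output_command={"chlorine_dose_mg_l": 1.2},
    operator_override=False,
)
```

Capturing the input snapshot alongside the command is what makes the log useful for post-incident reconstruction: it shows what the AI saw, not only what it did.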

CLOUD Act and Data Sovereignty Assessment

  16. Audit all critical infrastructure AI systems for US-jurisdiction cloud provider involvement: AI inference endpoints, model hosting, OT telemetry pipelines, and SCADA historian cloud synchronisation — identify every touchpoint where operational data reaches a US entity's infrastructure
  17. For each identified CLOUD Act exposure point, assess whether the operational data involved constitutes critical infrastructure security information — many member states classify grid topology data, water system control parameters, and rail signalling data as classified or restricted; CLOUD Act disclosure of such data to US authorities may violate national security classification laws
  18. Evaluate EU-sovereign OT platform alternatives (AVEVA PI/Schneider Electric, Siemens Xcelerator, ABB Ability) for CLOUD Act-exposed workloads — document the migration assessment outcome including functional equivalence, cost, and transition timeline
  19. Ensure data processing agreements for US-jurisdiction OT platforms include CLOUD Act notification provisions — providers should contractually commit to notify operators of compelled disclosure orders to the maximum extent permitted by US law
  20. Review the intersection of CLOUD Act exposure with NIS2 Art. 21(2)(d) supply chain security — US-jurisdiction AI providers represent supply chain risk that national NIS2 competent authorities may scrutinise in supervision activities

Deployer Obligations and Operator Training

  21. Verify AI provider conformity documentation before deployment: EU AI Act database registration (Art. 71), Declaration of Conformity (Art. 47), and CE marking where applicable — do not deploy high-risk AI systems without confirmed provider compliance status
  22. Implement AI system-specific operator training programmes (Art. 26(2)) addressing: the AI system's intended purpose and operational boundaries, its failure modes and residual risks, when and how to override AI recommendations or commands, and how to report serious incidents
  23. Define suspension criteria (Art. 26(5)) — the specific AI output anomaly patterns or infrastructure performance deviations that trigger immediate AI system suspension and transition to manual control — and embed these in operational runbooks
  24. Establish a serious incident reporting procedure aligned with EU AI Act Art. 73 and NIS2 Art. 23 simultaneously — a single incident form that captures the information required by both frameworks reduces compliance overhead and improves reporting completeness
  25. Register as a deployer of high-risk AI systems in the EU AI Act database (Art. 71) for each deployed Annex III Point 2 AI system before the August 2026 general application deadline — verify registration includes the specific operational context and the infrastructure sector category
