2026-04-15·14 min read·sota.io team

EU AI Act Art.53(1)(d): Cybersecurity and Physical Infrastructure Protection for GPAI Systemic Risk Models (2026)

Art.53 of the EU AI Act imposes four obligations on providers of GPAI models with systemic risk. Three of those obligations — adversarial testing, risk assessment, and incident reporting — receive substantial attention in the GPAI Code of Practice Chapter 3 and in developer compliance guides. The fourth obligation, Art.53(1)(d), is consistently underexplored: the requirement to "ensure an adequate level of cybersecurity protection for the GPAI model with systemic risk and the physical infrastructure of the model."

This is not a minor supplement to adversarial testing. Art.53(1)(d) is a standalone statutory obligation that covers the security of the entire operational stack of a systemic risk GPAI model — from the data center housing the training compute to the inference API endpoints, from the protection of model weights as intellectual property to the integrity of the training data pipeline. The GPAI CoP Chapter 3 covers three cybersecurity measures (S-08 prompt injection, S-09 weight access control, S-10 anomaly monitoring) as part of the adversarial testing framework. Art.53(1)(d) is broader.

This guide covers what Art.53(1)(d) actually requires in operational terms, how it differs from Art.15 cybersecurity for high-risk AI, what "adequate" means in the absence of a fixed standard, and how to build a systemic risk GPAI security program that satisfies the obligation.


Art.53(1)(d) Scope: What Is Protected and Why

Art.53(1)(d) protects two categories of assets:

The GPAI model with systemic risk — the trained model artifact, its weights, its architecture documentation, and the inference stack that serves the model to downstream providers and end users.

The physical infrastructure of the model — the compute infrastructure used for training and inference, including co-location facilities, network connections, hardware acceleration systems, and the operational technology that supports the model's deployment.

The inclusion of "physical infrastructure" is deliberate and consequential. The EU legislature was aware that GPAI models represent high-value targets not just for capability-related misuse (addressed by Art.53(1)(a) adversarial testing) but for theft, sabotage, and intelligence collection. A GPAI model's weights represent years of compute investment and contain proprietary architectural knowledge. The infrastructure that runs frontier-scale models is a critical node in EU AI capacity. Both assets need protection under Art.53(1)(d).

What Art.53(1)(d) Does Not Cover

Art.53(1)(d) does not apply to:

- GPAI models without the systemic risk designation: the obligation attaches only once a model crosses the 10^25 FLOPs threshold or receives an AI Office designation.
- Downstream providers that integrate the model: they fall outside Art.53(1)(d) and carry separate obligations instead.
- High-risk AI systems built on top of the model: their cybersecurity is governed by Art.15, discussed next.

Art.53(1)(d) vs Art.15: The Key Distinction

Art.15 of the EU AI Act addresses cybersecurity for high-risk AI systems. Art.53(1)(d) addresses cybersecurity for systemic risk GPAI models. These are different obligations with different scopes.

| Dimension | Art.15 (High-Risk AI) | Art.53(1)(d) (Systemic Risk GPAI) |
|---|---|---|
| Who it applies to | Providers of high-risk AI systems under Annex III | Providers of GPAI models above 10^25 FLOPs or AI Office-designated |
| What is protected | The AI system's operations — resistance to adversarial inputs, output manipulation, availability | The GPAI model artifact AND physical training/inference infrastructure |
| Key threats | Adversarial inputs, model poisoning at inference time, availability attacks | Weight exfiltration, training pipeline compromise, infrastructure sabotage, model extraction |
| Standard for "adequate" | Art.42 harmonised standards (EN ISO/IEC 42001, ETSI EN 303 645) | No fixed standard — "adequate" assessed in context of systemic risk designation |
| GPAI provider in scope? | Only if their GPAI model is itself a high-risk AI system | Yes, if systemic risk threshold is met |
| Downstream effect | Provider provides technical documentation to downstream users | Downstream providers are not covered by Art.53(1)(d) — separate TPSP obligations |

A GPAI model that is also a high-risk AI system (unusual but possible) would need to satisfy both Art.15 and Art.53(1)(d).


What "Adequate Cybersecurity Protection" Means in Practice

The EU AI Act does not define "adequate" for Art.53(1)(d). The AI Office's guidance on systemic risk GPAI compliance references the ENISA AI Security threat taxonomy (published 2023) and the GPAI CoP Chapter 3 cybersecurity measures as a floor. "Adequate" means security measures proportionate to:

  1. The systemic risk designation itself — a 10^25 FLOPs model that triggered automatic designation faces a different threat environment than a borderline-designated model
  2. The sensitivity of the capabilities — a model with demonstrated CBRN-uplift or cyberoffensive capabilities requires stronger protections than a model whose systemic risk is based on societal influence capabilities
  3. The deployment context — inference-only cloud APIs face different threats than research environments where weights are more widely accessible
  4. The state of the art in AI security — "adequate" evolves as the AI security field develops, requiring ongoing reassessment

The GPAI CoP Chapter 3 measures (S-08, S-09, S-10) represent the minimum floor. The Art.53(1)(d) obligation encompasses that floor plus the additional dimensions described below.


Physical Infrastructure Security

Art.53(1)(d) explicitly covers "the physical infrastructure of the model." This section maps what physical security requirements follow from that obligation.

Data Center and Facility Requirements

Tier classification: Training infrastructure and inference infrastructure for systemic risk GPAI models should meet at least Tier III data center standards (N+1 redundancy, 99.982% availability). The justification is two-fold: availability is a component of cybersecurity (denial-of-service attacks targeting inference infrastructure are a real threat vector for GPAI providers), and physical resilience reduces the attack surface for infrastructure compromise.
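For context, the Tier III availability figure translates into a concrete annual downtime budget:

```python
# Annual downtime implied by the Tier III availability figure (99.982%)
availability = 0.99982
hours_per_year = 365.25 * 24                      # 8766 hours
downtime_hours = (1 - availability) * hours_per_year
print(round(downtime_hours, 2))                   # 1.58 hours per year
```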

Physical access controls:

- Badge-controlled, individually logged access to every area housing training or inference hardware
- Multi-factor authentication for server room entry, with escorted access for visitors
- Retention of physical access logs for audit and incident reconstruction

Hardware security:

- Tamper-evident seals on accelerator and storage hardware
- TPM-based runtime attestation, so that model weights load only into verified, unmodified environments
- Verified destruction of storage media on decommissioning

Network and power segregation:

- Physical separation of training networks from inference and corporate networks
- Redundant, independently routed power feeds (the N+1 component of the Tier III requirement)

CLOUD Act and EU-Sovereign Infrastructure

Physical infrastructure protection under Art.53(1)(d) intersects with a legal risk that is not purely a cybersecurity question: the US CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 18 U.S.C. § 2713).

The CLOUD Act authorizes US law enforcement to compel US persons and US-incorporated entities to produce data stored on servers located anywhere in the world — including EU-based data centers operated by US cloud providers. For GPAI model weights and training data stored in US-hyperscaler infrastructure (AWS, Azure, GCP), a US government order can require production of those assets to US authorities without an EU court process.

This creates a direct tension with Art.53(1)(d)'s physical infrastructure protection obligation:

| Infrastructure Type | CLOUD Act Risk | Art.53(1)(d) Implication |
|---|---|---|
| AWS/Azure/GCP EU regions | YES — US parent entities are subject to CLOUD Act regardless of data location | Model weights stored here may be compelled to the US government without EU legal process |
| EU-incorporated cloud provider (no US parent) | NO — CLOUD Act requires US person/entity jurisdiction | Model weights are not subject to US compulsion |
| On-premises EU infrastructure | NO — CLOUD Act applies to communications providers, not private infrastructure | Maximum physical infrastructure control |

The EU AI Act does not explicitly require EU-sovereign infrastructure for GPAI models. But the physical infrastructure protection obligation of Art.53(1)(d), combined with the CLOUD Act risk, creates a strong practical argument for EU-sovereign hosting. If model weights can be compelled to a foreign jurisdiction without EU authorization, the "physical infrastructure of the model" is not adequately protected against a specific category of legal/technical risk.


Model Weight Security: Beyond S-09

The GPAI CoP S-09 measure requires basic model weight access control — logical access restrictions, multi-person authorization for weight export, and audit logging. Art.53(1)(d) extends this floor.

Model Exfiltration Threat Model

Model weights for systemic risk GPAI models represent a threat category specific to frontier AI: exfiltration by a sophisticated adversary to replicate the model's capabilities. This threat differs from typical IP theft because:

- The stolen artifact is immediately operational: an adversary holding the weights can serve the full model without reproducing the training run.
- The replicated copy escapes Art.53 oversight entirely, since an exfiltrated or distilled model carries none of the provider's compliance obligations.
- The asset is compact relative to its value: years of compute investment reduce to a set of files that can be copied in a single exfiltration event.

Exfiltration vectors for GPAI model weights:

| Vector | Mechanism | Mitigation |
|---|---|---|
| Insider threat | Authorized employee exports weights for personal use or sale | Multi-person authorization, behavioral monitoring, DLP |
| Infrastructure compromise | Attacker gains access to training or inference servers and copies weights | Network segregation, endpoint detection, runtime attestation |
| Supply chain attack | Malicious code in training infrastructure exfiltrates weights during training run | Software bill of materials (SBOM), code signing, isolated training networks |
| Legal compulsion | Government order to US-parent cloud provider | EU-sovereign infrastructure |
| API model extraction | Systematic querying to distill the model's knowledge into a student model | Rate limiting, query pattern detection, output watermarking |

Hardware Security Module (HSM) Integration

For the highest-sensitivity weight storage, Art.53(1)(d) physical infrastructure requirements point toward hardware security module (HSM) integration:

- Weight encryption keys are generated and held inside an HSM or TPM and are never exported to software key stores.
- Key release is bound to multi-person authorization and logged per access.
- Decryption happens only inside a trusted execution environment (TEE) at inference time, so plaintext weights never rest on general-purpose storage.

This approach is used in practice by frontier AI providers for their most sensitive model artifacts. The CoP S-09 describes the policy requirement; TEE-based weight loading is one implementation path.
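The multi-person authorization that gates key release can be illustrated with a simple quorum check. This is a minimal sketch: `WeightExportRequest` and its fields are hypothetical, and a real deployment would enforce the policy inside the HSM rather than in application code.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class WeightExportRequest:
    """Hypothetical weight-export request gated by multi-person authorization."""
    artifact_id: str
    approvals: Set[str] = field(default_factory=set)
    quorum: int = 2  # at least two distinct authorized persons

    def approve(self, person: str) -> None:
        self.approvals.add(person)

    def key_release_permitted(self) -> bool:
        # An HSM policy would enforce this server-side; here it is a plain check.
        return len(self.approvals) >= self.quorum

req = WeightExportRequest(artifact_id="model-weights-v4")
req.approve("alice")
assert not req.key_release_permitted()   # one approval is not enough
req.approve("alice")                     # duplicate approvals do not count twice
assert not req.key_release_permitted()
req.approve("bob")
assert req.key_release_permitted()       # quorum of two distinct persons met
```

Using a set rather than a counter means repeated approvals by the same person cannot satisfy the quorum, which is the point of the control.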


Model Extraction Attack Prevention

Model extraction attacks are a specific threat to GPAI providers that Art.53(1)(d) must address. Unlike infrastructure-based exfiltration, model extraction works entirely through the public inference API.

Model Extraction Taxonomy

Model distillation attacks: An attacker systematically queries the target GPAI model with a curated dataset and uses the outputs to train a smaller "student" model that approximates the target's behavior. For large enough query budgets, distillation can recover a substantial fraction of the target model's capabilities in specific domains. This attack is relevant for Art.53(1)(d) because the distilled model is not subject to Art.53 compliance obligations.

Membership inference attacks: An attacker queries the model with examples from the suspected training set to infer what data the model was trained on. This is relevant to Art.52 training data transparency obligations but also implicates Art.53(1)(d) as a privacy and data security threat.

Model inversion attacks: An attacker uses the model's outputs to reconstruct aspects of its training data. For GPAI models trained on sensitive data, this is both a privacy violation and an indicator that the model is inadvertently memorizing training data in a recoverable way.

API-Level Defenses

Rate limiting and query budget monitoring:

- Per-account and per-key rate limits, backed by long-window (multi-day) query budget tracking, because distillation attacks operate at query volumes that only become visible over days or weeks.
- Automated detection of query patterns characteristic of systematic capability mapping.

Output perturbation:

- Controlled variation in outputs (sampling noise, truncated logit exposure where logits are served) that degrades the fidelity of a distillation dataset without materially affecting legitimate use.

Query watermarking:

- Detectable statistical signals embedded in outputs, so that a student model trained on them can later be identified as derived from the protected model.
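A minimal sketch of the long-window budget idea follows. The class name, window length, and threshold are illustrative choices, not values from the Act or the CoP:

```python
from collections import deque

class QueryBudgetMonitor:
    """Sliding-window query budget tracker for extraction detection (sketch)."""
    def __init__(self, window_seconds: float, max_queries: int):
        self.window = window_seconds
        self.max_queries = max_queries
        self.timestamps = deque()

    def record(self, now: float) -> bool:
        """Record a query; return True if the account exceeds its budget."""
        self.timestamps.append(now)
        # Drop queries that have fallen out of the sliding window
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_queries

# Simulate a distillation-scale burst against a 7-day budget of 100k queries
monitor = QueryBudgetMonitor(window_seconds=7 * 24 * 3600, max_queries=100_000)
flagged = any(monitor.record(now=t * 0.1) for t in range(100_001))
print(flagged)  # True: the 100,001st query exceeds the budget
```

A production system would track this per account and per API key, persist the counters, and feed flags into the extraction pattern detection pipeline rather than a boolean.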


Training Pipeline Integrity

Art.53(1)(d) physical infrastructure protection extends to the training pipeline — not just the trained model artifact. A compromised training pipeline produces a compromised model.

Adversarial Data Injection (Data Poisoning)

If an attacker can inject adversarial data into the training corpus, they can embed backdoors or degrade specific capabilities in the resulting model. For internet-scale pretraining corpora, complete elimination of adversarial data is not feasible — but controls can significantly reduce the risk:

Data provenance tracking:

- An AI Bill of Materials (AIBOM) recording every corpus source, its collection method, and a per-shard integrity hash.
- Source attribution logging, so that a poisoned shard can be traced back to its origin.

Data pipeline integrity:

- Code signing for all training pipeline code, with a software bill of materials (SBOM) for its dependencies.
- Isolated training networks that prevent pipeline components from reaching external endpoints mid-run.

Training monitoring:

- Anomaly detection over training runs (unexpected loss behavior, capability shifts between checkpoints) as an indicator of injected data, feeding the same anomaly monitoring discipline that S-10 requires at inference time.
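The per-shard hash verification that an AIBOM relies on can be sketched in a few lines. The `sha256_file` helper and the manifest layout here are illustrative, not a standardized AIBOM format:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a shard through SHA-256 so large files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_shards(manifest: dict, data_dir: Path) -> list:
    """Return shard names whose on-disk hash no longer matches the AIBOM entry."""
    return [name for name, expected in manifest["shards"].items()
            if sha256_file(data_dir / name) != expected]

# Demo: build a two-shard corpus, record its AIBOM hashes, then tamper with one shard
with tempfile.TemporaryDirectory() as d:
    data_dir = Path(d)
    (data_dir / "shard-000.jsonl").write_bytes(b'{"text": "clean sample"}\n')
    (data_dir / "shard-001.jsonl").write_bytes(b'{"text": "another sample"}\n')
    manifest = {"shards": {p.name: sha256_file(p) for p in sorted(data_dir.iterdir())}}

    (data_dir / "shard-001.jsonl").write_bytes(b'{"text": "poisoned sample"}\n')
    mismatched = verify_shards(manifest, data_dir)

print(mismatched)  # ['shard-001.jsonl']
```

Verification runs cheaply at ingestion time; the expensive part of the control is keeping the manifest itself trustworthy, which is where pipeline code signing comes in.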


AI Vulnerability Disclosure Program

Art.53(1)(d) physical infrastructure protection implies a structured process for receiving and handling external reports of security vulnerabilities — an AI-specific Vulnerability Disclosure Program (AI VDP).

Traditional CVE-based vulnerability disclosure is designed for software flaws. GPAI security vulnerabilities include categories that have no direct CVE equivalent: prompt injection bypass techniques, capability-unlocking jailbreaks, output watermark removal methods, and model inversion attack vectors. An Art.53(1)(d)-compliant AI VDP should address:

Scope definition:

- Explicit inclusion of AI-specific report categories: jailbreaks that defeat S-08 controls, watermark removal methods, extraction and inversion techniques, alongside conventional infrastructure vulnerabilities.

Reporter protection:

- Safe harbor language assuring good-faith researchers that testing within the published scope will not result in legal action.

Response timelines:

- Defined targets for acknowledgement, triage, and remediation, with severity-based prioritization and a documented path from triage to fix.

Connection to S-05 Incident Reporting: A reported vulnerability that, when assessed, reveals that the vulnerability has already been exploited in production may trigger the S-05 serious incident classification process. The AI VDP team must have a clear escalation path to the incident response team.
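That escalation rule can be sketched as a small triage function. `VulnerabilityReport` and its fields are hypothetical illustrations of the routing logic, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityReport:
    """Hypothetical AI VDP report; fields are illustrative."""
    category: str                  # e.g. "jailbreak", "prompt_injection_bypass", "infra"
    exploited_in_production: bool  # evidence the flaw was already used against live systems
    affects_weight_security: bool

def triage(report: VulnerabilityReport) -> str:
    """Route a VDP report: S-05 escalation vs the remediation queues."""
    if report.exploited_in_production:
        # Active exploitation may meet the serious-incident bar under S-05
        return "escalate_to_s05_incident_response"
    if report.affects_weight_security:
        return "priority_remediation"
    return "standard_remediation_queue"

r = VulnerabilityReport("jailbreak", exploited_in_production=True,
                        affects_weight_security=False)
print(triage(r))  # escalate_to_s05_incident_response
```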


Third-Party Security Audits

Art.53(1)(d) physical infrastructure protection requires that the provider not rely solely on internal security assessments. The standard of "adequate" cybersecurity for a systemic risk GPAI model implies third-party audit.

Audit scope for Art.53(1)(d) compliance:

- Physical infrastructure and facility controls, model weight access controls, training pipeline integrity, and API-level extraction defenses: the full surface the obligation covers.

Audit frequency:

- At minimum annually, with an additional audit triggered after any serious incident handled under the S-05 process.

Audit documentation:

- Retained audit reports and remediation tracking, available to the AI Office on request as evidence that "adequate" protection has been independently verified.


Worked Example: An Art.53(1)(d) Compliance Gap Checker in Python

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class SystemicRiskBasis(Enum):
    FLOPS_THRESHOLD = "flops_threshold"        # Art.51(1)(a): >= 10^25 FLOPs
    AI_OFFICE_DESIGNATION = "ai_office_designation"  # Art.51(1)(b): AI Office designated


class InfrastructureType(Enum):
    US_HYPERSCALER_EU_REGION = "us_hyperscaler_eu_region"   # AWS/Azure/GCP EU: CLOUD Act risk
    EU_SOVEREIGN_CLOUD = "eu_sovereign_cloud"                 # EU-incorporated, no US parent
    ON_PREMISES_EU = "on_premises_eu"                         # Private EU infrastructure
    HYBRID = "hybrid"                                          # Mixed


class CapabilityRiskLevel(Enum):
    HIGH = "high"        # CBRN or cyberoffensive capabilities demonstrated
    MEDIUM = "medium"    # Societal influence or critical infrastructure risks
    LOW = "low"          # Systemic risk based on scale, not specific dangerous capabilities


@dataclass
class PhysicalInfrastructureProfile:
    infrastructure_type: InfrastructureType
    tier_level: int                          # Data center Tier (I-IV)
    has_tpm_attestation: bool                # TPM-based runtime attestation
    has_physical_access_logging: bool        # All physical access logged
    has_hardware_tamper_evidence: bool       # Tamper-evident hardware
    has_network_segregation: bool            # Training/inference network separation
    cloud_act_risk_mitigated: bool           # EU-sovereign or on-prem


@dataclass
class WeightSecurityProfile:
    has_hsm_encryption: bool                 # HSM/TPM-based key management
    has_multi_person_authorization: bool     # Weight export requires >1 person
    has_weight_access_audit_log: bool        # Full audit trail of weight access
    has_tee_inference: bool                  # TEE-based weight loading for inference
    has_aibom: bool                          # AI Bill of Materials for training data
    has_pipeline_code_signing: bool          # Signed training pipeline code


@dataclass
class APISecurityProfile:
    has_rate_limiting: bool
    has_long_window_query_monitoring: bool   # Multi-day query budget tracking
    has_extraction_pattern_detection: bool  # Model extraction query signature detection
    has_output_watermarking: bool            # Cryptographic output watermarking
    has_ddos_protection: bool


@dataclass
class GovernanceProfile:
    has_ai_vdp: bool                         # AI Vulnerability Disclosure Program
    has_third_party_audit: bool              # Annual third-party security audit
    last_audit_date: Optional[str]           # ISO date of last audit
    has_post_incident_audit_procedure: bool  # Audit trigger after serious incident


@dataclass
class Art53dComplianceGap:
    domain: str
    gap_description: str
    statutory_basis: str
    severity: str                            # critical / high / medium
    remediation: str


def assess_art53d_compliance(
    systemic_risk_basis: SystemicRiskBasis,
    capability_risk: CapabilityRiskLevel,
    infra: PhysicalInfrastructureProfile,
    weights: WeightSecurityProfile,
    api: APISecurityProfile,
    governance: GovernanceProfile,
) -> List[Art53dComplianceGap]:
    """
    Assess Art.53(1)(d) compliance gaps for a systemic risk GPAI provider.
    Returns list of identified gaps sorted by severity.
    """
    gaps: List[Art53dComplianceGap] = []

    # Physical infrastructure checks
    if infra.infrastructure_type == InfrastructureType.US_HYPERSCALER_EU_REGION \
            and not infra.cloud_act_risk_mitigated:
        gaps.append(Art53dComplianceGap(
            domain="physical_infrastructure",
            gap_description="Model weights and training data stored in US-hyperscaler "
                            "EU region are subject to CLOUD Act compulsion by US "
                            "government without EU legal process.",
            statutory_basis="Art.53(1)(d) — physical infrastructure of the model",
            severity="critical" if capability_risk == CapabilityRiskLevel.HIGH else "high",
            remediation="Migrate GPAI training and inference infrastructure to EU-sovereign "
                       "cloud provider (no US parent) or on-premises EU infrastructure.",
        ))

    if infra.tier_level < 3:
        gaps.append(Art53dComplianceGap(
            domain="physical_infrastructure",
            gap_description=f"Data center Tier {infra.tier_level} provides insufficient "
                            "availability and physical security for systemic risk GPAI "
                            "infrastructure.",
            statutory_basis="Art.53(1)(d) — adequate cybersecurity protection",
            severity="high",
            remediation="Upgrade to Tier III+ data center facilities with documented "
                       "physical access controls and availability SLA.",
        ))

    if not infra.has_tpm_attestation:
        gaps.append(Art53dComplianceGap(
            domain="physical_infrastructure",
            gap_description="Inference servers lack TPM-based runtime attestation, "
                            "preventing verification that model weights are loaded in "
                            "an expected, unmodified environment.",
            statutory_basis="Art.53(1)(d) — cybersecurity protection for GPAI model",
            severity="high",
            remediation="Implement TPM attestation for all inference servers hosting "
                       "systemic risk GPAI model weights.",
        ))

    # Model weight security checks
    if not weights.has_hsm_encryption:
        gaps.append(Art53dComplianceGap(
            domain="model_weight_security",
            gap_description="Model weights encrypted with software key stores — keys "
                            "are vulnerable to memory-scraping attacks and infrastructure "
                            "compromise.",
            statutory_basis="Art.53(1)(d) — cybersecurity protection for GPAI model",
            severity="high" if capability_risk == CapabilityRiskLevel.HIGH else "medium",
            remediation="Integrate HSM or TPM-based key management for weight encryption "
                       "keys. Implement TEE-based weight loading for inference.",
        ))

    if not weights.has_aibom:
        gaps.append(Art53dComplianceGap(
            domain="training_pipeline_integrity",
            gap_description="No AI Bill of Materials (AIBOM) documenting training data "
                            "sources, collection methods, and integrity hashes — cannot "
                            "verify training pipeline integrity or detect data poisoning.",
            statutory_basis="Art.53(1)(d) — physical infrastructure of the model "
                            "(training pipeline)",
            severity="high",
            remediation="Implement AIBOM with per-shard hash verification, source "
                       "attribution logging, and data pipeline code signing.",
        ))

    # API security checks
    if not api.has_extraction_pattern_detection:
        gaps.append(Art53dComplianceGap(
            domain="api_security",
            gap_description="No model extraction attack pattern detection — systematic "
                            "querying for model distillation or capability mapping is "
                            "not monitored.",
            statutory_basis="Art.53(1)(d) — cybersecurity protection for GPAI model",
            severity="high" if capability_risk == CapabilityRiskLevel.HIGH else "medium",
            remediation="Implement long-window query budget monitoring and model "
                       "extraction signature detection. Add output watermarking for "
                       "authorized downstream access.",
        ))

    # Governance checks
    if not governance.has_ai_vdp:
        gaps.append(Art53dComplianceGap(
            domain="governance",
            gap_description="No AI Vulnerability Disclosure Program — external security "
                            "researchers have no sanctioned channel to report novel "
                            "capability-level vulnerabilities.",
            statutory_basis="Art.53(1)(d) — adequate cybersecurity protection (standard "
                            "of care for systemic risk AI)",
            severity="medium",
            remediation="Establish AI VDP with defined scope (capability-level risks, "
                       "prompt injection bypasses, infrastructure vulnerabilities), "
                       "safe harbor for researchers, and escalation path to incident "
                       "response team.",
        ))

    if not governance.has_third_party_audit:
        gaps.append(Art53dComplianceGap(
            domain="governance",
            gap_description="No third-party security audit — 'adequate' cybersecurity "
                            "for systemic risk GPAI models requires independent "
                            "verification.",
            statutory_basis="Art.53(1)(d) — adequate cybersecurity protection",
            severity="high",
            remediation="Commission annual third-party security audit covering physical "
                       "infrastructure, weight access controls, training pipeline "
                       "integrity, and API security.",
        ))

    # Sort by severity
    severity_order = {"critical": 0, "high": 1, "medium": 2}
    gaps.sort(key=lambda g: severity_order.get(g.severity, 3))
    return gaps


# Example: EU-domiciled GPAI provider on US-hyperscaler infrastructure
gaps = assess_art53d_compliance(
    systemic_risk_basis=SystemicRiskBasis.FLOPS_THRESHOLD,
    capability_risk=CapabilityRiskLevel.HIGH,
    infra=PhysicalInfrastructureProfile(
        infrastructure_type=InfrastructureType.US_HYPERSCALER_EU_REGION,
        tier_level=3,
        has_tpm_attestation=False,
        has_physical_access_logging=True,
        has_hardware_tamper_evidence=False,
        has_network_segregation=True,
        cloud_act_risk_mitigated=False,  # US parent = CLOUD Act risk
    ),
    weights=WeightSecurityProfile(
        has_hsm_encryption=False,
        has_multi_person_authorization=True,
        has_weight_access_audit_log=True,
        has_tee_inference=False,
        has_aibom=False,
        has_pipeline_code_signing=False,
    ),
    api=APISecurityProfile(
        has_rate_limiting=True,
        has_long_window_query_monitoring=False,
        has_extraction_pattern_detection=False,
        has_output_watermarking=False,
        has_ddos_protection=True,
    ),
    governance=GovernanceProfile(
        has_ai_vdp=False,
        has_third_party_audit=False,
        last_audit_date=None,
        has_post_incident_audit_procedure=False,
    ),
)

for gap in gaps:
    print(f"[{gap.severity.upper()}] {gap.domain}: {gap.gap_description[:80]}...")

25-Item Art.53(1)(d) Compliance Checklist

Part A: Physical Infrastructure (Items 1–8)

1. Infrastructure jurisdiction documented, with CLOUD Act exposure assessed for every facility holding model weights or training data
2. Data centers meet Tier III or higher, with the availability SLA on record
3. All physical access badge-controlled and individually logged
4. Tamper-evident seals on accelerator and storage hardware
5. TPM-based runtime attestation on all inference servers
6. Training, inference, and corporate networks physically segregated
7. Redundant power in at least an N+1 configuration
8. Verified destruction procedure for decommissioned storage media

Part B: Model Weight Security (Items 9–14)

9. Weight encryption keys held in an HSM or TPM, not in software key stores
10. Weight export requires multi-person authorization
11. Full audit trail of every weight access
12. TEE-based weight loading for inference
13. Data loss prevention (DLP) monitoring on systems with weight access
14. Behavioral monitoring for insider threat on privileged accounts

Part C: Training Pipeline Integrity (Items 15–18)

15. AIBOM with per-shard integrity hashes for the training corpus
16. Training pipeline code signed, with an SBOM for dependencies
17. Source attribution logging for all corpus ingestion
18. Anomaly monitoring over training runs

Part D: API and Inference Security (Items 19–22)

19. Rate limiting and DDoS protection on all inference endpoints
20. Long-window (multi-day) query budget monitoring per account
21. Model extraction query pattern detection
22. Output watermarking for authorized downstream access

Part E: Governance (Items 23–25)

23. AI Vulnerability Disclosure Program with published scope and safe harbor
24. Annual third-party security audit covering Parts A–D
25. Post-incident audit procedure with an escalation path to S-05 incident reporting


Common Art.53(1)(d) Implementation Mistakes

Mistake 1: Equating Art.53(1)(d) with GPAI CoP Chapter 3 cybersecurity measures

CoP Chapter 3 measures S-08, S-09, and S-10 are a floor, not a ceiling. They cover prompt injection, basic weight access control, and anomaly monitoring. Art.53(1)(d) covers physical infrastructure, model extraction defense, training pipeline integrity, and governance measures that are not addressed in Chapter 3. Meeting S-08 to S-10 does not fully satisfy Art.53(1)(d).

Mistake 2: Treating Art.53(1)(d) as equivalent to Art.15 high-risk AI cybersecurity

Art.15 is designed for the inference-time behavior of high-risk AI systems — resistance to adversarial inputs, robustness of outputs. Art.53(1)(d) covers the entire operational stack of a GPAI model provider including training infrastructure. The two obligations have different scopes, different threat models, and partially different technical controls.

Mistake 3: Assuming US-hyperscaler EU regions satisfy "physical infrastructure" protection

The EU AI Act does not explicitly prohibit US-hyperscaler infrastructure. But the CLOUD Act creates a jurisdiction gap that is directly relevant to Art.53(1)(d)'s physical infrastructure protection requirement. A risk acceptance decision to remain on US-hyperscaler infrastructure for systemic risk GPAI model weights must be documented, and the rationale must be defensible to the AI Office.

Mistake 4: No AI VDP because "we don't do bug bounties"

A traditional software bug bounty program is not the same as an AI VDP. The AI VDP is specifically scoped to capability-level vulnerabilities, novel jailbreaks that circumvent S-08 Chapter 3 controls, and infrastructure security reports. The safe harbor and reporting channel are part of the "adequate cybersecurity protection" standard for a systemic risk GPAI provider.


EU-Sovereign Infrastructure as Art.53(1)(d) Risk Reduction

sota.io is an EU-sovereign PaaS platform — incorporated and operated entirely within EU jurisdiction, with no US parent entity. For GPAI providers deploying inference infrastructure on sota.io:

CLOUD Act risk eliminated: sota.io is not a US person or entity subject to CLOUD Act jurisdiction. Model weights stored and inference workloads running on sota.io infrastructure cannot be compelled to US government under CLOUD Act, satisfying the physical infrastructure protection rationale of Art.53(1)(d).

Art.70 confidentiality protection: The EU AI Act's Art.70 confidentiality obligations for trade secrets in AI systems interact with physical infrastructure jurisdiction. EU-jurisdiction infrastructure means Art.70 protections are the applicable framework for any government access requests, not US compulsion procedures.

DORA Art.30(2)(e) alignment: For financial sector GPAI providers subject to both Art.53(1)(d) and DORA, Art.30(2)(e) contractual data location requirements align naturally with EU-sovereign GPAI infrastructure — satisfying both regulatory frameworks with a single infrastructure decision.