2026-04-24 · 14 min read · sota.io team

EU AI Act Art.56: Codes of Practice for GPAI Models — Compliance Pathway, Conformity Presumption, and Commission Fallback (2026)

EU AI Act Article 56 establishes the Codes of Practice (CoP) as the central voluntary compliance mechanism for providers of General-Purpose AI models. In the Chapter V GPAI obligation architecture — Art.51 classification → Art.52 baseline → Art.53 systemic risk obligations → Art.54 authorised representatives → Art.55 AI Office evaluation powers — Art.56 is the exit ramp: the mechanism through which providers can demonstrate compliance without waiting for the AI Office to investigate them.

A Code of Practice is not a legal standard or a harmonised norm. It is a structured voluntary commitment that, once signed and followed, creates a rebuttable presumption of conformity with Art.52 (general GPAI obligations) and Art.53 (systemic risk obligations). For frontier AI developers operating at scale in the EU, Art.56 is the practical answer to the question: "How do we show regulators we are compliant before they ask?"

Art.56 became applicable on 2 August 2025 as part of Chapter V of the EU AI Act (Regulation (EU) 2024/1689). The AI Office launched its CoP facilitation process in mid-2024, before the Regulation came fully into force, resulting in an initial draft CoP that providers could sign ahead of the August 2025 application date.

For EU infrastructure providers and PaaS operators — including sota.io — Art.56 has indirect but significant relevance: GPAI model providers who rely on EU-hosted infrastructure to store CoP evidence, training documentation, and adversarial testing records operate under a single legal jurisdiction. Providers using US-incorporated cloud infrastructure for these records face the dual-jurisdiction risk: Art.56 monitoring obligations and CLOUD Act access requests operate simultaneously against the same evidence pool.


Art.56 in the Chapter V GPAI Compliance Architecture

Art.56 sits at the compliance-demonstration end of Chapter V:

| Article | Title | Function |
| --- | --- | --- |
| Art.51 | Systemic risk classification | Defines who the Art.56 extended CoP applies to |
| Art.52 | General GPAI obligations | Baseline obligations the CoP must address |
| Art.53 | Systemic risk obligations | Enhanced obligations the CoP must address for high-compute models |
| Art.54 | Authorised representatives | Non-EU provider gateway obligation |
| Art.55 | AI Office evaluation powers | External oversight — CoP deviation triggers evaluation |
| Art.56 | Codes of Practice | Voluntary compliance pathway + conformity presumption |

The relationship between Art.56 and Art.55 is bidirectional: following the CoP reduces the likelihood of Art.55 evaluation (the AI Office focuses resources on non-signatories and CoP deviations); conversely, Art.55 evaluation is explicitly triggered by CoP deviation without adequate alternative measures.
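This bidirectional relationship can be sketched as a simple triage rule. The following is an illustrative model of the article's reasoning, not AI Office methodology; the function name and risk labels are hypothetical:

```python
# Illustrative triage of the Art.55/Art.56 interaction described above.
# Labels and thresholds are hypothetical, not AI Office practice.

def evaluation_priority(is_signatory: bool, deviated: bool,
                        has_adequate_alternative: bool) -> str:
    """Rough Art.55 evaluation-risk triage based on CoP posture."""
    if not is_signatory:
        return "elevated"      # AI Office focuses resources on non-signatories
    if deviated and not has_adequate_alternative:
        return "triggered"     # CoP deviation without adequate alternative measures
    if deviated:
        return "review"        # deviation with claimed equivalent measures
    return "low"               # signatory following the CoP

print(evaluation_priority(is_signatory=True, deviated=False,
                          has_adequate_alternative=False))  # → low
```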


Art.56(1): AI Office Facilitation of Code of Practice Development

Art.56(1) assigns the AI Office primary responsibility for facilitating CoP development at Union level, with the aim of contributing to the proper application of the Regulation and taking into account international approaches.

What "Facilitation" Means in Practice

The AI Office does not write the CoP itself. Facilitation means:

| Facilitation Function | What the AI Office Does |
| --- | --- |
| Process design | Establishes working groups, timelines, consultation rounds |
| Stakeholder coordination | Invites providers, authorities, civil society, researchers |
| Draft consolidation | Synthesises input into coherent commitments |
| Adequacy assessment | Evaluates whether the draft CoP sufficiently ensures compliance |
| Publication and maintenance | Publishes final CoP, tracks signatories, manages updates |

The phrase "taking into account international approaches" is significant for frontier AI developers: it means the AI Office is expected to align CoP requirements with international AI governance frameworks — G7 AI principles, OECD AI guidelines, ISO/IEC 42001 — rather than creating a purely EU-centric compliance regime. This alignment reduces duplicative compliance burden for globally operating providers.

The 2024-2025 AI Office CoP Process

Before the Regulation applied on 2 August 2025, the AI Office launched a pre-deployment CoP process:

  1. Call for expression of interest (Q4 2024): Open invitation to GPAI providers, researchers, civil society
  2. Working group formation (Q1 2025): Multiple thematic groups covering capability evaluation, systemic risk, transparency, cybersecurity
  3. Draft CoP v1 (Q2 2025): First consolidated draft circulated for comment
  4. Draft CoP v2 (Q3 2025): Revised following public consultation
  5. Final CoP (August 2025): Applicable date alignment with Chapter V

The real-world CoP process was notable for including frontier AI providers (including non-EU companies), civil society organisations, academic researchers, and national AI authorities — reflecting Art.56(3)'s broad participation mandate.


Art.56(2): Mandatory CoP Content — Art.52 and Art.53 Coverage

Art.56(2) specifies what the CoP must cover. The AI Office and the AI Board aim to ensure the CoP addresses:

Art.52 Obligations (All GPAI Providers)

| Art.52 Obligation | CoP Commitment Area |
| --- | --- |
| Art.52(1) Technical documentation (Annex XI) | Documentation templates, update frequency, version control |
| Art.52(2) Information provision to downstream providers | Contractual clauses, API documentation standards |
| Art.52(3) Copyright policy | Robots.txt compliance, training data filtering policy |
| Art.52(4) Summary of training data | Disclosure scope, format, update mechanism |

Art.53 Obligations (Systemic Risk Providers Only)

| Art.53 Obligation | CoP Commitment Area |
| --- | --- |
| Art.53(1)(a) Adversarial testing | Standardised red-teaming protocols, scope (CBRN, jailbreak, agentic), AI Office review submission |
| Art.53(1)(b) Incident reporting | Detection-to-notification workflow, Art.87 interaction, timelines |
| Art.53(1)(c) Cybersecurity | Weight protection standards, inference security, supply chain |
| Art.53(1)(d) Energy efficiency | Training FLOPs disclosure, inference kWh reporting, PUE methodology |

The CoP also addresses Annex XI (technical documentation for GPAI models) and Annex XII (summary information for downstream providers). Art.56(2) specifically mentions adversarial testing procedure documentation as a required CoP element — reflecting the EU legislature's view that standardised adversarial testing is the cornerstone of systemic risk compliance.
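The coverage in the two tables above can be captured as a simple lookup, e.g. for wiring internal documentation checklists to article references. The keys mirror this article's tables; the structure is a sketch, not an official taxonomy:

```python
# Art.52/Art.53 obligations mapped to CoP commitment areas, mirroring the
# tables above. The dict structure is illustrative, not an official schema.

COP_COVERAGE = {
    "Art.52(1)": "technical_documentation",   # Annex XI templates, versioning
    "Art.52(2)": "downstream_information",    # contractual clauses, API docs
    "Art.52(3)": "copyright_policy",          # robots.txt / opt-out filtering
    "Art.52(4)": "training_data_summary",     # disclosure scope and format
    "Art.53(1)(a)": "adversarial_testing",    # red-teaming protocols
    "Art.53(1)(b)": "incident_reporting",     # detection-to-notification workflow
    "Art.53(1)(c)": "cybersecurity",          # weight protection, supply chain
    "Art.53(1)(d)": "energy_efficiency",      # FLOPs / kWh reporting
}

def systemic_risk_commitments() -> list[str]:
    """Commitment areas that apply only to systemic risk providers (Art.53)."""
    return [area for ref, area in COP_COVERAGE.items() if ref.startswith("Art.53")]

print(systemic_risk_commitments())
```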


Art.56(3): Participation — Who Can Join

Art.56(3) establishes a tiered participation structure:

Mandatory Invitees

The AI Office must invite all providers of GPAI models, together with the relevant national competent authorities, to participate in drawing up the CoP.

Discretionary Invitees

The AI Office may invite civil society organisations, industry bodies, academia, and other relevant stakeholders, such as downstream providers and independent experts, to support the process.

Third-Country Authority Cooperation

Art.56(3) explicitly anticipates cooperation with authorities of third countries that are significant providers of GPAI models. This provision reflects the practical reality that the most capable GPAI models are developed by US, UK, and other non-EU entities. The AI Office may accordingly coordinate with the AI governance and safety authorities of those jurisdictions.

For providers operating globally, this cross-border coordination is positive: it reduces the risk of conflicting compliance obligations between the EU CoP and equivalent national frameworks.


Art.56(4): AI Office and Board Oversight of CoP Quality

Art.56(4) assigns the AI Office and AI Board joint responsibility for ensuring CoP quality:

(a) The CoP must clearly outline specific objectives with concrete commitments or measures and, where appropriate, key performance indicators (KPIs) for measuring achievement.

(b) The CoP must take into account the specific nature and complexity of GPAI models and related value chains.

KPI Examples in Practice

The Art.56(4)(a) KPI requirement transforms abstract commitments into measurable obligations:

| Obligation Area | Example KPI |
| --- | --- |
| Adversarial testing | Minimum testing hours per model release; CBRN uplift threshold scores |
| Incident reporting | Time-to-detection and time-to-AI-Office-notification metrics |
| Copyright policy | Percentage of training data sources with documented opt-out compliance |
| Energy efficiency | Maximum kWh/1M tokens normalised inference energy |
| Documentation currency | Maximum lag between model update and Annex XI documentation update |

The KPI framework is significant for developers because it creates objective compliance thresholds — a CoP with KPIs transforms a vague commitment ("we take security seriously") into a verifiable metric ("we achieve time-to-notification ≤ 72 hours for all Art.3(49) serious incidents").
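The "≤ 72 hours" example above can be checked mechanically. A sketch follows; the 72-hour window is this article's illustrative KPI, not a figure fixed by Art.56:

```python
# Sketch of an automated KPI check for the notification metric quoted above.
# The 72-hour window is an illustrative KPI, not a statutory figure.
from datetime import datetime, timedelta

NOTIFICATION_KPI = timedelta(hours=72)

def notification_kpi_met(detected_at: datetime, notified_at: datetime) -> bool:
    """True if the AI Office was notified within the KPI window."""
    return timedelta(0) <= (notified_at - detected_at) <= NOTIFICATION_KPI

detected = datetime(2025, 11, 3, 9, 0)
notified = datetime(2025, 11, 5, 17, 30)          # 56.5 hours after detection
print(notification_kpi_met(detected, notified))   # → True
```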

Value Chain Complexity Acknowledgment

Art.56(4)(b) acknowledges that GPAI models operate within complex multi-layer value chains: foundation model provider → fine-tuning provider → API provider → application developer → end deployer → end user. The CoP must be structured so commitments are meaningful across this chain — not just at the foundation model level.


Art.56(5): AI Office Monitoring and Board Reporting

Art.56(5) creates a continuous monitoring obligation:

What Monitoring Looks Like

| Monitoring Activity | Frequency | Mechanism |
| --- | --- | --- |
| KPI reporting by signatories | Periodic (quarterly or annual) | Structured data submissions |
| AI Office spot checks | Event-triggered (incidents, deviations) | Information requests under Art.55(2) |
| Board reporting | Periodic | AI Office → Board summary reports |
| Public transparency | Annual | AI Office publishes monitoring summary |

The monitoring obligation has a direct operational implication: providers cannot simply sign the CoP and file it away. CoP participation requires ongoing evidence collection, structured reporting, and audit-readiness — the same disciplines that Art.53 self-compliance requires, but now with external oversight cadence.
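As a sketch of that reporting cadence, the helper below computes the next structured-submission deadline under an assumed quarterly cycle with a 30-day grace period. Both figures are assumptions for illustration; Art.56(5) fixes neither:

```python
# Sketch: next submission deadline for an assumed quarterly KPI reporting
# cadence with a 30-day grace period (both figures are assumptions).
from datetime import date, timedelta

def next_quarterly_deadline(today: date, grace_days: int = 30) -> date:
    """End of the current calendar quarter plus a reporting grace period."""
    quarter_end_month = ((today.month - 1) // 3 + 1) * 3
    if quarter_end_month == 12:
        quarter_end = date(today.year, 12, 31)
    else:
        # First day of the next quarter, minus one day
        quarter_end = date(today.year, quarter_end_month + 1, 1) - timedelta(days=1)
    return quarter_end + timedelta(days=grace_days)

print(next_quarterly_deadline(date(2025, 11, 3)))  # → 2026-01-30
```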


Art.56(6): Commission Implementing Acts — Fallback When CoP Fails

Art.56(6) is the most consequential provision for providers who choose not to participate. If:

  1. a CoP cannot be finalised by the time the relevant Chapter V obligations become applicable, or
  2. the AI Office, following its assessment, deems the CoP not adequate,

then the Commission may issue implementing acts providing common rules for implementing Art.52 and Art.53 obligations. These implementing acts are adopted under the examination procedure (Art.98(2)).

The Implementing Act Risk for Non-Participants

The Art.56(6) fallback mechanism creates a strategic dynamic for GPAI providers:

| Provider Action | Regulatory Outcome |
| --- | --- |
| Participates in CoP drafting | Shapes KPIs, documentation standards, and adversarial testing protocols before they become binding |
| Signs CoP, follows it | Benefits from conformity presumption; low Art.55 evaluation risk |
| Does not participate | Implementing acts are written without their input; less flexibility in how obligations are met |
| Non-participant after implementing act | Must comply with Commission rules verbatim — no equivalent-measure flexibility |

The implementing act is harder to comply with than the CoP for one structural reason: the Commission writes implementing acts as legal rules, not as flexible commitments. A CoP can accommodate equivalent measures and judgment calls; an implementing act cannot.

Timeline Risk

Art.56(6) does not specify a deadline for implementing act issuance. The Commission's authority activates when the AI Office determines CoP inadequacy. For planning purposes, providers should assume the fallback can activate at any point after such a determination, with no guaranteed transition period.


Art.56(7): Three-Year Review Cycle

Art.56(7) requires the AI Office to review CoPs at least every three years from their entry into force, updating them based on:

  1. Evolution of the state of the art — new model capabilities, new risk categories (e.g., agentic AI that didn't exist at CoP drafting)
  2. Evolution of the Regulation — Commission delegated acts, implementing acts, or legislative amendments that affect Art.52-53 scope

Practical Implication for Providers

The three-year review cycle means CoP compliance is not a one-time certification: providers must treat signed commitments as living obligations, re-validated against each updated CoP version.

For GPAI model providers with rapidly evolving model lines, the three-year cycle may lag capability development. A provider who trains a substantially more capable model between CoP reviews is expected to apply the existing CoP to the new model — with the understanding that the next review will incorporate updated adversarial testing standards for that capability level.
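The review clock itself is simple date arithmetic. A sketch, with an illustrative entry-into-force date:

```python
# Sketch: tracking the Art.56(7) review clock. The entry-into-force date is
# illustrative; the except branch clamps a 29 February start date.
from datetime import date

def next_cop_review(entry_into_force: date, cycle_years: int = 3) -> date:
    """Date by which the AI Office must next review the CoP."""
    year = entry_into_force.year + cycle_years
    try:
        return entry_into_force.replace(year=year)
    except ValueError:  # 29 Feb start, non-leap target year
        return entry_into_force.replace(year=year, day=28)

print(next_cop_review(date(2025, 8, 2)))  # → 2028-08-02
```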


Art.56(8): Information Obligation for CoP Signatories

Art.56(8) imposes a specific information obligation on signatories: upon request by the AI Office, signatories must provide information and evidence enabling assessment of compliance with CoP objectives and commitments.

This is distinct from Art.55(2) (AI Office powers to request information during model evaluations). Art.56(8) is a CoP-specific obligation that applies solely because the provider signed the CoP — it does not require the AI Office to initiate a formal Art.55 evaluation to trigger.

Documentation Requirements Under Art.56(8)

| CoP Commitment Category | Documentation to Maintain |
| --- | --- |
| Adversarial testing | Test protocols, scope, results, third-party red-team reports |
| Incident reporting | Detection logs, notification records, AI Office correspondence |
| Cybersecurity | Architecture diagrams, penetration test reports, access control logs |
| Energy efficiency | Training FLOPs records, inference kWh measurements, datacenter PUE |
| Training data copyright | Opt-out compliance records, robots.txt states at training time |
| Annex XI documentation | Version history, update logs, downstream communication records |

The Conformity Presumption in Practice

The conformity presumption under Art.56 works as follows:

Provider signs CoP
      ↓
Provider follows CoP commitments + KPIs
      ↓
AI Office monitoring confirms ongoing compliance
      ↓
Rebuttable presumption: Provider complies with Art.52 + Art.53
      ↓
Art.55 evaluation risk reduced (AI Office resources focus on non-signatories)

The presumption is rebuttable: if the AI Office finds evidence that a CoP signatory is systematically failing CoP commitments despite positive KPI reports, it may initiate an Art.55 evaluation notwithstanding CoP participation.

CoP vs. Individual Compliance Demonstration

For providers who choose not to sign the CoP, Art.56 does not explicitly require an alternative — but Art.55 and general Chapter V enforcement practice create strong incentives:

| Approach | Conformity Presumption | Art.55 Risk | Flexibility |
| --- | --- | --- | --- |
| CoP signatory, following CoP | Yes (Art.56) | Low | Medium (KPI-bound) |
| CoP signatory, deviating with equivalents | Conditional | Medium | Medium |
| Non-signatory, individual compliance | No | Higher | High |
| Non-signatory, post implementing act | No | Highest | Low |
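The comparison above can double as a small lookup for internal compliance dashboards. The labels are this article's shorthand, not statutory terms:

```python
# The compliance-posture comparison as a lookup table. Keys and labels are
# this article's shorthand, not statutory terms.

COMPLIANCE_POSTURES = {
    "signatory_following":    {"presumption": "yes",         "art55_risk": "low",     "flexibility": "medium"},
    "signatory_equivalents":  {"presumption": "conditional", "art55_risk": "medium",  "flexibility": "medium"},
    "non_signatory":          {"presumption": "no",          "art55_risk": "higher",  "flexibility": "high"},
    "non_signatory_post_act": {"presumption": "no",          "art55_risk": "highest", "flexibility": "low"},
}

print(COMPLIANCE_POSTURES["signatory_following"]["presumption"])  # → yes
```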

Art.56 × Art.53: The Adversarial Testing Bridge

Art.56(2) specifically requires the CoP to address adversarial testing procedures — creating a direct bridge between Art.53(1)(a) adversarial testing and Art.56 CoP compliance.

In practice this means:

  1. The CoP defines standardised adversarial testing protocols (red-teaming scope, CBRN uplift evaluation, jailbreak resistance thresholds)
  2. Providers who follow the CoP's adversarial testing protocol simultaneously satisfy Art.53(1)(a) standardised protocol requirement
  3. Adversarial testing results submitted to the AI Office under the CoP constitute evidence under Art.55(2) if an evaluation is triggered

This integration reduces compliance burden: instead of separately satisfying Art.53(1)(a) and maintaining CoP compliance, providers use the CoP's testing protocol as the single source of truth for both obligations.
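A minimal sketch of the "single source of truth" idea: one red-team record tagged against both obligations it evidences. All field names here are hypothetical:

```python
# Minimal sketch: one adversarial-testing record serving both the
# Art.53(1)(a) obligation and the Art.56(8) CoP evidence duty.
# Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class RedTeamRecord:
    model_version: str
    protocol_id: str         # CoP standardised protocol identifier
    hours: int
    scope: tuple[str, ...]   # e.g. ("CBRN", "jailbreak", "agentic")

    def evidences(self) -> list[str]:
        """Obligations this single record serves as evidence for."""
        return ["Art.53(1)(a) adversarial testing", "Art.56(8) CoP evidence"]

rec = RedTeamRecord("v4.1", "CoP-RT-2025", 520, ("CBRN", "jailbreak"))
print(rec.evidences())
```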


CLOUD Act Implications for CoP Evidence Records

CoP compliance requires maintaining substantial documentation under Art.56(8). For providers using US-incorporated cloud infrastructure for these records:

| Documentation Type | CLOUD Act Risk |
| --- | --- |
| Adversarial testing results | High — US government access request possible |
| Incident detection logs | High — contains model capability evidence |
| AI Office correspondence | Medium — may reveal regulatory posture |
| Training data copyright records | Medium — IP-sensitive |
| Energy efficiency reports | Low — publicly disclosed anyway |

The dual-jurisdiction risk is structural: the EU AI Office requests documentation under Art.56(8), while the US government can seek the same documentation under CLOUD Act 18 U.S.C. §2713. Note that the CLOUD Act reaches US-incorporated providers regardless of where the data physically resides; the decisive factor is the provider's incorporation, not data residency alone. Records held by an EU-incorporated provider on EU-established infrastructure fall outside CLOUD Act compulsion: GDPR and EU blocking statutes govern those records, and there is no US entity that a CLOUD Act order can bind.

For sota.io as an EU PaaS provider, this is the concrete compliance argument: GPAI model providers can satisfy Art.56(8) documentation obligations with zero CLOUD Act exposure by storing evidence records on EU infrastructure.
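A minimal guard for this policy might verify that every Art.56(8) evidence path points at EU-hosted storage. This is a sketch: the eu-storage:// prefix follows the example paths used in the tracker code below, and a real system would check provider incorporation and region metadata rather than a path prefix:

```python
# Sketch: a cheap guard that Art.56(8) evidence paths point at EU-hosted
# storage. The "eu-storage://" prefix is taken from this article's example
# paths; real deployments would verify region metadata, not a string prefix.

EU_PREFIXES = ("eu-storage://",)

def is_eu_hosted(evidence_location: str) -> bool:
    """True if the evidence path uses a recognised EU-hosted prefix."""
    return evidence_location.startswith(EU_PREFIXES)

print(is_eu_hosted("eu-storage://cop-evidence/adversarial/2025-Q4/"))  # → True
print(is_eu_hosted("s3://us-east-1/cop-evidence/"))                    # → False
```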


Python Implementation: CoP Compliance Tracker

from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional

class CoPStatus(str, Enum):
    SIGNATORY = "signatory"
    NON_SIGNATORY = "non_signatory"
    PENDING = "pending"

class CommitmentStatus(str, Enum):
    ON_TRACK = "on_track"
    AT_RISK = "at_risk"
    DEVIATED = "deviated"
    EQUIVALENT_MEASURE = "equivalent_measure"

@dataclass
class CoPCommitment:
    area: str               # e.g. "adversarial_testing", "incident_reporting"
    article_reference: str  # e.g. "Art.53(1)(a)"
    kpi_description: str
    kpi_target: str
    current_status: CommitmentStatus
    last_evidence_date: Optional[date] = None
    evidence_location: Optional[str] = None  # EU-hosted storage path
    notes: str = ""

@dataclass
class CoPComplianceRecord:
    provider_name: str
    cop_version: str
    cop_signed_date: Optional[date]
    status: CoPStatus
    commitments: list[CoPCommitment] = field(default_factory=list)

    def conformity_presumption_active(self) -> bool:
        """True if provider qualifies for Art.56 conformity presumption."""
        if self.status != CoPStatus.SIGNATORY:
            return False
        deviated = [
            c for c in self.commitments
            if c.current_status == CommitmentStatus.DEVIATED
        ]
        return len(deviated) == 0

    def ai_office_request_ready(self) -> dict:
        """Returns Art.56(8) readiness assessment for each commitment."""
        return {
            c.area: {
                "evidence_available": c.last_evidence_date is not None,
                "evidence_location": c.evidence_location,
                "status": c.current_status,
                "days_since_evidence": (
                    (date.today() - c.last_evidence_date).days
                    if c.last_evidence_date else None
                ),
            }
            for c in self.commitments
        }

    def at_risk_commitments(self) -> list[CoPCommitment]:
        return [
            c for c in self.commitments
            if c.current_status in (CommitmentStatus.AT_RISK, CommitmentStatus.DEVIATED)
        ]

    def report(self) -> str:
        lines = [
            f"CoP Compliance Report: {self.provider_name}",
            f"CoP Version: {self.cop_version} | Signed: {self.cop_signed_date}",
            f"Status: {self.status.value} | Conformity Presumption: {self.conformity_presumption_active()}",
            "",
            "Commitment Status:",
        ]
        for c in self.commitments:
            flag = "✅" if c.current_status == CommitmentStatus.ON_TRACK else "⚠️"
            lines.append(f"  {flag} [{c.article_reference}] {c.area}: {c.current_status.value}")
            if c.current_status != CommitmentStatus.ON_TRACK:
                lines.append(f"       KPI target: {c.kpi_target}")
                if c.notes:
                    lines.append(f"       Note: {c.notes}")
        return "\n".join(lines)


# Example usage
record = CoPComplianceRecord(
    provider_name="Acme GPAI GmbH",
    cop_version="CoP v2.0 (2025)",
    cop_signed_date=date(2025, 8, 1),
    status=CoPStatus.SIGNATORY,
    commitments=[
        CoPCommitment(
            area="adversarial_testing",
            article_reference="Art.53(1)(a)",
            kpi_description="Red-team evaluation before each major release",
            kpi_target="Minimum 500 hours CBRN-scope testing per release",
            current_status=CommitmentStatus.ON_TRACK,
            last_evidence_date=date(2025, 10, 15),
            evidence_location="eu-storage://cop-evidence/adversarial/2025-Q4/",
        ),
        CoPCommitment(
            area="incident_reporting",
            article_reference="Art.53(1)(b)",
            kpi_description="Time-to-AI-Office-notification for Art.3(49) events",
            kpi_target="≤ 72 hours from detection",
            current_status=CommitmentStatus.ON_TRACK,
            last_evidence_date=date(2025, 11, 1),
            evidence_location="eu-storage://cop-evidence/incidents/2025/",
        ),
        CoPCommitment(
            area="energy_efficiency",
            article_reference="Art.53(1)(d)",
            kpi_description="Quarterly inference kWh/1M token disclosure",
            kpi_target="Published within 30 days of quarter end",
            current_status=CommitmentStatus.AT_RISK,
            last_evidence_date=date(2025, 7, 30),
            notes="Q3 2025 disclosure overdue — internal reporting lag",
        ),
    ],
)

print(record.report())
print("\nAI Office Request Readiness:")
for area, info in record.ai_office_request_ready().items():
    print(f"  {area}: {'Ready' if info['evidence_available'] else 'NOT READY'}")

Art.56 Compliance Checklist (12 Items)

Before Signing the CoP

After Signing

Ongoing (Per Monitoring Cycle)


See Also