EU AI Act Art.56: Codes of Practice for GPAI Models — Compliance Pathway, Conformity Presumption, and Commission Fallback (2026)
EU AI Act Article 56 establishes the Codes of Practice (CoP) as the central voluntary compliance mechanism for providers of General-Purpose AI models. In the Chapter V GPAI obligation architecture — Art.51 classification → Art.52 baseline → Art.53 systemic risk obligations → Art.54 authorised representatives → Art.55 AI Office evaluation powers — Art.56 is the exit ramp: the mechanism through which providers can demonstrate compliance without waiting for the AI Office to investigate them.
A Code of Practice is not a legal standard or a harmonised norm. It is a structured voluntary commitment that, once signed and followed, creates a rebuttable presumption of conformity with Art.52 (general GPAI obligations) and Art.53 (systemic risk obligations). For frontier AI developers operating at scale in the EU, Art.56 is the practical answer to the question: "How do we show regulators we are compliant before they ask?"
Art.56 became applicable on 2 August 2025 as part of Chapter V of the EU AI Act (Regulation (EU) 2024/1689). The AI Office launched its CoP facilitation process in mid-2024, before the Regulation came fully into force, resulting in an initial draft CoP that providers could sign ahead of the August 2025 application date.
For EU infrastructure providers and PaaS operators — including sota.io — Art.56 has indirect but significant relevance: GPAI model providers who rely on EU-hosted infrastructure to store CoP evidence, training documentation, and adversarial testing records operate under a single legal jurisdiction. Providers using US-incorporated cloud infrastructure for these records face the dual-jurisdiction risk: Art.56 monitoring obligations and CLOUD Act access requests operate simultaneously against the same evidence pool.
Art.56 in the Chapter V GPAI Compliance Architecture
Art.56 sits at the compliance-demonstration end of Chapter V:
| Article | Title | Function |
|---|---|---|
| Art.51 | Systemic risk classification | Defines who Art.56 extended CoP applies to |
| Art.52 | General GPAI obligations | Baseline obligations CoP must address |
| Art.53 | Systemic risk obligations | Enhanced obligations CoP must address for high-compute models |
| Art.54 | Authorised representatives | Non-EU provider gateway obligation |
| Art.55 | AI Office evaluation powers | External oversight — CoP deviation triggers evaluation |
| Art.56 | Codes of Practice | Voluntary compliance pathway + conformity presumption |
The relationship between Art.56 and Art.55 is bidirectional: following the CoP reduces the likelihood of Art.55 evaluation (the AI Office focuses resources on non-signatories and CoP deviations); conversely, Art.55 evaluation is explicitly triggered by CoP deviation without adequate alternative measures.
Art.56(1): AI Office Facilitation of Code of Practice Development
Art.56(1) assigns the AI Office primary responsibility for facilitating CoP development at Union level, with the aim of contributing to the proper application of the Regulation and taking into account international approaches.
What "Facilitation" Means in Practice
The AI Office does not write the CoP itself. Facilitation means:
| Facilitation Function | What the AI Office Does |
|---|---|
| Process design | Establishes working groups, timelines, consultation rounds |
| Stakeholder coordination | Invites providers, authorities, civil society, researchers |
| Draft consolidation | Synthesises input into coherent commitments |
| Adequacy assessment | Evaluates whether draft CoP sufficiently ensures compliance |
| Publication and maintenance | Publishes final CoP, tracks signatories, manages updates |
The phrase "taking into account international approaches" is significant for frontier AI developers: it means the AI Office is expected to align CoP requirements with international AI governance frameworks — the G7 Hiroshima Process principles, the OECD AI Principles, ISO/IEC 42001 — rather than creating a purely EU-centric compliance regime. This alignment reduces duplicative compliance burden for globally operating providers.
The 2024-2025 AI Office CoP Process
Before the Regulation applied on 2 August 2025, the AI Office launched a pre-deployment CoP process:
- Call for expression of interest (Q4 2024): Open invitation to GPAI providers, researchers, civil society
- Working group formation (Q1 2025): Multiple thematic groups covering capability evaluation, systemic risk, transparency, cybersecurity
- Draft CoP v1 (Q2 2025): First consolidated draft circulated for comment
- Draft CoP v2 (Q3 2025): Revised following public consultation
- Final CoP (August 2025): Applicable date alignment with Chapter V
The real-world CoP process was notable for including frontier AI providers (including non-EU companies), civil society organisations, academic researchers, and national AI authorities — reflecting Art.56(3)'s broad participation mandate.
Art.56(2): Mandatory CoP Content — Art.52 and Art.53 Coverage
Art.56(2) specifies what the CoP must cover. The AI Office and the AI Board aim to ensure the CoP addresses:
Art.52 Obligations (All GPAI Providers)
| Art.52 Obligation | CoP Commitment Area |
|---|---|
| Art.52(1) Technical documentation (Annex XI) | Documentation templates, update frequency, version control |
| Art.52(2) Information provision to downstream providers | Contractual clauses, API documentation standards |
| Art.52(3) Copyright policy | Robots.txt compliance, training data filtering policy |
| Art.52(4) Summary of training data | Disclosure scope, format, update mechanism |
Art.53 Obligations (Systemic Risk Providers Only)
| Art.53 Obligation | CoP Commitment Area |
|---|---|
| Art.53(1)(a) Adversarial testing | Standardised red-teaming protocols, scope (CBRN, jailbreak, agentic), AI Office review submission |
| Art.53(1)(b) Incident reporting | Detection-to-notification workflow, Art.87 interaction, timelines |
| Art.53(1)(c) Cybersecurity | Weight protection standards, inference security, supply chain |
| Art.53(1)(d) Energy efficiency | Training FLOPs disclosure, inference kWh reporting, PUE methodology |
The CoP also addresses Annex XI (technical documentation for GPAI models) and Annex XII (summary information for downstream providers). Art.56(2) specifically mentions adversarial testing procedure documentation as a required CoP element — reflecting the EU legislature's view that standardised adversarial testing is the cornerstone of systemic risk compliance.
Art.56(3): Participation — Who Can Join
Art.56(3) establishes a tiered participation structure:
Mandatory Invitees
The AI Office must invite:
- All providers of GPAI models with systemic risk — the Art.51 threshold providers whose obligations are most extensive
Discretionary Invitees
The AI Office may invite:
- Other GPAI model providers (below systemic risk threshold) — voluntary participation extends the conformity presumption to Art.52 compliance
- Downstream providers and deployers of GPAI models
- National competent authorities (AIAs)
- Civil society organisations
- Independent experts and research bodies
Third-Country Authority Cooperation
Art.56(3) explicitly anticipates cooperation with authorities of third countries that are significant providers of GPAI models. This provision reflects the practical reality that the most capable GPAI models are developed by US, UK, and other non-EU entities. The AI Office may coordinate with:
- US NIST (AI Risk Management Framework alignment)
- UK DSIT / AI Safety Institute (frontier AI evaluation methodology)
- Japanese METI (AI governance principles)
- Canadian ISED (voluntary AI code frameworks)
For providers operating globally, this cross-border coordination is positive: it reduces the risk of conflicting compliance obligations between the EU CoP and equivalent national frameworks.
Art.56(4): AI Office and Board Oversight of CoP Quality
Art.56(4) assigns the AI Office and AI Board joint responsibility for ensuring CoP quality:
(a) The CoP must clearly outline specific objectives with concrete commitments or measures and, where appropriate, key performance indicators (KPIs) for measuring achievement.
(b) The CoP must take into account the specific nature and complexity of GPAI models and related value chains.
KPI Examples in Practice
The Art.56(4)(a) KPI requirement transforms abstract commitments into measurable obligations:
| Obligation Area | Example KPI |
|---|---|
| Adversarial testing | Minimum testing hours per model release; CBRN uplift threshold scores |
| Incident reporting | Time-to-detection and time-to-AI-Office-notification metrics |
| Copyright policy | Percentage of training data sources with documented opt-out compliance |
| Energy efficiency | Maximum normalised inference energy (kWh per 1M tokens) |
| Documentation currency | Maximum lag between model update and Annex XI documentation update |
The KPI framework is significant for developers because it creates objective compliance thresholds — a CoP with KPIs transforms a vague commitment ("we take security seriously") into a verifiable metric ("we achieve time-to-notification ≤ 72 hours for all Art.3(49) serious incidents").
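The 72-hour notification example can be expressed as a direct KPI check. A minimal sketch; the function name and record fields are illustrative, and the 72-hour target mirrors the example in the text rather than any published CoP value:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a time-to-notification KPI check of the kind
# Art.56(4)(a) envisages. The 72-hour window is the text's example value.
NOTIFICATION_KPI = timedelta(hours=72)

def meets_notification_kpi(detected_at: datetime, notified_at: datetime) -> bool:
    """True if the AI Office was notified within the KPI window."""
    return (notified_at - detected_at) <= NOTIFICATION_KPI

detected = datetime(2025, 11, 3, 9, 0)
on_time = datetime(2025, 11, 5, 17, 0)   # ~56 hours after detection
late = datetime(2025, 11, 7, 9, 1)       # just over 96 hours after detection

print(meets_notification_kpi(detected, on_time))  # True
print(meets_notification_kpi(detected, late))     # False
```

The point of a KPI of this shape is that it is binary and auditable: either the timestamp delta satisfies the threshold or it does not.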
Value Chain Complexity Acknowledgment
Art.56(4)(b) acknowledges that GPAI models operate within complex multi-layer value chains: foundation model provider → fine-tuning provider → API provider → application developer → end deployer → end user. The CoP must be structured so commitments are meaningful across this chain — not just at the foundation model level.
Art.56(5): AI Office Monitoring and Board Reporting
Art.56(5) creates a continuous monitoring obligation:
- The AI Office monitors and evaluates CoP signatories' achievement of objectives
- Regular reports are submitted to the AI Board
- Signatories must provide the AI Office with information necessary for monitoring
What Monitoring Looks Like
| Monitoring Activity | Frequency | Mechanism |
|---|---|---|
| KPI reporting by signatories | Periodic (quarterly or annual) | Structured data submissions |
| AI Office spot checks | Event-triggered (incidents, deviations) | Information requests under Art.55(2) |
| Board reporting | Periodic | AI Office → Board summary reports |
| Public transparency | Annual | AI Office publishes monitoring summary |
The monitoring obligation has a direct operational implication: providers cannot simply sign the CoP and file it away. CoP participation requires ongoing evidence collection, structured reporting, and audit-readiness — the same disciplines that Art.53 self-compliance requires, but now with external oversight cadence.
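The reporting cadence this implies reduces to a simple operational check. A minimal sketch, assuming a roughly quarterly interval; the actual cadence is fixed by the CoP itself, not by this code:

```python
from datetime import date, timedelta

# Assumed ~quarterly structured-submission interval (illustrative only).
REPORTING_INTERVAL = timedelta(days=92)

def next_report_due(last_report: date) -> date:
    """Date by which the next structured KPI submission is expected."""
    return last_report + REPORTING_INTERVAL

def report_overdue(last_report: date, today: date) -> bool:
    """True if the monitoring submission window has been missed."""
    return today > next_report_due(last_report)

print(report_overdue(date(2025, 7, 30), date(2025, 11, 15)))  # True
print(report_overdue(date(2025, 10, 1), date(2025, 11, 15)))  # False
```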
Art.56(6): Commission Implementing Acts — Fallback When CoP Fails
Art.56(6) is the most consequential provision for providers who choose not to participate. If:
- An adequate CoP cannot be finalised within the AI Office process, or
- The AI Office (after AI Board consultation) considers that the CoP would not sufficiently ensure compliance,
then the Commission may issue implementing acts providing common rules for implementing Art.52 and Art.53 obligations. These implementing acts are adopted under the examination procedure (Art.98(2)).
The Implementing Act Risk for Non-Participants
The Art.56(6) fallback mechanism creates a strategic dynamic for GPAI providers:
| Provider Action | Regulatory Outcome |
|---|---|
| Participates in CoP drafting | Shapes KPIs, documentation standards, and adversarial testing protocols before they become binding |
| Signs CoP, follows it | Benefits from conformity presumption; low Art.55 evaluation risk |
| Does not participate | Implementing acts are written without their input; less flexibility in how obligations are met |
| Non-participant after implementing act | Must comply with Commission rules verbatim — no equivalent-measure flexibility |
The implementing act is harder to comply with than the CoP for one structural reason: the Commission writes implementing acts as legal rules, not as flexible commitments. A CoP can accommodate equivalent measures and judgment calls; an implementing act cannot.
Timeline Risk
Art.56(6) does not specify a deadline for implementing act issuance. The Commission's authority activates when the AI Office determines CoP inadequacy. For planning purposes, providers should assume:
- If no adequate CoP exists by mid-2026, implementing act drafting begins
- Implementing acts typically take 12-18 months from initiation to adoption
- Once adopted, they apply to all providers, whether or not they participated in the CoP process
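The planning assumptions above can be turned into rough dates. A sketch under those stated assumptions; neither the mid-2026 trigger nor the 12-18 month window is a statutory deadline:

```python
from datetime import date
from calendar import monthrange

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping to month end."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, monthrange(year, month)[1])
    return date(year, month, day)

# Assumed "mid-2026" CoP-inadequacy finding (illustrative planning input).
trigger = date(2026, 7, 1)
earliest = add_months(trigger, 12)   # lower bound of drafting-to-adoption
latest = add_months(trigger, 18)     # upper bound of drafting-to-adoption
print(f"Implementing act adoption window: {earliest} to {latest}")
```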
Art.56(7): Three-Year Review Cycle
Art.56(7) requires the AI Office to review CoPs at least every three years from their entry into force, updating them based on:
- Evolution of the state of the art — new model capabilities, new risk categories (e.g., agentic AI that didn't exist at CoP drafting)
- Evolution of the Regulation — Commission delegated acts, implementing acts, or legislative amendments that affect Art.52-53 scope
Practical Implication for Providers
The three-year review cycle means CoP compliance is not a one-time certification. Providers must:
- Monitor AI Office CoP review announcements
- Submit comments and evidence during review rounds
- Update internal compliance programs to reflect revised CoP KPIs
- Maintain version-controlled compliance records showing which CoP version applied at which point in time
For GPAI model providers with rapidly evolving model lines, the three-year cycle may lag capability development. A provider who trains a substantially more capable model between CoP reviews is expected to apply the existing CoP to the new model — with the understanding that the next review will incorporate updated adversarial testing standards for that capability level.
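Keeping version-controlled records of which CoP version applied at which point in time reduces to a lookup over a version history. A minimal sketch with hypothetical version labels and effective dates:

```python
from datetime import date
from typing import Optional

# Hypothetical CoP version history: (effective date, label) pairs,
# sorted by effective date. These are not published CoP milestones.
COP_VERSIONS = [
    (date(2025, 8, 2), "CoP v1.0"),
    (date(2026, 3, 1), "CoP v1.1"),  # hypothetical mid-cycle update
]

def applicable_cop_version(on: date) -> Optional[str]:
    """Latest CoP version in force on the given date, or None if pre-CoP."""
    applicable = None
    for effective, label in COP_VERSIONS:
        if effective <= on:
            applicable = label
    return applicable

print(applicable_cop_version(date(2025, 12, 1)))  # CoP v1.0
print(applicable_cop_version(date(2026, 6, 1)))   # CoP v1.1
```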
Art.56(8): Information Obligation for CoP Signatories
Art.56(8) imposes a specific information obligation on signatories: upon request by the AI Office, signatories must provide information and evidence enabling assessment of compliance with CoP objectives and commitments.
This is distinct from Art.55(2) (AI Office powers to request information during model evaluations). Art.56(8) is a CoP-specific obligation that applies solely because the provider signed the CoP; the AI Office does not need to open a formal Art.55 evaluation to invoke it.
Documentation Requirements Under Art.56(8)
| CoP Commitment Category | Documentation to Maintain |
|---|---|
| Adversarial testing | Test protocols, scope, results, third-party red-team reports |
| Incident reporting | Detection logs, notification records, AI Office correspondence |
| Cybersecurity | Architecture diagrams, penetration test reports, access control logs |
| Energy efficiency | Training FLOPs records, inference kWh measurements, datacenter PUE |
| Training data copyright | Opt-out compliance records, robots.txt states at training time |
| Annex XI documentation | Version history, update logs, downstream communication records |
The Conformity Presumption in Practice
The conformity presumption under Art.56 works as follows:
```
Provider signs CoP
        ↓
Provider follows CoP commitments + KPIs
        ↓
AI Office monitoring confirms ongoing compliance
        ↓
Rebuttable presumption: Provider complies with Art.52 + Art.53
        ↓
Art.55 evaluation risk reduced (AI Office resources focus on non-signatories)
```
The presumption is rebuttable: if the AI Office finds evidence that a CoP signatory is systematically failing CoP commitments despite positive KPI reports, it may initiate an Art.55 evaluation notwithstanding CoP participation.
CoP vs. Individual Compliance Demonstration
For providers who choose not to sign the CoP, Art.56 does not explicitly require an alternative — but Art.55 and general Chapter V enforcement practice create strong incentives:
| Approach | Conformity Presumption | Art.55 Risk | Flexibility |
|---|---|---|---|
| CoP signatory, following CoP | Yes (Art.56) | Low | Medium (KPI-bound) |
| CoP signatory, deviating with equivalents | Conditional | Medium | Medium |
| Non-signatory, individual compliance | No | Higher | High |
| Non-signatory, post implementing act | No | Highest | Low |
Art.56 × Art.53: The Adversarial Testing Bridge
Art.56(2) specifically requires the CoP to address adversarial testing procedures — creating a direct bridge between Art.53(1)(a) adversarial testing and Art.56 CoP compliance.
In practice this means:
- The CoP defines standardised adversarial testing protocols (red-teaming scope, CBRN uplift evaluation, jailbreak resistance thresholds)
- Providers who follow the CoP's adversarial testing protocol simultaneously satisfy Art.53(1)(a) standardised protocol requirement
- Adversarial testing results submitted to the AI Office under the CoP constitute evidence under Art.55(2) if an evaluation is triggered
This integration reduces compliance burden: instead of separately satisfying Art.53(1)(a) and maintaining CoP compliance, providers use the CoP's testing protocol as the single source of truth for both obligations.
CLOUD Act Implications for CoP Evidence Records
CoP compliance requires maintaining substantial documentation under Art.56(8). For providers using US-incorporated cloud infrastructure for these records:
| Documentation Type | CLOUD Act Risk |
|---|---|
| Adversarial testing results | High — US government access request possible |
| Incident detection logs | High — contains model capability evidence |
| AI Office correspondence | Medium — may reveal regulatory posture |
| Training data copyright records | Medium — IP-sensitive |
| Energy efficiency reports | Low — publicly disclosed anyway |
The dual-jurisdiction risk is structural: the EU AI Office requests documentation under Art.56(8), while the US government can seek the same documentation under the CLOUD Act (18 U.S.C. §2713). The decisive factor is the provider's establishment, not the datacenter location: §2713 compels disclosure from providers subject to US jurisdiction wherever the data physically sits. Storing records with an EU-established provider that has no US nexus therefore removes CLOUD Act exposure, and GDPR Art.48 bars such a provider from honouring a third-country disclosure order that is not based on an international agreement.
For sota.io as an EU PaaS provider, this is the concrete compliance argument: GPAI model providers can satisfy Art.56(8) documentation obligations without CLOUD Act exposure by storing evidence records with EU-established infrastructure.
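The storage-jurisdiction check described here can be sketched as a simple classifier. The provider names and EU/US classification below are hypothetical, and the risk tiers mirror the table in this section:

```python
# Assumed provider classification (illustrative; verify incorporation in practice).
EU_ESTABLISHED = {"eu-storage", "sota-io"}
US_INCORPORATED = {"us-cloud"}

# Baseline risk tiers per record type, mirroring the table above.
RECORD_RISK = {
    "adversarial_testing": "high",
    "incident_logs": "high",
    "ai_office_correspondence": "medium",
    "copyright_records": "medium",
    "energy_reports": "low",
}

def cloud_act_exposure(record_type: str, storage_provider: str) -> str:
    """CLOUD Act exposure tier for a record, given where it is stored."""
    if storage_provider in EU_ESTABLISHED:
        return "none"  # no US nexus assumed for these providers
    return RECORD_RISK.get(record_type, "medium")

for rtype, host in [("adversarial_testing", "us-cloud"),
                    ("incident_logs", "eu-storage")]:
    print(f"{rtype} @ {host}: {cloud_act_exposure(rtype, host)}")
```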
Python Implementation: CoP Compliance Tracker
```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class CoPStatus(str, Enum):
    SIGNATORY = "signatory"
    NON_SIGNATORY = "non_signatory"
    PENDING = "pending"


class CommitmentStatus(str, Enum):
    ON_TRACK = "on_track"
    AT_RISK = "at_risk"
    DEVIATED = "deviated"
    EQUIVALENT_MEASURE = "equivalent_measure"


@dataclass
class CoPCommitment:
    area: str                    # e.g. "adversarial_testing", "incident_reporting"
    article_reference: str       # e.g. "Art.53(1)(a)"
    kpi_description: str
    kpi_target: str
    current_status: CommitmentStatus
    last_evidence_date: Optional[date] = None
    evidence_location: Optional[str] = None  # EU-hosted storage path
    notes: str = ""


@dataclass
class CoPComplianceRecord:
    provider_name: str
    cop_version: str
    cop_signed_date: Optional[date]
    status: CoPStatus
    commitments: list[CoPCommitment] = field(default_factory=list)

    def conformity_presumption_active(self) -> bool:
        """True if provider qualifies for the Art.56 conformity presumption."""
        if self.status != CoPStatus.SIGNATORY:
            return False
        deviated = [
            c for c in self.commitments
            if c.current_status == CommitmentStatus.DEVIATED
        ]
        return len(deviated) == 0

    def ai_office_request_ready(self) -> dict:
        """Art.56(8) readiness assessment for each commitment."""
        return {
            c.area: {
                "evidence_available": c.last_evidence_date is not None,
                "evidence_location": c.evidence_location,
                "status": c.current_status,
                "days_since_evidence": (
                    (date.today() - c.last_evidence_date).days
                    if c.last_evidence_date else None
                ),
            }
            for c in self.commitments
        }

    def at_risk_commitments(self) -> list[CoPCommitment]:
        return [
            c for c in self.commitments
            if c.current_status in (CommitmentStatus.AT_RISK, CommitmentStatus.DEVIATED)
        ]

    def report(self) -> str:
        lines = [
            f"CoP Compliance Report: {self.provider_name}",
            f"CoP Version: {self.cop_version} | Signed: {self.cop_signed_date}",
            f"Status: {self.status} | Conformity Presumption: {self.conformity_presumption_active()}",
            "",
            "Commitment Status:",
        ]
        for c in self.commitments:
            flag = "✅" if c.current_status == CommitmentStatus.ON_TRACK else "⚠️"
            lines.append(f"  {flag} [{c.article_reference}] {c.area}: {c.current_status}")
            if c.current_status != CommitmentStatus.ON_TRACK:
                lines.append(f"    KPI target: {c.kpi_target}")
            if c.notes:
                lines.append(f"    Note: {c.notes}")
        return "\n".join(lines)


# Example usage
record = CoPComplianceRecord(
    provider_name="Acme GPAI GmbH",
    cop_version="CoP v2.0 (2025)",
    cop_signed_date=date(2025, 8, 1),
    status=CoPStatus.SIGNATORY,
    commitments=[
        CoPCommitment(
            area="adversarial_testing",
            article_reference="Art.53(1)(a)",
            kpi_description="Red-team evaluation before each major release",
            kpi_target="Minimum 500 hours CBRN-scope testing per release",
            current_status=CommitmentStatus.ON_TRACK,
            last_evidence_date=date(2025, 10, 15),
            evidence_location="eu-storage://cop-evidence/adversarial/2025-Q4/",
        ),
        CoPCommitment(
            area="incident_reporting",
            article_reference="Art.53(1)(b)",
            kpi_description="Time-to-AI-Office-notification for Art.3(49) events",
            kpi_target="≤ 72 hours from detection",
            current_status=CommitmentStatus.ON_TRACK,
            last_evidence_date=date(2025, 11, 1),
            evidence_location="eu-storage://cop-evidence/incidents/2025/",
        ),
        CoPCommitment(
            area="energy_efficiency",
            article_reference="Art.53(1)(d)",
            kpi_description="Quarterly inference kWh/1M token disclosure",
            kpi_target="Published within 30 days of quarter end",
            current_status=CommitmentStatus.AT_RISK,
            last_evidence_date=date(2025, 7, 30),
            notes="Q3 2025 disclosure overdue — internal reporting lag",
        ),
    ],
)

print(record.report())
print("\nAI Office Request Readiness:")
for area, info in record.ai_office_request_ready().items():
    print(f"  {area}: {'Ready' if info['evidence_available'] else 'NOT READY'}")
```
Art.56 Compliance Checklist (12 Items)
Before Signing the CoP
- Map Art.52 obligations — confirm Annex XI documentation is current and version-controlled
- Map Art.53 obligations (if Art.51 systemic risk applies) — adversarial testing protocols, incident detection workflow, cybersecurity architecture
- Infrastructure audit — identify where CoP evidence records are stored; assess CLOUD Act exposure for US-hosted records
- KPI feasibility review — assess whether the CoP's KPI targets are achievable with current operational cadence
After Signing
- Establish evidence collection workflows for each CoP commitment area
- Configure evidence storage on EU-hosted infrastructure to mitigate Art.56(8) × CLOUD Act dual-jurisdiction risk
- Assign CoP compliance owner with authority to halt releases failing adversarial testing KPIs
- Integrate CoP KPIs into model development release gates
Ongoing (Per Monitoring Cycle)
- Submit periodic KPI reports to AI Office in required format and timeline
- Document any CoP deviations with equivalent-measure justification before the deviation occurs, not after
- Monitor AI Office CoP review announcements — participate in consultation rounds every three years
- Update compliance program when AI Office publishes revised CoP version after three-year review
See Also
- EU AI Act Art.52: General Obligations for All GPAI Model Providers — Documentation, Copyright, Transparency (2026)
- EU AI Act Art.53: Additional Obligations for GPAI Models with Systemic Risk — Adversarial Testing, Incident Reporting, Cybersecurity (2026)
- EU AI Act Art.55: AI Office Evaluation Powers over GPAI Models with Systemic Risk — Developer Guide (2026)
- EU AI Act Art.51: Classification of GPAI Models with Systemic Risk — 10²⁵ FLOPs Threshold and Commission Decision (2026)