GPAI Code of Practice Final: Implementation Guide for AI Developers (2026)
The EU AI Office adopted the final General-Purpose AI (GPAI) Code of Practice in July 2025, just under a year after the EU AI Act entered into force on August 1, 2024. The CoP is the primary compliance pathway for providers of general-purpose AI models under Chapter V of the EU AI Act (Art.51–56). With GPAI obligations already in force since August 2, 2025 and AI Office enforcement beginning August 2, 2026, GPAI providers now have both the final CoP text and a hard deadline.
This is not voluntary guidance. The GPAI Code of Practice is a structured compliance mechanism with legal consequences: adherence creates a presumption of conformity with Art.52–55 obligations. Non-adherents do not escape enforcement — they must demonstrate equivalent compliance through an alternative pathway, with the burden of proof on the provider.
If you build, fine-tune, or distribute a general-purpose AI model that is made available in the EU, this guide explains what the final CoP requires, how the compliance pathways work, and what your team needs to implement before August 2026.
What the GPAI Code of Practice Actually Is
The GPAI Code of Practice was developed by the EU AI Office under Art.56 of the AI Act through a structured multi-stakeholder process. Three working groups — covering Transparency, Copyright, and Safety & Security — produced the three chapters of the final CoP text over approximately 12 months of drafts, workshops, and public consultations.
The CoP is not a technical standard in the sense of a CEN/CENELEC harmonised standard. It is a compliance instrument created by the AI Office under delegated authority, incorporating industry-developed measures that the AI Office has assessed as satisfying the underlying statutory obligations in Art.52–55. Adherence to the CoP creates a legal presumption that the underlying obligations are met — but the CoP itself is not the source of those obligations. The obligations exist in the statute. The CoP is the roadmap for meeting them.
Who the CoP applies to:
The CoP applies to providers of GPAI models as defined in Art.3(63) of the EU AI Act — entities that develop a GPAI model and make it available on the EU market, whether via direct deployment, API, open source release, or downstream licensing to AI system providers. The CoP distinguishes between:
- All GPAI providers: Subject to the general obligations in Art.52 (transparency, documentation, copyright compliance) and Art.54 (downstream provider information). Must implement Chapter 1 (Transparency) and Chapter 2 (Copyright) CoP measures.
- GPAI providers with systemic risk: Subject to additional obligations in Art.53 (adversarial testing, incident reporting, cybersecurity). Must additionally implement Chapter 3 (Safety & Security) CoP measures. The systemic risk threshold is 10^25 floating-point operations (FLOPs) training compute, or a Commission designation under Art.51(2).
The two compliance pathways:
Pathway A — CoP Signatory: The provider signs the CoP, implements the specified measures, and benefits from the presumption of conformity. The AI Office maintains a public register of CoP signatories. Enforcement inspections focus on CoP implementation quality rather than independent obligation satisfaction.
Pathway B — Equivalence: The provider does not sign the CoP but demonstrates to the AI Office and national competent authorities that its own measures satisfy the underlying Art.52–55 obligations to an equivalent degree. The burden of proof is on the provider. Equivalence assessments are provider-initiated and resource-intensive compared to the CoP pathway.
Most GPAI providers with EU market presence will find Pathway A substantially lower-friction. The equivalence pathway exists primarily for providers with pre-existing compliance frameworks that overlap substantially with CoP requirements.
Chapter 1: Transparency — What GPAI Providers Must Document
Chapter 1 of the GPAI CoP implements Art.52(1)(a) and (b): the general technical documentation obligation and the downstream provider information obligation.
Technical Documentation (Art.52(1)(a) + Annex XI)
The CoP Chapter 1 requires GPAI providers to maintain a technical documentation record covering:
Model architecture and training methodology:
- Architecture type (transformer, diffusion, mixture-of-experts, etc.) and parameter count if publicly known
- Training methodology (pre-training objective, fine-tuning stages, RLHF or equivalent alignment techniques)
- Compute used in training expressed in FLOPs (relevant for systemic risk threshold assessment)
- Training infrastructure jurisdiction (relevant for CLOUD Act risk assessment — see below)
Training data:
- Types and sources of training data (publicly available web data, licensed datasets, synthetic data, proprietary corpora)
- Geographic coverage and dominant language distribution
- Quality filtering and data governance measures applied pre-training
- Copyright compliance policy applicable to the training data corpus (feeds into Chapter 2)
Capabilities and limitations:
- Intended uses and reasonably foreseeable use cases
- Known limitations, biases, and failure modes identified in evaluations
- Languages and modalities supported
- Context window, output constraints, and API behaviour characteristics
Evaluation results:
- Benchmark performance on standard capability evaluations
- Safety evaluation results (harmlessness, refusal behaviour, factuality)
- Red-team evaluation summaries (for systemic risk providers — see Chapter 3)
The CoP specifies that technical documentation must be updated when a material change to the model occurs — a new major version, a significant fine-tuning update, or a change that affects the model's capabilities, limitations, or risk profile. Version control of technical documentation is a CoP requirement, not merely a recommendation.
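In practice, the version-control requirement can be met with an ordinary change log attached to the documentation record. A minimal sketch follows; the class and field names are our own illustration, not CoP terminology:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DocChange:
    version: str
    change_date: date
    description: str
    material: bool  # does the change affect capabilities, limitations, or risk profile?

@dataclass
class TechnicalDocRecord:
    model_name: str
    changelog: list[DocChange] = field(default_factory=list)

    def record_change(self, version: str, description: str,
                      material: bool, when: Optional[date] = None) -> None:
        self.changelog.append(
            DocChange(version, when or date.today(), description, material)
        )

    def material_changes(self) -> list[DocChange]:
        """Material changes are the ones that trigger a full documentation refresh."""
        return [c for c in self.changelog if c.material]
```

Flagging each entry as material or not gives the review team an immediate list of the updates that require a documentation refresh.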
CLOUD Act risk: Technical documentation records are among the most sensitive documents a GPAI provider creates. If stored on US-jurisdiction cloud infrastructure (AWS, Azure, GCP, regardless of EU data centre location), they are subject to extraterritorial US government access under the CLOUD Act (18 U.S.C. §2713) without requiring MLAT procedures or judicial oversight. For GPAI providers building EU market trust, storing technical documentation on EU-sovereign infrastructure eliminates this jurisdictional exposure.
Downstream Provider Information Obligation (Art.52(1)(b))
The CoP requires GPAI providers to publish a machine-readable model card that downstream providers — companies building applications on top of the GPAI model via API or open-source licensing — can use to satisfy their own EU AI Act obligations.
The model card must include:
- The technical documentation summary in structured JSON-LD or comparable machine-readable format
- Intended uses and prohibited uses (enabling downstream providers to assess Art.6 high-risk classification for their AI systems)
- Known limitations and evaluation results summaries
- Copyright compliance policy and training data licence information
- Update notification mechanism (downstream providers must be notified of material model updates affecting their compliance posture)
The downstream information obligation creates a chain of compliance responsibility. If a downstream provider builds a high-risk AI system using a GPAI model, the GPAI provider's model card is part of the downstream provider's own technical documentation. Deficiencies in the model card propagate downstream.
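The CoP text does not fix a single model-card schema. A minimal machine-readable card might look like the following; every field name and value here is an illustrative assumption, to be aligned with whatever schema the AI Office or your downstream providers standardise on:

```python
import json

# Illustrative model card payload. The "@context" value is a placeholder
# JSON-LD choice, and all other fields are our own sketch.
model_card = {
    "@context": "https://schema.org",
    "model_name": "example-model",
    "model_version": "2.1.0",
    "intended_uses": ["text generation", "summarisation"],
    "prohibited_uses": ["social scoring", "biometric categorisation"],
    "known_limitations": ["degraded accuracy on low-resource languages"],
    "evaluation_summary": {"safety_refusal_rate": 0.97},
    "copyright_policy_url": "https://example.com/copyright-policy",
    "update_notification": {"mechanism": "webhook",
                            "endpoint": "https://example.com/model-updates"},
}

card_json = json.dumps(model_card, indent=2)
```

Serialising to JSON keeps the card both human-readable and parseable by downstream providers' compliance tooling.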
Chapter 2: Copyright — Training Data and TDM Compliance
Chapter 2 of the GPAI CoP implements Art.52(1)(a)(ii): the obligation to maintain and make available a sufficiently detailed copyright compliance policy for training data.
What a Copyright Compliance Policy Must Cover
The CoP specifies that a copyright compliance policy must document:
The legal basis for each category of training data:
- Public domain works (copyright expired or dedicated to public domain)
- Works under open licences permitting commercial text and data mining (Creative Commons with commercial permission, permissive open source licences)
- Works accessed under the EU text and data mining exception in Art.4 of Directive (EU) 2019/790 (DSMD) — the TDM exception for lawful access
- Licensed works — direct licence agreements with rights holders or collecting societies
- Synthetic data generated by other AI systems (with the copyright lineage of the source model documented)
TDM opt-out detection and respect mechanism: Art.4(3) of the DSMD allows rights holders to opt out of text and data mining by appropriate means. The CoP requires GPAI providers to document their mechanism for detecting and respecting TDM opt-outs, including:
- robots.txt Disallow directives from pre-training crawls
- tdmrep.json (TDM Reservation Protocol) signals on crawled domains
- Header-based opt-out signals (X-Robots-Tag: tdmrep)
- Contractual opt-outs in licensing agreements
The CoP requires providers to demonstrate that their training data pipeline included systematic opt-out detection prior to data ingestion — not retrospective filtering after training.
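A pre-ingestion filter along these lines is one way to satisfy that requirement. The representation of the opt-out signals below is our own assumption about how a crawl pipeline might carry them, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class CrawlRecord:
    """One crawled document with the opt-out signals observed at crawl time."""
    url: str
    robots_disallowed: bool   # robots.txt Disallow matched this path
    tdmrep_reserved: bool     # tdmrep.json declared a TDM reservation
    x_robots_tag: str = ""    # raw X-Robots-Tag header value, if any

def tdm_opted_out(rec: CrawlRecord) -> bool:
    """True if any documented opt-out signal applies to this record."""
    return (rec.robots_disallowed
            or rec.tdmrep_reserved
            or "tdmrep" in rec.x_robots_tag.lower())

def filter_for_ingestion(records: list[CrawlRecord]) -> list[CrawlRecord]:
    """Drop opted-out records BEFORE ingestion, as the CoP requires."""
    return [r for r in records if not tdm_opted_out(r)]
```

Running the filter at ingestion time (rather than post-training) also produces a natural audit trail of which records were excluded and why.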
Licence inventory for licensed datasets: For each licensed training dataset, the CoP requires a documented record of the licence terms, the licensor identity, and the scope of permitted use (commercial, non-commercial, derivative works). This licence inventory feeds into the technical documentation record and is potentially disclosable in litigation under the ALD disclosure obligation.
Making the Copyright Policy Available
The CoP requires the copyright compliance policy to be publicly accessible in a form that allows rights holders to understand what data was collected, what opt-out signals were respected, and what mechanism exists to submit infringement claims. The policy must be available on the provider's website and cross-referenced in the model card.
Rights holders who believe their works were included in training data in violation of the opt-out mechanism or applicable licence have a claim pathway under the ALD framework, enhanced by the AI Office disclosure obligation. A well-documented copyright compliance policy is both a CoP requirement and litigation-risk management.
Chapter 3: Safety & Security — Systemic Risk GPAI Obligations
Chapter 3 of the GPAI CoP applies exclusively to GPAI providers with systemic risk (Art.51 designation or 10^25 FLOPs training compute). It implements the additional obligations in Art.53.
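As a rough self-screen against the 10^25 FLOPs threshold, the common 6·N·D heuristic (roughly 6 FLOPs per parameter per training token) gives an order-of-magnitude estimate. The heuristic and the function names are illustrative, not anything the CoP prescribes:

```python
# Order-of-magnitude screen against the Art.51 systemic risk threshold.
# The 6 * N * D rule of thumb is a widely used approximation of training
# compute, not a CoP-mandated formula; use it as a screen only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art.51: 10^25 FLOPs of training compute

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6.0 * parameters * training_tokens

def exceeds_systemic_risk_threshold(parameters: float, training_tokens: float) -> bool:
    return estimate_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens
print(f"{estimate_training_flops(70e9, 15e12):.2e}")  # 6.30e+24, below the threshold
```

Models near the boundary should record the actual measured training compute rather than rely on the approximation.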
Adversarial Testing (Art.53(1)(a))
The CoP requires systemic risk GPAI providers to conduct adversarial testing — red-teaming — before deployment and at periodic intervals. The Chapter 3 measures specify:
Pre-deployment red-teaming:
- Structured adversarial evaluation covering at least: jailbreak resistance, harmful content generation, sensitive topic handling (CSAM, weapons of mass destruction, critical infrastructure attacks), and autonomous capability assessment (self-replication, model exfiltration, long-horizon task execution)
- Involvement of both internal safety teams and independent third-party evaluators
- Documentation of red-team methodology, scope, findings, and mitigations applied
Ongoing adversarial testing schedule:
- Red-team evaluation must be repeated after material model updates (new major version, significant fine-tuning change affecting capability level)
- AI Office may request adversarial testing reports as part of an inspection under Art.91 — the records must be available and must cover the scope the CoP specifies
The CoP does not define a minimum red-team budget or team size, but specifies outcome requirements: providers must demonstrate that their adversarial testing would detect the categories of systemic risk identified in the AI Office's systemic risk catalogue.
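That outcome framing lends itself to a simple coverage check: map the red-team plan's tested categories against the catalogue and surface the gaps. The category names below are hypothetical placeholders; the AI Office catalogue defines the authoritative taxonomy:

```python
# Hypothetical category names standing in for the AI Office systemic
# risk catalogue, which defines the authoritative taxonomy.
RISK_CATALOGUE = {
    "jailbreak_resistance",
    "harmful_content_generation",
    "cbrn_uplift",
    "critical_infrastructure_attack",
    "autonomous_capability",
}

def coverage_gaps(tested: set[str]) -> set[str]:
    """Catalogue categories the current red-team plan does not exercise."""
    return RISK_CATALOGUE - tested

gaps = coverage_gaps({"jailbreak_resistance", "harmful_content_generation"})
```

An empty `gaps` set is the condition to demonstrate before deployment and after each material update.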
Incident Reporting (Art.53(1)(b))
Systemic risk GPAI providers must report serious incidents involving their model to the AI Office within 72 hours of becoming aware of the incident. Chapter 3 defines what constitutes a serious incident for a GPAI model:
- Incidents causing or contributing to death or serious physical harm
- Incidents causing large-scale harm to the rights of EU citizens
- Incidents involving the model being used to facilitate a criminal offence covered by Annex II (terrorism, weapons development, critical infrastructure attack)
- Unexpected model capability emergences that materially exceed documented capability levels
- Significant security breaches of the model's alignment or safety measures
The incident report must include: a description of the incident, the model version involved, the scope of impact (number of users affected, geographic spread), the immediate mitigation taken, and the longer-term corrective action planned. Post-incident reports are due within 15 days with full root-cause analysis.
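The two deadlines are mechanical enough to compute automatically from the moment of awareness. One assumption to flag: the sketch below starts both windows from the awareness timestamp, and the reference point for the 15-day report should be confirmed against the final CoP text:

```python
from datetime import datetime, timedelta

INITIAL_REPORT_WINDOW = timedelta(hours=72)  # initial report to the AI Office
POST_INCIDENT_WINDOW = timedelta(days=15)    # full root-cause report
# Assumption: both windows run from the moment of awareness; verify the
# reference point for the 15-day window against the final CoP text.

def reporting_deadlines(became_aware: datetime) -> dict[str, datetime]:
    return {
        "initial_report_due": became_aware + INITIAL_REPORT_WINDOW,
        "post_incident_report_due": became_aware + POST_INCIDENT_WINDOW,
    }

deadlines = reporting_deadlines(datetime(2026, 3, 1, 9, 0))
```

Wiring this into the incident-detection pipeline means the responsible team sees concrete due timestamps the moment an incident is logged.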
Incident reporting and the ALD: Incident reports to the AI Office become documentary evidence available in ALD disclosure proceedings. If an incident report demonstrates that the provider was aware of a failure mode that subsequently caused harm to a third party, the ALD disclosure obligation can surface that report in tort litigation. Accurate, complete incident reporting that demonstrates responsive mitigation is better evidentially than an incomplete report that appears to minimise the incident's scope.
Cybersecurity Measures (Art.53(1)(c))
Chapter 3 requires systemic risk GPAI providers to implement adequate cybersecurity measures for the model and its infrastructure. The CoP specifies:
- Protection against prompt injection attacks that could cause the model to act contrary to its alignment measures
- Model weight protection — technical and organisational measures to prevent unauthorised access to model weights (weight exfiltration is a critical systemic risk vector)
- Access control for model fine-tuning and inference infrastructure
- Monitoring for anomalous usage patterns that may indicate adversarial probing or systematic jailbreaking campaigns
- Incident detection and response capabilities for security breaches
The cybersecurity measures must be documented in the technical documentation record and are subject to AI Office inspection.
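For the anomalous-usage monitoring measure, a sliding-window refusal-rate check per API key is one simple starting point. The window size and threshold below are illustrative defaults, not CoP-specified values:

```python
from collections import defaultdict, deque

class ProbeMonitor:
    """Flag API keys whose recent requests trip safety refusals at a high
    rate, a rough proxy for systematic jailbreak probing. Window size and
    threshold are illustrative defaults, not CoP-specified values."""

    def __init__(self, window: int = 50, refusal_rate_threshold: float = 0.4):
        self.window = window
        self.threshold = refusal_rate_threshold
        self._history: dict[str, deque] = defaultdict(lambda: deque(maxlen=window))

    def record(self, api_key: str, was_refused: bool) -> bool:
        """Record one request; return True once the key looks anomalous."""
        h = self._history[api_key]
        h.append(was_refused)
        if len(h) < self.window:
            return False  # not enough observations to judge yet
        return sum(h) / len(h) >= self.threshold
```

A production system would layer richer signals (prompt similarity, request cadence, account age) on top, but even this minimal detector generates the kind of monitoring evidence an AI Office inspection would look for.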
Presumption of Conformity: The Legal Mechanism
Art.56(8) of the EU AI Act establishes the legal effect of CoP adherence: GPAI providers who adhere to an approved Code of Practice are presumed to comply with the obligations of Art.52–55. This presumption is rebuttable — an AI Office investigation can find non-compliance even for a CoP signatory — but it substantially shifts the enforcement dynamic.
What the presumption does:
- Shifts the burden of proof in enforcement proceedings. The AI Office must demonstrate that the signatory's CoP implementation failed to satisfy the underlying statutory obligations, rather than the provider demonstrating that its measures are sufficient.
- Narrows the scope of AI Office inspection. Inspections of CoP signatories focus on CoP implementation quality — did the provider actually implement what the CoP requires? — rather than conducting an open-ended assessment of whether any unspecified measure satisfies Art.52–55.
- Reduces regulatory uncertainty. The CoP converts the statutory obligations (which are expressed at a relatively abstract level) into specific, concrete implementation measures. A provider that has implemented the CoP measures has documented evidence of compliance intent.
What the presumption does not do:
- It does not prevent enforcement if a CoP signatory materially fails to implement the CoP measures (in which case the presumption is rebutted and the underlying obligation applies).
- It does not protect against liability under the ALD, the new PLD, or national tort law. CoP compliance is a regulatory compliance instrument, not a liability shield.
- It does not substitute for Art.5 prohibited practices compliance — the prohibitions apply regardless of CoP status.
Signatory Registration and AI Office Register
The AI Office maintains a public register of GPAI CoP signatories. Registration requires:
- Signing the CoP adherence declaration (available from the AI Office)
- Submitting an initial implementation report documenting which CoP measures have been implemented and on what timeline
- Agreeing to periodic implementation audits by the AI Office or its designees
The AI Office register is publicly accessible. Downstream providers, deployers, and enterprise customers increasingly use CoP signatory status as a procurement criterion. GPAI providers that are not on the register face commercial pressure in addition to regulatory risk.
Python Tooling: GPAI CoP Adherence Tracker
from dataclasses import dataclass, field
from enum import Enum
from datetime import date
from typing import Optional
class CoPChapter(Enum):
TRANSPARENCY = "Chapter 1: Transparency"
COPYRIGHT = "Chapter 2: Copyright"
SAFETY_SECURITY = "Chapter 3: Safety & Security"
class ImplementationStatus(Enum):
NOT_STARTED = "not_started"
IN_PROGRESS = "in_progress"
IMPLEMENTED = "implemented"
VERIFIED = "verified"
@dataclass
class CoPMeasure:
measure_id: str
chapter: CoPChapter
title: str
art_reference: str
applies_to_systemic_risk_only: bool = False
status: ImplementationStatus = ImplementationStatus.NOT_STARTED
implementation_date: Optional[date] = None
evidence_location: Optional[str] = None
notes: str = ""
@dataclass
class GPAICoPAdherenceRecord:
provider_name: str
model_name: str
model_version: str
is_systemic_risk: bool
signatory_status: bool = False
registration_date: Optional[date] = None
enforcement_deadline: date = date(2026, 8, 2)
measures: list[CoPMeasure] = field(default_factory=list)
def days_to_enforcement(self) -> int:
return (self.enforcement_deadline - date.today()).days
def chapter_completion(self, chapter: CoPChapter) -> dict:
chapter_measures = [m for m in self.measures if m.chapter == chapter]
if not chapter_measures:
return {"total": 0, "complete": 0, "percentage": 0}
complete = sum(
1 for m in chapter_measures
if m.status in (ImplementationStatus.IMPLEMENTED, ImplementationStatus.VERIFIED)
)
return {
"total": len(chapter_measures),
"complete": complete,
"percentage": round(complete / len(chapter_measures) * 100, 1),
}
def overall_cop_readiness(self) -> dict:
applicable = [
m for m in self.measures
if not m.applies_to_systemic_risk_only or self.is_systemic_risk
]
if not applicable:
return {"total": 0, "complete": 0, "percentage": 0, "gap_count": 0}
complete = sum(
1 for m in applicable
if m.status in (ImplementationStatus.IMPLEMENTED, ImplementationStatus.VERIFIED)
)
return {
"total": len(applicable),
"complete": complete,
"percentage": round(complete / len(applicable) * 100, 1),
"gap_count": len(applicable) - complete,
}
def critical_gaps(self) -> list[CoPMeasure]:
applicable = [
m for m in self.measures
if (not m.applies_to_systemic_risk_only or self.is_systemic_risk)
and m.status == ImplementationStatus.NOT_STARTED
]
return applicable
def generate_implementation_report(self) -> str:
readiness = self.overall_cop_readiness()
report = [
f"=== GPAI CoP Adherence Report: {self.provider_name} / {self.model_name} ===",
f"Model version: {self.model_version}",
f"Systemic risk provider: {'YES' if self.is_systemic_risk else 'NO'}",
f"CoP signatory: {'YES (registered)' if self.signatory_status else 'NO (not registered)'}",
f"Days to enforcement (2026-08-02): {self.days_to_enforcement()}",
"",
f"Overall readiness: {readiness['complete']}/{readiness['total']} measures "
f"({readiness['percentage']}%)",
"",
]
for chapter in CoPChapter:
ch = self.chapter_completion(chapter)
if ch["total"] > 0:
report.append(
f" {chapter.value}: {ch['complete']}/{ch['total']} ({ch['percentage']}%)"
)
gaps = self.critical_gaps()
if gaps:
report.extend(["", f"Critical gaps ({len(gaps)} not started):"])
for g in gaps[:10]:
report.append(f" - [{g.measure_id}] {g.title} ({g.art_reference})")
return "\n".join(report)
def build_standard_cop_measures() -> list[CoPMeasure]:
"""Return the standard CoP measure set for self-assessment."""
return [
# Chapter 1: Transparency
CoPMeasure("T-01", CoPChapter.TRANSPARENCY,
"Model architecture and parameter count documented",
"Art.52(1)(a), Annex XI"),
CoPMeasure("T-02", CoPChapter.TRANSPARENCY,
"Training methodology and objectives documented",
"Art.52(1)(a), Annex XI"),
CoPMeasure("T-03", CoPChapter.TRANSPARENCY,
"Training compute (FLOPs) documented",
"Art.52(1)(a), Art.51"),
CoPMeasure("T-04", CoPChapter.TRANSPARENCY,
"Training data types and sources documented",
"Art.52(1)(a)(i)"),
CoPMeasure("T-05", CoPChapter.TRANSPARENCY,
"Training infrastructure jurisdiction documented (CLOUD Act risk)",
"Art.52(1)(a)"),
CoPMeasure("T-06", CoPChapter.TRANSPARENCY,
"Capabilities, limitations, and known failure modes documented",
"Art.52(1)(a)"),
CoPMeasure("T-07", CoPChapter.TRANSPARENCY,
"Benchmark and safety evaluation results documented",
"Art.52(1)(a)"),
CoPMeasure("T-08", CoPChapter.TRANSPARENCY,
"Machine-readable model card published",
"Art.52(1)(b)"),
CoPMeasure("T-09", CoPChapter.TRANSPARENCY,
"Downstream provider update notification mechanism implemented",
"Art.52(1)(b), Art.54"),
CoPMeasure("T-10", CoPChapter.TRANSPARENCY,
"Technical documentation version control and change log",
"Art.52(1)(a)"),
# Chapter 2: Copyright
CoPMeasure("C-01", CoPChapter.COPYRIGHT,
"Copyright compliance policy published",
"Art.52(1)(a)(ii)"),
CoPMeasure("C-02", CoPChapter.COPYRIGHT,
"Legal basis documented for each training data category",
"Art.52(1)(a)(ii)"),
CoPMeasure("C-03", CoPChapter.COPYRIGHT,
"TDM opt-out detection mechanism documented (robots.txt, tdmrep.json)",
"Art.52(1)(a)(ii), DSMD Art.4(3)"),
CoPMeasure("C-04", CoPChapter.COPYRIGHT,
"TDM opt-out enforcement in training pipeline documented",
"Art.52(1)(a)(ii)"),
CoPMeasure("C-05", CoPChapter.COPYRIGHT,
"Licence inventory for licensed datasets maintained",
"Art.52(1)(a)(ii)"),
CoPMeasure("C-06", CoPChapter.COPYRIGHT,
"Rights holder infringement claim pathway published",
"Art.52(1)(a)(ii)"),
CoPMeasure("C-07", CoPChapter.COPYRIGHT,
"Synthetic data copyright lineage documented",
"Art.52(1)(a)(ii)"),
# Chapter 3: Safety & Security (systemic risk providers only)
CoPMeasure("S-01", CoPChapter.SAFETY_SECURITY,
"Pre-deployment red-team evaluation conducted",
"Art.53(1)(a)", applies_to_systemic_risk_only=True),
CoPMeasure("S-02", CoPChapter.SAFETY_SECURITY,
"Adversarial testing covers AI Office systemic risk catalogue",
"Art.53(1)(a)", applies_to_systemic_risk_only=True),
CoPMeasure("S-03", CoPChapter.SAFETY_SECURITY,
"Third-party red-team evaluation engaged",
"Art.53(1)(a)", applies_to_systemic_risk_only=True),
CoPMeasure("S-04", CoPChapter.SAFETY_SECURITY,
"Post-update adversarial testing schedule implemented",
"Art.53(1)(a)", applies_to_systemic_risk_only=True),
CoPMeasure("S-05", CoPChapter.SAFETY_SECURITY,
"Serious incident definition and detection mechanism implemented",
"Art.53(1)(b)", applies_to_systemic_risk_only=True),
CoPMeasure("S-06", CoPChapter.SAFETY_SECURITY,
"72-hour incident reporting to AI Office procedure documented",
"Art.53(1)(b)", applies_to_systemic_risk_only=True),
CoPMeasure("S-07", CoPChapter.SAFETY_SECURITY,
"Incident root-cause analysis and 15-day post-incident report",
"Art.53(1)(b)", applies_to_systemic_risk_only=True),
CoPMeasure("S-08", CoPChapter.SAFETY_SECURITY,
"Prompt injection protection measures implemented",
"Art.53(1)(c)", applies_to_systemic_risk_only=True),
CoPMeasure("S-09", CoPChapter.SAFETY_SECURITY,
"Model weight access controls and exfiltration protection",
"Art.53(1)(c)", applies_to_systemic_risk_only=True),
CoPMeasure("S-10", CoPChapter.SAFETY_SECURITY,
"Anomalous usage monitoring for adversarial probing",
"Art.53(1)(c)", applies_to_systemic_risk_only=True),
]
GPAI CoP Implementation Checklist
Chapter 1: Transparency (10 items)
- T-01 Model architecture type and parameter count (if public) documented in technical record
- T-02 Training methodology, pre-training objective, and fine-tuning stages documented
- T-03 Training compute expressed in FLOPs — systemic risk threshold assessed (10^25 FLOPs)
- T-04 Training data types, sources, geographic coverage, and dominant languages documented
- T-05 Training infrastructure jurisdiction confirmed — CLOUD Act exposure assessed
- T-06 Capabilities, known limitations, biases, and failure modes documented from evaluations
- T-07 Benchmark performance and safety evaluation results documented
- T-08 Machine-readable model card (JSON-LD or equivalent) published and accessible
- T-09 Downstream provider update notification mechanism tested and operational
- T-10 Technical documentation version control implemented — change log maintained
Chapter 2: Copyright (7 items)
- C-01 Copyright compliance policy drafted, reviewed, and publicly published
- C-02 Legal basis for each training data category documented (public domain, open licence, TDM exception, licensed)
- C-03 TDM opt-out detection mechanism documented — robots.txt, tdmrep.json, header signals covered
- C-04 TDM opt-out enforcement in training data pipeline confirmed — pre-ingestion not post-training
- C-05 Licence inventory for licensed datasets complete and version-controlled
- C-06 Rights holder infringement claim pathway published and operational
- C-07 Synthetic training data — copyright lineage of source models documented
Chapter 3: Safety & Security — Systemic Risk Providers Only (10 items)
- S-01 Pre-deployment red-team evaluation completed and documented before EU deployment
- S-02 Adversarial testing scope covers AI Office systemic risk catalogue (jailbreaks, CBRN, autonomous capability)
- S-03 Independent third-party red-team evaluation engaged — findings and mitigations documented
- S-04 Post-update adversarial testing schedule formalised — triggered by material model changes
- S-05 Serious incident definition implemented — detection and escalation procedure operational
- S-06 72-hour AI Office incident reporting procedure tested — responsible team designated
- S-07 15-day post-incident report template prepared — root-cause analysis workflow defined
- S-08 Prompt injection protection measures implemented and tested
- S-09 Model weight access controls implemented — exfiltration risk assessment complete
- S-10 Anomalous usage monitoring for adversarial probing operational
Registration and Governance (5 items)
- R-01 Assessed CoP signatory pathway vs. equivalence pathway — decision documented
- R-02 CoP adherence declaration prepared — legal review complete
- R-03 Initial implementation report drafted for AI Office submission
- R-04 Internal CoP implementation owner designated — board or C-level sponsor confirmed
- R-05 AI Office register registration submitted before enforcement date (2026-08-02)
Key Dates for GPAI Providers
| Date | Event |
|---|---|
| 2024-08-01 | EU AI Act entry into force |
| 2025-07-10 | Final GPAI Code of Practice adopted by AI Office |
| 2025-08-02 | GPAI obligations (Chapter V, Art.51–56) in force |
| 2026-08-02 | AI Office enforcement begins — penalties applicable under Art.99(3) |
| Ongoing | Material model updates require documentation refresh and re-assessment |
GPAI model providers that began CoP implementation when GPAI obligations entered into force in August 2025 have had a full 12-month runway to the enforcement date. Providers starting implementation in 2026 should prioritise Chapter 1 (Transparency) and Chapter 2 (Copyright), which apply to all GPAI providers regardless of systemic risk status, and assess their training compute against the 10^25 FLOPs systemic risk threshold.
The EU AI Act enforcement calendar is accelerating. The GPAI CoP is the most clearly defined compliance pathway available. For developers building or deploying GPAI models in the EU, the choice between the CoP signatory pathway and the equivalence pathway is the most important near-term compliance decision.
See Also
- EU AI Act Art.52: GPAI Model General Obligations — Technical Documentation, Training Data & Copyright Developer Guide — Art.52 baseline obligations that CoP adherence creates a presumption of conformity with
- EU AI Act Art.53: GPAI Models with Systemic Risk — Adversarial Testing, Incident Reporting & Cybersecurity Developer Guide — Art.53 statutory obligations that Chapter 3 of the CoP implements for systemic risk providers
- GPAI Code of Practice Chapter 3: Adversarial Testing, Red-Teaming & Incident Reporting for Systemic Risk AI — deep-dive into the ten Safety & Security measures (S-01 to S-10) that apply to providers above the 10^25 FLOPs threshold
- EU AI Act Art.56: GPAI Codes of Practice — Systemic Risk Compliance Developer Guide — Art.56 is the legal basis for the CoP mechanism and the presumption-of-conformity effect
- EU AI Act Art.51: GPAI Model Classification — Systemic Risk Threshold and Provider Obligations Developer Guide — Art.51 determines which CoP chapters apply: all providers get Chapters 1–2, systemic risk providers additionally get Chapter 3