GPAI Enforcement Countdown: 98 Days to August 2, 2026 — The GPAI Provider Compliance Checklist
August 2, 2026 is 98 days away. On that date, the EU AI Act reaches full application — every provision that was not already applicable becomes enforceable, market surveillance authorities across all 27 Member States are fully operational, and the complete penalty regime, including the Art.101 fines for GPAI providers, is available to regulators.
For providers of general-purpose AI (GPAI) models, this deadline requires attention for a specific reason: your core obligations under Art.51–55 have been in force since August 2, 2025. One year in, the question is no longer whether the rules apply to you. It is whether you can demonstrate compliance to an AI Office that now has investigation, inspection, and enforcement tools at its disposal.
This guide provides a 30-item countdown checklist across two tracks — one for all GPAI providers, one for providers of models that present systemic risk — plus the enforcement context, Python compliance tooling, and the CLOUD Act dimension that makes infrastructure choices a compliance question.
What August 2, 2026 Means for GPAI Providers
The EU AI Act has a tiered application schedule governed by Art.113. The relevant milestones for GPAI providers are:
February 2, 2025 — Chapter I definitions and the Art.5 prohibited AI practices became enforceable.
August 2, 2025 — Chapter V (Art.51–56), which contains all GPAI-specific obligations, became applicable. GPAI providers have been subject to the Art.53 technical documentation, copyright compliance, and downstream transparency requirements for twelve months as of this writing.
August 2, 2026 — Full application. The market surveillance framework, the complete penalty regime (Art.99 for AI system operators, Art.101 for GPAI providers), and the Chapter III obligations for Annex III high-risk AI systems activate. For GPAI providers, this means the enforcement infrastructure that was being assembled during the transition year is now fully operational.
The practical implication: the AI Office has had twelve months to observe whether GPAI providers are meeting their Art.53–55 obligations. The providers who have not established compliant technical documentation, copyright policies, and (for systemic-risk models) adversarial testing programmes will enter the August 2026 enforcement landscape with demonstrable compliance gaps.
Two Compliance Tracks: All GPAI vs. Systemic Risk
EU AI Act Art.51 establishes two classification tiers for GPAI models, each carrying different obligations.
Track 1 — All GPAI Providers: any model that meets the Art.3(63) definition of a general-purpose AI model — an AI model, typically trained on large amounts of data using self-supervision at scale, that displays significant generality, can competently perform a wide range of distinct tasks, and can be integrated into a variety of downstream systems. This captures foundation models, large language models, and general-purpose multimodal models regardless of training compute; the baseline obligations live in Art.53.
Track 2 — Systemic Risk Models (Art.51): a GPAI model classified under Art.51(1) as presenting systemic risk. The threshold indicator under Art.51(2) is cumulative training compute greater than 10²⁵ floating-point operations (FLOPs). Models above this threshold are presumed to have the high-impact capabilities that constitute systemic risk; models below it can still be classified as systemic-risk by Commission decision based on the criteria in Annex XIII — model reach, capability evaluations, or downstream deployment patterns.
The compliance obligations are cumulative: Track 2 providers must satisfy all Track 1 obligations plus the additional systemic-risk requirements under Art.55.
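The Art.51(2) presumption can be sanity-checked with the widely used ~6·N·D estimate of dense-transformer training compute (roughly 6 FLOPs per parameter per training token). A minimal sketch; the parameter and token counts below are illustrative, not figures for any real model:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art.51(2) presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer:
    ~6 FLOPs per parameter per training token (forward + backward pass)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the 6*N*D estimate exceeds the 10^25 FLOPs presumption."""
    return estimate_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Illustrative: 70B parameters on 15T tokens gives ~6.3e24 FLOPs (below the
# threshold); 200B parameters on 15T tokens gives ~1.8e25 FLOPs (above it).
```

The estimate is a screening heuristic only; the Act's threshold is defined over actual cumulative training compute, which providers must document under Art.53(1)(a).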
What the AI Office Can Do to You After August 2, 2026
The AI Office has operated its GPAI oversight function since the chapter entered into force in August 2025. Its enforcement toolkit — which becomes most relevant once the full penalty framework activates — includes:
Art.91 — Information requests: The AI Office can require GPAI providers to provide any documentation and information necessary for it to carry out its supervisory tasks. This includes technical documentation, model evaluation results, incident reports, and training data descriptions. Providers who cannot produce compliant Art.53 documentation when requested face immediate enforcement exposure.
Art.92 — Power to conduct model evaluations: For systemic-risk GPAI models, the Commission, acting through the AI Office, can conduct evaluations of the model itself, either to assess compliance where the information gathered under Art.91 is insufficient or to investigate systemic risks. The AI Office can request access to the model for this purpose through APIs or other appropriate technical means, including source code where necessary.
Art.93 — Power to request measures: Where warranted, the Commission can require a GPAI provider to take measures to comply with its Art.53 and Art.55 obligations, to implement mitigation measures where a systemic-risk model presents a serious risk, or to restrict the making available of the model, withdraw it, or recall it.
Art.94 — Commitments and settlement decisions: If the AI Office identifies potential non-compliance, it can accept binding commitments from a GPAI provider to address its concerns without formal enforcement proceedings. This cooperative resolution mechanism — which we covered in detail in our Art.94 guide — is only available before a formal finding of infringement. Providers who have documented compliance efforts are better positioned to use this pathway.
The penalty exposure for GPAI providers sits in Art.101, not the general Art.99 regime for AI system operators: the Commission can fine a GPAI provider up to 3% of its total worldwide annual turnover or EUR 15,000,000, whichever is higher, for infringing its GPAI obligations, for supplying incorrect, incomplete, or misleading information in response to an Art.91 request, for failing to comply with an Art.93 measure, or for failing to provide model access for an Art.92 evaluation.
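As a quick exposure model: the GPAI fine ceiling is the higher of a fixed floor and a turnover percentage, so it scales with company size. A minimal sketch using the figures cited above; the turnover values in the comments are illustrative:

```python
def gpai_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Maximum fine for GPAI-obligation infringements: the higher of
    EUR 15 million and 3% of total worldwide annual turnover."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)

# EUR 100m turnover -> ceiling EUR 15m (the fixed floor applies)
# EUR 2bn turnover  -> ceiling EUR 60m (3% exceeds the floor)
```

The ceiling is a maximum, not a schedule; the actual fine depends on the gravity, duration, and consequences of the infringement.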
The 30-Item GPAI Compliance Checklist
Track 1: All GPAI Providers (Art.53)
Technical Documentation (Art.53(1)(a))
Art.53(1)(a) requires GPAI providers to draw up technical documentation before the model is placed on the market and to keep it up to date. The documentation must be sufficient to allow the AI Office to assess compliance; Annex XI specifies the required content in detail.
- 1. Model description document: A general description of the GPAI model covering its architecture, training approach, and capabilities. This is not a marketing document — it must give regulators enough information to assess Art.53 compliance and identify potential systemic-risk indicators.
- 2. Training data description: Documentation of the datasets used to train and fine-tune the model, including the sources of the data, the data selection methodology, data filtering and cleaning processes, and the volume and nature of training data.
- 3. Training methodology: A description of the training process, including the training compute (measured in FLOPs — this is your systemic-risk threshold documentation), training infrastructure, and the techniques used for instruction tuning, RLHF, or other alignment methods.
- 4. Capability evaluation results: Results of testing performed to evaluate the model's capabilities, including benchmark performance, safety evaluations, and any capability restrictions deliberately imposed.
- 5. Known limitations documentation: A description of known limitations, failure modes, and foreseeable harmful uses identified during model development and testing.
- 6. Instructions for downstream providers: Documentation of how downstream providers (those building applications on top of the GPAI model) should use the model, including safe use guidelines and applicable use restrictions.
- 7. Technical documentation update process: A defined process for keeping technical documentation current as the model is updated, fine-tuned, or its deployment context changes.
Copyright Compliance Policy (Art.53(1)(c))
- 8. Publicly available copyright policy: Art.53(1)(c) requires GPAI providers to put in place a policy for complying with EU copyright law in relation to training data. Publishing the policy, or a summary of it, is encouraged under the GPAI Code of Practice and is widely treated as best practice — plan for it to be a public-facing document, not a purely internal one.
- 9. Opt-out mechanism compliance: The copyright policy must address how the provider identifies and respects rights reservations under the text and data mining exception (Art.4 of the DSM Directive, (EU) 2019/790). Where rights holders have reserved their rights under Art.4(3), the GPAI provider must not use that content for training without a licence.
- 10. Training data provenance records: Internal records documenting the legal basis for using each major dataset in training, including licences, legitimate TDM carve-outs, and rights clearances.
- 11. Ongoing rights monitoring: A process for monitoring changes to rights reservations, licensing terms, and new training data sources as the model evolves.
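One machine-readable signal that rights holders use in practice is a robots.txt disallow aimed at crawlers associated with AI training. This is not the only valid form of an Art.4(3) reservation, and the crawler list below is an illustrative assumption, but checking it is a cheap component of an opt-out compliance process:

```python
from urllib.robotparser import RobotFileParser

# Illustrative subset of crawler user-agents associated with AI training
AI_CRAWLER_AGENTS = ["GPTBot", "CCBot", "Google-Extended"]

def crawler_permissions(robots_txt: str, path: str = "/") -> dict[str, bool]:
    """Map each known AI crawler agent to whether the given robots.txt
    allows it to fetch `path`. False suggests a training opt-out signal."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, path) for agent in AI_CRAWLER_AGENTS}

example = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
# crawler_permissions(example) flags GPTBot as disallowed, the others as allowed
```

A production pipeline would evaluate this per-source at crawl time and log the result as part of the provenance records in item 10.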
Downstream Provider Information and Training Data Summary (Art.53(1)(b) and (d))
- 12. Published summary of training data: Art.53(1)(d) requires GPAI providers to make publicly available a sufficiently detailed summary of the content used to train the model, following the template provided by the AI Office. This is distinct from the full training data description in the technical documentation — it is a public-facing transparency summary.
- 13. Machine-readable capability documentation: Information about the model's capabilities, limitations, and recommended use contexts published in a format accessible to downstream providers who need to assess whether the model is suitable for their applications.
- 14. Downstream integration guidance: Documentation specifying what downstream providers must communicate to their users, what uses are prohibited, and what technical constraints apply when building applications on top of the GPAI model.
- 15. Downstream high-risk support: If downstream providers integrate your GPAI model into Annex III high-risk AI systems, the information you supply under Art.53(1)(b) and Annex XII must be sufficient for them to meet their own obligations — including data governance (Art.10), technical documentation (Art.11), and transparency (Art.13).
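A machine-readable capability summary (item 13) can be as simple as a versioned JSON document published alongside the model. The schema below is an illustrative sketch, not an official template, and every field value is hypothetical:

```python
import json

# Illustrative capability summary for downstream providers; the field names
# and values are assumptions, not a mandated format.
capability_doc = {
    "model": "ExampleGPT-1",  # hypothetical model name
    "modalities": ["text"],
    "intended_uses": ["summarisation", "code assistance"],
    "prohibited_uses": ["biometric categorisation", "social scoring"],
    "known_limitations": ["hallucination under long-context retrieval"],
    "context_window_tokens": 128_000,
    "documentation_version": "2026-04-01",
}

# Serialise deterministically so downstream providers can diff versions
print(json.dumps(capability_doc, indent=2, sort_keys=True))
```

Versioning the document (and keeping old versions retrievable) also supports the Art.53(1)(a) requirement to keep documentation up to date.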
Code of Practice (Art.56)
- 16. CoP participation status: The EU AI Act establishes a Code of Practice mechanism under Art.56 as the primary means by which GPAI providers can demonstrate compliance with Art.53–55 until harmonised standards are published. Providers that adhere to the Code can rely on it to demonstrate compliance with the obligations it covers.
- 17. CoP commitments documented: If your organisation has participated in the CoP process — including the GPAI CoP development facilitated by the AI Office — your commitments and compliance status must be documented.
- 18. Alternative compliance pathway: If your organisation does not adhere to the CoP, you must be able to demonstrate alternative adequate means of complying with the Art.53 and Art.55 obligations, for assessment by the Commission.
Track 2: Systemic Risk Models (Art.55)
These items apply only to GPAI models whose training compute exceeds the 10²⁵ FLOPs presumption threshold under Art.51(2), or that have been classified as presenting systemic risk by Commission decision based on other criteria.
Model Evaluation (Art.55(1)(a))
- 19. Pre-deployment model evaluation: Art.55(1)(a) requires systemic-risk GPAI providers to perform model evaluations in accordance with standardised protocols before the model is placed on the market and after significant updates. These evaluations must assess the model's capabilities across dimensions relevant to systemic risk.
- 20. Adversarial testing programme: Art.55(1)(a) specifically requires adversarial testing, commonly referred to as "red-teaming." This means structured testing by internal teams or external experts attempting to identify failure modes, harmful capabilities, and ways the model could be misused at scale.
- 21. State-of-the-art safety evaluation methodology: The evaluation must track state-of-the-art safety assessment methods. As the field evolves, providers must update their evaluation approaches — static evaluation programmes that do not incorporate new assessment techniques will not satisfy this requirement.
- 22. Evaluation results retention: Results of all model evaluations and adversarial tests must be documented, retained, and available for submission to the AI Office upon request under Art.91.
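Retention is easier to defend when each evaluation result is stored with a timestamp and a content digest, so a record later produced for an Art.91 request can be shown to be unaltered. A minimal sketch; the record schema is an assumption, not a regulatory format:

```python
import hashlib
import json
from datetime import datetime, timezone

def evaluation_record(eval_name: str, results: dict) -> dict:
    """Wrap evaluation results with a UTC timestamp and a SHA-256 digest
    of the canonicalised results, for tamper-evident retention."""
    canonical = json.dumps(results, sort_keys=True).encode("utf-8")
    return {
        "evaluation": eval_name,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "results": results,
        "results_sha256": hashlib.sha256(canonical).hexdigest(),
    }

def verify_record(record: dict) -> bool:
    """Recompute the digest to confirm the stored results are unchanged."""
    canonical = json.dumps(record["results"], sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest() == record["results_sha256"]
```

In practice the records would go into append-only storage; the digest check is what lets you demonstrate integrity years later.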
Incident Reporting (Art.55(1)(c))
- 23. AI Office incident reporting mechanism: Systemic-risk GPAI providers must keep track of, document, and report serious incidents and possible corrective measures to the AI Office (and, as appropriate, to national competent authorities) without undue delay. A "serious incident" in the GPAI context includes: incidents causing death or serious harm, incidents affecting the security of critical infrastructure, discriminatory outputs at scale, and large-scale privacy violations enabled by the model.
- 24. Incident classification system: An internal classification system determining which events trigger the Art.55(1)(c) reporting obligation versus internal-only remediation.
- 25. Post-incident remediation documentation: Documentation of corrective measures taken following a serious incident, including timeline, root cause analysis, and model changes implemented.
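The classification logic itself can be tiny; the hard part is agreeing on the categories. A sketch using the incident categories listed above — the category names are assumptions, and a real classification policy needs legal review:

```python
# Categories that trigger serious-incident notification to the AI Office,
# drawn from the examples in the checklist above (illustrative names only).
REPORTABLE_CATEGORIES = {
    "death_or_serious_harm",
    "critical_infrastructure_security",
    "large_scale_discriminatory_output",
    "large_scale_privacy_violation",
}

def requires_ai_office_report(category: str) -> bool:
    """True if this incident category must be reported to the AI Office
    rather than handled through internal-only remediation."""
    return category in REPORTABLE_CATEGORIES
```

Encoding the policy as data rather than scattered conditionals makes it auditable: the set itself is the artefact you show a regulator.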
Cybersecurity (Art.55(1)(d))
- 26. Model-level cybersecurity assessment: Art.55(1)(d) requires systemic-risk GPAI providers to ensure an adequate level of cybersecurity protection for the model and its physical infrastructure. This includes protection against adversarial inputs, model extraction attacks, membership inference attacks, and prompt injection at scale.
- 27. Safeguards for the security of model weights: The weights of a systemic-risk GPAI model are a high-value target. Physical and logical access controls, encryption at rest and in transit, and access logging are baseline requirements.
- 28. Cybersecurity incident response for model infrastructure: A documented incident response plan specifically addressing cybersecurity incidents affecting the model's availability, integrity, or confidentiality.
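Access logging and encryption are policy controls, but weight integrity can also be verified technically: stream each checkpoint file through a hash and compare against a known-good digest recorded at release time. A minimal sketch; the filename in the comment is hypothetical:

```python
import hashlib
from pathlib import Path

def weights_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """SHA-256 a weights file in 1 MiB chunks, so multi-hundred-gigabyte
    checkpoints can be verified without loading them into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare weights_digest(Path("model.safetensors")) against the digest
# recorded when the checkpoint was signed off for release.
```

Scheduled digest checks also give you early warning of silent corruption or unauthorised modification of stored checkpoints.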
AI Office Access (Art.92)
- 29. Model access readiness for AI Office evaluations: Under Art.92, the AI Office can request access to a systemic-risk model, through APIs or other appropriate technical means, in order to conduct evaluations. This means having a technical process for providing controlled model access without compromising proprietary systems more broadly.
- 30. Documentation production process: A defined process for responding to Art.91 information requests within the time constraints the AI Office may specify — from documentation retrieval through to the technical documentation completeness checks required for a compliant response.
Python GPAI Compliance Tracker
The following implementation provides a structured tool for tracking GPAI compliance status across both tracks and generating countdown reports.
```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class ComplianceStatus(Enum):
    COMPLIANT = "compliant"
    IN_PROGRESS = "in_progress"
    GAP = "gap"
    NOT_APPLICABLE = "not_applicable"


class GPAITrack(Enum):
    ALL_GPAI = "all_gpai"            # Art.53 obligations
    SYSTEMIC_RISK = "systemic_risk"  # Art.55 additional obligations


@dataclass
class ComplianceItem:
    item_id: str
    description: str
    article: str
    track: GPAITrack
    status: ComplianceStatus = ComplianceStatus.GAP
    evidence_location: Optional[str] = None
    last_reviewed: Optional[date] = None
    notes: str = ""

    def is_ready(self) -> bool:
        return self.status == ComplianceStatus.COMPLIANT

    def days_without_review(self) -> Optional[int]:
        if self.last_reviewed:
            return (date.today() - self.last_reviewed).days
        return None


@dataclass
class GPAIComplianceTracker:
    organisation: str
    model_name: str
    training_flops: Optional[float] = None  # If known, may trigger the systemic-risk track
    is_systemic_risk_designated: bool = False
    items: list[ComplianceItem] = field(default_factory=list)

    def __post_init__(self):
        if not self.items:
            self._initialise_checklist()

    @property
    def is_systemic_risk(self) -> bool:
        systemic_risk_threshold = 1e25  # Art.51(2): greater than 10^25 FLOPs
        if self.training_flops and self.training_flops > systemic_risk_threshold:
            return True
        return self.is_systemic_risk_designated

    def _initialise_checklist(self):
        track1_items = [
            ("T1-01", "Model description document", "Art.53(1)(a)"),
            ("T1-02", "Training data description", "Art.53(1)(a)"),
            ("T1-03", "Training methodology + FLOPs documentation", "Art.53(1)(a)"),
            ("T1-04", "Capability evaluation results", "Art.53(1)(a)"),
            ("T1-05", "Known limitations documentation", "Art.53(1)(a)"),
            ("T1-06", "Instructions for downstream providers", "Art.53(1)(a)"),
            ("T1-07", "Technical documentation update process", "Art.53(1)(a)"),
            ("T1-08", "Publicly available copyright policy", "Art.53(1)(c)"),
            ("T1-09", "Opt-out mechanism compliance", "Art.53(1)(c)"),
            ("T1-10", "Training data provenance records", "Art.53(1)(c)"),
            ("T1-11", "Ongoing rights monitoring process", "Art.53(1)(c)"),
            ("T1-12", "Published training data summary", "Art.53(1)(d)"),
            ("T1-13", "Machine-readable capability documentation", "Art.53(1)(b)"),
            ("T1-14", "Downstream integration guidance", "Art.53(1)(b)"),
            ("T1-15", "Downstream high-risk support documentation", "Art.53(1)(b)"),
            ("T1-16", "CoP participation status documented", "Art.56"),
            ("T1-17", "CoP commitments documented", "Art.56"),
            ("T1-18", "Alternative compliance pathway (if no CoP)", "Art.56"),
        ]
        for item_id, desc, article in track1_items:
            self.items.append(ComplianceItem(
                item_id=item_id, description=desc, article=article,
                track=GPAITrack.ALL_GPAI,
            ))
        if self.is_systemic_risk:
            track2_items = [
                ("T2-01", "Pre-deployment model evaluation", "Art.55(1)(a)"),
                ("T2-02", "Adversarial testing programme", "Art.55(1)(a)"),
                ("T2-03", "State-of-the-art evaluation methodology", "Art.55(1)(a)"),
                ("T2-04", "Evaluation results retention", "Art.55(1)(a)"),
                ("T2-05", "AI Office incident reporting mechanism", "Art.55(1)(c)"),
                ("T2-06", "Incident classification system", "Art.55(1)(c)"),
                ("T2-07", "Post-incident remediation documentation", "Art.55(1)(c)"),
                ("T2-08", "Cybersecurity assessment for model", "Art.55(1)(d)"),
                ("T2-09", "Model weights security safeguards", "Art.55(1)(d)"),
                ("T2-10", "Cybersecurity incident response plan", "Art.55(1)(d)"),
                ("T2-11", "AI Office model access readiness", "Art.92"),
                ("T2-12", "Documentation production process", "Art.91"),
            ]
            for item_id, desc, article in track2_items:
                self.items.append(ComplianceItem(
                    item_id=item_id, description=desc, article=article,
                    track=GPAITrack.SYSTEMIC_RISK,
                ))

    def update_status(self, item_id: str, status: ComplianceStatus,
                      evidence: Optional[str] = None, notes: str = "") -> None:
        for item in self.items:
            if item.item_id == item_id:
                item.status = status
                item.last_reviewed = date.today()
                if evidence:
                    item.evidence_location = evidence
                if notes:
                    item.notes = notes
                return
        raise ValueError(f"Item {item_id} not found")

    def compliance_summary(self) -> dict:
        enforcement_date = date(2026, 8, 2)
        days_remaining = (enforcement_date - date.today()).days
        track1 = [i for i in self.items if i.track == GPAITrack.ALL_GPAI]
        track2 = [i for i in self.items if i.track == GPAITrack.SYSTEMIC_RISK]
        t1_ready = sum(1 for i in track1 if i.is_ready())
        t1_gaps = [i for i in track1 if i.status == ComplianceStatus.GAP]
        t2_ready = sum(1 for i in track2 if i.is_ready())
        t2_gaps = [i for i in track2 if i.status == ComplianceStatus.GAP]
        return {
            "organisation": self.organisation,
            "model": self.model_name,
            "days_to_august_2_2026": days_remaining,
            "systemic_risk_track": self.is_systemic_risk,
            "track1": {
                "total": len(track1),
                "compliant": t1_ready,
                "completion_pct": round(t1_ready / len(track1) * 100) if track1 else 0,
                "gaps": [{"id": i.item_id, "desc": i.description, "article": i.article}
                         for i in t1_gaps],
            },
            "track2": {
                "total": len(track2),
                "compliant": t2_ready,
                "completion_pct": round(t2_ready / len(track2) * 100) if track2 else 0,
                "gaps": [{"id": i.item_id, "desc": i.description, "article": i.article}
                         for i in t2_gaps],
            } if self.is_systemic_risk else None,
        }

    def print_countdown_report(self) -> None:
        summary = self.compliance_summary()
        print(f"\n{'=' * 60}")
        print(f"GPAI ENFORCEMENT COUNTDOWN — {summary['organisation']}")
        print(f"Model: {summary['model']}")
        print(f"Days to August 2, 2026: {summary['days_to_august_2_2026']}")
        print(f"Systemic Risk Track: {'YES' if summary['systemic_risk_track'] else 'NO'}")
        print(f"{'=' * 60}")
        t1 = summary["track1"]
        print(f"\nTrack 1 (Art.53 — All GPAI): {t1['compliant']}/{t1['total']} "
              f"({t1['completion_pct']}%)")
        if t1["gaps"]:
            print("  GAPS:")
            for g in t1["gaps"]:
                print(f"    [{g['id']}] {g['desc']} ({g['article']})")
        if summary["track2"]:
            t2 = summary["track2"]
            print(f"\nTrack 2 (Art.55 — Systemic Risk): {t2['compliant']}/{t2['total']} "
                  f"({t2['completion_pct']}%)")
            if t2["gaps"]:
                print("  GAPS:")
                for g in t2["gaps"]:
                    print(f"    [{g['id']}] {g['desc']} ({g['article']})")
        print(f"\n{'=' * 60}\n")
```
Usage example

```python
# Initialise tracker for a systemic-risk GPAI model
tracker = GPAIComplianceTracker(
    organisation="Acme AI GmbH",
    model_name="AcmeGPT-3",
    training_flops=2e25,  # Above 10^25 FLOPs — systemic-risk track activated
)

# Mark completed items
tracker.update_status("T1-08", ComplianceStatus.COMPLIANT,
                      evidence="https://acme.ai/copyright-policy",
                      notes="Policy published April 2025")
tracker.update_status("T1-12", ComplianceStatus.COMPLIANT,
                      evidence="https://acme.ai/training-data-summary")
tracker.update_status("T2-02", ComplianceStatus.IN_PROGRESS,
                      notes="Red-teaming engagement with external firm Q2 2026")

# Generate countdown report
tracker.print_countdown_report()
```
The CLOUD Act Dimension for GPAI Providers
GPAI providers operating on US-incorporated cloud infrastructure face a structural compliance risk that is distinct from operational cybersecurity. Under the US CLOUD Act (Clarifying Lawful Overseas Use of Data Act), US authorities can serve warrants on US cloud providers for data held on servers anywhere in the world — including in the EU.
For GPAI providers, the relevant assets include: model weights, training data, fine-tuning datasets, evaluation results, and technical documentation. If these assets are stored on AWS, Azure, or Google Cloud, and the GPAI provider uses a US-incorporated entity's services, US law enforcement and intelligence agencies have a legal pathway to that data that bypasses EU data protection frameworks.
This matters for Art.55(1)(d) (cybersecurity safeguards for systemic-risk models) in two ways. First, the AI Office's assessment of whether cybersecurity safeguards are adequate will increasingly consider jurisdiction risk alongside technical controls. Second, a CLOUD Act production order served on your infrastructure provider could extract model weights or training data without your knowledge — a scenario the Art.55(1)(d) safeguards are designed to prevent, but cannot address when the threat vector is the cloud provider's legal obligations rather than a technical attack.
EU-native infrastructure — providers incorporated under EU law with no US parent — removes this CLOUD Act exposure. Model weights, training data, and evaluation results stored on EU-native infrastructure are not reachable by US warrants served on a US corporate parent, because there is no US corporate parent. This is a structural compliance argument, not just a privacy preference.
See Also
- EU AI Act Art.51–52: GPAI Classification and Base Obligations — Developer Guide
- EU AI Act Art.53: GPAI Model Obligations — Technical Documentation, Copyright, and Downstream Transparency
- EU AI Act Art.55: Systemic Risk GPAI — Evaluation, Incident Reporting, and Cybersecurity Obligations
- EU AI Act Art.91: AI Office Information Requests for GPAI Providers
- EU AI Act Art.94: Commitments and Settlement Decisions — How to Resolve GPAI Enforcement Cooperatively
- EU AI Act GPAI Enforcement: Commission Powers and AI Office Actions (August 2026)