2026-04-25 · 15 min read

GPAI Enforcement Countdown: 98 Days to August 2, 2026 — The GPAI Provider Compliance Checklist

August 2, 2026 is 98 days away. On that date, the EU AI Act reaches full application: the remaining provisions become enforceable (the main exception is Art.6(1) classification of high-risk systems embedded in regulated products, deferred to August 2, 2027), market surveillance authorities across all 27 Member States are fully operational, and the Commission's power under Art.101 to fine GPAI providers, carved out of the 2025 application date by Art.113, becomes available.

For providers of general-purpose AI (GPAI) models, this deadline requires attention for a specific reason: your core obligations under Art.51–55 have been in force since August 2, 2025. One year in, the question is no longer whether the rules apply to you. It is whether you can demonstrate compliance to an AI Office that now has investigation, inspection, and enforcement tools at its disposal.

This guide provides a 30-item countdown checklist across two tracks — one for all GPAI providers, one for providers of models that present systemic risk — plus the enforcement context, Python compliance tooling, and the CLOUD Act dimension that makes infrastructure choices a compliance question.

What August 2, 2026 Means for GPAI Providers

The EU AI Act has a tiered application schedule governed by Art.113. The relevant milestones for GPAI providers are:

February 2, 2025 — Chapter I definitions and the Art.5 prohibited AI practices became enforceable.

August 2, 2025 — Chapter V (Art.51–56), which contains all GPAI-specific obligations, entered into application, along with the governance chapter and most of the penalties chapter. GPAI providers have been subject to Art.53 technical documentation, copyright compliance, and downstream transparency requirements for twelve months as of this writing.

August 2, 2026 — Full application. The market surveillance framework, the Annex III high-risk obligations, and the Commission's Art.101 fining power over GPAI providers (the one piece of the penalties chapter excluded from the 2025 date) all activate. For GPAI providers, this means the enforcement infrastructure that was being assembled during the transition year is now fully operational.

The practical implication: the AI Office has had twelve months to observe whether GPAI providers are meeting their Art.53–55 obligations. The providers who have not established compliant technical documentation, copyright policies, and (for systemic-risk models) adversarial testing programmes will enter the August 2026 enforcement landscape with demonstrable compliance gaps.

Two Compliance Tracks: All GPAI vs. Systemic Risk

EU AI Act Art.51 establishes two classification tiers for GPAI models, each carrying different obligations.

Track 1 — All GPAI Providers (Art.3(63) definition): Any model trained on large amounts of data using self-supervised learning at scale, capable of generating text, images, audio, video, code, or other content, and designed to serve a wide range of downstream tasks. This captures foundation models, large language models, and general-purpose multimodal models regardless of training compute; the obligations for this tier sit in Art.53.

Track 2 — Systemic Risk Models (Art.51): A GPAI model that presents systemic risk to the EU. The presumption trigger under Art.51(2) is cumulative training compute exceeding 10²⁵ floating-point operations (FLOPs). Models above this threshold are presumed to present systemic risk; models below it can still be designated by the Commission on the basis of the criteria in Annex XIII, such as model reach, capability evaluations, or downstream deployment patterns.

The compliance obligations are cumulative: Track 2 providers must satisfy all Track 1 obligations plus the additional systemic-risk requirements under Art.55.
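Whether a model lands in Track 2 usually starts with a compute estimate. The Act counts actual cumulative training compute; a common back-of-envelope approximation for dense transformer training, not anything prescribed by the Act, is FLOPs ≈ 6 × parameters × training tokens:

```python
# Art.51(2) presumption threshold for systemic risk.
SYSTEMIC_RISK_FLOPS = 1e25

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Back-of-envelope training compute: ~6 FLOPs per parameter per token.
    A heuristic only; the Act expects the actual cumulative compute."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimate_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens lands around 6.3e24,
# below the presumption threshold (designation on other criteria remains possible).
print(estimate_training_flops(70e9, 15e12), presumed_systemic_risk(70e9, 15e12))
```
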

What the AI Office Can Do to You After August 2, 2026

The AI Office has operated its GPAI oversight function since Chapter V entered into application in August 2025. Its enforcement toolkit — which becomes most relevant once the Art.101 fining power activates — includes:

Art.91 — Information requests: The AI Office can require GPAI providers to supply any documentation and information necessary for it to assess compliance, including the Art.53 technical documentation, model evaluation results, incident reports, and training data descriptions. Providers who cannot produce compliant Art.53 documentation when requested face immediate enforcement exposure.

Art.92 — Model evaluations: The AI Office can conduct evaluations of GPAI models with systemic risk to assess compliance or investigate systemic risks, including by requiring access to the model through APIs or other appropriate technical means, up to and including source code (Art.92(2)).

Art.93 — Power to request measures: Where an evaluation or investigation warrants it, the AI Office can request a GPAI provider to take measures to comply with its obligations, to implement mitigation measures, or to restrict the making available of the model, withdraw it, or recall it.

Art.94 — Procedural rights: GPAI providers subject to these powers benefit from the procedural rights of Art.18 of Regulation (EU) 2019/1020, applied mutatis mutandis. In practice there is also a cooperative pathway: responding to an Art.93 request for measures before the Commission escalates to fines. Providers who have documented compliance efforts are better positioned to use it.

The penalty exposure for GPAI providers sits in Art.101, not the Art.99 regime that applies to operators of AI systems: the Commission can fine a GPAI provider up to EUR 15,000,000 or 3% of total worldwide annual turnover in the preceding financial year, whichever is higher, for infringing its obligations, failing to comply with an Art.91 information request or supplying incorrect, incomplete, or misleading information, failing to comply with measures requested under Art.93, or failing to provide model access for an Art.92 evaluation.
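The "whichever is higher" mechanic is worth encoding when modelling exposure: for smaller providers the fixed EUR 15M floor dominates, and the 3% turnover term takes over above roughly EUR 500M turnover. A minimal sketch (the turnover figures are hypothetical):

```python
def max_gpai_fine(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling of a fine for GPAI obligations: EUR 15M or 3% of total
    worldwide annual turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# Hypothetical providers: below ~EUR 500M turnover, the fixed floor binds.
print(max_gpai_fine(100_000_000))    # floor binds: 15,000,000
print(max_gpai_fine(2_000_000_000))  # 3% term binds: 60,000,000
```
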

The 30-Item GPAI Compliance Checklist

Track 1: All GPAI Providers (Art.53)

Technical Documentation (Art.53(1)(a))

Art.53(1)(a) requires GPAI providers to draw up technical documentation before the model is placed on the market and to keep it up to date thereafter. The documentation must be sufficient to allow the AI Office to assess compliance; Annex XI specifies the required content in detail.
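A low-effort control here is a manifest check that fails your build whenever a required documentation section is missing or empty. The section names below paraphrase Annex XI themes and are illustrative, not a restatement of the Annex:

```python
# Illustrative Annex XI coverage check. Section names paraphrase the Annex's
# themes; verify against the Annex text itself before relying on this.
REQUIRED_SECTIONS = {
    "model_description",         # tasks, architecture, parameter count
    "training_process",          # methodology, key design choices
    "training_data",             # type, provenance, curation
    "compute_and_energy",        # training compute (FLOPs), known energy use
    "evaluation_results",        # capability and limitation evaluations
    "downstream_instructions",   # information for providers integrating the model
}

def missing_sections(manifest: dict[str, str]) -> set[str]:
    """Required sections that are absent or empty in a documentation manifest."""
    return {s for s in REQUIRED_SECTIONS if not manifest.get(s)}

manifest = {"model_description": "v3 draft", "training_data": "v2 final"}
print(sorted(missing_sections(manifest)))
```

Wiring this into CI turns "is the Art.53 documentation current?" into a question answered on every release rather than when the AI Office asks.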

Copyright Compliance Policy (Art.53(1)(b))

Downstream Provider Information (Art.53(1)(c) and Art.53(2))

Code of Practice (Art.56)

Track 2: Systemic Risk Models (Art.55)

These items apply only to GPAI models that meet or exceed the 10²⁵ FLOPs threshold under Art.51(2), or that have been designated as presenting systemic risk by the Commission based on the criteria in Annex XIII.

Model Evaluation (Art.55(1)(a))

Incident Reporting (Art.55(1)(c))

Cybersecurity (Art.55(1)(d))

AI Office Access (Art.91–92)

Python GPAI Compliance Tracker

The following implementation provides a structured tool for tracking GPAI compliance status across both tracks and generating countdown reports.

from dataclasses import dataclass, field
from enum import Enum
from datetime import date
from typing import Optional


class ComplianceStatus(Enum):
    COMPLIANT = "compliant"
    IN_PROGRESS = "in_progress"
    GAP = "gap"
    NOT_APPLICABLE = "not_applicable"


class GPAITrack(Enum):
    ALL_GPAI = "all_gpai"          # Art.53 obligations
    SYSTEMIC_RISK = "systemic_risk" # Art.55 additional obligations


@dataclass
class ComplianceItem:
    item_id: str
    description: str
    article: str
    track: GPAITrack
    status: ComplianceStatus = ComplianceStatus.GAP
    evidence_location: Optional[str] = None
    last_reviewed: Optional[date] = None
    notes: str = ""

    def is_ready(self) -> bool:
        return self.status == ComplianceStatus.COMPLIANT

    def days_without_review(self) -> Optional[int]:
        if self.last_reviewed:
            return (date.today() - self.last_reviewed).days
        return None


@dataclass
class GPAIComplianceTracker:
    organisation: str
    model_name: str
    training_flops: Optional[float] = None  # If known, triggers systemic-risk track
    is_systemic_risk_designated: bool = False
    items: list[ComplianceItem] = field(default_factory=list)

    def __post_init__(self):
        if not self.items:
            self._initialise_checklist()

    @property
    def is_systemic_risk(self) -> bool:
        systemic_risk_threshold = 1e25  # 10^25 FLOPs
        if self.training_flops and self.training_flops >= systemic_risk_threshold:
            return True
        return self.is_systemic_risk_designated

    def _initialise_checklist(self):
        track1_items = [
            ("T1-01", "Model description document", "Art.53(1)(a)"),
            ("T1-02", "Training data description", "Art.53(1)(a)"),
            ("T1-03", "Training methodology + FLOPs documentation", "Art.53(1)(a)"),
            ("T1-04", "Capability evaluation results", "Art.53(1)(a)"),
            ("T1-05", "Known limitations documentation", "Art.53(1)(a)"),
            ("T1-06", "Instructions for downstream providers", "Art.53(1)(a)"),
            ("T1-07", "Technical documentation update process", "Art.53(1)(a)"),
            ("T1-08", "Publicly available copyright policy", "Art.53(1)(b)"),
            ("T1-09", "Opt-out mechanism compliance", "Art.53(1)(b)"),
            ("T1-10", "Training data provenance records", "Art.53(1)(b)"),
            ("T1-11", "Ongoing rights monitoring process", "Art.53(1)(b)"),
            ("T1-12", "Published training data summary", "Art.53(2)"),
            ("T1-13", "Machine-readable capability documentation", "Art.53(1)(c)"),
            ("T1-14", "Downstream integration guidance", "Art.53(1)(c)"),
            ("T1-15", "Downstream Art.55 obligations documentation", "Art.53(1)(c)"),
            ("T1-16", "CoP participation status documented", "Art.56"),
            ("T1-17", "CoP commitments documented", "Art.56"),
            ("T1-18", "Alternative compliance pathway (if no CoP)", "Art.56"),
        ]
        for item_id, desc, article in track1_items:
            self.items.append(ComplianceItem(
                item_id=item_id, description=desc, article=article,
                track=GPAITrack.ALL_GPAI
            ))

        if self.is_systemic_risk:
            track2_items = [
                ("T2-01", "Pre-deployment model evaluation", "Art.55(1)(a)"),
                ("T2-02", "Adversarial testing programme", "Art.55(1)(a)"),
                ("T2-03", "State-of-the-art evaluation methodology", "Art.55(1)(a)"),
                ("T2-04", "Evaluation results retention", "Art.55(1)(a)"),
                ("T2-05", "AI Office incident reporting mechanism", "Art.55(1)(c)"),
                ("T2-06", "Incident classification system", "Art.55(1)(c)"),
                ("T2-07", "Post-incident remediation documentation", "Art.55(1)(c)"),
                ("T2-08", "Cybersecurity assessment for model", "Art.55(1)(d)"),
                ("T2-09", "Model weights security safeguards", "Art.55(1)(d)"),
                ("T2-10", "Cybersecurity incident response plan", "Art.55(1)(d)"),
                ("T2-11", "AI Office model access readiness", "Art.92"),
                ("T2-12", "Documentation production process", "Art.91"),
            ]
            for item_id, desc, article in track2_items:
                self.items.append(ComplianceItem(
                    item_id=item_id, description=desc, article=article,
                    track=GPAITrack.SYSTEMIC_RISK
                ))

    def update_status(self, item_id: str, status: ComplianceStatus,
                      evidence: Optional[str] = None, notes: str = "") -> None:
        for item in self.items:
            if item.item_id == item_id:
                item.status = status
                item.last_reviewed = date.today()
                if evidence:
                    item.evidence_location = evidence
                if notes:
                    item.notes = notes
                return
        raise ValueError(f"Item {item_id} not found")

    def compliance_summary(self) -> dict:
        enforcement_date = date(2026, 8, 2)
        days_remaining = (enforcement_date - date.today()).days

        track1 = [i for i in self.items if i.track == GPAITrack.ALL_GPAI]
        track2 = [i for i in self.items if i.track == GPAITrack.SYSTEMIC_RISK]

        t1_ready = sum(1 for i in track1 if i.is_ready())
        t1_gaps = [i for i in track1 if i.status == ComplianceStatus.GAP]

        t2_ready = sum(1 for i in track2 if i.is_ready()) if track2 else None
        t2_gaps = [i for i in track2 if i.status == ComplianceStatus.GAP] if track2 else []

        return {
            "organisation": self.organisation,
            "model": self.model_name,
            "days_to_august_2_2026": days_remaining,
            "systemic_risk_track": self.is_systemic_risk,
            "track1": {
                "total": len(track1),
                "compliant": t1_ready,
                "completion_pct": round(t1_ready / len(track1) * 100) if track1 else 0,
                "gaps": [{"id": i.item_id, "desc": i.description, "article": i.article}
                         for i in t1_gaps],
            },
            "track2": {
                "total": len(track2),
                "compliant": t2_ready,
                "completion_pct": round(t2_ready / len(track2) * 100) if track2 else 0,
                "gaps": [{"id": i.item_id, "desc": i.description, "article": i.article}
                         for i in t2_gaps],
            } if self.is_systemic_risk else None,
        }

    def print_countdown_report(self) -> None:
        summary = self.compliance_summary()
        print(f"\n{'='*60}")
        print(f"GPAI ENFORCEMENT COUNTDOWN — {summary['organisation']}")
        print(f"Model: {summary['model']}")
        print(f"Days to August 2, 2026: {summary['days_to_august_2_2026']}")
        print(f"Systemic Risk Track: {'YES' if summary['systemic_risk_track'] else 'NO'}")
        print(f"{'='*60}")
        t1 = summary['track1']
        print(f"\nTrack 1 (Art.53 — All GPAI): {t1['compliant']}/{t1['total']} "
              f"({t1['completion_pct']}%)")
        if t1['gaps']:
            print("  GAPS:")
            for g in t1['gaps']:
                print(f"    [{g['id']}] {g['desc']} ({g['article']})")
        if summary['track2']:
            t2 = summary['track2']
            print(f"\nTrack 2 (Art.55 — Systemic Risk): {t2['compliant']}/{t2['total']} "
                  f"({t2['completion_pct']}%)")
            if t2['gaps']:
                print("  GAPS:")
                for g in t2['gaps']:
                    print(f"    [{g['id']}] {g['desc']} ({g['article']})")
        print(f"\n{'='*60}\n")

Usage example

# Initialise tracker for a systemic-risk GPAI model
tracker = GPAIComplianceTracker(
    organisation="Acme AI GmbH",
    model_name="AcmeGPT-3",
    training_flops=2e25,  # Above 10^25 FLOPs — systemic risk track activated
)

# Mark completed items
tracker.update_status("T1-08", ComplianceStatus.COMPLIANT,
                      evidence="https://acme.ai/copyright-policy",
                      notes="Policy published April 2025")
tracker.update_status("T1-12", ComplianceStatus.COMPLIANT,
                      evidence="https://acme.ai/training-data-summary")
tracker.update_status("T2-02", ComplianceStatus.IN_PROGRESS,
                      notes="Red-teaming engagement with external firm Q2 2026")

# Generate countdown report
tracker.print_countdown_report()

The CLOUD Act Dimension for GPAI Providers

GPAI providers operating on US-incorporated cloud infrastructure face a structural compliance risk that is distinct from operational cybersecurity. Under the US CLOUD Act (Clarifying Lawful Overseas Use of Data Act), US authorities can serve warrants on US cloud providers for data held on servers anywhere in the world — including in the EU.

For GPAI providers, the relevant assets include: model weights, training data, fine-tuning datasets, evaluation results, and technical documentation. If these assets are stored on AWS, Azure, or Google Cloud, and the GPAI provider uses a US-incorporated entity's services, US law enforcement and intelligence agencies have a legal pathway to that data that bypasses EU data protection frameworks.

This is relevant to Art.55(1)(d) (cybersecurity safeguards for systemic-risk models) in two ways. First, the AI Office's assessment of whether cybersecurity safeguards are adequate will increasingly consider jurisdiction risk alongside technical controls. Second, a CLOUD Act production order served on your infrastructure provider could extract model weights or training data without your knowledge. The technical safeguards Art.55(1)(d) requires are designed to prevent exactly that outcome, but they cannot address a threat vector that runs through the cloud provider's legal obligations rather than a technical attack.

EU-native infrastructure, meaning providers incorporated under EU law with no US parent, removes this CLOUD Act exposure. Model weights, training data, and evaluation results stored with such a provider are not subject to US warrants served on a US corporate parent, because there is no US corporate parent. This is a structural compliance argument, not just a privacy preference.
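The jurisdictional test in this section reduces to a predicate over corporate facts rather than technical controls. A sketch (the fields and entities are illustrative, and real control chains can be longer than one parent, so any actual assessment needs counsel):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InfraProvider:
    name: str
    incorporated_in: str       # ISO 3166 code of the contracting entity
    ultimate_parent_in: str    # ISO 3166 code of the ultimate parent

def cloud_act_exposed(provider: InfraProvider) -> bool:
    """US incorporation anywhere in this (simplified) chain of control creates
    a CLOUD Act service pathway, regardless of where servers physically sit."""
    return "US" in (provider.incorporated_in, provider.ultimate_parent_in)

# Illustrative entities, not assessments of real providers.
print(cloud_act_exposed(InfraProvider("EU subsidiary of US parent", "DE", "US")))  # True
print(cloud_act_exposed(InfraProvider("EU-native provider", "FR", "FR")))          # False
```
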

See Also