2026-04-27·13 min read·sota.io team

EU AI Act Transitional Provisions for Existing AI Systems: What Your 2024-Built Product Must Do by August 2026

You shipped an AI feature in 2023. Or 2024. Your recommendation engine, your credit-scoring model, your CV-screening tool — it was live before the EU AI Act entered into force.

Does the regulation apply to you? And if so, when?

The EU AI Act includes transitional provisions — grace periods that give existing AI systems time to come into compliance. But these provisions have a specific legal structure. "Already on the market" has a precise meaning. "Substantial modification" resets your compliance clock to zero. And the August 2026 deadline for high-risk AI systems is 97 days away as of this writing.

This guide answers the question every developer with a pre-regulation AI product is asking: exactly when do I need to comply, and what counts as a compliance-triggering event?


The EU AI Act Timeline: Four Compliance Dates

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. From that date, four staggered deadlines apply:

| Deadline | Date | What applies |
| --- | --- | --- |
| Prohibited practices | 2 February 2025 | Art.5 prohibitions: social scoring, real-time biometric surveillance, subliminal manipulation AI |
| GPAI model obligations | 2 August 2025 | General-purpose AI models (Art.51–56), including systemic-risk models |
| High-risk AI (Annex III) | 2 August 2026 | All high-risk AI systems in Annex III categories (employment, credit, biometric, education, etc.) |
| High-risk AI (Annex I) | 2 August 2027 | AI used as safety components in Annex I regulated products (machinery, medical devices, vehicles) |

The transitional provisions for existing systems sit on top of this timeline. They do not eliminate compliance obligations — they determine which compliance deadline applies to systems that were already available before each of these dates.
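The staggered timeline can be expressed as a simple lookup. This is an illustrative helper, not official tooling; the dictionary keys are names I've chosen, and the dates come from the table above:

```python
from datetime import date

# Application dates from Regulation (EU) 2024/1689 (see table above).
AI_ACT_DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),   # Art.5 prohibitions
    "gpai_obligations": date(2025, 8, 2),       # Art.51-56, GPAI models
    "high_risk_annex_iii": date(2026, 8, 2),    # Annex III high-risk systems
    "high_risk_annex_i": date(2027, 8, 2),      # Annex I safety-component AI
}


def obligations_in_force(on: date) -> list[str]:
    """Return which obligation sets already apply on a given date."""
    return [name for name, deadline in AI_ACT_DEADLINES.items() if on >= deadline]
```

As of this article's date, `obligations_in_force(date(2026, 4, 27))` returns the prohibited-practices and GPAI sets, but not yet the high-risk ones.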


What "Placed on the Market" Means

The EU AI Act uses "placed on the market" as the trigger concept (Art.3(9)):

Making an AI system available for the first time on the Union market.

This is a product-law concept borrowed from EU product regulation. "Placed on the market" is the moment a system is first made commercially available in the EU — regardless of where it is hosted, who built it, or when users started using it.

The critical distinction:

If you first offered your AI product in the EU before 2 August 2024, your system was placed on the market before the regulation entered into force.


The Transitional Window for High-Risk AI

For high-risk AI systems in Annex III categories, Art.111(2) establishes the transitional rule for systems already on the market. The key provision (paraphrased):

Operators of high-risk AI systems that have been placed on the market or put into service before 2 August 2026 need to comply with this Regulation only if, from that date, those systems undergo significant changes in their designs.

Read that carefully. It means:

  1. If your Annex III high-risk AI was live before 2 August 2026 and you do not substantially modify it after that date, the high-risk requirements do not yet bite: the transitional grace covers the system.
  2. If you substantially modify your system after 2 August 2026, the system is treated as a newly placed product and must comply immediately.
  3. The grace period is not indefinite — it applies to systems already on the market at the moment of each deadline. Systems placed on the market after the deadline must comply immediately.

For Annex I products (safety-component AI), the equivalent provision extends to 2 August 2027.
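The three-point reading above condenses into a minimal decision sketch. This is a simplification: it ignores the Annex I track and assumes the "substantial modification" question has already been answered elsewhere:

```python
from datetime import date

ANNEX_III_DEADLINE = date(2026, 8, 2)


def transitional_status(placed_on_market: date,
                        substantial_mod_after_deadline: bool) -> str:
    """Simplified reading of the transitional rule for Annex III systems."""
    if placed_on_market >= ANNEX_III_DEADLINE:
        # Point 3: placement after the deadline gets no grace at all.
        return "new placement: comply immediately"
    if substantial_mod_after_deadline:
        # Point 2: modification resets the clock, system treated as new.
        return "treated as newly placed: comply immediately"
    # Point 1: pre-deadline placement, unmodified.
    return "transitional grace: document that you qualify"
```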


The Substantial Modification Trigger

"Substantial modification" (Art.3(23)) is the most important concept for existing AI systems:

A change to an AI system after its placing on the market or putting into service which affects the compliance of the AI system with this Regulation or results in a modification to the intended purpose for which the AI system has been assessed.

In practice, courts and enforcement authorities will look at whether the change:

  - affects the system's compliance with the Regulation's requirements, or
  - modifies the intended purpose for which the system was assessed.

What does not constitute substantial modification: updates that leave both the intended purpose and the risk profile unchanged, such as routine bug fixes, security patches, and performance-only improvements like retraining on fresh data for the same task.

The practical challenge: there is no bright-line test. The EU AI Act Office and national market surveillance authorities will develop guidance, but as of this writing, the "substantial modification" boundary remains a legal gray area that development teams need to document proactively.
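Because there is no bright-line test, one pragmatic approach is to score each release against the two limbs of the Art.3(23) definition: does the change affect compliance, or does it modify the intended purpose? The class below is a documentation aid only, not a legal determination; the field names are illustrative:

```python
from dataclasses import dataclass


@dataclass
class ChangeAssessment:
    description: str
    affects_compliance: bool          # limb 1 of Art.3(23)
    modifies_intended_purpose: bool   # limb 2 of Art.3(23)

    def likely_substantial(self) -> bool:
        """Either limb of the Art.3(23) definition triggers the flag."""
        return self.affects_compliance or self.modifies_intended_purpose


# A retraining that keeps purpose and risk profile unchanged vs. a new capability
patch = ChangeAssessment("Retrained on fresh data, same purpose", False, False)
feature = ChangeAssessment("Added salary prediction to CV screener", False, True)
```

Recording one `ChangeAssessment` per release, with a written rationale alongside it, gives you the contemporaneous log regulators will ask for.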


Risk Classification and Your Compliance Deadline

Not every AI system faces the same transitional timeline. Your compliance deadline depends on which Annex III category (if any) applies to your system.

Annex III High-Risk Categories

| Annex III point | Category | Examples | Deadline |
| --- | --- | --- | --- |
| 1 | Biometric identification and categorisation | Face verification, emotion recognition | 2 August 2026 |
| 2 | Critical infrastructure | AI in energy grids, water systems, traffic management | 2 August 2026 |
| 3 | Education and vocational training | Automated scoring, student admission AI | 2 August 2026 |
| 4 | Employment and workforce | CV screening, performance evaluation AI | 2 August 2026 |
| 5 | Essential services | Credit scoring, insurance risk AI, emergency services | 2 August 2026 |
| 6 | Law enforcement | Risk assessment tools used by police | 2 August 2026 |
| 7 | Migration and border management | Asylum case processing AI | 2 August 2026 |
| 8 | Administration of justice | AI used in court decisions or dispute resolution | 2 August 2026 |

If your existing system falls into any of these categories, 2 August 2026 is your deadline for systems placed on the market after that date. For systems already on the market before that date and not substantially modified, the transitional window gives additional time — but that window closes.

Not High-Risk

If your system does not fall into an Annex III category, and it is not a general-purpose AI model (GPAI), the transitional provisions are less immediately urgent. Transparency obligations under Art.50 (for AI that interacts with humans or generates synthetic content) apply from 2 August 2026, but these are less burdensome than the full high-risk compliance stack.


GPAI Models: A Different Transitional Structure

General-purpose AI models (GPAI) under Art.51–56 have their own transitional regime: under Art.111(3), providers of GPAI models placed on the market before 2 August 2025 must take the necessary steps to comply with the Regulation's obligations by 2 August 2027.

If your product integrates a GPAI model but is not itself a GPAI provider (i.e., you use OpenAI, Anthropic, Mistral, etc. via API), the GPAI obligations fall on the model provider, not on you. Your compliance focus should be on whether your use of the model creates a high-risk AI system under Annex III.

CLOUD Act note: If you use a US-hosted GPAI model (GPT-4, Claude, Gemini) and your system processes EU personal data, you have two distinct compliance obligations: (1) EU AI Act compliance as a deployer, and (2) GDPR Art.44 transfer compliance for data sent to US infrastructure. These are independent of each other. Even a fully EU AI Act-compliant system can still violate GDPR if personal data is sent to US servers subject to CLOUD Act compelled disclosure.
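The two independent obligations can be made explicit in a pre-deployment check. This is a sketch; the boolean inputs are assumptions you would fill in from your own architecture review:

```python
def applicable_frameworks(is_annex_iii_use: bool,
                          uses_us_hosted_model: bool,
                          processes_eu_personal_data: bool) -> list[str]:
    """List the compliance frameworks triggered by a GPAI-integrating product.

    The two obligations are independent: satisfying one does not satisfy
    the other (see the CLOUD Act note above).
    """
    frameworks = []
    if is_annex_iii_use:
        frameworks.append("EU AI Act: high-risk deployer obligations (Annex III)")
    if uses_us_hosted_model and processes_eu_personal_data:
        frameworks.append("GDPR Art.44: transfer mechanism for US-bound personal data")
    return frameworks
```

A CV-screening tool calling a US-hosted model on EU applicant data triggers both entries; neither check satisfies the other.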


What You Need to Document Before August 2026

For any system that might qualify as high-risk under Annex III, here is the minimum documentation set to prepare before the August 2026 deadline:

1. Risk Classification Assessment A documented determination of whether your system falls under Annex III, referencing the specific point number and the intended purpose mapping. Art.6(3) allows a provider to determine that an Annex III system does not pose a significant risk of harm — but that determination must be registered in the EU database and be documented and defensible.

2. Substantial Modification Log A change-log that records each system update and documents whether it constitutes a substantial modification. This log protects you if a regulator later questions whether your transitional grace period applies.

3. Intended Purpose Declaration A formal statement of the intended purpose (Art.3(12)) — the use for which the system was designed including its context, target users, and deployment environment. This is the anchor for both risk classification and substantial modification assessment.

4. CLOUD Act Infrastructure Audit If your system processes EU personal data on US infrastructure, document the transfer mechanism (adequacy, SCCs, BCRs) and the CLOUD Act exposure surface. EU AI Act compliance does not neutralise GDPR Art.44 obligations — and deploying on EU-native infrastructure (servers owned and operated by EU entities, not US subsidiaries) eliminates the transfer mechanism requirement entirely.


Python: ExistingAISystemComplianceTracker

from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class AnnexIIIPoint(Enum):
    BIOMETRIC = "Annex III Point 1: Biometric identification"
    CRITICAL_INFRA = "Annex III Point 2: Critical infrastructure"
    EDUCATION = "Annex III Point 3: Education"
    EMPLOYMENT = "Annex III Point 4: Employment"
    ESSENTIAL_SERVICES = "Annex III Point 5: Essential services"
    LAW_ENFORCEMENT = "Annex III Point 6: Law enforcement"
    MIGRATION = "Annex III Point 7: Migration/border"
    JUSTICE = "Annex III Point 8: Justice"
    NOT_HIGH_RISK = "Not Annex III"
    UNKNOWN = "Classification pending"


class GPAIStatus(Enum):
    GPAI_PROVIDER = "Provider of GPAI model"
    GPAI_USER = "Downstream user/deployer of GPAI"
    NOT_GPAI = "No GPAI involvement"


@dataclass
class ModificationRecord:
    date: date
    description: str
    is_substantial: bool
    rationale: str


@dataclass
class ExistingAISystem:
    name: str
    first_eu_market_date: date
    annex_iii_classification: AnnexIIIPoint
    gpai_status: GPAIStatus
    modification_history: list[ModificationRecord] = field(default_factory=list)
    cloud_act_exposure: bool = False
    intended_purpose: str = ""

    EU_AI_ACT_IN_FORCE = date(2024, 8, 1)
    HIGH_RISK_DEADLINE = date(2026, 8, 2)
    ANNEX_I_DEADLINE = date(2027, 8, 2)
    GPAI_DEADLINE = date(2025, 8, 2)

    @property
    def placed_before_ai_act(self) -> bool:
        return self.first_eu_market_date < self.EU_AI_ACT_IN_FORCE

    @property
    def last_substantial_modification(self) -> Optional[ModificationRecord]:
        substantial = [m for m in self.modification_history if m.is_substantial]
        return max(substantial, key=lambda m: m.date) if substantial else None

    @property
    def compliance_clock_reset(self) -> bool:
        lsm = self.last_substantial_modification
        if lsm is None:
            return False
        return lsm.date >= self.HIGH_RISK_DEADLINE

    def compliance_deadline(self) -> tuple[date, str]:
        if self.annex_iii_classification == AnnexIIIPoint.NOT_HIGH_RISK:
            return date(2026, 8, 2), "Transparency obligations only (Art.50)"

        if self.compliance_clock_reset:
            return self.HIGH_RISK_DEADLINE, "Immediate — substantial modification after deadline"

        # Transitional grace hinges on placement before 2 August 2026,
        # not on placement before the Act's entry into force (Art.111(2)).
        if self.first_eu_market_date < self.HIGH_RISK_DEADLINE:
            return self.ANNEX_I_DEADLINE, "Transitional grace: pre-deadline placement, no substantial modification"

        return self.HIGH_RISK_DEADLINE, "Placed after the Annex III deadline: immediate compliance"

    def assess_compliance_gap(self) -> dict:
        deadline, reason = self.compliance_deadline()
        today = date.today()
        days_remaining = (deadline - today).days

        issues = []
        if not self.intended_purpose:
            issues.append("MISSING: Intended purpose declaration (Art.3(12))")
        if self.cloud_act_exposure:
            issues.append("RISK: US infrastructure exposure — GDPR Art.44 transfer mechanism required")
        if not self.modification_history:
            issues.append("MISSING: Substantial modification log — transitional grace needs documentation")
        if self.annex_iii_classification == AnnexIIIPoint.UNKNOWN:
            issues.append("MISSING: Annex III risk classification assessment")

        return {
            "system": self.name,
            "deadline": str(deadline),
            "days_remaining": days_remaining,
            "deadline_reason": reason,
            "compliance_gaps": issues,
            "transitional_grace_available": self.placed_before_ai_act and not self.compliance_clock_reset,
            "cloud_act_risk": self.cloud_act_exposure,
        }


# Example usage
cv_screening_ai = ExistingAISystem(
    name="CVScreener v2.1",
    first_eu_market_date=date(2023, 6, 1),
    annex_iii_classification=AnnexIIIPoint.EMPLOYMENT,
    gpai_status=GPAIStatus.GPAI_USER,
    cloud_act_exposure=True,
    intended_purpose="Automated CV screening for initial candidate shortlisting",
    modification_history=[
        ModificationRecord(
            date=date(2024, 11, 15),
            description="Updated scoring model with new training data",
            is_substantial=False,
            rationale="Same intended purpose and risk profile, performance improvement only",
        ),
        ModificationRecord(
            date=date(2025, 3, 20),
            description="Added salary prediction feature",
            is_substantial=True,
            rationale="New intended purpose element — salary assessment adds Annex III Point 4 scope",
        ),
    ],
)

result = cv_screening_ai.assess_compliance_gap()
print(f"System: {result['system']}")
print(f"Deadline: {result['deadline']} ({result['days_remaining']} days)")
print(f"Reason: {result['deadline_reason']}")
print(f"Transitional grace: {result['transitional_grace_available']}")
for gap in result['compliance_gaps']:
    print(f"  ⚠ {gap}")

Infrastructure Jurisdiction and Transitional Compliance

The transitional provisions address when you must comply — not how to eliminate compliance risk efficiently. But infrastructure jurisdiction affects both dimensions.

If your AI system is hosted on US-parent cloud infrastructure (AWS, Azure, GCP, or their EU-region subsidiaries), you face compliance obligations under two separate frameworks simultaneously:

  1. EU AI Act compliance (risk management, technical documentation, conformity assessment)
  2. GDPR Art.44 transfer compliance (legal basis for sending EU personal data to US-jurisdiction servers)

Both clocks run independently. An AI system can be fully EU AI Act-compliant while still violating GDPR Art.44 because the underlying training pipeline or inference infrastructure is subject to CLOUD Act compelled disclosure.

The August 2026 deadline is a forcing function to audit your full infrastructure stack — not just your model documentation. Teams that wait until July 2026 often discover that their training data pipeline, model weights storage, and inference logs are all on US infrastructure, creating a remediation timeline that extends beyond the compliance deadline.

EU-native infrastructure (providers incorporated and operated in EU member states, without US parent entities) eliminates the Art.44 transfer mechanism requirement for data processed within that infrastructure. This is structural compliance rather than documentation compliance — and it does not need to be re-demonstrated each audit cycle.
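A starting point for the infrastructure audit described above is a per-component jurisdiction inventory. The component names and flags below are illustrative; the point is that a single US-jurisdiction component keeps the Art.44 obligation alive:

```python
def art44_transfer_mechanism_needed(components: dict[str, bool]) -> bool:
    """True if any pipeline component sits under US jurisdiction.

    components maps component name -> whether it is US-jurisdiction
    (owned or operated by a US parent entity).
    """
    return any(components.values())


# Hypothetical stack discovered during a pre-deadline audit
stack = {
    "training_data_pipeline": True,   # e.g. US-region object storage
    "model_weights_storage": True,
    "inference_logs": False,          # already on EU-native infrastructure
}
```

Here `art44_transfer_mechanism_needed(stack)` is true: two of three components still require a transfer mechanism or migration.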


20-Item Transitional Compliance Checklist for Existing AI Systems

Classification and Scope (Items 1–5)

Transitional Grace and Modification History (Items 6–10)

Technical Documentation (Items 11–14)

Infrastructure and Transfer Compliance (Items 15–18)

Deadline and Monitoring (Items 19–20)


Key Takeaways for Development Teams

The transitional provisions are not a perpetual exemption. They provide a structured window for existing systems — but that window closes at the Annex III deadline (August 2026) for new placements, and transitional grace requires documentation that you qualify.

Substantial modification is the most dangerous trigger. Development teams that ship model updates, expand to new use cases, or add new data categories after August 2026 may unknowingly reset their compliance clock. Build the modification assessment into your release process now.

Infrastructure jurisdiction affects EU AI Act compliance indirectly. Hosting your AI on US-parent infrastructure creates concurrent GDPR Art.44 obligations that persist independently of your EU AI Act transitional status. Resolving infrastructure jurisdiction before August 2026 eliminates one compliance obligation entirely.

Document everything before the deadline, not after. Transitional grace is available — but regulators will ask for documentation that the system qualified. Retroactive documentation is harder to defend than contemporaneous records.

The 97-day window to August 2026 is enough time to complete classification, document modification history, and initiate technical documentation — but not if you start in late July. The teams that navigate the transitional provisions successfully will be the ones who run the classification and modification-log exercise now, while there is still time to resolve gaps.


sota.io is an EU-native managed PaaS — incorporated in the EU, operated on EU infrastructure, without US parent entities. Deploying your AI workloads on sota.io means your inference infrastructure is not subject to CLOUD Act compelled disclosure and does not require GDPR Art.44 transfer mechanisms. Start your free deployment →