EU AI Act Transitional Provisions for Existing AI Systems: What Your 2024-Built Product Must Do by August 2026
You shipped an AI feature in 2023. Or 2024. Your recommendation engine, your credit-scoring model, your CV-screening tool — it was live before the EU AI Act entered into force.
Does the regulation apply to you? And if so, when?
The EU AI Act includes transitional provisions — grace periods that give existing AI systems time to come into compliance. But these provisions have a specific legal structure. "Already on the market" has a precise meaning. "Substantial modification" resets your compliance clock to zero. And the August 2026 deadline for high-risk AI systems is 98 days away as of this writing.
This guide answers the question every developer with a pre-regulation AI product is asking: exactly when do I need to comply, and what counts as a compliance-triggering event?
The EU AI Act Timeline: Four Compliance Dates
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. From that date, four staggered deadlines apply:
| Deadline | Date | What Applies |
|---|---|---|
| Prohibited practices | 2 February 2025 | Art.5 prohibitions: social scoring, real-time biometric surveillance, subliminal manipulation AI |
| GPAI model obligations | 2 August 2025 | General-purpose AI models (Art.51–56), including systemic-risk models |
| High-risk AI (Annex III) | 2 August 2026 | All high-risk AI systems in Annex III categories (employment, credit, biometric, education, etc.) |
| High-risk AI (Annex I) | 2 August 2027 | AI used as safety components in Annex I regulated products (machinery, medical devices, vehicles) |
The transitional provisions for existing systems sit on top of this timeline. They do not eliminate compliance obligations — they determine which compliance deadline applies to systems that were already available before each of these dates.
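The staggered dates can be encoded directly for deadline monitoring — a minimal sketch (the dictionary keys are illustrative; the dates come from the table above):

```python
from datetime import date

# Application dates from Regulation (EU) 2024/1689, per the table above.
AI_ACT_DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_annex_i": date(2027, 8, 2),
}

def days_until(deadline_key: str, today: date) -> int:
    """Days remaining until a deadline (negative once it has passed)."""
    return (AI_ACT_DEADLINES[deadline_key] - today).days

# Evaluated at 26 April 2026, this reproduces the 98-day figure in the intro:
print(days_until("high_risk_annex_iii", date(2026, 4, 26)))  # → 98
```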
What "Placed on the Market" Means
The EU AI Act uses "placed on the market" as the trigger concept (Art.3(9)):
The first making available of an AI system or a general-purpose AI model on the Union market.
This is a product-law concept borrowed from EU product regulation. "Placed on the market" is the moment a system is first made commercially available in the EU — regardless of where it is hosted, who built it, or when users started using it.
Critical distinctions:
- Internal use (AI used within a company, not provided to others) is deployment, not placing on the market. Different obligations apply to deployers (Art.26) vs. providers (Art.16).
- SaaS products are placed on the market when the service is first made available to EU customers — not when individual customers sign up.
- API products are placed on the market when the API is first offered commercially in the EU.
- Open-source AI models that are released publicly are also "placed on the market" under the GPAI provisions (subject to specific open-source exceptions in Art.53(2)).
If you first offered your AI product in the EU before 2 August 2024, your system was placed on the market before the regulation entered into force.
The Transitional Window for High-Risk AI
For high-risk AI systems in Annex III categories, Art.111(2) sets the transitional rule for systems already on the market. The key provision, paraphrased:
Operators of high-risk AI systems that have been placed on the market or put into service before 2 August 2026 are subject to this Regulation only if, as from that date, those systems undergo significant changes in their designs.
Read that carefully. It means:
- If your Annex III high-risk AI was live before 2 August 2026 and you do not substantially modify it after that date, the transitional grace applies: the system is not immediately subject to the full high-risk requirements.
- If you substantially modify your system after 2 August 2026, the system is treated as a newly placed product and must comply immediately.
- The grace period is not indefinite — it applies to systems already on the market at the moment of each deadline. Systems placed on the market after the deadline must comply immediately.
For Annex I products (safety-component AI), the corresponding application date is 2 August 2027.
The Substantial Modification Trigger
"Substantial modification" (Art.3(23)) is the most important concept for existing AI systems:
A change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider, and as a result of which the compliance of the AI system with the Regulation's requirements is affected, or which results in a modification to the intended purpose for which the AI system has been assessed.
In practice, courts and enforcement authorities will look at whether the change:
- Modifies the intended purpose of the system
- Changes the risk level (e.g., adds a new Annex III use case)
- Alters the AI model itself in a way that changes its behavior materially
- Introduces new training data categories that affect protected-attribute processing
- Changes the scope of affected persons (e.g., expanding from one sector to another)
What does not constitute substantial modification:
- Bug fixes that do not affect intended purpose or performance characteristics
- Routine updates that do not change the model architecture or outputs materially
- UI or API changes that do not affect the underlying AI system
- Performance improvements within the same intended purpose and risk profile
- Security patches
The practical challenge: there is no bright-line test. The EU AI Office and national market surveillance authorities will develop guidance, but as of this writing the "substantial modification" boundary remains a legal gray area that development teams need to document proactively.
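One way to bake that documentation into your release process is an automatic flag for legal review, derived from the trigger factors listed above. A minimal sketch — the field names are illustrative, not terms defined by the Act:

```python
from dataclasses import dataclass

@dataclass
class ProposedChange:
    """One flag per trigger factor from the lists above (illustrative names)."""
    changes_intended_purpose: bool = False
    adds_annex_iii_use_case: bool = False
    materially_alters_model_behavior: bool = False
    adds_protected_attribute_data: bool = False
    expands_affected_persons: bool = False

def may_be_substantial(change: ProposedChange) -> bool:
    """Conservative screen: route to legal review if any trigger factor is set.

    This is a triage aid for the release process, not a legal determination —
    there is no bright-line test.
    """
    return any([
        change.changes_intended_purpose,
        change.adds_annex_iii_use_case,
        change.materially_alters_model_behavior,
        change.adds_protected_attribute_data,
        change.expands_affected_persons,
    ])

bug_fix = ProposedChange()  # routine patch: no triggers set
new_feature = ProposedChange(adds_annex_iii_use_case=True)
print(may_be_substantial(bug_fix), may_be_substantial(new_feature))  # False True
```

A gate like this does not replace the documented rationale — it just ensures no release ships without the question being asked.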
Risk Classification and Your Compliance Deadline
Not every AI system faces the same transitional timeline. Your compliance deadline depends on which Annex III category (if any) applies to your system.
Annex III High-Risk Categories
| Annex III Point | Category | Examples | Deadline |
|---|---|---|---|
| 1 | Biometric identification and categorisation | Face verification, emotion recognition | 2 August 2026 |
| 2 | Critical infrastructure | AI in energy grids, water systems, traffic management | 2 August 2026 |
| 3 | Education and vocational training | Automated scoring, student admission AI | 2 August 2026 |
| 4 | Employment and workforce | CV screening, performance evaluation AI | 2 August 2026 |
| 5 | Essential services | Credit scoring, insurance risk AI, emergency services | 2 August 2026 |
| 6 | Law enforcement | Risk assessment tools used by police | 2 August 2026 |
| 7 | Migration and border management | Asylum case processing AI | 2 August 2026 |
| 8 | Administration of justice | AI used in court decisions or dispute resolution | 2 August 2026 |
If your existing system falls into any of these categories, 2 August 2026 is the operative date: systems placed on the market on or after it must comply immediately. Systems already on the market before that date and not substantially modified benefit from the transitional grace — but a single substantial modification closes that window.
Not High-Risk
If your system does not fall into an Annex III category, and it is not a general-purpose AI model (GPAI), the transitional provisions are less immediately urgent. Transparency obligations under Art.50 (for AI that interacts with humans or generates synthetic content) apply from 2 August 2026, but these are far less burdensome than the full high-risk compliance stack.
GPAI Models: A Different Transitional Structure
General-purpose AI models (GPAI) under Art.51–56 have their own transitional regime:
- GPAI obligations applied from 2 August 2025
- Providers of GPAI models already placed on the market before that date have until 2 August 2027 to bring those models into compliance (Art.111(3))
- GPAI models with systemic risk (> 10^25 FLOPs training compute) face additional obligations under Art.55
If your product integrates a GPAI model but is not itself a GPAI provider (i.e., you use OpenAI, Anthropic, Mistral, etc. via API), the GPAI obligations fall on the model provider, not on you. Your compliance focus should be on whether your use of the model creates a high-risk AI system under Annex III.
CLOUD Act note: If you use a US-hosted GPAI model (GPT-4, Claude, Gemini) and your system processes EU personal data, you have two distinct compliance obligations: (1) EU AI Act compliance as a deployer, and (2) GDPR Art.44 transfer compliance for data sent to US infrastructure. These are independent of each other. Even a fully EU AI Act-compliant system can still violate GDPR if personal data is sent to US servers subject to CLOUD Act compelled disclosure.
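The independence of the two tracks can be expressed as a small check — a sketch with illustrative flag names, not defined terms from either regulation:

```python
def compliance_tracks(is_annex_iii: bool, us_hosted_model: bool,
                      processes_eu_personal_data: bool) -> list[str]:
    """Return the independent obligation tracks that apply to a deployment.

    Illustrative triage of the dual-framework point above: the two tracks
    trigger on different facts and neither satisfies the other.
    """
    tracks = []
    if is_annex_iii:
        # EU AI Act track: Annex III classification drives the high-risk stack
        tracks.append("EU AI Act: high-risk obligations")
    if us_hosted_model and processes_eu_personal_data:
        # GDPR track: runs independently of AI Act transitional status
        tracks.append("GDPR Art.44: transfer mechanism required")
    return tracks

# A CV-screening tool calling a US-hosted model on EU candidate data
# lands on both tracks at once:
print(compliance_tracks(True, True, True))
```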
What You Need to Document Before August 2026
For any system that might qualify as high-risk under Annex III, here is the minimum documentation set to prepare before the August 2026 deadline:
1. Risk Classification Assessment: A documented determination of whether your system falls under Annex III, referencing the specific point number and the intended purpose mapping. Art.6(3) allows a provider to determine that an Annex III-listed system does not pose a significant risk of harm — but that assessment must be documented (Art.6(4)) and defensible.
2. Substantial Modification Log: A change log that records each system update and documents whether it constitutes a substantial modification. This log protects you if a regulator later questions whether your transitional grace period applies.
3. Intended Purpose Declaration: A formal statement of the intended purpose (Art.3(12)) — the use for which the system was designed, including its context, target users, and deployment environment. This is the anchor for both risk classification and substantial modification assessment.
4. CLOUD Act Infrastructure Audit: If your system processes EU personal data on US infrastructure, document the transfer mechanism (adequacy, SCCs, BCRs) and the CLOUD Act exposure surface. EU AI Act compliance does not neutralise GDPR Art.44 obligations — and deploying on EU-native infrastructure (servers owned and operated by EU entities, not US subsidiaries) removes the transfer-mechanism requirement for data that stays within it.
Python: ExistingAISystem Compliance Tracker
```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import ClassVar, Optional


class AnnexIIIPoint(Enum):
    BIOMETRIC = "Annex III Point 1: Biometric identification"
    CRITICAL_INFRA = "Annex III Point 2: Critical infrastructure"
    EDUCATION = "Annex III Point 3: Education"
    EMPLOYMENT = "Annex III Point 4: Employment"
    ESSENTIAL_SERVICES = "Annex III Point 5: Essential services"
    LAW_ENFORCEMENT = "Annex III Point 6: Law enforcement"
    MIGRATION = "Annex III Point 7: Migration/border"
    JUSTICE = "Annex III Point 8: Justice"
    NOT_HIGH_RISK = "Not Annex III"
    UNKNOWN = "Classification pending"


class GPAIStatus(Enum):
    GPAI_PROVIDER = "Provider of GPAI model"
    GPAI_USER = "Downstream user/deployer of GPAI"
    NOT_GPAI = "No GPAI involvement"


@dataclass
class ModificationRecord:
    date: date                # when the change shipped
    description: str
    is_substantial: bool      # Art.3(23) determination
    rationale: str            # documented rationale for that determination


@dataclass
class ExistingAISystem:
    name: str
    first_eu_market_date: date            # first EU availability (Art.3(9))
    annex_iii_classification: AnnexIIIPoint
    gpai_status: GPAIStatus
    modification_history: list[ModificationRecord] = field(default_factory=list)
    cloud_act_exposure: bool = False
    intended_purpose: str = ""

    # Regulation dates (class constants, not dataclass fields)
    EU_AI_ACT_IN_FORCE: ClassVar[date] = date(2024, 8, 1)
    HIGH_RISK_DEADLINE: ClassVar[date] = date(2026, 8, 2)
    ANNEX_I_DEADLINE: ClassVar[date] = date(2027, 8, 2)
    GPAI_DEADLINE: ClassVar[date] = date(2025, 8, 2)

    @property
    def placed_before_ai_act(self) -> bool:
        return self.first_eu_market_date < self.EU_AI_ACT_IN_FORCE

    @property
    def last_substantial_modification(self) -> Optional[ModificationRecord]:
        substantial = [m for m in self.modification_history if m.is_substantial]
        return max(substantial, key=lambda m: m.date) if substantial else None

    @property
    def compliance_clock_reset(self) -> bool:
        # A substantial modification on or after the Annex III deadline
        # makes the system count as newly placed on the market.
        lsm = self.last_substantial_modification
        return lsm is not None and lsm.date >= self.HIGH_RISK_DEADLINE

    def compliance_deadline(self) -> tuple[date, str]:
        if self.annex_iii_classification == AnnexIIIPoint.NOT_HIGH_RISK:
            return date(2026, 8, 2), "Transparency obligations only (Art.50)"
        if self.compliance_clock_reset:
            return self.HIGH_RISK_DEADLINE, "Immediate — substantial modification after deadline"
        if self.placed_before_ai_act:
            return self.ANNEX_I_DEADLINE, "Transitional grace: pre-Act placement, no substantial modification"
        return self.HIGH_RISK_DEADLINE, "Standard Annex III deadline"

    def assess_compliance_gap(self) -> dict:
        deadline, reason = self.compliance_deadline()
        days_remaining = (deadline - date.today()).days
        issues = []
        if not self.intended_purpose:
            issues.append("MISSING: Intended purpose declaration (Art.3(12))")
        if self.cloud_act_exposure:
            issues.append("RISK: US infrastructure exposure — GDPR Art.44 transfer mechanism required")
        if not self.modification_history:
            issues.append("MISSING: Substantial modification log — transitional grace needs documentation")
        if self.annex_iii_classification == AnnexIIIPoint.UNKNOWN:
            issues.append("MISSING: Annex III risk classification assessment")
        return {
            "system": self.name,
            "deadline": str(deadline),
            "days_remaining": days_remaining,
            "deadline_reason": reason,
            "compliance_gaps": issues,
            "transitional_grace_available": self.placed_before_ai_act and not self.compliance_clock_reset,
            "cloud_act_risk": self.cloud_act_exposure,
        }


# Example usage
cv_screening_ai = ExistingAISystem(
    name="CVScreener v2.1",
    first_eu_market_date=date(2023, 6, 1),
    annex_iii_classification=AnnexIIIPoint.EMPLOYMENT,
    gpai_status=GPAIStatus.GPAI_USER,
    cloud_act_exposure=True,
    intended_purpose="Automated CV screening for initial candidate shortlisting",
    modification_history=[
        ModificationRecord(
            date=date(2024, 11, 15),
            description="Updated scoring model with new training data",
            is_substantial=False,
            rationale="Same intended purpose and risk profile, performance improvement only",
        ),
        ModificationRecord(
            date=date(2025, 3, 20),
            description="Added salary prediction feature",
            is_substantial=True,
            rationale="New intended purpose element — salary assessment adds Annex III Point 4 scope",
        ),
    ],
)

result = cv_screening_ai.assess_compliance_gap()
print(f"System: {result['system']}")
print(f"Deadline: {result['deadline']} ({result['days_remaining']} days)")
print(f"Reason: {result['deadline_reason']}")
print(f"Transitional grace: {result['transitional_grace_available']}")
for gap in result['compliance_gaps']:
    print(f"  ⚠ {gap}")
```
Infrastructure Jurisdiction and Transitional Compliance
The transitional provisions address when you must comply — not how to eliminate compliance risk efficiently. But infrastructure jurisdiction affects both dimensions.
If your AI system is hosted on US-parent cloud infrastructure (AWS, Azure, GCP, or their EU-region subsidiaries), you face compliance obligations under two separate frameworks simultaneously:
- EU AI Act compliance (risk management, technical documentation, conformity assessment)
- GDPR Art.44 transfer compliance (legal basis for sending EU personal data to US-jurisdiction servers)
Both clocks run independently. An AI system can be fully EU AI Act-compliant while still violating GDPR Art.44 because the underlying training pipeline or inference infrastructure is subject to CLOUD Act compelled disclosure.
The August 2026 deadline is a forcing function to audit your full infrastructure stack — not just your model documentation. Teams that wait until July 2026 often discover that their training data pipeline, model weights storage, and inference logs are all on US infrastructure, creating a remediation timeline that extends beyond the compliance deadline.
EU-native infrastructure (providers incorporated and operated in EU member states, without US parent entities) eliminates the Art.44 transfer mechanism requirement for data processed within that infrastructure. This is structural compliance rather than documentation compliance — and it does not need to be re-demonstrated each audit cycle.
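One way to make the infrastructure audit concrete is a per-flow inventory — a minimal sketch, assuming a simple model in which "US parent" marks providers ultimately controlled by a US entity (all names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One entry in the infrastructure inventory (illustrative fields)."""
    name: str                   # e.g. "training pipeline", "inference logs"
    provider: str               # hosting provider for this flow
    us_parent: bool             # provider ultimately controlled by a US entity?
    carries_personal_data: bool # does EU personal data transit this flow?

def flows_needing_transfer_mechanism(flows: list[DataFlow]) -> list[str]:
    """EU personal data on US-parent infrastructure needs a GDPR Art.44
    mechanism (adequacy, SCCs, or BCRs); EU-native flows do not."""
    return [f.name for f in flows if f.us_parent and f.carries_personal_data]

stack = [
    DataFlow("training pipeline", "us-hyperscaler-eu-region", True, True),
    DataFlow("inference logs", "eu-native-paas", False, True),
    DataFlow("public model card CDN", "us-hyperscaler-eu-region", True, False),
]
print(flows_needing_transfer_mechanism(stack))  # → ['training pipeline']
```

Running this over the full stack before the deadline is exactly the exercise that teams starting in July tend to run out of time for.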
20-Item Transitional Compliance Checklist for Existing AI Systems
Classification and Scope (Items 1–5)
- 1. Document the date your system was first made available to EU customers or users
- 2. Determine whether your system qualifies as "placed on the market" or "put into service" under Art.3(9)/(11)
- 3. Complete a formal Annex III risk classification assessment referencing specific point numbers
- 4. If Annex III applies: document the specific intended purpose that triggers the classification
- 5. If GPAI is involved: confirm whether you are a GPAI provider (Art.51) or a downstream deployer (Art.26)
Transitional Grace and Modification History (Items 6–10)
- 6. Create a modification log documenting all system changes since August 2024
- 7. For each modification, record whether it constitutes a substantial modification under Art.3(23)
- 8. Document the legal rationale for each non-substantial determination
- 9. Confirm whether your transitional grace period applies based on placement date and modification history
- 10. Set monitoring alerts for any planned modifications that might trigger the substantial modification threshold
Technical Documentation (Items 11–14)
- 11. Prepare intended purpose declaration (Art.3(12)) — specific, not generic
- 12. Initiate technical documentation per Art.11 and Annex IV — even if your deadline is transitional
- 13. Map your risk management system against Art.9 requirements
- 14. Identify gaps in logging, traceability, and audit-trail requirements (Art.12)
Infrastructure and Transfer Compliance (Items 15–18)
- 15. Audit infrastructure jurisdiction: are servers and operators EU entities?
- 16. If US-parent infrastructure: identify all personal data flows to US-jurisdiction systems
- 17. Verify GDPR Art.44 transfer mechanism for each data flow (adequacy, SCCs, or BCRs)
- 18. Document CLOUD Act exposure surface — which US entities could receive a CLOUD Act order affecting your data
Deadline and Monitoring (Items 19–20)
- 19. Set calendar reminders for 2 August 2026 (Annex III deadline) and 2 August 2027 (Annex I deadline)
- 20. Assign a named compliance owner internally for ongoing substantial-modification assessment
Key Takeaways for Development Teams
The transitional provisions are not a perpetual exemption. They provide a structured window for existing systems — but that window closes at the Annex III deadline (August 2026) for new placements, and transitional grace requires documentation that you qualify.
Substantial modification is the most dangerous trigger. Development teams that ship model updates, expand to new use cases, or add new data categories after August 2026 may unknowingly reset their compliance clock. Build the modification assessment into your release process now.
Infrastructure jurisdiction affects EU AI Act compliance indirectly. Hosting your AI on US-parent infrastructure creates concurrent GDPR Art.44 obligations that persist independently of your EU AI Act transitional status. Resolving infrastructure jurisdiction before August 2026 eliminates one compliance obligation entirely.
Document everything before the deadline, not after. Transitional grace is available — but regulators will ask for documentation that the system qualified. Retroactive documentation is harder to defend than contemporaneous records.
The 98-day window to August 2026 is enough time to complete classification, document modification history, and initiate technical documentation — but not if you start in late July. The teams that navigate the transitional provisions successfully will be the ones who run the classification and modification-log exercise now, while there is still time to resolve gaps.
sota.io is an EU-native managed PaaS — incorporated in the EU, operated on EU infrastructure, without US parent entities. Deploying your AI workloads on sota.io means your inference infrastructure is not subject to CLOUD Act compelled disclosure and does not require GDPR Art.44 transfer mechanisms. Start your free deployment →