2026-04-29 · 14 min read

EU AI Act August 2026: Developer Action Checklist for Art.50, GPAI Enforcement, and Omnibus Stalemate

Post #711 in the sota.io EU Compliance Series

The EU AI Act enforcement calendar has two dates that matter for most developers. The first — 2 February 2025 — has already passed (prohibited AI practices, Art.5). The second is 2 August 2026: the date when transparency obligations for user-facing and content-generating AI systems (Art.50) and GPAI Code of Practice enforcement activate. You have 95 days.

The situation got more complicated on 28 April 2026. The Digital Omnibus Regulation Trilogue — which was supposed to simplify the AI Act's Annex III high-risk classification and push certain deadlines — failed after 12 hours of negotiations. Council wants longer transition periods for Annex I (safety-regulated products). Parliament wants to keep the original timeline. The next Trilogue session is 13 May 2026. The outcome is genuinely uncertain.

Here is what this means for developers: some obligations are now locked in regardless of what happens on 13 May. Others remain in a holding pattern. This guide separates the certain from the uncertain, and gives you a concrete checklist for the next 95 days.

The Fixed Deadline: 2 August 2026

The AI Act entered into force on 1 August 2024. Most provisions apply after a two-year transition period, making 2 August 2026 the general application date (Art.113). This date is set in primary law — the Regulation itself — and no Omnibus negotiation can move it without a separate amending Regulation that clears the full ordinary legislative procedure. That is not happening in 95 days.

What activates on 2 August 2026, regardless of Omnibus outcome:

  - Art.50 transparency obligations: chatbot disclosure, machine-readable marking of AI-generated content, emotion recognition and biometric categorization disclosure, and deepfake labeling
  - GPAI Code of Practice enforcement and the GPAI provider obligations it operationalizes
  - The AI Office's enforcement authority over these provisions

What the Omnibus negotiations could still affect:

  - The Annex III high-risk classification scope and its compliance timeline
  - Transition periods for AI in Annex I safety-regulated products

The critical insight: Annex III uncertainty does NOT affect Art.50. Transparency obligations for chatbots, synthetic content generators, emotion recognition systems, and biometric categorization systems are NOT in the Omnibus negotiation scope. They activate on 2 August 2026. Full stop.

Art.50 in Plain Language: What Triggers Disclosure Requirements

Art.50 establishes transparency obligations for AI systems that interact with EU persons. The structure is four separate requirements with different triggers:

Art.50(1): Chatbot disclosure obligation

Any AI system designed to interact directly with natural persons must be disclosed as AI to the persons interacting with it — unless the AI nature is obvious from context. "Designed to interact with" includes chatbots, virtual assistants, automated customer service systems, and conversational interfaces where the person might reasonably believe they are interacting with a human.

Who this hits: Any SaaS with a chat interface powered by an LLM. Any customer support widget using AI. Any automated phone/text/email response system that generates natural language responses.

What you need: A clear, upfront disclosure that the interaction is with an AI system. The disclosure must occur before or at the start of the interaction, not buried in terms of service.

Art.50(2): AI-generated content disclosure

AI systems that generate audio, image, video, or text content must mark the output in machine-readable format to indicate it is AI-generated. This is the watermarking/provenance requirement. The AI Office is developing technical standards for the machine-readable format.

Who this hits: Image generators, video generation tools, text generation services (if the output is presented as stand-alone content), voice synthesis services.

What you need: Implement a metadata/watermarking mechanism once technical standards are published. The AI Office has indicated that the Transparency CoP (which covers machine-readable marking) will be finalized by June 2026.
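Until that format is published, one low-risk interim step is to attach a structured provenance record to every generated asset (for example as a sidecar JSON file or response metadata), so it can be mapped onto the final standard later. The sketch below is an assumption of this post, not the AI Office's format; the field names are illustrative.

```typescript
// Interim provenance record for AI-generated output.
// NOTE: this is NOT the AI Office's machine-readable format (not yet
// published); it is a placeholder structure to map onto the final standard.
interface GenerationProvenance {
  aiGenerated: true;
  generator: string;   // system or model identifier
  generatedAt: string; // ISO 8601 timestamp
  contentHash: string; // hash of the generated payload, binding record to asset
}

function buildProvenance(generator: string, contentHash: string): GenerationProvenance {
  return {
    aiGenerated: true,
    generator,
    generatedAt: new Date().toISOString(),
    contentHash,
  };
}

// Example: ship the record as a sidecar next to the generated file,
// e.g. image_0042.png + image_0042.provenance.json
```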

Art.50(3): Emotion recognition and biometric categorization disclosure

AI systems that perform emotion recognition — inferring emotional states from facial expressions, voice patterns, or physiological signals — or biometric categorization (inferring sensitive attributes such as race, political opinions, religion) must disclose this processing to the persons subject to it.

Who this hits: HR tech with mood detection, video call analytics, any system that categorizes users by inferred demographic attributes.

What you need: Disclosure at the time of processing, not just in privacy policies.

Art.50(4): Deep fake labeling obligation

AI-generated or AI-manipulated images, audio, or video that depict real persons, events, or places — commonly called deepfakes — must be labeled as artificially created or manipulated. The label must be visible, audible, or otherwise perceptible.

Who this hits: Content platforms that host AI-generated media, video/audio editing tools with AI manipulation features, any tool that can realistically alter the appearance of real persons.

Who is exempt: Legitimate art, satire, and editorial content — but the exemption requires that deepfake labeling "does not prevent the legitimate exercise of freedom of expression" (Recital 134). The exemption is narrow and requires context.
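For the perceptible-label side, a minimal browser-side sketch, assuming your tool renders the manipulated image to a canvas you control; the label text and styling are illustrative, not wording prescribed by the Act.

```typescript
// Minimal sketch: burn a perceptible label into an AI-manipulated image
// before display or export. Label text and styling are illustrative.
function labelAsAiManipulated(canvas: HTMLCanvasElement): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;

  const label = "AI-generated / manipulated content";
  const padding = 8;
  ctx.font = "16px sans-serif";
  const boxWidth = ctx.measureText(label).width + padding * 2;

  // Semi-opaque backdrop keeps the label readable on any image.
  ctx.fillStyle = "rgba(0, 0, 0, 0.6)";
  ctx.fillRect(0, canvas.height - 32, boxWidth, 32);
  ctx.fillStyle = "#ffffff";
  ctx.fillText(label, padding, canvas.height - 12);
}
```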

GPAI Code of Practice: Who It Applies To and What It Requires

The GPAI Code of Practice (CoP) governs providers of general-purpose AI models — models that can perform a wide range of tasks and are made available to third parties. The AI Office coordinated the drafting process; the fourth draft was published in March 2026, with final text expected June 2026.

Who is a GPAI provider under the AI Act?

A provider of a general-purpose AI model is any entity that trains and distributes a foundation model to third parties — regardless of where the provider is established, if the model is deployed in the EU. This includes:

  - Companies that train foundation models and offer them commercially via APIs (Mistral, Anthropic, OpenAI, Google, and similar vendors)
  - Entities that distribute open-weight models for download and reuse
  - Entities that substantially modify an existing model and redistribute it under their own name

Who is NOT a GPAI provider?

  - Deployers who call an upstream provider's API or embed a hosted model in their product
  - SaaS teams building features on top of a third-party model without training or redistributing the model itself

For most SaaS developers: You are a GPAI deployer, not a GPAI provider. Your obligations under Art.50 (transparency) are real, but the heavier GPAI provider obligations (Art.53/55 documentation, AI Office registration) fall on your upstream model vendor — Mistral, Anthropic, OpenAI, etc.

GPAI CoP: Key requirements for providers

The fourth (and expected final) GPAI CoP draft organizes requirements around five measures:

  1. Transparency and copyright: Providers must publish a sufficiently detailed summary of the training data used, put in place a copyright policy that respects rights holders' opt-out reservations for copyright-protected training data, and document model capabilities and limitations.

  2. Technical documentation: Full technical specification: architecture, parameters, training compute (FLOPs), training data description, benchmark evaluations, known limitations.

  3. Instruction to downstream deployers: Providers must give deployers sufficient information to implement Art.50 transparency requirements. If you use an API from a GPAI provider, that provider must tell you what disclosure obligations apply to deployments of their model.

  4. Security incident reporting: GPAI providers must report security incidents to the AI Office within defined timelines.

  5. Systemic risk measures (Art.55, providers of models with systemic risk only): Adversarial testing (red teaming), incident reporting, cybersecurity measures, energy efficiency reporting.

Presumption of conformity: GPAI providers that adhere to the CoP can rely on that adherence to demonstrate compliance with Art.53/55. The AI Office can audit adherence. Non-adherence is not automatically a violation, but it shifts the burden of proof to the provider to demonstrate compliance by other means.

The 7-Step Developer Action Checklist

With 95 days to enforcement, here is what you should be doing in priority order:

Step 1: Inventory your AI features for Art.50 exposure (Week 1)

Create a list of every AI feature in your product that:

  - Interacts conversationally with users (Art.50(1))
  - Generates audio, image, video, or text content (Art.50(2))
  - Performs emotion recognition or biometric categorization (Art.50(3))
  - Generates or manipulates media depicting real persons, events, or places (Art.50(4))

For each feature: document the trigger, the model used, whether EU users are in scope, and what disclosure currently exists (if any). This inventory is your compliance gap analysis.
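A minimal sketch of such a record, assuming nothing more than a spreadsheet-like structure; the field names and trigger labels are this post's suggestion, not terminology from the Regulation.

```typescript
// Illustrative inventory record for the Art.50 gap analysis.
// Field names and Art50Trigger values are suggestions, not legal terms.
type Art50Trigger =
  | "chatbot"              // Art.50(1): conversational interface
  | "synthetic-content"    // Art.50(2): generates audio/image/video/text
  | "emotion-or-biometric" // Art.50(3): emotion recognition / biometric categorization
  | "deepfake";            // Art.50(4): depicts real persons, events, or places

interface AiFeatureRecord {
  feature: string;                   // e.g. "support chat widget"
  trigger: Art50Trigger;
  model: string;                     // model/vendor actually serving the feature
  euUsersInScope: boolean;           // is the feature reachable by EU users?
  existingDisclosure: string | null; // what disclosure is shown today, if any
  gap: string;                       // what is missing to satisfy the obligation
}

const inventory: AiFeatureRecord[] = [
  {
    feature: "support chat widget",
    trigger: "chatbot",
    model: "hosted LLM API",
    euUsersInScope: true,
    existingDisclosure: null,
    gap: "no upfront AI disclosure before the first response",
  },
];
```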

Step 2: Implement chatbot disclosure for all conversational AI (Week 2-3)

For every conversational AI interface accessible to EU users: add a visible disclosure that the user is interacting with an AI. Requirements:

  - The disclosure appears before or at the start of the interaction, not buried in terms of service
  - The wording is plain and unambiguous: the user is told they are interacting with an AI system
  - The disclosure can only be omitted where the AI nature is obvious from the context of use

Implementation options: a banner at the start of a chat session, a system message displayed before first AI response, a persistent "AI" badge visible during the conversation.
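A minimal sketch of the first two options, assuming a chat UI where you control the message list: a disclosure notice is injected before any model response. The copy, the ChatMessage shape, and the function names are illustrative; Art.50(1) prescribes the disclosure, not a specific wording or mechanism.

```typescript
// Minimal sketch: show an AI disclosure as the first message of every session.
// Wording and data shape are illustrative, not prescribed by Art.50(1).
interface ChatMessage {
  role: "system-notice" | "user" | "assistant";
  text: string;
}

const AI_DISCLOSURE: ChatMessage = {
  role: "system-notice",
  text: "You are chatting with an AI assistant, not a human agent.",
};

function startSession(): ChatMessage[] {
  // Disclosure is displayed before the first AI-generated response.
  return [AI_DISCLOSURE];
}

function appendAssistantReply(history: ChatMessage[], reply: string): ChatMessage[] {
  return [...history, { role: "assistant", text: reply }];
}
```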

Step 3: Check your model provider's GPAI CoP status (Week 2)

If you use a commercial LLM API (Mistral, Anthropic Claude, OpenAI GPT, Google Gemini), check whether the provider is enrolled in the GPAI CoP process. Major providers have all participated in the AI Office's CoP drafting consultations, but final adherence declarations will come with the June 2026 final text. Ensure your vendor agreement includes the downstream deployer information required by CoP Measure 3 — your provider must tell you what transparency obligations apply to your deployment.

If you use self-hosted open-source models (Llama, Mistral open weights, Falcon), the model weights distributor is the GPAI provider for registration purposes — but your deployment obligation under Art.50 remains yours.

Step 4: Assess high-risk AI exposure under Annex III (Week 3-4)

Despite the Omnibus uncertainty, run a preliminary assessment of whether any of your AI features fall under Annex III high-risk categories. The eight Annex III domains are:

  1. Biometric identification and categorization
  2. Critical infrastructure management
  3. Education and vocational training
  4. Employment and workers management
  5. Access to essential services (banking, insurance, healthcare)
  6. Law enforcement
  7. Migration and asylum
  8. Administration of justice

If you operate in any of these domains, begin a gap analysis against Annex III obligations even if enforcement is ultimately delayed. The documentation, risk management system, data governance, and human oversight requirements under Art.8-15 take months to implement properly — starting now is not premature.
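If you already keep the Step 1 inventory, a first-pass flag can be as simple as tagging each feature with the domain it operates in. The sketch below is a triage aid only, not a legal high-risk classification; the domain keys simply mirror the eight areas above.

```typescript
// First-pass triage: flag features operating in an Annex III domain.
// Screening aid only; actual high-risk classification needs legal review.
const ANNEX_III_DOMAINS = [
  "biometrics",
  "critical-infrastructure",
  "education",
  "employment",
  "essential-services",
  "law-enforcement",
  "migration-asylum",
  "justice",
] as const;

type AnnexIIIDomain = (typeof ANNEX_III_DOMAINS)[number];

function needsAnnexIIIGapAnalysis(domain: string): domain is AnnexIIIDomain {
  return (ANNEX_III_DOMAINS as readonly string[]).includes(domain);
}
```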

Step 5: EU-hosted AI infrastructure consideration (Week 2-4)

If Art.50 compliance requires you to ensure AI-generated content is correctly marked, or Art.50(3) processing is disclosed, you need control over the AI pipeline that generates and marks content. Routing AI inference through US-jurisdiction APIs introduces a gap: the processing that triggers disclosure obligations occurs under infrastructure you cannot fully audit.

EU-native AI options reduce CLOUD Act exposure and give you more complete audit trails:

  - EU-based model providers with EU-hosted APIs (Mistral is the obvious example)
  - Self-hosted open-weight models (Llama, Mistral open weights, Falcon) running on EU infrastructure you control
  - EU-jurisdiction inference endpoints where your existing vendor offers them

The GDPR Art.32 obligation to implement appropriate technical and organizational measures for AI processing is easier to demonstrate when the AI infrastructure is under EU-jurisdiction control.
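One practical way to make that demonstrable is to keep an audit record per inference call that captures where processing ran and whether the Art.50 measures were applied. The schema below is this post's suggestion, not a GDPR- or AI Act-mandated format.

```typescript
// Suggested per-call audit record; not a mandated schema.
interface InferenceAuditRecord {
  requestId: string;
  timestamp: string;                          // ISO 8601
  model: string;                              // model/vendor serving the call
  hostingJurisdiction: "EU" | "US" | "other"; // where inference actually ran
  disclosureShown: boolean;                   // was the Art.50 disclosure presented?
  outputMarked: boolean;                      // was the output marked as AI-generated?
}

function recordInference(entry: Omit<InferenceAuditRecord, "timestamp">): InferenceAuditRecord {
  return { ...entry, timestamp: new Date().toISOString() };
}
```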

Step 6: Monitor Omnibus Trilogue #3 (13 May 2026)

The next negotiation session is 13 May 2026 — two weeks from now. If agreement is reached, you will need to update your Annex III compliance timeline. If the Trilogue fails again, the June 2026 Cyprus Presidency deadline increases pressure for a compromise. Subscribe to AI Office notifications (ai-office.ec.europa.eu) and set a calendar reminder for 14 May 2026 to review any Omnibus outcome.

Step 7: Prepare for market surveillance authority registration (June-July 2026)

Member States are in the process of designating national competent authorities (NCAs) under Art.70. Germany has designated the Bundesnetzagentur as one NCA alongside the Bavarian Data Protection Authority for AI-specific obligations. France has designated the CNIL with an AI task force. Austria, Netherlands, and Spain have announced their designations.

If your organization deploys high-risk AI in specific Member States, identify your relevant NCA and understand their registration/notification process. Some NCAs are already accepting pre-registrations; doing this in June-July 2026 positions you for a smooth compliance transition.

What August 2026 Does NOT Mean for Most Developers

The media coverage of the EU AI Act tends toward catastrophism. Before you restructure your product around AI Act compliance, here is what 2 August 2026 does NOT trigger for most SaaS developers:

No general "ban" on AI features. The AI Act prohibits specific practices (Art.5) that have been enforceable since February 2025: social scoring by government, real-time biometric surveillance in public spaces, manipulation of persons via subliminal techniques. If you are not doing these, August 2026 adds transparency obligations, not prohibitions.

No "CE marking" requirement for most AI. CE marking (conformity marking) applies to Annex III high-risk AI under Art.48. Most AI features — recommendation systems, summarization, code generation, content generation — are not in Annex III. The Omnibus negotiations are partly about narrowing Annex III further. For the broad majority of SaaS AI features, the August 2026 obligation is disclosure (Art.50), not conformity assessment.

No immediate Annex III enforcement cliff. Even if the Omnibus fails and Annex III activates in August 2026, market surveillance authorities are expected to phase in enforcement. The AI Act requires penalties to be effective, proportionate and dissuasive, taking into account the nature and gravity of the infringement and the size of the organization, including the interests of SMEs (Art.99). An organization demonstrating good-faith compliance effort — documented gap analysis, implementation roadmap, partial implementation — is in a fundamentally different position than one that ignored the Act entirely.

No prohibition on using US-hosted AI APIs. Using OpenAI or AWS Bedrock is not illegal under the AI Act. The CLOUD Act compliance concern (EU personal data subject to US government access orders) is a separate GDPR issue, not an AI Act issue. The AI Act focuses on the characteristics of the AI system and how it interacts with users, not exclusively on where inference runs.

Key Dates Summary

| Date | Event | Action |
| --- | --- | --- |
| 13 May 2026 | Omnibus Trilogue #3 | Review outcome; update Annex III timeline |
| 27 May 2026 | CADA Proposal expected | Monitor; new cloud/AI data act may affect deployment decisions |
| June 2026 | GPAI CoP final text + AI Office Art.50 guidelines | Review CoP; update chatbot disclosure if needed |
| 2 August 2026 | Art.50 + GPAI enforcement | Chatbot disclosure live; GPAI providers registered |
| October 2027 | Annex III + CRA compliance | High-risk AI + CRA product obligations |

The Most Common Developer Mistake Right Now

The biggest compliance risk for EU SaaS developers is not that they will miss a complex obligation — it is that they will assume the Omnibus stalemate means "AI Act is delayed" and stop preparing.

The Omnibus negotiations are specifically about Annex III (high-risk classification) and Annex I (safety-regulated products) transition periods. They have no effect on Art.50 transparency obligations, GPAI provider obligations, or the AI Office's enforcement authority. Developers who conflate "Omnibus delay" with "AI Act delay" will find themselves unprepared for a 2 August 2026 enforcement date that is not moving.

The 7-step checklist above is actionable today, with or without Omnibus clarity. Step 1 (inventory) and Step 2 (chatbot disclosure) can be completed in the next three weeks and require no regulatory certainty — Art.50 is not in scope for the Omnibus negotiations.

If you operate EU infrastructure and want to align your AI deployment with EU-jurisdiction processing, sota.io provides EU-native deployment for AI workloads — without US-entity data exposure.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.