2026-05-05 · 14 min read
## The Clock Is Running: August 2, 2026

On August 2, 2026, Articles 50 through 55 of the EU AI Act become enforceable. These articles govern **General Purpose AI (GPAI) models** — the foundation models and APIs that power an enormous share of SaaS products built in 2025 and 2026.

If your product uses an LLM API, embeds a foundation model, or is itself a GPAI provider, you have roughly 89 days to reach compliance. The primary vehicle for doing so is the **EU AI Act Code of Practice (CoP)** — a framework being finalized by the AI Office under Article 56.

The word "voluntary" appears in the regulation. Do not let that word fool you.

---

## What "Voluntary" Actually Means for the CoP

The CoP is voluntary in the sense that you are not legally compelled to sign a document. But Articles 53(4) and 55(2) make clear that GPAI providers **can demonstrate compliance with the regulation's requirements by adhering to the Code of Practice**.

If you do not adopt the CoP, you must demonstrate compliance through **alternative measures** — your own technical documentation, third-party audits, or internal frameworks that satisfy the same requirements. The AI Office can scrutinize those alternatives and request changes.

Practically: a GPAI provider that ships without CoP adoption and without credible alternative measures faces enforcement under Article 99. The fine ceiling for GPAI violations is **€15 million or 3% of global annual turnover**, whichever is higher. For most startups and scale-ups, CoP adoption is not just the cheapest path — it is the only realistic one.

---

## Who Must Act: GPAI Provider vs. GPAI Deployer vs. SaaS Consumer

The EU AI Act draws a hard line between three categories:

**GPAI provider (Art.53)** — You train or significantly modify a foundation model and make it available — via API, open-source release, or integration. You are subject to the full GPAI obligations: transparency documentation, copyright compliance, capability evaluation, and (if you meet systemic-risk thresholds) adversarial testing. Examples: companies offering their own LLM APIs, organizations fine-tuning open-weight models for commercial deployment.

**Downstream provider using GPAI (Art.50, Art.28)** — You build a product using a GPAI API (OpenAI, Anthropic, Mistral, etc.). You are not a GPAI provider, but you have **Art.50 transparency obligations** when your system interacts with natural persons. If your chatbot, AI assistant, or content generator uses a GPAI foundation, you must disclose AI involvement.

**SaaS deployer not touching GPAI directly** — If your SaaS product uses neither foundation models nor AI-generated content that interacts with users, you are outside the GPAI chapter — but you may still be subject to Art.6 risk classification if your system makes consequential decisions.

The practical split for most developer teams:

- Using GPT-4, Claude, Gemini, or Mistral → downstream GPAI consumer; Art.50 applies
- Hosting or fine-tuning your own model for external use → GPAI provider; full CoP scope applies

---

## What the Code of Practice Covers

The AI Office published the third draft of the CoP in January 2026. It organizes obligations into four workstreams:

### Transparency and Information Obligations

GPAI providers must publish a technical summary explaining model capabilities, limitations, and intended use cases. The summary must be publicly accessible and updated when material changes occur. Downstream systems that rely on the model must be able to reference this documentation when building their own Art.28 risk assessments.

### Copyright and Training Data Compliance

Under Art.53(1)(c), GPAI providers must implement a **copyright policy** covering training data.
This includes documenting the data sources used, honoring web-crawl opt-out signals (the EU AI Act explicitly references the Text and Data Mining exception under the DSM Directive), and maintaining records that can be audited by rights holders or the AI Office.

For open-weight models, this obligation persists even after release — you remain responsible for the training-data policy even if the weights are freely available.

### Capability Evaluation

GPAI providers must evaluate their models for capabilities that could contribute to systemic risks (Art.55). The current CoP specifies evaluations for dangerous capabilities (CBRN uplift, cyberoffense), emergent behaviors at deployment scale, and dual-use potential of fine-tuned derivatives.

This is distinct from standard product testing. Capability evaluations must be documented in a form the AI Office can review, and updated when model versions change materially.

### Adversarial Testing (Systemic Risk Providers Only)

If your GPAI model meets the **10^25 FLOPs threshold** for systemic-risk designation under Art.51, adversarial testing (red-teaming) becomes mandatory. For most startups, this threshold is not relevant — it currently applies to frontier models from a handful of major labs.

However, the AI Office has reserved the right to designate models below this threshold if systemic risks are identified through other means. Track the systemic-risk registry if your model operates in a sensitive domain.

---

## The Eight-Action Developer Checklist

These actions apply before August 2, 2026. Sequence matters — documentation must precede submission.

**Action 1: Classify your AI role**

Determine whether you are a GPAI provider, a downstream GPAI consumer, or neither, and document the reasoning. If you are unsure, the trigger question is: do you make a trained model (or a fine-tuned variant) available to third parties?

**Action 2: Inventory all GPAI APIs in use**

If you consume GPAI APIs, list every model, version, and provider.
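A structured record per model dependency keeps this inventory auditable. The following is a minimal sketch; the schema, the field names, and the `ExampleAI` entry are all hypothetical illustrations, not real compliance data:

```python
from dataclasses import dataclass

@dataclass
class GPAIDependency:
    """One record per GPAI model the product consumes (hypothetical schema)."""
    provider: str            # API vendor
    model: str               # exact model identifier called in production
    version: str             # pinned version or snapshot date
    cop_adopted: bool        # has the provider adopted the Code of Practice?
    compliance_doc_url: str  # where the provider's compliance documentation lives

# Illustrative placeholder entry, not real data.
inventory = [
    GPAIDependency("ExampleAI", "example-model-large", "2026-01-15",
                   True, "https://example.com/compliance"),
]

def missing_cop_docs(deps: list[GPAIDependency]) -> list[str]:
    """Flag dependencies with neither CoP adoption nor published alternatives."""
    return [d.model for d in deps if not d.cop_adopted and not d.compliance_doc_url]
```

Each flagged entry is a provider whose compliance status you need to chase before the deadline.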
Check whether each provider has adopted the CoP or published alternative compliance documentation. You need this for your own Art.28 and Art.50 records.

**Action 3: Implement Art.50 AI transparency disclosures**

For any user-facing feature that generates text, images, audio, or video using a GPAI model, add a disclosure. The disclosure must be machine-readable (for deepfake-detection purposes) and human-readable. The AI Office has indicated that a tooltip or persistent banner is acceptable; it need not interrupt every interaction.

**Action 4: Prepare or adopt technical documentation**

GPAI providers must have documentation covering model-card-equivalent information, training-data provenance, capability boundaries, and known limitations. If you are adopting the CoP, the CoP workstream documentation templates satisfy this requirement. If you are pursuing alternative measures, draft a standalone document now and have it reviewed before July.

**Action 5: Establish a copyright compliance process**

Audit your training-data pipeline for opt-out signals. If you scraped web content, check against the Common Crawl robots.txt exclusion list and any domain-level opt-out signals. Document the process. The burden of proof sits with you if a rights holder challenges your training data.

**Action 6: Assign an AI Act compliance owner**

Designate a person responsible for GPAI compliance. This person does not need to be a lawyer, but they must be able to respond to an AI Office inquiry within a reasonable timeframe. For smaller companies, this is typically the CTO or a senior engineer with documentation access.

**Action 7: Register on the AI Office's GPAI model database**

Under Art.52(1), GPAI providers must register on the EU AI Office's model database before placing the model on the market. The registry is being built — check the AI Office portal for the live submission link as the registry approaches launch. Registration is not equivalent to approval; it is an information requirement.
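The opt-out audit in Action 5 can be started with Python's standard-library robots.txt parser. This is a minimal sketch only: robots.txt is just one of several opt-out signals recognized under the DSM TDM exception, and `ExampleTrainerBot` is an assumed crawler name, not a real one.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content fetched from a site in the training corpus.
ROBOTS_TXT = """\
User-agent: ExampleTrainerBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def may_use_for_training(url: str, agent: str = "ExampleTrainerBot") -> bool:
    """Treat a robots.txt Disallow aimed at our crawler as an opt-out signal."""
    return parser.can_fetch(agent, url)
```

A production pipeline would also honor domain-level TDM reservation signals and log every decision, since the audit trail is what Action 5 ultimately requires.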
**Action 8: Review your EU deployment stack for Art.50 overlap with data protection**

Art.50 transparency applies across all EU deployments. If your AI features process personal data (common for any user-facing product), you have a dual obligation: GDPR Art.22 (automated decision-making) and AI Act Art.50 (AI transparency). These requirements are complementary, not redundant — document both separately.

---

## Non-GPAI SaaS Using GPAI APIs: Your Art.50 Exposure

If you build a SaaS product that calls OpenAI, Anthropic, Mistral, or any other GPAI provider, you are not off the hook on August 2, 2026. Article 50(2) requires that **providers of AI systems that interact with natural persons** disclose the AI nature of the interaction. This applies to you even though you are not the GPAI provider.

The obligation is triggered when:

- Your product generates content that a user consumes (chat, summaries, recommendations)
- The user might reasonably be unaware they are interacting with an AI system

The disclosure does not need to be prominent — it needs to be **present and accessible**. Most compliance teams implement this as a persistent disclaimer in the product UI and a machine-readable `meta name="ai-generated"` or equivalent marker for synthetic media.

Failure to disclose is subject to Art.99 enforcement, with a ceiling of €15 million or 3% of global annual turnover.

---

## The CLOUD Act Overlap: Why Your Deployment Stack Matters

There is a structural risk in using US-hosted GPAI APIs that the EU AI Act does not address directly — but that your GPAI compliance documentation will expose.

When you document your training-data provenance (Action 5) and your capability-evaluation process (CoP workstream), you create records that contain sensitive model information and potentially personal data from your own users. If these records live on AWS, GCP, or Azure under US-parent jurisdiction, they are reachable under the CLOUD Act without a court order visible to you.
The EU AI Act requires you to make documentation available to the AI Office on request. It says nothing about making it available to US law enforcement. But a US-resident cloud provider hosting your compliance documentation does not have the same obligations.

For GPAI providers building in Europe, keeping compliance documentation and model records on EU-native infrastructure with no US parent avoids this exposure entirely. There is no CLOUD Act jurisdiction question if your cloud provider is incorporated and operated exclusively within the EU.

sota.io is an EU-native PaaS without US-parent jurisdiction. If you are building an AI-enabled product that must maintain Art.53 technical documentation or Art.50 disclosure infrastructure, running it on EU-native infrastructure removes one class of compliance risk before August 2 even arrives.

---

## What Happens If You Miss the Deadline

The AI Office does not start issuing fines on August 3, 2026. Enforcement under the EU AI Act follows an investigation-and-notice process similar to the GDPR's. The first enforcement actions are expected to focus on the most visible GPAI providers — frontier-model companies — before extending to smaller providers and downstream consumers.

However, three things happen automatically on August 2, 2026, regardless of enforcement activity:

**1. Your legal exposure becomes real.** A complaint filed after August 2 about non-compliant GPAI deployment or missing Art.50 disclosures is actionable. Competitors, civil-society organizations, and data protection authorities can trigger investigations.

**2. Enterprise procurement requirements change.** In the same way NIS2 created contractual requirements that enterprise buyers now impose on SaaS vendors, the AI Act will create AI compliance questions in enterprise procurement checklists. "Do you have GPAI CoP documentation?" will appear in vendor questionnaires from financial-services and public-sector buyers starting in Q3 2026.

**3. Your customers' Art.28 risk assessments reference your documentation.** If a downstream user builds on your GPAI offering and their DPA asks them for your technical documentation, you need to have it. If you do not, you block their compliance process and risk contractual penalties.

---

## Preparing Now, Not in July

The 89-day window to August 2, 2026 sounds comfortable. In practice, the documentation, copyright-audit, and registration steps each take longer than they appear. Organizations that start in June will be finishing in August — after the deadline.

The CoP adoption path is the fastest: use the AI Office's workstream templates, assign a compliance owner, and submit. If you choose alternative measures instead, start building that documentation now and have it externally reviewed before July 1.

The August 2 deadline is fixed. The compliance preparation timeline is yours to control.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.