2026-04-09 · 10 min read · sota.io team

EU AI Act 2026: Conformity Assessment Guide for PaaS and SaaS Developers

The EU AI Act — Regulation (EU) 2024/1689 — entered into force on 1 August 2024 and applies in stages, with the main wave of obligations arriving on 2 August 2026. The AI Act is the world's first comprehensive legal framework for artificial intelligence, and it applies to any developer, company, or infrastructure provider that places AI systems on the EU market or puts them into service in the EU.

For most developers, the central question is not whether the AI Act applies — it almost certainly does if your product includes any AI functionality deployed in the EU — but rather which tier of obligation applies and what the conformity assessment process looks like.

The AI Act's Tiered Risk Structure

The AI Act uses a risk-tiered approach across four categories:

Risk Level        | Description                               | Key Obligation
Unacceptable risk | Prohibited AI practices (Art. 5)          | Complete ban — cannot be placed on market
High risk         | Annex III categories (Art. 6)             | Conformity assessment before deployment (Art. 43)
Limited risk      | Chatbots, emotion recognition (Art. 50)   | Transparency obligation — must disclose AI interaction
Minimal risk      | All other AI (spam filters, AI in games)  | No specific obligations (voluntary codes of practice)

The most operationally significant tier for developers building real products is high-risk AI — because it requires a formal conformity assessment before you can deploy, not just a transparency notice.
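As a first-pass illustration (not legal advice), the tier logic can be sketched as a small triage helper. Everything here is a simplification: the category and practice names are paraphrases of the Act, and any real classification needs legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "conformity assessment required (Art. 43)"
    LIMITED = "transparency obligations (Art. 50)"
    MINIMAL = "no specific obligations"

# Simplified paraphrases of the 8 Annex III areas.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

# Non-exhaustive paraphrases of Art. 5 prohibited practices.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}

def classify(use_case: str, interacts_with_humans: bool = False) -> RiskTier:
    """Rough first-pass triage of an AI use case into the Act's four tiers."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if interacts_with_humans:
        # Chatbots and similar systems carry Art. 50 disclosure duties.
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment"))  # RiskTier.HIGH
```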

Prohibited AI Practices (From 2 February 2025)

Article 5 prohibitions apply from 2 February 2025 — already in effect. These are absolute bans, not high-risk obligations:

  - Subliminal or purposefully manipulative techniques that materially distort behaviour and cause significant harm
  - Exploitation of vulnerabilities related to age, disability, or social or economic situation
  - Social scoring that leads to detrimental or unfavourable treatment
  - Predicting criminal behaviour based solely on profiling or personality traits
  - Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
  - Emotion recognition in workplaces and educational institutions (narrow exceptions)
  - Biometric categorisation to infer sensitive attributes such as race, political opinions, or sexual orientation
  - Real-time remote biometric identification in publicly accessible spaces for law enforcement (narrow exceptions)

If your product uses any of these techniques, the AI Act prohibits it in the EU regardless of your compliance posture for other obligations.

High-Risk AI Systems: Annex III

Article 6 defines the main category of high-risk AI systems through Annex III. There are 8 categories:

Category 1: Biometric Identification and Categorisation

Developer relevance: Any SaaS using facial recognition, voice analysis, or biometric authentication for high-stakes decisions falls here. Note the boundary with Art. 5: real-time remote biometric identification in publicly accessible spaces for law enforcement is prohibited outright (narrow exceptions), while other remote biometric identification systems are high-risk under Annex III and face the strictest conformity path, typically involving a notified body.

Category 2: Critical Infrastructure Management

Developer relevance: AI used as a safety component in the management and operation of critical infrastructure, such as SCADA systems, grid management, or network reliability tools deployed in EU member states.

Category 3: Education and Vocational Training

Developer relevance: EdTech platforms using AI for admissions screening, proctoring, or automated grading in the EU.

Category 4: Employment, Worker Management, and Access to Self-Employment

Developer relevance: This is one of the most widely triggered categories. Any AI recruitment tool, HR screening system, or performance monitoring platform deployed in EU workplaces is high-risk. AI-driven screening and recommendation features of the kind found in Workday, SAP SuccessFactors, or LinkedIn Recruiter would typically fall here.

Category 5: Access to Essential Private and Public Services

Developer relevance: Any AI that affects whether a person gets a loan, insurance policy, or emergency service falls here. FinTech credit scoring models are directly in scope. This category overlaps with existing EU financial regulation (GDPR Article 22 automated decision-making, PSD2).

Category 6: Law Enforcement

Developer relevance: GovTech and public safety analytics. The most sensitive category: registration goes into the non-public section of the EU database, although conformity assessment for this category still follows the internal-control path of Art. 43(2).

Category 7: Migration, Asylum, and Border Control

Developer relevance: GovTech for immigration authorities. Niche but strictly regulated.

Category 8: Administration of Justice and Democratic Processes

Developer relevance: LegalTech AI for court use, and any platform affecting electoral processes (voter targeting, political advertising AI).

Article 43: Conformity Assessment — Two Paths

Once you determine your AI system is high-risk under Annex III, Article 43 defines two conformity assessment paths:

Path A: Internal Assessment (Self-Declaration)

For Annex III categories 2–8, developers conduct an internal conformity assessment (Art. 43(2)) — a structured self-evaluation documented in technical documentation (Art. 11).

You must demonstrate that your system complies with the requirements of Chapter III, Section 2:

  - Risk management system (Art. 9)
  - Data and data governance (Art. 10)
  - Technical documentation (Art. 11)
  - Record-keeping (Art. 12)
  - Transparency and provision of information to deployers (Art. 13)
  - Human oversight (Art. 14)
  - Accuracy, robustness, and cybersecurity (Art. 15)

You then complete an EU Declaration of Conformity (Art. 47), affix CE marking (Art. 48), and register in the EU database of high-risk AI systems (Art. 49; the database itself is established under Art. 71). No third-party audit is required for these categories.
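Treated as an engineering checklist, Path A boils down to a fixed set of artifacts. A minimal sketch of tracking them in code follows; the field names are illustrative, not mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class PathAConformityRecord:
    """Artifacts an internal (self-declaration) assessment must produce."""
    technical_documentation: bool = False       # Art. 11 / Annex IV
    risk_management_system: bool = False        # Art. 9
    eu_declaration_of_conformity: bool = False  # Art. 47
    ce_marking_affixed: bool = False            # Art. 48
    registered_in_eu_database: bool = False     # Art. 49 / Art. 71

    def ready_to_place_on_market(self) -> bool:
        # Every artifact must exist before the system goes on the EU market.
        return all(vars(self).values())

record = PathAConformityRecord(technical_documentation=True)
print(record.ready_to_place_on_market())  # False: four artifacts still outstanding
```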

Path B: Third-Party Conformity Assessment (Notified Body)

For specific high-risk systems — primarily:

  - Biometric systems under Annex III point 1, where harmonised standards do not exist or have not been fully applied
  - High-risk AI embedded in products already subject to third-party conformity assessment under Annex I sectoral legislation (e.g., medical devices, machinery), where the AI requirements are folded into the existing sectoral procedure (Art. 43(3))

A notified body (an accredited third-party organisation designated by an EU member state under Art. 33) must conduct the assessment. Notified bodies are listed in the NANDO database (New Approach Notified and Designated Organisations), the same infrastructure used for CE marking in other product safety directives.

Which notified bodies exist? As of 2025, the AI Act notified body infrastructure is still being established. The European AI Office (operational since 2024) is coordinating member state designation processes. The national accreditation bodies of Germany (DAkkS), France (Cofrac), and the Netherlands (RvA) are expected to play an early role in assessing candidate bodies.

The Art. 9 Risk Management System — What It Actually Requires

Article 9 is the operational core of compliance. It requires:

  1. Identification and analysis of all known and foreseeable risks the system poses to health, safety, and fundamental rights
  2. Risk estimation and evaluation considering the intended purpose and reasonably foreseeable misuse
  3. Evaluation of post-market monitoring data (once deployed)
  4. Testing to confirm the risk management measures work in deployment conditions

Crucially, Art. 9 requires risk management to be ongoing — not a one-time pre-deployment exercise. Providers must update their risk management systems when they discover new risks post-deployment. This creates a mandatory feedback loop between your production monitoring and your compliance documentation.

Practical implementation: in engineering terms, Art. 9 calls for a living risk register that ingests production signals and records mitigations, as in the sketch below.
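A minimal sketch of such a register, assuming a simple in-process data structure (real systems would persist this and link entries to the Annex IV documentation):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Risk:
    description: str
    severity: str        # e.g. "low", "medium", "high"
    mitigation: str
    last_reviewed: datetime

@dataclass
class RiskRegister:
    """Living Art. 9 risk register, updated whenever monitoring surfaces a new risk."""
    risks: list = field(default_factory=list)

    def ingest_incident(self, description: str, severity: str) -> None:
        # A post-market signal (Art. 72 monitoring) feeds back into risk management.
        self.risks.append(Risk(
            description=description,
            severity=severity,
            mitigation="TBD: assess, mitigate, and document",
            last_reviewed=datetime.now(timezone.utc),
        ))

register = RiskRegister()
register.ingest_incident("Elevated false-reject rate for one demographic group", "high")
print(len(register.risks))  # 1
```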

Art. 10: Data Governance for High-Risk AI

High-risk AI systems must use training, validation, and testing datasets that are:

  - Relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose
  - Governed by documented practices covering design choices, data collection processes, and provenance
  - Examined for possible biases likely to affect health, safety, or fundamental rights, with detection and mitigation measures documented

Practical implication: You cannot use opaque third-party datasets without understanding their provenance. If your high-risk AI is fine-tuned on licensed data, you need documentation covering that dataset's composition, known biases, and limitations.
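One lightweight way to capture that provenance is a per-dataset record kept alongside the training pipeline. The fields below are an illustrative reading of Art. 10, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetRecord:
    """Provenance record for one training/validation/test dataset (Art. 10)."""
    name: str
    source: str               # origin and licence terms
    collection_method: str
    known_biases: str         # documented gaps and skews
    relevant_population: str  # who the data is representative of

# Hypothetical example entry for a fine-tuning dataset.
fine_tune_set = DatasetRecord(
    name="cv-screening-v3",
    source="Licensed HR dataset from a third-party vendor (hypothetical)",
    collection_method="Historical job applications, 2019-2024",
    known_biases="Under-represents career changers; see internal bias audit",
    relevant_population="EU applicants for office roles",
)
```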

Art. 11 and Annex IV: Technical Documentation

The technical documentation for a high-risk AI system (Annex IV format) must include:

Section                | Content Required
General description    | Intended purpose, version history, technical specs
Development process    | Training methodology, data sources, architecture
Validation and testing | Metrics used, test datasets, performance results
Post-market monitoring | Monitoring plan, data collection approach
Risk management        | Summary of Art. 9 risk management measures
Human oversight        | How human oversight is implemented
Cybersecurity          | Resilience measures, attack testing performed

This documentation must be maintained for 10 years after the last AI system of that version is placed on the market (Art. 18).
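Since the Annex IV structure is stable, teams can scaffold it programmatically. A sketch that emits a documentation skeleton (section titles paraphrased from the table above; the legal text controls):

```python
# Section titles paraphrased from Annex IV.
ANNEX_IV_SECTIONS = [
    "General description (intended purpose, version history, technical specs)",
    "Development process (training methodology, data sources, architecture)",
    "Validation and testing (metrics, test datasets, performance results)",
    "Post-market monitoring plan",
    "Risk management summary (Art. 9)",
    "Human oversight measures",
    "Cybersecurity and robustness measures",
]

def documentation_skeleton(system_name: str, version: str) -> str:
    """Emit a Markdown skeleton for one system version's technical file."""
    lines = [f"# Technical documentation: {system_name} v{version}", ""]
    for i, section in enumerate(ANNEX_IV_SECTIONS, start=1):
        lines += [f"## {i}. {section}", "", "TODO", ""]
    return "\n".join(lines)

print(documentation_skeleton("cv-screener", "3.2.0"))
```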

GPAI Models: Separate Obligations (From 2 August 2025)

General-Purpose AI (GPAI) models — foundation models like GPT-4, Claude, Gemini — have their own obligations under Art. 51–55, applicable from 2 August 2025 (already in effect):

  - Technical documentation of the model, including its training and testing process (Art. 53)
  - Information and documentation for downstream providers who build on the model
  - A policy to comply with EU copyright law, including text-and-data-mining opt-outs
  - A sufficiently detailed public summary of the content used for training
  - For models classified as posing systemic risk (Art. 51): model evaluations, adversarial testing, serious incident reporting, and cybersecurity protections (Art. 55)

If you are deploying a fine-tuned version of an open-weight GPAI model (e.g., Llama 3, Mistral) for a high-risk use case, you may be a provider of both a GPAI model (Art. 53 obligations) and a high-risk AI system (Art. 43 conformity assessment). These obligations stack.

Provider vs Deployer: Who Is Responsible?

The AI Act distinguishes:

  - Provider: develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark (Art. 3(3))
  - Deployer: uses an AI system under its own authority in the course of a professional activity (Art. 3(4))
  - Importers and distributors, who carry lighter supply-chain obligations

Critical nuance for SaaS developers:

If you are building a SaaS application that includes a high-risk AI system (e.g., an AI recruitment screening tool), you are the provider — even if the underlying model is built on an API from a foundation model company. The obligation to conduct a conformity assessment rests on you, not the foundation model provider.

If you are a company that subscribes to such a SaaS tool and uses it in your HR process, you are a deployer — your obligations are lighter, but you must (Art. 26):

  - Use the system in accordance with the provider's instructions for use
  - Assign human oversight to persons with the competence and authority to intervene
  - Monitor operation and inform the provider if you identify a serious risk
  - Retain the automatically generated logs under your control for at least six months
  - Inform affected workers and their representatives before putting the system into use
  - In certain cases (public bodies and some private services), conduct a fundamental rights impact assessment (Art. 27)

The Infrastructure Question: Why PaaS Jurisdiction Matters

The AI Act applies to the AI system — but infrastructure jurisdiction affects three specific compliance areas:

1. Art. 12 Record-keeping and Automatic Logging

High-risk AI systems must automatically log certain events throughout their lifecycle. These logs must be stored and accessible for post-market surveillance by national authorities. If your AI system runs on EU-incorporated infrastructure, those logs are subject to EU law and EU judicial oversight exclusively. If your AI runs on US-incorporated infrastructure, those same logs are potentially accessible to US authorities under the CLOUD Act (18 U.S.C. § 2713), in parallel with requests from EU market surveillance authorities.
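In practice, Art. 12 logging amounts to an append-only event stream. A minimal sketch, assuming JSON-lines storage on an EU-jurisdiction volume (the path and event names are illustrative):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("logs/ai-events.jsonl")  # illustrative location on EU infrastructure

def log_event(event_type: str, details: dict) -> None:
    """Append one lifecycle event as a JSON line (Art. 12 automatic logging).

    Production systems would add integrity protection and retention controls;
    append-only JSONL is just the simplest shape of the requirement.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "inference", "model_update", "human_override"
        **details,
    }
    LOG_PATH.parent.mkdir(parents=True, exist_ok=True)
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("inference", {"model_version": "3.2.0", "decision": "rejected", "confidence": 0.91})
```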

2. Post-Market Monitoring (Art. 72)

The AI Act requires ongoing post-market monitoring (Art. 72), including collection and analysis of production data, which feeds the Art. 9 risk management loop. Data protection rules and regulator guidance in Germany (BDSG, DSK), France (CNIL), and Austria (DSB) restrict where EU personal data used in AI training and monitoring can be processed. Deploying your AI system's monitoring infrastructure on EU-native PaaS removes cross-border data transfer questions from the post-market monitoring pipeline.
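A cheap structural safeguard is to make the monitoring pipeline refuse non-EU sinks outright. A hypothetical guard, with made-up region identifiers:

```python
# Hypothetical allow-list of EU regions; names are illustrative only.
EU_REGIONS = {"eu-central-1", "eu-west-3", "europe-west4"}

def assert_eu_residency(region: str) -> None:
    """Fail fast if post-market monitoring data would leave the EU allow-list."""
    if region not in EU_REGIONS:
        raise ValueError(
            f"Monitoring sink in region '{region}' is outside the EU allow-list; "
            "a cross-border transfer review is required before shipping data."
        )

assert_eu_residency("eu-central-1")  # passes silently
# assert_eu_residency("us-east-1")   # would raise ValueError
```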

3. Conformity Assessment Documentation

Notified bodies and market surveillance authorities may request access to your technical documentation (Art. 11) and risk management records (Art. 9). Storing these on infrastructure with clear EU jurisdiction simplifies production of records — there is no CLOUD Act conflict to navigate when an EU authority requests document access.

The EU AI Act Registration Requirement

Before deploying a high-risk AI system in the EU, providers must register in the EU database of high-risk AI systems (Art. 49), which the European Commission sets up and maintains (Art. 71). This database is publicly accessible for most categories (except law enforcement and migration categories, which are in a non-public section).

Registration includes, in broad terms (Annex VIII):

  - Provider name, address, and contact details
  - The system's trade name and any identifying references
  - Intended purpose and the Annex III category it falls under
  - Market status (on the market, withdrawn, recalled)
  - Reference to the EU Declaration of Conformity and, where relevant, the notified body certificate
  - A URL or instructions for use where further information is available

The database went live in 2025. Registration is a prerequisite for CE marking and placing the system on the EU market.
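The shape of an entry, paraphrasing the Annex VIII fields (the actual submission format is defined by the Commission, so treat this as a mental model only):

```python
# Illustrative registration entry; field names are paraphrases, not the official schema.
registration_entry = {
    "provider": {"name": "Example SaaS GmbH", "contact": "compliance@example.eu"},
    "system": {
        "trade_name": "cv-screener",
        "version": "3.2.0",
        "intended_purpose": "Pre-screening of job applications (Annex III, point 4)",
        "status": "on_market",
    },
    "conformity": {
        "assessment_path": "internal control (Art. 43(2))",
        "declaration_of_conformity": "DoC-2026-014",
        "ce_marking": True,
    },
}
```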

Timeline for AI Act Compliance

Date            | Milestone
1 August 2024   | AI Act enters into force
2 February 2025 | Prohibited AI practices apply (Art. 5)
2 August 2025   | GPAI model obligations apply (Art. 51–55)
2 August 2026   | High-risk AI conformity assessment required (Art. 43, Annex III)
2 August 2027   | High-risk AI systems embedded in regulated products (Annex I) fully applicable
2 August 2030   | Certain AI systems already on the market before 2026 must comply (end of transition period)

The August 2026 date is the key milestone for most SaaS and PaaS developers building AI-powered products.

Practical Checklist for AI Act Compliance (PaaS/SaaS Developer)

Step 1: Classify your AI system

  - Check Art. 5: does any feature amount to a prohibited practice? If so, it must be removed, not assessed.
  - Check Annex III: does the system operate in one of the 8 high-risk categories?
  - If not high-risk, check whether Art. 50 transparency duties apply (chatbots, synthetic content).

Step 2: Identify your role

  - You are a provider if you place the system on the EU market under your own name, even if it wraps a third-party foundation model API.
  - You are a deployer if you only use someone else's system in your business processes.

Step 3: If provider, conduct conformity assessment (before 2 August 2026)

  - Determine the path: internal control (Art. 43(2)) for Annex III categories 2–8; a notified body may be required for biometrics.
  - Build the Art. 9 risk management system and the Annex IV technical documentation.
  - Complete the EU Declaration of Conformity, affix CE marking, and register in the EU database.

Step 4: Establish post-market monitoring

  - Define a monitoring plan (Art. 72) covering production performance and incident data.
  - Feed monitoring findings back into the Art. 9 risk register.
  - Prepare serious incident reporting to market surveillance authorities (Art. 73).

Step 5: Choose compliant infrastructure

  - Keep Art. 12 logs, Art. 11 documentation, and monitoring data on infrastructure with clear EU jurisdiction.
  - Verify data residency for the entire post-market monitoring pipeline.

Summary

The EU AI Act creates a staged compliance path for developers deploying AI in the EU:

  - 2 February 2025: prohibited practices banned (Art. 5)
  - 2 August 2025: GPAI model obligations (Art. 51–55)
  - 2 August 2026: high-risk conformity assessment, CE marking, and database registration (Art. 43, Art. 49)

For most developers, the operative questions are whether your AI falls under Annex III and whether you need a notified body or can self-assess. Most Annex III categories allow self-assessment — but the documentation and risk management system requirements are substantial regardless.

Infrastructure jurisdiction matters for three compliance areas: record-keeping (Art. 12), post-market monitoring (Art. 72, which feeds the Art. 9 risk loop), and production of conformity documentation. Running high-risk AI systems on EU-native infrastructure eliminates cross-jurisdictional conflicts between EU market surveillance authority access rights and US CLOUD Act obligations — a structural simplification that reduces legal overhead for any company operating under both frameworks.

The AI Act's conformity assessment regime is modelled on existing EU product safety law (CE marking, Notified Bodies, EU Declarations of Conformity). Developers already familiar with GDPR data protection impact assessments (DPIAs) will recognise the risk-management-as-documentation pattern — the AI Act formalises it with heavier documentation requirements and a public registration database.
