EU AI Act 2026: Conformity Assessment Guide for PaaS and SaaS Developers
The EU AI Act — Regulation (EU) 2024/1689 — entered into force on 1 August 2024 and applies in stages, with the bulk of its obligations taking effect on 2 August 2026. The AI Act is the world's first comprehensive legal framework for artificial intelligence, and it applies to any developer, company, or infrastructure provider that places AI systems on the EU market or puts them into service in the EU.
For most developers, the central question is not whether the AI Act applies — it almost certainly does if your product includes any AI functionality deployed in the EU — but rather which tier of obligation applies and what the conformity assessment process looks like.
The AI Act's Tiered Risk Structure
The AI Act uses a risk-tiered approach across four categories:
| Risk Level | Description | Key Obligation |
|---|---|---|
| Unacceptable risk | Prohibited AI practices (Art. 5) | Complete ban — cannot be placed on market |
| High risk | Annex III categories (Art. 6) | Conformity assessment before deployment (Art. 43) |
| Limited risk | Chatbots, emotion recognition (Art. 50) | Transparency obligation — must disclose AI interaction |
| Minimal risk | All other AI (spam filters, AI in games) | No specific obligations (voluntary codes of practice) |
The most operationally significant tier for developers building real products is high-risk AI — because it requires a formal conformity assessment before you can deploy, not just a transparency notice.
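The tiering logic above can be sketched as a first-pass triage helper. Everything here — the function name, the boolean inputs — is illustrative; a real classification requires legal analysis of Art. 5, Annex III, and Art. 50 against your actual product.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "conformity assessment required (Art. 43)"
    LIMITED = "transparency obligations (Art. 50)"
    MINIMAL = "no specific obligations"

def triage(uses_prohibited_practice: bool,
           matches_annex_iii: bool,
           interacts_as_ai_with_humans: bool) -> RiskTier:
    """First-pass triage. Order matters: prohibitions trump everything,
    then high-risk classification, then transparency duties."""
    if uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if matches_annex_iii:
        return RiskTier.HIGH
    if interacts_as_ai_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note that the tiers are not mutually exclusive in practice: an AI recruitment screener that also chats with candidates triages to HIGH, and the Art. 50 transparency duty still applies on top.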
Prohibited AI Practices (From 2 February 2025)
Article 5 prohibitions apply from 2 February 2025 — already in effect. These are absolute bans, not high-risk obligations:
- Social scoring by public authorities (comprehensive scoring of natural persons based on social behavior or personal characteristics)
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions for targeted searches)
- Subliminal techniques that bypass conscious awareness to materially distort behavior
- Exploitation of vulnerabilities of specific groups (age, disability) to distort behavior harmfully
- Emotion recognition in workplace or educational settings (with exceptions for safety purposes)
- Untargeted scraping of biometric data from internet or CCTV to build facial recognition databases
- Predictive policing based purely on profiling (without a specific risk event triggering it)
If your product uses any of these techniques, the AI Act prohibits it in the EU regardless of your compliance posture for other obligations.
High-Risk AI Systems: Annex III
Article 6 designates as high-risk the AI systems listed in Annex III, which covers eight categories:
Category 1: Biometric Identification and Categorisation
- Remote biometric identification systems (real-time and post-event)
- AI-based biometric categorisation systems inferring sensitive attributes (race, political opinion, trade union membership, religion, sexual orientation)
- Emotion recognition systems used by public authorities or in employment/education contexts
Developer relevance: Any SaaS using facial recognition, voice analysis, or biometric authentication for high-stakes decisions falls here. Notably, real-time remote biometric identification is subject to Annex III regardless of purpose — with the strictest conformity path.
Category 2: Critical Infrastructure Management
- AI used for safety components in road traffic management, water/gas/electricity/heating supply, digital infrastructure
Developer relevance: AI systems used in critical infrastructure SCADA, grid management, or network reliability tools deployed in EU member states.
Category 3: Education and Vocational Training
- AI systems determining access to educational institutions or making decisions about learners (performance assessment, monitoring)
Developer relevance: EdTech platforms using AI for admissions screening, proctoring, or automated grading in the EU.
Category 4: Employment, Worker Management, and Access to Self-Employment
- AI systems for recruitment and selection (CV screening, interview analysis, promotion decisions)
- AI for task allocation, monitoring, and evaluation of workers
Developer relevance: This is one of the most widely triggered categories. Any AI recruitment tool, HR screening system, or performance monitoring platform deployed in EU workplaces is high-risk. AI-driven features in products such as Workday, SAP SuccessFactors, or LinkedIn Recruiter are likely to fall here.
Category 5: Access to Essential Private and Public Services
- AI for credit scoring and creditworthiness assessment
- AI for risk assessments in life/health insurance
- AI for emergency call dispatching (police, fire, ambulance)
Developer relevance: Any AI that affects whether a person gets a loan, insurance policy, or emergency service falls here. FinTech credit scoring models are directly in scope. This category overlaps with existing EU financial regulation (GDPR Article 22 automated decision-making, PSD2).
Category 6: Law Enforcement
- AI systems used for profiling of natural persons by police/judicial authorities
- Lie detectors and similar tools
- Assessment of risk posed by persons to commit offences or recidivate
- Crime analytics tools
Developer relevance: GovTech and public safety analytics. The most sensitive category — these systems register in the non-public section of the EU database and face the strictest fundamental-rights scrutiny.
Category 7: Migration, Asylum, and Border Control
- AI for risk assessment of irregular migration
- AI systems for document authenticity verification used by authorities
- AI assisting asylum, visa, and permit applications
Developer relevance: GovTech for immigration authorities. Niche but strictly regulated.
Category 8: Administration of Justice and Democratic Processes
- AI assisting courts in researching and interpreting facts and law
- AI influencing electoral outcomes
Developer relevance: LegalTech AI for court use, and any platform affecting electoral processes (voter targeting, political advertising AI).
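The eight categories above lend themselves to a design-review questionnaire. This is a hypothetical screening aid, not legal advice — the question wording compresses Annex III considerably, and a "yes" only flags a category for proper legal review.

```python
# Hypothetical screening helper: one yes/no question per Annex III category.
ANNEX_III_QUESTIONS = {
    1: "Does the system perform biometric identification, categorisation, or emotion recognition?",
    2: "Is it a safety component in critical infrastructure (traffic, utilities, digital infrastructure)?",
    3: "Does it decide access to education or assess learners?",
    4: "Does it screen, monitor, or evaluate workers or job applicants?",
    5: "Does it affect access to essential services (credit, insurance, emergency dispatch)?",
    6: "Is it used by law enforcement for profiling or risk assessment?",
    7: "Is it used in migration, asylum, or border control?",
    8: "Does it assist courts or influence electoral processes?",
}

def annex_iii_hits(answers: dict[int, bool]) -> list[int]:
    """Return the Annex III categories a design review flagged as applicable."""
    return sorted(cat for cat, applies in answers.items() if applies)
```

A recruitment SaaS with an AI chat interface, for instance, would answer "yes" to question 4 and proceed to the Art. 43 conformity assessment path.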
Article 43: Conformity Assessment — Two Paths
Once you determine your AI system is high-risk under Annex III, Article 43 defines two conformity assessment paths:
Path A: Internal Assessment (Self-Declaration)
For Annex III categories 2–8, developers conduct an internal conformity assessment (the "internal control" procedure of Annex VI) — a structured self-evaluation documented in technical documentation (Art. 11).
You must demonstrate that your system complies with Chapter III requirements:
- Art. 9: Risk management system (documented, iterative, covering the full lifecycle)
- Art. 10: Training, validation, and testing data governance
- Art. 11: Technical documentation (Annex IV format)
- Art. 12: Automatic logging / record-keeping
- Art. 13: Transparency — system limitations, intended purpose, human oversight
- Art. 14: Human oversight — design must allow meaningful human intervention
- Art. 15: Accuracy, robustness, and cybersecurity
You then complete an EU Declaration of Conformity (Art. 47), affix CE marking (Art. 48), and register in the EU database of high-risk AI systems (Art. 49; the database itself is established under Art. 71). No third-party audit is required for these categories.
Path B: Third-Party Conformity Assessment (Notified Body)
Third-party assessment applies to a narrower set of systems:
- Biometric systems under Annex III, point 1 (including remote biometric identification), where the provider has not applied harmonised standards or common specifications covering the Chapter III requirements — here Art. 43(1) mandates the Annex VII procedure involving a notified body
- Biometric systems under point 1 where harmonised standards were applied but the provider nonetheless opts for notified-body review
A notified body (an accredited third-party organisation designated by an EU member state under Art. 33) must conduct the assessment. Notified bodies are listed in the NANDO database (New Approach Notified and Designated Organisations), the same infrastructure used for CE marking in other product safety directives.
Which notified bodies exist? As of 2025, the AI Act notified body infrastructure is still being established. The European AI Office (operational since early 2024) is coordinating member state designation processes. Germany (DAkkS), France (Cofrac), and the Netherlands (RvA) are expected to be early designating authorities.
The Art. 9 Risk Management System — What It Actually Requires
Article 9 is the operational core of compliance. It requires:
- Identification and analysis of all known and foreseeable risks the system poses to health, safety, and fundamental rights
- Risk estimation and evaluation considering the intended purpose and reasonably foreseeable misuse
- Evaluation of post-market monitoring data (once deployed)
- Testing to confirm the risk management measures work in deployment conditions
Crucially, Art. 9 requires risk management to be ongoing — not a one-time pre-deployment exercise. Providers must update their risk management systems when they discover new risks post-deployment. This creates a mandatory feedback loop between your production monitoring and your compliance documentation.
Practical implementation:
- Maintain a risk register for your AI system covering misuse scenarios, bias risks, and edge-case failures
- Document all changes to training data, model architecture, or deployment scope in your risk register
- Ensure you have human-review protocols for high-stakes decisions the system makes
- Store risk management records in a format auditable by national market surveillance authorities
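A minimal risk-register schema along the lines of the bullets above might look like this. The field names and the annual review cadence are assumptions for illustration — Art. 9 mandates an ongoing, documented process but does not prescribe a data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One row of an Art. 9-style risk register (illustrative schema)."""
    risk_id: str
    description: str          # e.g. "CV screener penalises employment gaps"
    affected_rights: str      # health, safety, or fundamental rights at stake
    likelihood: str           # e.g. "low" / "medium" / "high"
    mitigation: str
    last_reviewed: date

@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def overdue(self, today: date, review_days: int = 365) -> list[RiskEntry]:
        """Entries whose review has lapsed (annual cadence is a common
        practice here, not a statutory deadline)."""
        return [e for e in self.entries
                if (today - e.last_reviewed).days > review_days]
```

The `overdue` check is the kind of mechanical control that turns the Act's "ongoing" requirement into something your CI or compliance dashboard can actually enforce.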
Art. 10: Data Governance for High-Risk AI
High-risk AI systems must use training, validation, and testing datasets that are:
- Subject to data governance and management practices covering data collection methodology, preprocessing, and annotation
- Relevant, representative, and free of errors to the extent possible
- Accompanied by documentation of their known limitations
- Subject to bias examination — providers must examine whether data contains protected characteristics and document mitigation measures
Practical implication: You cannot use opaque third-party datasets without understanding their provenance. If your high-risk AI is fine-tuned on licensed data, you need documentation covering that dataset's composition, known biases, and limitations.
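One way to make dataset provenance auditable is to keep a structured record per dataset covering the documentation points above. The schema below is a sketch — the field names are not taken from the Act.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetRecord:
    """Minimal provenance record for an Art. 10-style data governance file."""
    name: str
    source: str                 # vendor, internal collection, licensed corpus
    licence: str
    collection_method: str
    known_limitations: str      # e.g. "under-represents applicants over 55"
    bias_examination_done: bool

def governance_gaps(records: list[DatasetRecord]) -> list[str]:
    """Datasets that still lack a documented bias examination."""
    return [r.name for r in records if not r.bias_examination_done]
```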
Art. 11 and Annex IV: Technical Documentation
The technical documentation for a high-risk AI system (Annex IV format) must include:
| Section | Content Required |
|---|---|
| General description | Intended purpose, version history, technical specs |
| Development process | Training methodology, data sources, architecture |
| Validation and testing | Metrics used, test datasets, performance results |
| Post-market monitoring | Monitoring plan, data collection approach |
| Risk management | Summary of Art. 9 risk management measures |
| Human oversight | How human oversight is implemented |
| Cybersecurity | Resilience measures, attack testing performed |
This documentation must be maintained for 10 years after the last AI system of that version is placed on the market (Art. 18).
GPAI Models: Separate Obligations (From 2 August 2025)
General-Purpose AI (GPAI) models — foundation models like GPT-4, Claude, Gemini — have their own obligations under Art. 51–55, applicable from 2 August 2025 (already in effect):
- All GPAI providers: Technical documentation, compliance with EU copyright law, transparency to downstream providers
- GPAI models with systemic risk (≥10²⁵ FLOPs training compute, or European AI Office designation): Model evaluation, adversarial testing, incident reporting, cybersecurity measures
If you are deploying a fine-tuned version of an open-weight GPAI model (e.g., Llama 3, Mistral) for a high-risk use case, you may be a provider of both a GPAI model (Art. 53 obligations) and a high-risk AI system (Art. 43 conformity assessment). These obligations stack.
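The systemic-risk presumption is a simple compute threshold, which makes it one of the few classifications you can check mechanically. A sketch — the Act's wording is "greater than" 10²⁵ cumulative training FLOPs, and the AI Office can also designate models below the threshold:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Art. 51(2) presumption threshold

def presumed_systemic_risk(training_flops: float) -> bool:
    """True when cumulative training compute exceeds 10^25 FLOPs.
    Designation by the European AI Office can apply regardless, so a
    False here is necessary but not sufficient to escape the tier."""
    return training_flops > SYSTEMIC_RISK_FLOPS
```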
Provider vs Deployer: Who Is Responsible?
The AI Act distinguishes:
- Provider: Develops or places an AI system on the market. Bears full Art. 9–15 obligations for high-risk AI.
- Deployer: Uses an AI system in their own operations (an employer using an AI recruitment tool). Lighter obligations — primarily Art. 26 (ensure use within intended purpose, implement human oversight, monitor performance).
- Importer: Brings a non-EU provider's AI system onto the EU market — must verify, before placing it on the market, that the provider has completed the conformity assessment, technical documentation, and CE marking (Art. 23).
- Distributor: Makes an AI system available without modification.
Critical nuance for SaaS developers:
If you are building a SaaS application that includes a high-risk AI system (e.g., an AI recruitment screening tool), you are the provider — even if the underlying model is built on an API from a foundation model company. The obligation to conduct a conformity assessment rests on you, not the foundation model provider.
If you are a company that subscribes to such a SaaS tool and uses it in your HR process, you are a deployer — your obligations are lighter, but you must:
- Ensure staff using the system are trained
- Not use it outside its intended purpose
- Implement human oversight as the provider specified
- Log incidents and report serious incidents to market surveillance authorities
The Infrastructure Question: Why PaaS Jurisdiction Matters
The AI Act applies to the AI system — but infrastructure jurisdiction affects three specific compliance areas:
1. Art. 12 Record-keeping and Automatic Logging
High-risk AI systems must automatically log certain events throughout their lifecycle. These logs must be stored and accessible for post-market surveillance by national authorities. If your AI system runs on EU-incorporated infrastructure, those logs are subject to EU law and EU judicial oversight exclusively. If your AI runs on US-incorporated infrastructure, those same logs are potentially accessible to US authorities under the CLOUD Act (18 U.S.C. § 2713) — simultaneously with EU surveillance authority requests.
2. Art. 9 Post-Market Monitoring
The AI Act requires ongoing post-market monitoring, including collection and analysis of production data. Data residency regulations in Germany (BDSG), France (CNIL guidance), and Austria (DSK) impose restrictions on where EU personal data used in AI training and monitoring can be processed. Deploying your AI system's monitoring infrastructure on EU-native PaaS eliminates cross-border data transfer concerns for the post-market monitoring pipeline.
3. Conformity Assessment Documentation
Notified bodies and market surveillance authorities may request access to your technical documentation (Art. 11) and risk management records (Art. 9). Storing these on infrastructure with clear EU jurisdiction simplifies production of records — there is no CLOUD Act conflict to navigate when an EU authority requests document access.
The EU AI Act Registration Requirement
Before deploying a high-risk AI system in the EU, providers must register it in the EU database of high-risk AI systems (Art. 49), which the European Commission maintains in collaboration with member states (Art. 71). This database is publicly accessible for most categories (law enforcement and migration categories sit in a non-public section).
Registration includes:
- Provider name and contact details
- System name, version, intended purpose
- Annex III category
- Conformity assessment outcome
- Whether a Declaration of Conformity was issued
- Any restrictions or conditions of use
The database is being stood up ahead of the August 2026 deadline. Registration is a prerequisite for CE marking and placing the system on the EU market.
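Before submission it is worth validating that every mandatory field from the list above is populated. The schema below is illustrative — the official database will define its own submission format.

```python
from dataclasses import dataclass, fields

@dataclass
class RegistrationEntry:
    """Fields sketched from the registration list above (not the official schema)."""
    provider_name: str
    contact: str
    system_name: str
    version: str
    intended_purpose: str
    annex_iii_category: int
    conformity_outcome: str
    declaration_issued: bool
    restrictions: str = ""   # conditions of use may legitimately be empty

def missing_fields(entry: RegistrationEntry) -> list[str]:
    """Flag empty mandatory fields before attempting submission."""
    return [f.name for f in fields(entry)
            if f.name != "restrictions" and getattr(entry, f.name) in ("", None)]
```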
Timeline for AI Act Compliance
| Date | Milestone |
|---|---|
| 1 August 2024 | AI Act enters into force |
| 2 February 2025 | Prohibited AI practices apply (Art. 5) |
| 2 August 2025 | GPAI model obligations apply (Art. 51–55) |
| 2 August 2026 | High-risk AI conformity assessment required (Art. 43, Annex III) |
| 2 August 2027 | High-risk AI systems embedded in regulated products (Annex I) fully applicable |
| 2 August 2030 | High-risk AI systems placed on the market before August 2026 and used by public authorities must comply (end of transition period) |
The August 2026 date is the key milestone for most SaaS and PaaS developers building AI-powered products.
Practical Checklist for AI Act Compliance (PaaS/SaaS Developer)
Step 1: Classify your AI system
- Does it fall under Annex III? Use the 8 categories above as a checklist.
- If yes: proceed to Step 2. If no: check Art. 50 transparency obligations (chatbots, emotion recognition, AI-generated content).
Step 2: Identify your role
- Are you the provider (built the system)? → Full Art. 9–15 obligations.
- Are you the deployer (using someone else's system)? → Art. 26 obligations only.
Step 3: If provider, conduct conformity assessment (before 2 August 2026)
- Implement Art. 9 risk management system
- Document data governance (Art. 10)
- Prepare Annex IV technical documentation (Art. 11)
- Implement automatic logging (Art. 12)
- Set up human oversight mechanisms (Art. 14)
- Determine if self-assessment (Path A) or notified body (Path B) applies to your category
- Complete EU Declaration of Conformity (Art. 47)
- Affix CE marking (Art. 48)
- Register in EU database (Art. 49)
Step 4: Establish post-market monitoring
- Deploy production logging that captures decisions, inputs, and outcomes
- Set up a serious incident reporting process for national market surveillance authorities
- Schedule annual risk management system reviews
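The incident-reporting step above can be tracked with a small helper. The 15-day window reflects the general serious-incident deadline in Art. 73 (shorter windows apply for deaths and widespread infringements); the data model itself is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Incident:
    occurred: date
    serious: bool    # death, serious harm, fundamental-rights breach, infrastructure disruption
    reported: bool   # reported to the market surveillance authority

def reporting_deadline(incident: Incident, days: int = 15) -> date:
    """Latest reporting date under the general Art. 73 window
    (use a shorter `days` for the accelerated cases)."""
    return incident.occurred + timedelta(days=days)

def unreported_serious(incidents: list[Incident]) -> list[Incident]:
    """Serious incidents still awaiting a report — should always be empty."""
    return [i for i in incidents if i.serious and not i.reported]
```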
Step 5: Choose compliant infrastructure
- Deploy AI inference and monitoring on EU-incorporated PaaS to avoid CLOUD Act conflicts with Art. 12 logs and Art. 9 monitoring data
- Ensure data residency for personal data used in AI monitoring satisfies GDPR + national data protection law
Summary
The EU AI Act creates a staged compliance path for developers deploying AI in the EU:
- Prohibited practices (Art. 5): Already in force since February 2025 — immediate compliance required
- GPAI models (Art. 51–55): In force since August 2025 — foundation model providers already under obligation
- High-risk AI (Annex III, Art. 43): Conformity assessment required before 2 August 2026
For most developers, the operative questions are whether your AI falls under Annex III and whether you need a notified body or can self-assess. Most Annex III categories allow self-assessment — but the documentation and risk management system requirements are substantial regardless.
Infrastructure jurisdiction matters for three compliance areas: record-keeping (Art. 12), post-market monitoring (Art. 9), and access to conformity documentation (Art. 11). Running high-risk AI systems on EU-native infrastructure eliminates cross-jurisdictional conflicts between EU market surveillance authority access rights and US CLOUD Act obligations — a structural simplification that reduces legal overhead for any company operating under both frameworks.
The AI Act's conformity assessment regime is modeled on existing EU product safety law (CE marking, Notified Bodies, EU Declarations of Conformity). Developers already familiar with GDPR data protection impact assessments (DPIAs) will recognise the risk-management-as-documentation pattern — the AI Act formalises it with heavier documentation requirements and a public registration database.
See Also
- EU AI Act Article 9: Formal Verification for High-Risk AI — deep dive on the Art. 9 risk management system requirement and how formal methods satisfy it
- EU AI Act: Hosting Compliance and EU-Native Infrastructure — how infrastructure jurisdiction affects Art. 12 logging and Art. 9 monitoring compliance
- EU NIS2 Directive 2024: Critical Infrastructure and Formal Verification — cybersecurity obligations that overlap with AI Act Art. 15 for high-risk AI systems
- EU Cyber Resilience Act 2027: Open-Source PaaS Developer Checklist — parallel regulation for software product security that stacks with AI Act obligations