2026-05-04 · 12 min read · sota.io team

Most EU AI Act compliance guides spend their time on obligations — logging, documentation, conformity assessment. Far fewer developers are asking the more fundamental question that comes before any obligation: is my system high-risk at all?

The EU AI Act Omnibus proposal, currently in trilogue, does not merely adjust deadlines and fine caps. It also rewrites the scope of Annex III — the list of eight categories that makes an AI system "high-risk" under Art.6. Products that fall inside Annex III face the full weight of Art.8–25 obligations. Products that fall outside face only transparency requirements under Art.50.

That is not a marginal distinction. It is the difference between a compliance programme costing six figures and one costing weeks.

Trilogue #3 meets on May 13, 2026. Whether or not it reaches agreement, the category-by-category analysis below will define your compliance scope.


How Annex III Works Today

Under Art.6(2) of the current EU AI Act (Regulation (EU) 2024/1689), an AI system is high-risk if it falls within one of Annex III's eight categories and is not explicitly excluded by the product category's own scope condition.

The eight current categories are:

  1. Biometric identification and categorisation — real-time and post-hoc remote biometric identification systems
  2. Critical infrastructure management — AI used as safety components in water, gas, electricity, transport, and road management
  3. Education and vocational training — AI determining access, admission, or assignment to educational institutions or evaluating learning outcomes
  4. Employment, workers management, and access to self-employment — AI for recruitment, selection, promotion, assignment, and performance monitoring
  5. Access to essential private services and public services and benefits — AI for creditworthiness evaluation, risk assessment for life and health insurance, emergency dispatch, and public benefits eligibility
  6. Law enforcement — AI for individual risk assessment in crime prevention, polygraphs, evidence reliability evaluation, and profiling
  7. Migration, asylum, and border control management — AI for risk assessment of irregular migration, document authenticity, asylum assessment
  8. Administration of justice and democratic processes — AI for individual risk assessment in judicial proceedings and assisting in interpreting facts and law

Each category has sub-conditions. Not every AI system touching one of these sectors falls inside the category — only systems performing specific functions within them.


The Omnibus Amendments: Category by Category

The Omnibus proposal (Commission draft, circulating in trilogue) introduces changes in six of the eight categories. Law enforcement (Pt.6) and migration/asylum (Pt.7) are not substantively amended in the current text.

Annex III Point 1 — Biometrics

Current scope: Real-time and post-hoc remote biometric identification systems; biometric categorisation inferring sensitive attributes; emotion recognition systems.

Omnibus amendment: Emotion recognition systems are carved out for medical and research purposes when the system operator is a licensed healthcare provider or accredited research institution. Post-hoc biometric identification for law enforcement under judicial warrant is moved to a separate track with expedited conformity assessment rather than full Art.8–25 obligations.

Developer impact: If you build emotion recognition for a therapy platform integrated with an EHR (Electronic Health Record) system operated by a licensed provider, you may fall outside the high-risk category. The carve-out does not apply to wellness apps, HR stress-detection tools, or consumer emotion feedback products — those remain inside Pt.1.

Annex III Point 2 — Critical Infrastructure

Current scope: AI used as safety components in managing critical infrastructure.

Omnibus amendment: The category is extended to explicitly include digital infrastructure — cloud platforms, network management systems, and data centre automation tools are added alongside the existing energy, water, and transport systems. This is an expansion, not a narrowing.

Developer impact: If your SaaS includes AI-driven auto-scaling, anomaly detection, or incident response for cloud or network infrastructure, you are potentially newly inside Pt.2 as of the Omnibus. Review your architecture against the updated text before assuming you are outside scope.

Annex III Point 3 — Education and Vocational Training

Current scope: AI determining access to or admission to educational establishments; AI evaluating learning outcomes.

Omnibus amendment: Automated proctoring systems (exam surveillance AI) are specifically brought into scope as a named example. Simultaneously, AI that only assists a human examiner without making binding determinations is moved to a lighter track — it must comply with Art.50 transparency obligations but not the full Art.8–25 high-risk programme.

Developer impact: If you build AI exam proctoring that flags suspected violations for human review, and the human always makes the final determination, you may qualify for the lighter Art.50-only track. If your system outputs a score that feeds directly into pass/fail decisions without an independent human review step, you remain high-risk under Pt.3.

Annex III Point 4 — Employment

Current scope: AI for recruitment, selection, promotion, assignment, and performance monitoring.

Omnibus amendment: Two changes. First, an SME exemption: organisations with fewer than 250 employees and less than €50 million annual turnover that deploy, but do not develop, employment AI are exempt from the Art.9 risk management system, Art.10 data governance, and Art.14 human oversight obligations — but must still maintain incident registers under Art.20. Second, the scope is clarified to require that the AI system materially affects an employment decision; purely analytical tools that only surface data without recommendation are excluded.

Developer impact: If you build an HR analytics platform sold to SME customers, your customers may be exempt from the heaviest obligations. You, as the developer/provider, are not exempt — you still bear Art.8–25 obligations. The SME exemption is a deployer-side provision, not a provider-side one.
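On our reading, the deployer-side relief reduces to three checks. A sketch, with parameter names of our own choosing:

```python
def sme_deployer_relief(employees: int, turnover_eur_million: float,
                        deployer_only: bool) -> bool:
    """Pt.4 relief sketch: applies only to organisations that deploy but
    do not develop employment AI, under both SME thresholds."""
    return deployer_only and employees < 250 and turnover_eur_million < 50.0

print(sme_deployer_relief(180, 12.0, deployer_only=True))   # True
print(sme_deployer_relief(180, 12.0, deployer_only=False))  # False: providers get no relief
```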

Annex III Point 5 — Essential Private Services and Public Benefits

Current scope: Creditworthiness evaluation and credit scoring; risk assessment and pricing for life and health insurance; emergency dispatch AI; eligibility assessment for public benefits and services.

Omnibus amendment: The creditworthiness threshold is raised. Under the current text, any AI performing creditworthiness evaluation is high-risk. Under the Omnibus, purely statistical credit scoring systems that do not use sensitive special-category attributes (health, religion, ethnicity, political opinion) and operate on purely financial transaction history are moved to an Art.50-only transparency track. Full Art.8–25 obligations remain for any system that accesses or infers special-category data.

Emergency dispatch AI remains fully high-risk without amendment. Public benefits eligibility AI remains high-risk. Insurance pricing AI remains high-risk.

Developer impact: If you build credit scoring that is purely behavioural and financial (payment history, debt ratios, transaction velocity) and does not infer health status, religion, or ethnicity, you may qualify for the lighter track under the Omnibus. Document your feature set carefully — the moment your model ingests postal code data that functions as a proxy for ethnicity or health risk, you are back inside full Pt.5 scope.
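A feature audit of the kind described above can be sketched as a simple set intersection. The feature names and the proxy list below are illustrative assumptions, not an official taxonomy; a real audit needs statistical proxy testing, not just name matching.

```python
ART9_DIRECT = {"health_status", "religion", "ethnicity", "political_opinion"}
# Variables that can function as proxies for special-category attributes
PROXY_SUSPECTS = {"postal_code", "first_name", "surname", "birthplace"}

def audit_features(features: list[str]) -> dict:
    """Classify each model input as clean, direct Art.9, or proxy-suspect."""
    direct = sorted(set(features) & ART9_DIRECT)
    proxies = sorted(set(features) & PROXY_SUSPECTS)
    return {
        "direct_art9": direct,
        "proxy_suspects": proxies,
        # The lighter Art.50 track is only plausible with no direct Art.9
        # inputs and every proxy suspect reviewed and justified
        "candidate_for_art50_track": not direct and not proxies,
    }

print(audit_features(["payment_history", "debt_ratio", "postal_code"]))
# {'direct_art9': [], 'proxy_suspects': ['postal_code'], 'candidate_for_art50_track': False}
```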

Annex III Point 8 — Justice and Democratic Processes

Current scope: AI assisting judicial authorities in fact interpretation and law application; AI influencing electoral and democratic processes.

Omnibus amendment: AI used for drafting legal documents for human review — without output directly influencing a judicial decision — is moved to an Art.50-only track. Electoral process AI remains fully high-risk. AI deployed in court proceedings by judicial authorities remains high-risk.

Developer impact: Legal tech products that produce draft contracts, briefs, or pleadings for attorney review do not directly influence judicial determinations. If your product fits that description, you may fall outside the high-risk category under the Omnibus text. Products that predict recidivism, sentencing likelihood, or fraud probability in judicial contexts remain fully high-risk.


The Art.6 De Minimis Threshold

The most consequential change in the Omnibus is not a specific Annex III amendment — it is a new paragraph in Art.6 itself.

The Omnibus draft inserts Art.6(3) (numbering in current draft versions), which provides that an AI system that would otherwise fall within Annex III is not high-risk if it satisfies all of the following:

  1. Limited autonomy condition: The system performs a narrow, predefined task and does not produce outputs that are binding or directly determinative of the outcome at stake.
  2. Human review condition: A human with appropriate domain competence reviews the AI output before any decision with legal or significant practical effect is made.
  3. No sensitive data condition: The system does not process special-category personal data under GDPR Art.9 as a primary function.
  4. Documentation condition: The provider has prepared and maintains a technical summary documenting the above conditions.

This de minimis provision is designed to exclude from the high-risk regime the large class of AI systems that assist human decision-making rather than substituting for it — support tools, suggestion engines, document summarisers, and research aids used in high-stakes sectors.

The documentation condition is not optional. The Art.6(3) exclusion does not self-apply. A provider that believes its system qualifies must prepare a technical summary and register it in the EU AI Act database. Failure to document the exclusion means the full Art.8–25 regime applies by default.
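The four conditions map naturally onto a structured record. The field names below are our own illustration of what an Art.6(3) technical summary might capture; the draft prescribes no template.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Art63Summary:
    """Sketch of an Art.6(3) technical summary record (field names are
    illustrative, not a prescribed format)."""
    system_name: str
    narrow_predefined_task: bool      # condition 1: limited autonomy
    human_review_before_effect: bool  # condition 2: competent human review
    no_art9_primary_function: bool    # condition 3: no special-category data
    summary_maintained: bool          # condition 4: documentation kept current

    def qualifies(self) -> bool:
        # All four conditions must hold; failing any one means the full
        # Art.8-25 regime applies by default
        return all([self.narrow_predefined_task,
                    self.human_review_before_effect,
                    self.no_art9_primary_function,
                    self.summary_maintained])

    def to_registration_json(self) -> str:
        """Serialise the summary for database registration."""
        return json.dumps({**asdict(self),
                           "art6_3_exclusion_claimed": self.qualifies()})
```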


Two-Scenario Compliance Calendar

Scenario A: Trilogue #3 Agrees (May 13, 2026)

If trilogue reaches agreement on May 13, the Omnibus text enters the fast-track adoption procedure, and prior Regulation adoption timelines suggest publication in the Official Journal and entry into force within months.

This means the Omnibus amendments would enter into force after the August 2, 2026 GPAI deadline but before the August 2, 2027 high-risk systems deadline. The practical consequence: the reclassified systems in this guide would be assessed against the Omnibus-amended Annex III when the high-risk deadline arrives in August 2027. The August 2026 GPAI and Art.50 obligations are unaffected.

Developer action: If your system is in a category that the Omnibus narrows, document your position under Art.6(3) now. Do not wait for publication — the documentation itself is evidence of good faith compliance.

Scenario B: Trilogue #3 Fails (No Agreement)

If May 13 produces no agreement, the Cypriot Presidency's 30 June 2026 deadline for the Omnibus becomes effectively impossible. The text would pass to the Irish and Lithuanian presidencies (H2 2026/H1 2027) with no guarantee of finalisation.

In this scenario, the current Annex III text remains the operative law, and the August 2, 2027 high-risk deadline stands unchanged.

Developer action: Run your classification under the current Annex III. If you are in scope under current text, assume you will face the August 2027 deadline with no Omnibus relief.


Annex III Classification Decision Tree

Use this simplified checklist to determine your position:

Step 1: Does your system's primary output serve one of the eight Annex III categories?
→ If no: Not high-risk. Apply Art.50 transparency obligations only.
→ If yes: Continue to Step 2.

Step 2: Does the Omnibus amendment for that category narrow or qualify the scope in a way that covers your system?
→ If yes: Read the amendment carefully against your architecture; you may be on the lighter Art.50-only track.
→ If no: Continue to Step 3.

Step 3: Does your system meet the Art.6(3) de minimis conditions (limited autonomy, human review, no Art.9 data, documented)?
→ If yes: Document the exclusion and register it. Not high-risk.
→ If no: Continue to Step 4.

Step 4: You are provisionally high-risk. Confirm with your legal counsel and begin Art.8–25 compliance programme planning.


Infrastructure Implications

High-risk AI systems under Art.8–25 face infrastructure requirements that do not apply to other AI systems: logging (Art.12), technical documentation (Art.11), and post-market monitoring (Art.72).

The Art.12 logging requirement is the one most frequently underestimated. Log entries must be sufficient to reconstruct the sequence of events leading to each decision. Under the CLOUD Act, log data stored on US infrastructure is accessible to US federal law enforcement without the subject's knowledge — a structural tension with the GDPR's data minimisation principle (Art.5(1)(c)) and the confidentiality expectations of many high-risk AI deployment contexts (healthcare, legal, finance).

EU-native infrastructure for log storage eliminates this exposure. Logs held on servers within EU legal jurisdiction under contracts with EU-incorporated entities are not subject to CLOUD Act production orders.


Python: Annex III Self-Classification Checker

from dataclasses import dataclass

@dataclass
class AnnexIIIChecker:
    """
    Provisional Annex III classification checker for the Omnibus-amended EU AI Act.
    Reference: EU AI Act (2024/1689) Annex III + Omnibus draft amendments (April 2026).
    Not legal advice. Confirm with qualified EU legal counsel.
    """

    primary_output_domain: str  # biometrics|critical_infra|education|employment|services|law_enforcement|migration|justice
    makes_binding_determinations: bool
    human_review_before_effect: bool
    processes_art9_data: bool
    organisation_size_employees: int
    organisation_revenue_eur_million: float
    is_developer_or_provider: bool  # True = developer/provider; False = deployer only

    def classify(self) -> dict:
        domain = self.primary_output_domain
        result = {"domain": domain, "high_risk": None, "basis": "", "action": ""}

        # Law enforcement and migration: no Omnibus narrowing
        if domain in ("law_enforcement", "migration"):
            result.update({
                "high_risk": True,
                "basis": "Annex III Pt.6/Pt.7 — no Omnibus amendment",
                "action": "Begin Art.8-25 compliance programme immediately.",
            })
            return result

        # Art.6(3) de minimis check
        if (not self.makes_binding_determinations
                and self.human_review_before_effect
                and not self.processes_art9_data):
            result.update({
                "high_risk": False,
                "basis": "Art.6(3) de minimis — limited autonomy, human review, no Art.9 data",
                "action": "Prepare Art.6(3) technical summary and register exclusion. Apply Art.50 transparency.",
            })
            return result

        # SME deployer exemption (Pt.4 employment only)
        if (domain == "employment"
                and not self.is_developer_or_provider
                and self.organisation_size_employees < 250
                and self.organisation_revenue_eur_million < 50):
            result.update({
                "high_risk": True,
                "basis": "Annex III Pt.4 — SME deployer: reduced obligations",
                "action": "Maintain incident register (Art.20). Art.9/10/14 obligations waived. Full Art.8-25 applies to your provider.",
            })
            return result

        # Domain-specific classification
        if domain == "biometrics":
            result["high_risk"] = True
            result["basis"] = "Annex III Pt.1 — emotion recognition carve-out for licensed healthcare/research only"
        elif domain == "critical_infra":
            result["high_risk"] = True
            result["basis"] = "Annex III Pt.2 — expanded to include digital infrastructure (Omnibus)"
        elif domain == "education":
            if self.makes_binding_determinations:
                result["high_risk"] = True
                result["basis"] = "Annex III Pt.3 — AI output determines access/outcome without independent human review"
            else:
                result["high_risk"] = False
                result["basis"] = "Annex III Pt.3 — assisting role only, Omnibus lighter track"
                result["action"] = "Apply Art.50 transparency obligations."
        elif domain == "services":
            if self.processes_art9_data:
                result["high_risk"] = True
                result["basis"] = "Annex III Pt.5 — credit/insurance AI using special-category data"
            else:
                result["high_risk"] = False
                result["basis"] = "Annex III Pt.5 — purely financial/statistical scoring, Omnibus lighter track"
                result["action"] = "Apply Art.50 transparency. Document exclusion from full Pt.5."
        elif domain == "employment":
            # Provider/developer side: Pt.4 stays high-risk; the SME relief
            # handled above is deployer-side only
            result["high_risk"] = True
            result["basis"] = "Annex III Pt.4 — provider of employment AI; SME relief is deployer-side only"
        elif domain == "justice":
            if self.makes_binding_determinations:
                result["high_risk"] = True
                result["basis"] = "Annex III Pt.8 — judicial decision support that is binding"
            else:
                result["high_risk"] = False
                result["basis"] = "Annex III Pt.8 — legal drafting for human review, Omnibus lighter track"
                result["action"] = "Apply Art.50 transparency obligations."
        else:
            result["high_risk"] = True
            result["basis"] = "Annex III — domain not narrowed by Omnibus"

        if result["high_risk"] and not result.get("action"):
            result["action"] = "Begin Art.8-25 compliance programme. Document risk management, data governance, and logging."

        return result


# Example: HR screening SaaS sold to a 180-person company deployer
checker = AnnexIIIChecker(
    primary_output_domain="employment",
    makes_binding_determinations=True,
    human_review_before_effect=False,
    processes_art9_data=False,
    organisation_size_employees=180,
    organisation_revenue_eur_million=12.0,
    is_developer_or_provider=True,
)
print(checker.classify())
# {'domain': 'employment', 'high_risk': True, ...} — as the provider you bear
# full Art.8-25 obligations; Pt.4 SME relief applies only to your deployer customers.

What This Means for Hosting

The Omnibus does not change the geographic scope of the EU AI Act — it applies to providers and deployers who place AI systems on the EU market or put them into service in the EU, regardless of where the provider is established (Art.2(1)).

High-risk AI systems require logging (Art.12), technical documentation (Art.11), and post-market monitoring data (Art.72) that persists for the operational life of the system. Under Art.12(1), the logging must be in a form that enables "the reconstitution of circumstances and events during the system's operation."

For healthcare, legal, or financial AI operating under sector-specific confidentiality obligations, storing these logs on US-incorporated infrastructure creates a structural legal problem: the CLOUD Act (18 U.S.C. § 2713) permits US government production orders for data held by US companies anywhere in the world. GDPR Art.48 prohibits disclosing personal data in response to non-EU legal orders unless based on an international agreement such as a mutual legal assistance treaty. The two obligations are in direct conflict.

EU-native infrastructure — compute, storage, and network operated by EU-incorporated entities with no US parent — removes the conflict at the source. There is no CLOUD Act order path to data that no US entity controls.

sota.io is an EU-incorporated PaaS with no US parent. Deploy your Art.12 logging infrastructure in the EU, under EU legal jurisdiction, with a managed PostgreSQL database and zero-configuration deployment. Free tier available.


Checklist: Before Trilogue #3 on May 13

  1. Which of the eight Annex III categories, if any, does your system's primary output serve?
  2. Does the Omnibus amendment for that category narrow the scope in a way that covers your architecture?
  3. Do you satisfy all four Art.6(3) de minimis conditions, and is the technical summary prepared?
  4. Are you the provider or the deployer? Pt.4 SME relief is deployer-side only.
  5. Where are your Art.12 logs stored, and under whose legal jurisdiction?

The answers to these questions do not change on May 13. What changes is how long you have to act on them.

EU-Native Hosting

Ready to move to EU-sovereign infrastructure?

sota.io is a German-hosted PaaS — no CLOUD Act exposure, no US jurisdiction, full GDPR compliance by design. Deploy your first app in minutes.