EU AI Act Article 6: Is Your AI System High-Risk? The Complete Classification Guide for Developers 2026

The single most consequential question under the EU AI Act is whether your AI system qualifies as "high-risk" under Article 6. Get it wrong in either direction — assuming you're not high-risk when you are, or over-complying when you aren't — and you either face enforcement risk or waste months on unnecessary conformity work.

With the EU AI Act Omnibus Deal closed on 7 May 2026, the classification rules are final. The Chapter II prohibited-practices rules have applied since 2 February 2025; the high-risk obligations apply from 2 August 2026 for Annex III standalone systems and from 2 August 2027 for Annex I products. You need to know where your product lands now.

This guide covers exactly how Article 6 works: the two classification paths, the full Annex III category list with SaaS-relevant examples, what the Omnibus Deal changed, and the decision tree developers should run before August.

Why Article 6 Is the Threshold That Changes Everything

The EU AI Act's Chapter III obligations — risk management systems (Article 9), data governance (Article 10), transparency documentation (Article 13), human oversight (Article 14), accuracy and robustness standards (Article 15) — only apply to AI systems classified as high-risk under Article 6.

For non-high-risk systems, the obligations are substantially lighter: a voluntary code of conduct, basic transparency if your system interacts with humans (Article 50), and the general prohibited practices rules from Chapter II. For high-risk systems, you're looking at mandatory conformity assessments, extensive technical documentation, registration in an EU database, post-market monitoring, and serious incident reporting.

The gap between "not high-risk" and "high-risk" is roughly the difference between a voluntary code of conduct and a fully regulated product. Article 6 is where that line is drawn.

The Two Paths to High-Risk Classification

Article 6 creates two distinct routes by which an AI system becomes high-risk:

Path 1 — Article 6(1): Safety Components of Regulated Products (Annex I)

An AI system is high-risk under Article 6(1) if both of the following are true:

  1. The AI system is used as a safety component of a product covered by Union harmonisation legislation listed in Annex I, or the AI system is itself a product covered by that legislation.
  2. The product (including its AI safety component) is required to undergo a third-party conformity assessment under that Union harmonisation legislation.
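
Expressed as code, the Article 6(1) test is a simple conjunction. The sketch below is a minimal illustration; the boolean inputs stand in for legal determinations that have to be made and documented by humans, not values you can compute:

def high_risk_under_article_6_1(
    is_safety_component_of_annex_i_product: bool,
    is_itself_annex_i_product: bool,
    requires_third_party_conformity_assessment: bool,
) -> bool:
    # Condition 1: safety component of an Annex I product, or itself such a product.
    condition_1 = is_safety_component_of_annex_i_product or is_itself_annex_i_product
    # Condition 2: the product must undergo third-party conformity assessment.
    condition_2 = requires_third_party_conformity_assessment
    # Both conditions must hold for Article 6(1) classification.
    return condition_1 and condition_2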

Annex I covers regulated product sectors including machinery, toys, lifts, radio equipment, pressure equipment, medical devices, in vitro diagnostic medical devices, civil aviation, motor vehicles, agricultural vehicles, rail systems, and marine equipment.

What this means for SaaS developers: If you are building AI features that plug into products in these sectors — AI-powered diagnostic decision support for a medical device, an AI-driven control component for industrial machinery, AI features in a connected vehicle's safety system — and the underlying product requires a notified body conformity assessment, your AI component is automatically high-risk regardless of what it does specifically.

Most pure SaaS applications are not in Annex I territory. But if your product integrates into medical, industrial, or transport hardware, this path is the one to audit first.

Path 2 — Article 6(2): Standalone High-Risk AI Systems (Annex III)

Article 6(2) makes AI systems in specific application areas automatically high-risk, regardless of whether they are connected to a regulated product. These are listed in Annex III and cover eight categories:

Category 1 — Biometrics: AI systems for remote biometric identification (beyond simple verification of identity), biometric categorisation based on sensitive or protected attributes, and emotion recognition.

Category 2 — Critical Infrastructure: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.

Category 3 — Education and Vocational Training: AI systems used to determine access to, or assign persons to, educational institutions or vocational training programmes, and AI systems used to evaluate students and assess learning outcomes, including for detecting prohibited behaviour during tests.

Category 4 — Employment, Workers Management, and Access to Self-Employment: AI systems used for recruitment and selection of persons (CV sorting and filtering, placing targeted job advertisements, evaluating candidates, interview screening), for making decisions on promotion or termination, and for task allocation or performance and behaviour monitoring of workers.

Category 5 — Access to and Enjoyment of Essential Private Services and Essential Public Services: AI systems used by public authorities or on their behalf to evaluate eligibility for public benefits and services, or to grant, reduce, revoke, or reclaim them; AI systems for credit scoring or establishing creditworthiness; and AI systems for risk assessment and pricing of health and life insurance.

Category 6 — Law Enforcement: AI systems used by law enforcement authorities or on their behalf for individual risk assessment, polygraph-type testing, evaluation of the reliability of evidence, prediction of criminal offences, profiling in criminal investigations, and crime analytics.

Category 7 — Migration, Asylum, and Border Control Management: AI systems used by competent authorities for risk assessments of persons in the context of irregular migration, for examining applications for asylum or visas, and for verifying the authenticity of travel documents.

Category 8 — Administration of Justice and Democratic Processes: AI systems intended to assist judicial authorities in researching, interpreting, and applying the law and in resolving disputes, as well as AI systems used to influence the outcome of elections.

What the May 2026 Omnibus Deal Changed

The Omnibus Deal of 7 May 2026 introduced important modifications to the Article 6 classification rules, primarily through adjustments to Annex III scope and the introduction of a de minimis exception.

The Annex III Scope Narrowing: The Omnibus agreement clarified that several Annex III entries apply only when the AI system performs a decision function, not merely a supportive or advisory function. An AI system that presents a human decision-maker with structured information about a candidate, without ranking or scoring them directly, may fall outside the relevant Annex III entry even if used in hiring. The final text requires that the AI system "make decisions or materially influence decisions" to qualify in certain categories.

The De Minimis Exception (New Article 6(3)): The Omnibus Deal introduced a new Article 6(3) provision (confirmed in the final text) that excludes AI systems from high-risk classification even when their intended purpose falls within Annex III, provided at least one of the following conditions is met:

  1. The AI system performs a narrow procedural task that does not itself make or materially influence decisions affecting individuals.
  2. The AI system is used to improve the result of a previously completed human activity without replacing the human judgment that determined the outcome.
  3. The AI system detects patterns or anomalies in aggregated data and its output cannot be directly used as a basis for decisions affecting individuals without substantial independent human review.

This exception is practically significant for AI features that assist human analysts, surface insights, or flag items for review — as long as the human decision-maker acts independently and the AI is not the proximate cause of the decision affecting an individual.

Provider vs. Deployer Clarification: The Omnibus also sharpened the definition boundary between "providers" (who place AI systems on the market or put them into service) and "deployers" (who use AI systems in professional contexts). For many SaaS developers, this distinction determines which of the high-risk obligations apply to you and which apply to your enterprise customers.

The Developer Classification Decision Tree

Run this decision tree for each discrete AI feature in your product:

1. Does this AI feature fall under Annex I products 
   (medical, machinery, transport)?
   → YES: Does the product require notified body conformity assessment?
          → YES: HIGH-RISK (Article 6(1))
          → NO: Continue to step 2
   → NO: Continue to step 2

2. Does the intended purpose match any Annex III category?
   (biometrics, critical infrastructure, education, employment, 
    essential services, law enforcement, migration, justice)
   → NO: Not high-risk under Article 6 (check Article 50 transparency separately)
   → YES: Continue to step 3

3. Does the AI system "make or materially influence" 
   decisions affecting individuals?
   → NO: Potentially eligible for Article 6(3) de minimis exception
          → Does it perform only a narrow procedural task
            (or does another Article 6(3) condition apply)?
          → YES: Not high-risk (document your reasoning)
          → NO: HIGH-RISK (Article 6(2))
   → YES: Continue to step 4

4. Does any of the Article 6(3) de minimis conditions apply?
   (narrow procedural task / improvement of completed human activity /
    aggregated-data anomaly detection with independent human review)
   → YES: Not high-risk — but document this determination thoroughly
   → NO: HIGH-RISK (Article 6(2))
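
The same tree can be captured in code. The following Python sketch collapses steps 3 and 4 into a single Article 6(3) check; every field name is illustrative, and each boolean stands in for a documented legal judgment about the feature, not a property you can derive from the system itself:

from dataclasses import dataclass
from enum import Enum, auto

class Classification(Enum):
    HIGH_RISK_ART_6_1 = auto()
    HIGH_RISK_ART_6_2 = auto()
    NOT_HIGH_RISK = auto()  # still check Article 50 transparency separately

@dataclass
class Feature:
    # Each flag records a documented legal judgment, not a computed property.
    annex_i_product: bool                      # step 1: Annex I safety component or product
    needs_notified_body_assessment: bool       # step 1: third-party conformity assessment
    annex_iii_purpose: bool                    # step 2: intended purpose matches Annex III
    materially_influences_decisions: bool      # step 3
    narrow_procedural_task: bool               # Article 6(3) condition 1
    improves_completed_human_activity: bool    # Article 6(3) condition 2
    anomaly_detection_with_human_review: bool  # Article 6(3) condition 3

def classify(f: Feature) -> Classification:
    # Step 1: Annex I path (Article 6(1)) requires both conditions.
    if f.annex_i_product and f.needs_notified_body_assessment:
        return Classification.HIGH_RISK_ART_6_1
    # Step 2: intended purpose outside Annex III ends the Article 6 analysis.
    if not f.annex_iii_purpose:
        return Classification.NOT_HIGH_RISK
    # Steps 3-4: any Article 6(3) de minimis condition defeats classification.
    de_minimis = (
        (f.narrow_procedural_task and not f.materially_influences_decisions)
        or f.improves_completed_human_activity
        or f.anomaly_detection_with_human_review
    )
    if de_minimis:
        return Classification.NOT_HIGH_RISK  # document the determination thoroughly
    return Classification.HIGH_RISK_ART_6_2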

Practical Examples for SaaS Developers

HR/Recruiting SaaS with AI-Powered CV Screening: If your platform ranks or scores candidates and customers use that ranking to decide who to interview, this likely qualifies under Annex III Category 4 (employment, recruitment). The "materially influences" threshold is probably met. High-risk classification is the likely outcome unless your system only presents structured data and the human interviewer independently evaluates all candidates regardless of AI output.

Customer Support AI That Routes Tickets: If this is purely internal to your operations (not deployed by your customers to assess their customers for essential services), it likely falls outside Annex III entirely. Not high-risk.

AI-Powered Credit Pre-Screening Feature in a Fintech Product: Category 5 squarely covers creditworthiness assessment. If the AI system's output is used to make or influence credit decisions, this is high-risk. The fact that a human final decision-maker exists does not remove the classification if the AI output materially shapes that decision.

AI Anomaly Detection in a SIEM/Security Product: If this is used by law enforcement for criminal risk profiling, Category 6 applies. If used by private enterprises for general security monitoring (network intrusion detection, log anomaly alerting), it likely falls outside Annex III. The key is the deployer context — your same product may be high-risk for law enforcement customers and not high-risk for commercial enterprise customers.

Chatbot That Interacts with Public-Service Users: If the chatbot determines or substantially shapes eligibility decisions for public benefits or services (Category 5), it is high-risk. If it is a general FAQ assistant that never makes eligibility determinations and always routes to a human for decisions, the Article 6(3) exception may apply — but document the reasoning explicitly.
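
Running the classify sketch from the decision-tree section over two of these examples makes the divergence concrete (the boolean values encode the judgments argued in the paragraphs above):

cv_screening = Feature(
    annex_i_product=False, needs_notified_body_assessment=False,
    annex_iii_purpose=True,                 # Annex III Category 4 (employment)
    materially_influences_decisions=True,   # ranking drives interview decisions
    narrow_procedural_task=False,
    improves_completed_human_activity=False,
    anomaly_detection_with_human_review=False,
)
ticket_routing = Feature(
    annex_i_product=False, needs_notified_body_assessment=False,
    annex_iii_purpose=False,                # no Annex III category matches
    materially_influences_decisions=False,
    narrow_procedural_task=True,
    improves_completed_human_activity=False,
    anomaly_detection_with_human_review=False,
)

print(classify(cv_screening))    # Classification.HIGH_RISK_ART_6_2
print(classify(ticket_routing))  # Classification.NOT_HIGH_RISK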

The High-Risk Consequences: What Classification Triggers

If your AI system is high-risk under Article 6, then as its provider you are subject to the full Chapter III obligations: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), transparency to deployers (Article 13), human oversight (Article 14), accuracy and robustness (Article 15), conformity assessment, registration in the EU database, post-market monitoring, and serious incident reporting.

The compliance timeline: Annex III standalone system providers must comply by 2 August 2026; Annex I product providers have until 2 August 2027.

Why Your Hosting Infrastructure Matters for High-Risk AI Compliance

Article 12 requires that high-risk AI systems automatically log interactions, events, and decisions throughout their operational lifetime. These logs are a core compliance artifact — they're what supervisory authorities will request during audits and what deployers need for post-market monitoring.
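
Article 12 mandates automatic event logging but does not prescribe a schema, so the record format below is purely illustrative. One defensible pattern is an append-only log of hashed inputs and outputs per interaction, keeping raw personal data out of the audit trail itself:

import hashlib
import json
import sys
from datetime import datetime, timezone

def log_ai_event(model_version: str, input_payload: str, output_payload: str,
                 human_reviewer: str | None = None) -> None:
    # One record per AI interaction; hashing keeps raw personal data out of the log.
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(input_payload.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_payload.encode()).hexdigest(),
        "human_reviewer": human_reviewer,  # evidence trail for Article 14 oversight
    }
    # In production this would go to an append-only, EU-hosted store, not stdout.
    json.dump(record, sys.stdout)
    sys.stdout.write("\n")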

If your AI system runs on US-hosted infrastructure — AWS, Azure, GCP, or US-incorporated cloud providers operating EU data centers — those audit logs fall under CLOUD Act jurisdiction. Under 18 U.S.C. § 2713, US cloud providers can be compelled by US law enforcement to disclose data stored on non-US servers without EU legal authority.

For high-risk AI systems under the EU AI Act, this creates a direct conflict: the audit logs you are required to keep as compliance evidence remain exposed to disclosure orders issued outside any EU legal process, which undercuts both GDPR transfer safeguards and your ability to demonstrate control over those compliance artifacts.

Deploying high-risk AI on EU-sovereign infrastructure — hosted by EU-incorporated providers with no US parent entity — eliminates this structural conflict. sota.io is incorporated and hosted entirely within the EU, with no US corporate parent and no CLOUD Act exposure, making it a compliant deployment substrate for teams building Article 6 high-risk AI systems that need defensible audit log sovereignty.

GDPR and Article 22 Interaction

High-risk AI systems frequently intersect with GDPR Article 22 (automated individual decision-making). Where a high-risk AI system makes fully automated decisions about natural persons that produce legal or similarly significant effects, both EU AI Act Chapter III and GDPR Article 22 apply simultaneously. This means you need a valid Article 22 basis (explicit consent, contractual necessity, or authorisation by Union or Member State law), and you must honour the data subject's rights to obtain human intervention, to express their point of view, and to contest the decision.

Providers building HR, credit, or benefit-determination AI systems should treat Article 22 GDPR compliance as a parallel requirement to EU AI Act Article 14 (human oversight) — they overlap substantially but are not identical.
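
One common architectural response, sketched below with illustrative names, is to make the AI output a recommendation object that can only become a decision through an explicit, recorded human action; this single gate produces evidence for both Article 14 oversight and the Article 22 right to human intervention:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    subject_id: str
    ai_output: dict
    model_version: str

@dataclass
class Decision:
    recommendation: Recommendation
    decided_by: str   # a named human reviewer, never a service account
    rationale: str    # the reviewer's own reasoning, recorded verbatim
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def finalize(rec: Recommendation, reviewer: str, rationale: str) -> Decision:
    # The only path from AI output to a decision runs through a human.
    if not reviewer or not rationale:
        raise ValueError("human reviewer and rationale are required")
    return Decision(recommendation=rec, decided_by=reviewer, rationale=rationale)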

Frequently Asked Questions

Does a "general purpose AI model" (GPAI) get classified under Article 6? GPAI models are regulated separately under Articles 51–56. A standalone GPAI model (like a foundation model you fine-tune or deploy) is not subject to Article 6 high-risk classification directly. However, when a GPAI model is integrated into a downstream AI system with a specific intended purpose that falls under Annex III, that downstream system can be high-risk. The GPAI provider has separate transparency obligations (Article 53); the downstream provider using the GPAI for a high-risk use case has Chapter III obligations.

If I am only a deployer (not a provider), do I have fewer obligations? Yes — deployers have a reduced set of obligations under Article 26, but they are still significant for Annex III systems: ensuring the technical implementation matches the provider's instructions, implementing human oversight, monitoring operations, and maintaining logs. The provider bears the core conformity obligation; the deployer is responsible for using the system as intended.

Can I scope my product to avoid high-risk classification? Yes, if done genuinely. Restricting the intended purpose in your technical documentation and contractual terms to exclude Annex III use cases can legitimately affect classification — but only if the system is not actually deployed in high-risk contexts by your customers. Nominal purpose restrictions that ignore actual use are unlikely to survive supervisory scrutiny.

What if my product could be used for both high-risk and non-high-risk applications? Article 6 classification is based on the "intended purpose" as defined by the provider in technical documentation, labelling, and instructions of use. If the intended purpose includes Annex III use cases for some configurations or customer segments, those configurations are high-risk. You may need to create separate product lines or explicit configuration gates to segment your compliance posture.
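
In code, such a gate can be as simple as a feature flag keyed to the customer segment's compliance status. The names below are hypothetical; the point is that Annex III-relevant features stay off unless the segment is explicitly covered by your high-risk programme:

# Annex III-relevant features are disabled by default and can only be
# enabled for customer segments covered by the high-risk compliance programme.
ANNEX_III_FEATURES = {"candidate_ranking", "credit_prescreening"}

CUSTOMER_SEGMENTS = {
    "enterprise_hr_eu": {"high_risk_programme": True},
    "smb_support_tools": {"high_risk_programme": False},
}

def feature_enabled(segment: str, feature: str) -> bool:
    if feature in ANNEX_III_FEATURES:
        return CUSTOMER_SEGMENTS.get(segment, {}).get("high_risk_programme", False)
    return True  # non-Annex III features are unrestricted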

Summary

Article 6 classification under the EU AI Act follows two paths: safety components of Annex I regulated products (the hardware-adjacent path), and standalone AI systems used in Annex III application areas (the software-direct path). The May 2026 Omnibus Deal introduced a de minimis exception for AI systems that perform narrow procedural tasks or support — rather than replace — human judgment. For SaaS developers building AI features in employment, credit, education, or public service contexts, the classification analysis is non-trivial and should be documented now, before the August 2026–August 2027 compliance window opens.

Running on EU-sovereign infrastructure is not a substitute for classification analysis — but it is a prerequisite for defensible audit log sovereignty once you conclude your system is high-risk.