2026-04-16 · 12 min read

EU AI Act Article 6: High-Risk AI Classification — The Annex III Gateway Guide for Developers

If Article 5 tells you what you cannot build, Article 6 tells you which systems carry the full compliance burden of Articles 9–15. Everything from risk management systems to technical documentation, logging, human oversight, and EU database registration — all of it flows from a single question: does your AI system qualify as "high-risk" under Article 6?

For most AI developers, Article 6 is the most consequential classification decision in the entire EU AI Act.

Two Classification Pathways

Article 6 creates two distinct routes to high-risk status:

Pathway 1: Article 6(1) — Safety Component in Annex I Products

An AI system is high-risk when both conditions are met:

  1. The AI system is intended to be used as a safety component of a product covered by the Annex I Union harmonisation legislation (e.g., machinery, medical devices, aviation, automotive), or is itself such a product
  2. The product is required to undergo third-party conformity assessment under that Union harmonisation legislation

The key qualifier here is "safety component." An AI system embedded in a medical device that performs scheduling or billing does not become high-risk merely by being in the device. It must directly perform a safety function — if it fails, could someone be harmed?

Annex I Products That Trigger 6(1):

| Product Category | Legislation | Example AI Use |
| --- | --- | --- |
| Machinery | Regulation (EU) 2023/1230 | Autonomous safety guards, collision detection |
| Medical Devices | Regulation (EU) 2017/745 | AI-assisted diagnosis, treatment planning |
| In-Vitro Diagnostics | Regulation (EU) 2017/746 | AI biomarker detection, pathology screening |
| Radio Equipment | Directive 2014/53/EU | AI spectrum management in safety equipment |
| Civil Aviation | Regulation (EU) 2018/1139 | Autopilot safety systems, ATC decision support |
| Marine Equipment | Directive 2014/90/EU | AI navigation safety systems |
| Railway | Directive (EU) 2016/797 | AI train control, collision avoidance |
| Motor Vehicles | Regulation (EU) 2019/2144 | ADAS, emergency braking, lane keeping |
| Agricultural Machinery | Regulation (EU) 167/2013 | Autonomous safety systems |
| Recreational Craft | Directive 2013/53/EU | AI navigation safety in leisure vessels |

If your AI system is embedded in any of these product categories, performs a safety function, and the product must undergo third-party conformity assessment, Article 6(1) applies. You inherit the conformity assessment pathway from the product legislation and must also satisfy Articles 9–15 of the EU AI Act.
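For a quick first-pass triage, the table above can be kept in code. This is an illustrative sketch only — the category keys and the `annex_i_triage` helper are invented for this example, and the dictionary covers just a subset of Annex I:

```python
# Illustrative subset of Annex I product legislation, taken from the table
# above; a real tool would carry the full Annex I list.
ANNEX_I_LEGISLATION = {
    "machinery": "Regulation (EU) 2023/1230",
    "medical_devices": "Regulation (EU) 2017/745",
    "in_vitro_diagnostics": "Regulation (EU) 2017/746",
    "motor_vehicles": "Regulation (EU) 2019/2144",
    "rail": "Directive (EU) 2016/797",
}

def annex_i_triage(product_category: str, performs_safety_function: bool) -> str:
    """Rough first-pass triage: does Article 6(1) deserve a closer look?"""
    if product_category not in ANNEX_I_LEGISLATION:
        return "not in this subset — check the full Annex I list"
    if not performs_safety_function:
        return "Annex I product, but not a safety component — 6(1) unlikely"
    return (f"possible Art. 6(1): safety component under "
            f"{ANNEX_I_LEGISLATION[product_category]} — verify the "
            f"third-party conformity requirement")
```

The helper deliberately never answers "high-risk" outright: the third-party conformity condition still has to be verified against the product legislation itself.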

Pathway 2: Article 6(2) — Direct Annex III Classification

An AI system is high-risk when it falls into any of the eight areas listed in Annex III. Under the Article 6(3) derogation, such a system escapes high-risk status only if it does not pose a "significant risk of harm to the health, safety or fundamental rights of natural persons." The Annex III categories are:

| Annex III Category | Scope | Representative Systems |
| --- | --- | --- |
| Cat. 1: Biometrics | Remote biometric identification, biometric categorization, emotion recognition | Face recognition for access control, workforce monitoring |
| Cat. 2: Critical Infrastructure | AI managing electricity, water, gas, road, rail, digital infrastructure | Grid balancing AI, traffic management systems |
| Cat. 3: Education | AI determining access or outcomes in education | Admissions scoring, exam proctoring, learning assessment AI |
| Cat. 4: Employment | Recruitment, performance monitoring, promotion, termination decisions | CV screening tools, employee performance ranking AI |
| Cat. 5: Essential Services | Credit scoring, insurance risk, emergency services dispatch | Loan decisioning AI, insurance underwriting models |
| Cat. 6: Law Enforcement | Individual risk assessment, polygraph, evidence reliability, crime analytics | Recidivism prediction, threat assessment tools |
| Cat. 7: Migration & Border | Asylum risk assessment, lie detection, irregular migration risk | Border screening AI, asylum case assessment |
| Cat. 8: Justice & Democracy | Legal research, dispute resolution, electoral influence assessment | AI legal analysis tools, judicial decision support |

The "Significant Risk" Qualifier

This is the most technically important clause in Article 6. Annex III classification alone does not settle the matter: under the Article 6(3) derogation, a system that does not pose a significant risk of harm is not high-risk — although an Annex III system that performs profiling of natural persons is always high-risk.

The European Commission is empowered to issue guidelines and delegated acts specifying how this qualifier applies — but as of the August 2026 enforcement date, those criteria are not fully settled. In practice, regulators treat most Annex III systems as posing significant risk unless one of the Article 6(3) conditions described below applies.

Practically: if your employment screening AI generates a score used in hiring, it is high-risk. If your AI generates a draft email to candidates that a human then rewrites and sends, it probably is not.
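That rule of thumb can be captured as a rough heuristic — emphatically not the legal test, just a first-pass screen whose three boolean inputs are invented for this sketch:

```python
def significant_risk_heuristic(
    output_feeds_decision_directly: bool,
    human_can_meaningfully_override: bool,
    affects_rights_or_access: bool,
) -> bool:
    """
    Heuristic sketch only — NOT the Article 6 legal test. Mirrors the rule
    of thumb above: output used directly in a decision that affects rights
    or access suggests significant risk; a human who genuinely rewrites the
    output lowers it.
    """
    if not affects_rights_or_access:
        return False
    if output_feeds_decision_directly:
        return True
    return not human_can_meaningfully_override
```

On the two examples above: a hiring score fed straight into the decision returns `True`; a draft email a human rewrites and sends returns `False`. Any real finding still needs documented legal analysis.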

Article 6(3): Exclusions from High-Risk Status

Even when a system falls within Annex III categories, Article 6(3) provides explicit exclusions:

Exclusion 1 — Narrow Task / No Significant Risk:
AI systems performing a narrow procedural task that poses no significant risk. Example: an Annex III Category 4 AI that automatically formats CVs for submission (formatting function, no evaluation of candidates).

Exclusion 2 — Improving Prior Human Decision:
AI that improves the result of a previously completed human activity. A post-review AI that checks whether a human made an internally consistent decision — without itself influencing any new decision — may qualify.

Exclusion 3 — Detection of Patterns Without Influence:
AI that detects patterns or anomalies in prior data without producing any output that influences decisions or actions affecting persons. Audit trail analysis that generates no decisions falls here.

Exclusion 4 — Preparatory Tasks Only:
AI performing preparatory tasks to the actual assessment. An AI that extracts named entities from documents for a human to then evaluate is excluded — if the human assessment is the real decision point.

Critical caveat: these exclusions are narrow. Article 6(3) explicitly states that a provider cannot avoid high-risk classification just by labeling the AI as performing a "preparatory" or "supporting" task if it materially influences outcomes. Regulators look at actual function, not marketing language.
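A minimal sketch of that gating logic — the condition labels and flag names are this example's own shorthand, not the Act's wording — makes the "actual function beats labeling" point concrete:

```python
from typing import Optional

# Shorthand labels for the four Article 6(3) conditions described above.
ART_6_3_CONDITIONS = {
    "narrow_procedural_task": "narrow procedural task",
    "improves_prior_human_decision": "improves a previously completed human activity",
    "pattern_detection_no_influence": "detects patterns without influencing decisions",
    "preparatory_task_only": "preparatory task to the actual assessment",
}

def evaluate_exclusion(
    claimed_condition: str,
    materially_influences_outcome: bool,
    performs_profiling: bool,
) -> Optional[str]:
    """Return an exclusion basis string, or None if no exclusion survives."""
    # Profiling of natural persons keeps an Annex III system high-risk
    # regardless of any claimed exclusion.
    if performs_profiling:
        return None
    # A "supporting" or "preparatory" label does not help if the system
    # materially influences outcomes — regulators look at actual function.
    if materially_influences_outcome:
        return None
    return ART_6_3_CONDITIONS.get(claimed_condition)
```

So a CV formatter with no influence on outcomes gets an exclusion basis, while the same claim over a tool that materially shapes hiring decisions returns `None`.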

What High-Risk Classification Triggers

When Article 6(1) or 6(2) applies, you must comply with all of the following before placing the system on the market or putting it into service:

| Article | Obligation | What It Requires |
| --- | --- | --- |
| Art. 9 | Risk Management System | Ongoing, iterative risk identification, analysis, evaluation, and mitigation throughout the lifecycle |
| Art. 10 | Training, Validation, Testing Data | Data governance, representativeness checks, bias detection, data quality criteria |
| Art. 11 | Technical Documentation (Annex IV) | Documentation package per Annex IV, maintained and kept up to date |
| Art. 12 | Logging | Automatic event logging over the system lifetime; logs retained for at least six months |
| Art. 13 | Transparency | Instructions for use with the mandatory content elements deployers need |
| Art. 14 | Human Oversight | Design measures enabling override, stop, and intervention |
| Art. 15 | Accuracy, Robustness, Cybersecurity | Performance metrics, resilience under perturbations, adversarial input handling |
| Art. 43 | Conformity Assessment | Internal control or notified-body review (notified body for Annex I products and, in some cases, biometrics) |
| Art. 49 | Registration | EU database registration before market placement |

This is the full compliance stack. Each of these requires documented, auditable evidence. None of this is a checkbox exercise — Articles 9 and 11 in particular require living documents updated throughout the system's lifecycle.
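One way to keep that evidence "living" is a simple tracker keyed by article. A sketch — the obligation labels are abbreviated from the table above, and the class and method names are assumptions for this example:

```python
from dataclasses import dataclass, field

# Abbreviated obligation labels for the nine articles in the table above.
OBLIGATIONS = {
    "Art.9": "risk management system",
    "Art.10": "data governance",
    "Art.11": "technical documentation",
    "Art.12": "logging",
    "Art.13": "transparency / instructions for use",
    "Art.14": "human oversight",
    "Art.15": "accuracy, robustness, cybersecurity",
    "Art.43": "conformity assessment",
    "Art.49": "EU database registration",
}

@dataclass
class ComplianceTracker:
    """Track auditable evidence artifacts per obligation."""
    evidence: dict[str, list[str]] = field(
        default_factory=lambda: {a: [] for a in OBLIGATIONS})

    def record(self, article: str, artifact: str) -> None:
        self.evidence[article].append(artifact)

    def gaps(self) -> list[str]:
        """Articles with no evidence on file — your audit exposure."""
        return [a for a, items in self.evidence.items() if not items]

tracker = ComplianceTracker()
tracker.record("Art.9", "risk-register-v3.xlsx")
tracker.record("Art.11", "annex-iv-docs/2026-03")
```

After the two `record` calls, `tracker.gaps()` still lists seven articles — a useful reminder that the stack is only complete when every row has evidence behind it.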

CLOUD Act and Article 6: Why Classification Documentation Is Jurisdictional

Here is the practical risk most developers overlook: all the documentation you generate to support your Article 6 classification decision — the risk assessments, the internal memos deciding you are or are not Annex III, the legal opinions on the "significant risk" qualifier — is discoverable by US authorities under the CLOUD Act if stored on AWS, Azure, or GCP.

If a US court order reaches your cloud provider for e-discovery of your EU AI compliance files, it can compel disclosure of your classification analysis — including any internal documents where you acknowledged you might be high-risk but decided not to classify as such.

The mitigation is simple: store classification and compliance documentation on EU-native infrastructure with no US corporate parent. The documentation is then outside CLOUD Act reach.
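A trivial screen for this can live in your compliance tooling. Illustrative only — the provider list below is a hard-coded assumption for the example, not a vetted registry of corporate ownership:

```python
# Assumption for this example: short-hand identifiers for hyperscalers with
# a US corporate parent. A real check needs an ownership registry.
US_PARENT_PROVIDERS = {"aws", "azure", "gcp"}

def cloud_act_exposed(storage_provider: str) -> bool:
    """Flag documentation storage whose provider has a US corporate parent."""
    return storage_provider.lower() in US_PARENT_PROVIDERS
```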

Python Tooling: High-Risk Classification Checker

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ClassificationPathway(Enum):
    ARTICLE_6_1 = "annex_i_product_safety_component"
    ARTICLE_6_2 = "annex_iii_standalone"
    NOT_HIGH_RISK = "not_high_risk"
    EXCLUDED_6_3 = "excluded_article_6_3"

class AnnexIIICategory(Enum):
    BIOMETRICS = "cat_1_biometrics"
    CRITICAL_INFRASTRUCTURE = "cat_2_critical_infrastructure"
    EDUCATION = "cat_3_education"
    EMPLOYMENT = "cat_4_employment"
    ESSENTIAL_SERVICES = "cat_5_essential_services"
    LAW_ENFORCEMENT = "cat_6_law_enforcement"
    MIGRATION_BORDER = "cat_7_migration_border"
    JUSTICE_DEMOCRACY = "cat_8_justice_democracy"

@dataclass
class HighRiskClassification:
    pathway: ClassificationPathway
    annex_iii_category: Optional[AnnexIIICategory] = None
    significant_risk_finding: Optional[bool] = None
    exclusion_basis: Optional[str] = None
    triggers_full_compliance_stack: bool = False
    requires_third_party_conformity: bool = False
    rationale: str = ""

    def get_required_articles(self) -> list[str]:
        if not self.triggers_full_compliance_stack:
            return []
        articles = ["Art.9", "Art.10", "Art.11", "Art.12",
                    "Art.13", "Art.14", "Art.15", "Art.43", "Art.49"]
        return articles

def classify_ai_system(
    is_safety_component_of_annex_i_product: bool,
    annex_i_product_requires_third_party_conformity: bool,
    annex_iii_categories: list[AnnexIIICategory],
    significant_risk: bool,
    exclusion_basis: Optional[str] = None,
) -> HighRiskClassification:
    """
    Classify an AI system under Article 6 EU AI Act.
    Returns a HighRiskClassification with pathway, triggers, and rationale.
    """
    # Article 6(1) pathway: BOTH conditions must hold — the AI is a safety
    # component of an Annex I product AND that product requires third-party
    # conformity assessment under its own legislation
    if (is_safety_component_of_annex_i_product
            and annex_i_product_requires_third_party_conformity):
        return HighRiskClassification(
            pathway=ClassificationPathway.ARTICLE_6_1,
            triggers_full_compliance_stack=True,
            requires_third_party_conformity=True,
            rationale="Safety component in Annex I product requiring third-party conformity assessment"
        )

    # Article 6(3) exclusions only bite within the Annex III pathway
    if annex_iii_categories and exclusion_basis:
        return HighRiskClassification(
            pathway=ClassificationPathway.EXCLUDED_6_3,
            annex_iii_category=annex_iii_categories[0],
            exclusion_basis=exclusion_basis,
            triggers_full_compliance_stack=False,
            rationale=f"Excluded under Art.6(3): {exclusion_basis}"
        )

    # Article 6(2) pathway — Annex III with a positive significant-risk finding
    if annex_iii_categories and significant_risk:
        return HighRiskClassification(
            pathway=ClassificationPathway.ARTICLE_6_2,
            annex_iii_category=annex_iii_categories[0],
            significant_risk_finding=True,
            triggers_full_compliance_stack=True,
            # Simplification: notified-body assessment can apply to biometrics
            requires_third_party_conformity=(
                AnnexIIICategory.BIOMETRICS in annex_iii_categories
            ),
            rationale=(
                f"Annex III {annex_iii_categories[0].value} with significant risk finding"
            )
        )

    # Annex III category but a documented negative significant-risk finding
    if annex_iii_categories and not significant_risk:
        return HighRiskClassification(
            pathway=ClassificationPathway.NOT_HIGH_RISK,
            annex_iii_category=annex_iii_categories[0],
            significant_risk_finding=False,
            triggers_full_compliance_stack=False,
            rationale="Annex III category but significant risk assessment: negative"
        )

    return HighRiskClassification(
        pathway=ClassificationPathway.NOT_HIGH_RISK,
        triggers_full_compliance_stack=False,
        rationale="No Annex I safety-component pathway, no Annex III category"
    )


# Example: employment CV screening tool
cv_screening = classify_ai_system(
    is_safety_component_of_annex_i_product=False,
    annex_i_product_requires_third_party_conformity=False,
    annex_iii_categories=[AnnexIIICategory.EMPLOYMENT],
    significant_risk=True,
)
print(cv_screening.pathway)  # ClassificationPathway.ARTICLE_6_2
print(cv_screening.get_required_articles())  # ['Art.9', ..., 'Art.49']

# Example: AI formatting tool in recruitment (narrow task)
formatter = classify_ai_system(
    is_safety_component_of_annex_i_product=False,
    annex_i_product_requires_third_party_conformity=False,
    annex_iii_categories=[AnnexIIICategory.EMPLOYMENT],
    significant_risk=False,
    exclusion_basis="narrow procedural task (CV formatting), no evaluation of candidates"
)
print(formatter.pathway)  # ClassificationPathway.EXCLUDED_6_3

Article 6 Classification Decision Tree

Is the AI a safety component of an Annex I product?
├─ YES → Does that product require third-party conformity assessment?
│   ├─ YES → HIGH-RISK (Article 6(1)) → Full Art.9-15 stack
│   └─ NO → NOT high-risk under 6(1) → Check 6(2)
└─ NO → Falls in Annex III category?
    ├─ NO → NOT high-risk
    └─ YES → Significant risk of harm?
        ├─ NO → NOT high-risk (document this finding)
        └─ YES → Article 6(3) exclusion applies?
            ├─ YES → NOT high-risk (document exclusion basis)
            └─ NO → HIGH-RISK (Article 6(2)) → Full Art.9-15 stack

The Annex III Amendment Risk

Article 7 empowers the Commission to update Annex III by delegated act, and Article 112 requires it to assess the need for amendments to the Annex III list on an ongoing basis — so the categories that define high-risk status are themselves a moving target.

Several sectors are attracting sustained regulatory attention as candidates for future Annex III addition.

If you are building in these sectors, design for Articles 9–15 now — even if you are currently borderline. A 2028 delegated act could reclassify your system retroactively.

30-Item High-Risk Classification Checklist

Annex I / Article 6(1) Assessment (5 items)

Annex III Category Assessment (5 items)

Article 6(3) Exclusion Analysis (5 items)

Compliance Stack Preparation (5 items)

CLOUD Act / Infrastructure Risk (5 items)

Ongoing Classification Review (5 items)

Common Developer Mistakes Under Article 6

Mistake 1: Treating Annex III as a bright-line rule
Annex III categories are necessary but not sufficient. You must also reach a significant-risk finding. Many developers skip that analysis and either over-classify (treating Annex III presence as definitive and incurring unnecessary compliance cost) or under-classify (waving the risk away without the documented negative finding regulators will ask to see).

Mistake 2: Ignoring upstream supply chain
If you are a component vendor, your AI may end up in an Annex II product. You may not control the integration — but you could still be the "provider" of the AI system for Article 6(1) purposes if your AI is the safety component.

Mistake 3: Confusing intended use with actual use
Article 6 classification is tied to intended purpose. But deployers may use your system outside its intended scope. If you know about reasonably foreseeable misuse that brings the system into Annex III territory, regulators can argue you had constructive knowledge. Your technical documentation must address this.

Mistake 4: One-time classification decisions
Article 6 classification is not static. If you update your model (new training data, new capability), add a new use case, or change the deployment context — re-classify. A system that was not high-risk at v1.0 may be high-risk at v2.0.
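One lightweight way to make re-classification automatic is to fingerprint the facts the classification rested on and flag any change. A sketch — the function name and fields are assumptions for this example:

```python
import hashlib
import json

def classification_fingerprint(model_version: str, use_cases: list[str],
                               deployment_context: str) -> str:
    """Hash the facts an Article 6 classification decision rests on."""
    payload = json.dumps(
        {"model": model_version, "uses": sorted(use_cases),
         "context": deployment_context},
        sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

v1 = classification_fingerprint("1.0", ["draft candidate emails"],
                                "recruiter tooling")
v2 = classification_fingerprint("2.0", ["draft candidate emails",
                                        "rank candidates"],
                                "recruiter tooling")
needs_reclassification = v1 != v2  # True — a new use case changes the analysis
```

Store the fingerprint alongside the classification record; a CI check that compares it on every release turns "re-classify on change" from a policy into a gate.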

Mistake 5: Open source exemption overreach
The Act's open-source carve-out (Article 2(12)) does not extend to AI systems placed on the market or put into service as high-risk systems. If your open-source AI is deployed as an Annex III system by an operator, that operator bears the deployer obligations — but if you place it on the market knowing it will serve high-risk purposes, provider obligations may still apply to you.


EU AI Act full enforcement: August 2, 2026. Article 6 classification determines whether your system triggers the full Art.9–15 compliance stack. The decision is made at design time — not at enforcement time.

