2026-04-25 · 14 min read

EU AI Act Art.96: Commission Guidelines for AI Implementation — SME Compliance Pathways and High-Risk Classification Practical Examples (2026)

Art.95 establishes voluntary compliance for non-high-risk AI. Art.97 creates the delegated act mechanism for updating annexes. Art.96 sits between them: it is the Commission's obligation to issue practical guidance that makes the rest of the regulation navigable — particularly for the developers and SMEs who lack the legal team to interpret raw legislative text.

Article 96 is frequently underweighted in compliance analysis. Practitioners focus on what the regulation requires (Art.5–55) and overlook the interpretive infrastructure that determines how those requirements apply in practice. The Art.96 guidelines are not soft law you can ignore — they are the Commission's authoritative position on classification decisions, conformity pathways, and implementation choices. When a notified body, NCA, or AI Office inspector assesses your system, the Art.96 guidance documents frame the interpretive lens they apply.

For SMEs specifically, Art.96 represents the regulation's most direct acknowledgment that the compliance burden is disproportionate without official simplification. The SME-tailored guidance requirement in Art.96(3) is not a political afterthought — it is a legally mandated commitment that smaller developers receive genuinely simplified implementation tools.

What Article 96 Actually Says

Art.96 addresses three distinct obligations: the scope of Commission guidelines, the specific high-risk classification examples list, and the SME-tailored guidance requirement.

Article 96(1) — The Mandatory Guidance Scope:

The Commission shall issue guidelines on the practical implementation of this Regulation, and in particular on: (a) the application of the requirements and obligations referred to in Articles 8 to 15 and Article 25 in relation to the AI systems referred to in Annex III; (b) the prohibited AI practices referred to in Article 5, with due regard for the evolving state of the art in AI and the proportionality of measures; (c) the practical implementation of Article 50; (d) any other measure that may facilitate implementation of this Regulation.

The scope is explicit and comprehensive. The Commission must address:

  1. the Art.8-15 requirements and the Art.25 value-chain obligations as they apply to Annex III high-risk systems (point (a))

  2. the Art.5 prohibited practices, with due regard for the evolving state of the art and proportionality (point (b))

  3. the practical implementation of the Art.50 transparency obligations (point (c))

  4. any other measure that may facilitate implementation (point (d))

The Art.96(1) mandate is not optional and not time-limited. The Commission must update guidelines as the state of the art evolves — creating a living interpretive framework alongside the static regulatory text.

Article 96(2) — The High-Risk Classification Examples List:

When establishing whether an AI system is high-risk in accordance with Article 6, the Commission shall issue guidelines on the practical implementation of Article 6(3) in conjunction with the list of use cases in Annex III, including a list of practical examples of uses of AI systems that are high-risk and that are not high-risk.

This is the provision with the highest immediate practical value. Art.6(3) allows an AI system within an Annex III category to escape high-risk classification if it does not pose "a significant risk of harm to the health, safety or fundamental rights of natural persons." The Commission must publish concrete examples — not abstract principles but specific use cases that are high-risk and specific use cases that are not.

The Art.96(2) examples list is the primary tool for:

  1. making initial classification decisions against the Annex III categories

  2. documenting an Art.6(3) non-high-risk determination

  3. defending a classification before a notified body, NCA, or the AI Office

The examples list functions as an interpretive safe harbour in practice: a use case that matches the Commission's "not high-risk" examples is substantially easier to defend than one requiring independent legal analysis. Conversely, a use case matching a "high-risk" example makes the Art.6(3) escape route unavailable absent extraordinary documentation.

Article 96(3) — SME-Tailored Guidance:

When developing those guidelines, the Commission shall pay particular attention to the needs of SMEs including start-ups, and shall promote a Union-wide SME-tailored approach.

The SME mandate in Art.96(3) applies to the entire Art.96(1) guidance package. Every guideline document must address SME-specific implementation challenges. Combined with the Art.11(2) simplified documentation rule for SMEs and the Art.62 measures for SMEs and start-ups, including priority access to regulatory sandboxes, Art.96(3) is part of a systematic attempt to make EU AI Act compliance proportionate for smaller developers.
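Taken together, these provisions can be modelled as a simple eligibility lookup. A minimal sketch, with function name and relief strings of our own invention — shorthand for the provisions discussed above, not legal advice:

```python
def sme_reliefs(is_sme: bool, is_startup: bool = False) -> list[str]:
    """Illustrative lookup of the AI Act's SME-specific provisions.

    The relief strings are shorthand, not an exhaustive or
    authoritative list of the Act's SME measures.
    """
    if not is_sme:
        return []  # full Chapter III burden applies without simplification
    reliefs = [
        "Art.11(2): simplified technical documentation",
        "Art.96(3): SME-tailored Commission guidance",
        "Priority access to regulatory sandboxes",
    ]
    if is_startup:
        # Art.96(3) names start-ups explicitly alongside SMEs
        reliefs.append("Start-up-specific attention under Art.96(3)")
    return reliefs
```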

In practice, SME-tailored guidelines should cover:

  1. simplified documentation templates aligned with the Art.11(2) regime

  2. self-assessment tools for classification and conformity decisions

  3. regulatory sandbox coordination for SMEs and start-ups

  4. reduced-burden conformity assessment pathways

The Art.96(2) Classification Examples: Practical Impact

The Art.96(2) examples list is the most consequential output of Art.96 for developers making day-to-day product decisions. Understanding how to use the list correctly is critical.

Working with the Examples List

When the Commission publishes the Art.96(2) examples, the operational workflow is:

  1. Identify potential Annex III categories for your AI system (the 8 high-risk categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/asylum, justice/democracy)

  2. Check the positive examples (high-risk) — if your use case matches, you are in Annex III and must apply the full Chapter III requirements unless Art.6(3) specifically applies and you can document the distinction

  3. Check the negative examples (not high-risk) — if your use case matches, you have Commission-level interpretive support for a non-high-risk classification, reducing your Art.11 documentation burden and conformity assessment obligations

  4. Document your classification analysis explicitly referencing the Commission examples, whether positive or negative, with your system-specific reasoning

The key limitation: the examples list is not exhaustive. Novel AI applications will fall outside existing examples, requiring independent analysis. When outside the list, test the Art.6(3) conditions directly: does the system (a) perform a narrow procedural task, (b) improve the result of a previously completed human activity, (c) detect decision-making patterns or deviations without replacing or influencing the human assessment, or (d) perform a preparatory task to an Annex III assessment? Note that a system performing profiling of natural persons is always high-risk, regardless of these conditions.
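A minimal sketch of that analysis, assuming the Art.6(3) derogation conditions (narrow procedural task, improving a previously completed human activity, detecting decision-making patterns without replacing the human assessment, preparatory task) and the profiling carve-out; the function and parameter names are our own:

```python
def art_6_3_derogation_available(
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_replacing_human: bool,
    preparatory_task_only: bool,
    performs_profiling: bool,
) -> bool:
    """Sketch of the Art.6(3) derogation check.

    The derogation is available when at least one condition holds,
    but never when the system performs profiling of natural persons.
    """
    if performs_profiling:
        return False  # profiling keeps the system high-risk under Art.6(3)
    return any([
        narrow_procedural_task,
        improves_prior_human_activity,
        detects_patterns_without_replacing_human,
        preparatory_task_only,
    ])
```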

High-Risk vs. Not-High-Risk: Illustrative Distinctions

Based on Commission guidance signals and the regulatory structure, the Art.96(2) examples list is likely to draw distinctions such as:

| AI System | Classification | Rationale |
| --- | --- | --- |
| Biometric categorisation by ethnicity for law enforcement profiling | High-risk (Annex III + Art.5 risk) | Direct fundamental rights impact |
| Facial recognition attendance tracking in schools | High-risk (Annex III category 3) | Education + biometrics combination |
| CV screening ranking candidates for jobs | High-risk (Annex III category 4) | Employment decision influence |
| Grammar-checking tool for CVs | Not high-risk | Procedural task, human retains decision |
| Credit scoring determining loan eligibility | High-risk (Annex III category 5) | Essential private services access |
| Spending categorisation in personal finance apps | Not high-risk | Analytical tool, no eligibility decision |
| Emotion recognition during police interrogation | High-risk + Art.5 risk | Law enforcement + biometric combination |
| Sentiment analysis of customer feedback | Not high-risk | No individual decision impact |
| AI-assisted criminal recidivism risk scoring | High-risk (Annex III category 6) | Law enforcement fundamental rights |
| Spam filter for email systems | Not high-risk | No individual human assessment |

The pattern the Commission is expected to follow: systems that directly influence a substantive decision affecting an individual's access to opportunities, services, or fundamental rights tend toward high-risk; systems that assist procedural tasks without influencing the substantive decision tend away from it.

The Art.95 → Art.96 Compliance Stack

Art.96 guidelines and Art.95 voluntary codes of conduct are complementary but structurally distinct.

Art.95 voluntary codes are industry-developed (with AI Office facilitation) commitments by non-high-risk providers to apply Chapter III requirements voluntarily. They are about who applies requirements and how they are governed.

Art.96 Commission guidelines are interpretive documents explaining what the requirements mean in practice. They are about how to understand and implement requirements that apply to you — whether mandatorily (high-risk) or voluntarily (non-high-risk via Art.95).

The operational interaction:

Your AI System
     │
     ├─ High-risk (Annex III + Art.6(1)/(2))
     │      │
     │      └─ Art.96(1)(a) guidelines apply directly
     │         (Art.8-15, Art.25 implementation guidance)
     │
     └─ Not high-risk (Art.6(3) safe harbour or outside Annex III)
            │
            ├─ Art.95 voluntary code? → Apply Art.96 guidelines
            │   to interpret voluntary requirements correctly
            │
            └─ No voluntary code? → Art.96(1)(b)/(c) may still apply
                (Art.5 prohibitions + Art.50 transparency apply to ALL AI systems)

The critical insight: Art.5 prohibited practices and Art.50 transparency obligations apply to all AI systems regardless of risk classification. Art.96(1)(b) and (c) guidelines therefore apply to every AI provider operating in the EU market — not just high-risk system providers.
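This layering can be expressed as a small helper. The function name and return strings are illustrative shorthand, not terms from the regulation:

```python
def applicable_art96_guidance(is_high_risk: bool,
                              joined_art95_code: bool = False) -> list[str]:
    """Illustrative mapping from risk status to relevant Art.96 guidance strands.

    Art.96(1)(b) (Art.5 prohibitions) and Art.96(1)(c) (Art.50 transparency)
    apply to every AI system regardless of classification.
    """
    strands = [
        "Art.96(1)(b) — Art.5 prohibited practices",
        "Art.96(1)(c) — Art.50 transparency",
    ]
    if is_high_risk:
        # mandatory Chapter III requirements: guidance applies directly
        strands.insert(0, "Art.96(1)(a) — Art.8-15 and Art.25 implementation")
    elif joined_art95_code:
        # voluntary commitments interpreted through the same guidance
        strands.insert(0, "Art.96(1)(a) — interpreting voluntarily applied requirements")
    return strands
```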

Current Commission Guidance Status (2026)

As of 2026, the Commission guidance landscape includes:

Published AI Office Guidance:

  1. Guidelines on the definition of an AI system (February 2025)

  2. Guidelines on prohibited AI practices under Art.5 (February 2025)

  3. Guidelines on GPAI provider obligations and the GPAI Code of Practice (July 2025)

Pending Commission Guidelines (formal Art.96 obligation):

  1. Art.96(1)(a) — practical implementation of Art.8-15 and Art.25 for Annex III systems

  2. Art.96(1)(b) — consolidated guidance on the Art.5 prohibitions

  3. Art.96(1)(c) — practical implementation of the Art.50 transparency obligations

  4. Art.96(2) — the high-risk / not-high-risk classification examples list

  5. Art.96(3) — the SME-tailored implementation guidance package

Implementation Note: The formal Art.96 guidelines are expected to consolidate and replace the interim AI Office guidance documents. Until formal Art.96 publication, interim AI Office guidance documents represent the Commission's current interpretive position and should be treated as the working standard for compliance decisions.

SME Practical Implementation Under Art.96

For an SME developer, Art.96 has three operational implications:

1. Classification Safety Check

Before investing in full Chapter III compliance infrastructure, an SME should:

  1. map each AI system against the 8 Annex III categories

  2. check the Art.96(2) positive and negative examples for a match

  3. test the Art.6(3) derogation conditions where an Annex III category applies

  4. record the analysis with explicit references to the Commission examples

2. Documentation Using Commission Templates

When Commission Art.96(3) SME guidance is published, it will include documentation templates. SMEs should:

  1. adopt the official templates rather than building bespoke documentation formats

  2. map template fields to the Art.11 / Annex IV technical documentation requirements

  3. keep completed templates versioned and ready for NCA or AI Office requests

3. Art.50 Transparency: Universal Obligation

Even if your system is not high-risk, Art.50 transparency obligations apply. Art.96(1)(c) guidance on Art.50 will be directly relevant. Key Art.50 requirements for all AI systems:

  1. disclosing to users that they are interacting with an AI system

  2. marking AI-generated content in machine-readable form

  3. informing individuals exposed to emotion recognition or biometric categorisation

  4. labelling deep fakes as artificially generated or manipulated
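A hedged sketch of how the Art.50 triggers might be encoded; the function and parameter names are our own shorthand for the trigger conditions:

```python
def art50_disclosures(is_chatbot: bool,
                      generates_synthetic_content: bool,
                      uses_emotion_recognition: bool,
                      produces_deepfakes: bool) -> list[str]:
    """Illustrative Art.50 transparency trigger mapping.

    Returns the disclosure obligations triggered by the system's
    features; an empty list means no Art.50 trigger in this sketch.
    """
    out = []
    if is_chatbot:
        out.append("Disclose that the user is interacting with an AI system (Art.50(1))")
    if generates_synthetic_content:
        out.append("Mark AI-generated content in machine-readable form (Art.50(2))")
    if uses_emotion_recognition:
        out.append("Notify exposed persons of emotion recognition (Art.50(3))")
    if produces_deepfakes:
        out.append("Label deep fakes as artificially generated (Art.50(4))")
    return out
```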

CLOUD Act and Infrastructure Positioning

Art.96 guidelines create a documentation-heavy compliance environment. Every Art.96(1)(a) guideline — from risk management system requirements to technical documentation structure — generates artefacts that must be stored, retained, and potentially produced to NCAs or the AI Office.

For AI providers storing compliance documentation on non-EU infrastructure, this creates a jurisdiction exposure point the Art.96 guidelines themselves are unlikely to address directly but that every compliance team should evaluate:

The CLOUD Act exposure chain for compliance documentation:

Art.96 guideline → compliance artefact requirement
                        │
                        ├─ Stored on US-cloud infrastructure
                        │        │
                        │        └─ CLOUD Act (18 U.S.C. § 2713):
                        │           US government can compel disclosure
                        │           without EU authority notification
                        │
                        └─ Stored on EU-incorporated infrastructure
                                 │
                                 └─ EU law governs disclosure
                                    GDPR Art.48 blocks extraterritorial
                                    access without mutual assistance treaty

Compliance documentation is not the same as personal data, but it can contain sensitive business information, proprietary model architecture details, training data descriptions, and vulnerability assessments. For companies where this information is commercially sensitive or where jurisdiction over disclosure matters, the infrastructure choice is not neutral.

sota.io is incorporated as a German GmbH, operates infrastructure in Frankfurt, and provides managed deployment without US-parent jurisdiction exposure — a directly relevant differentiation for AI compliance infrastructure where the Art.96 documentation chain runs deep.

Python Implementation: Art96GuidelineTracker

from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class GuidelineStatus(Enum):
    PUBLISHED = "published"
    PENDING = "pending"
    DRAFT_CONSULTATION = "draft_consultation"
    SUPERSEDED = "superseded"

class RiskClassification(Enum):
    HIGH_RISK = "high_risk"
    NOT_HIGH_RISK = "not_high_risk"
    PROHIBITED = "prohibited"
    REQUIRES_ANALYSIS = "requires_analysis"

@dataclass
class CommissionGuideline:
    article_ref: str
    title: str
    status: GuidelineStatus
    published_date: Optional[date]
    applies_to_sme: bool
    summary: str

@dataclass
class UseCaseClassification:
    system_description: str
    annex_iii_category: Optional[str]
    commission_example_match: Optional[str]
    classification: RiskClassification
    art_6_3_safe_harbour: bool
    documentation_ref: str
    review_date: date

class Art96GuidelineTracker:
    def __init__(self, company_name: str, is_sme: bool = True):
        self.company_name = company_name
        self.is_sme = is_sme
        self.guidelines: list[CommissionGuideline] = []
        self.use_case_classifications: list[UseCaseClassification] = []
        self._load_known_guidelines()

    def _load_known_guidelines(self):
        self.guidelines = [
            CommissionGuideline(
                article_ref="Art.96(1)(a)",
                title="Practical Implementation Guidelines Art.8-15 and Art.25",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Guidelines on risk management, data governance, technical documentation, "
                        "logging, transparency, human oversight, accuracy, and supply chain "
                        "obligations for Annex III high-risk AI systems.",
            ),
            CommissionGuideline(
                article_ref="Art.96(1)(b)",
                title="Guidelines on Prohibited AI Practices (Art.5)",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Interpretive guidance on the Art.5 prohibitions: manipulation, "
                        "social scoring, real-time biometric identification, emotion recognition "
                        "in sensitive contexts, and AI-assisted vulnerability exploitation.",
            ),
            CommissionGuideline(
                article_ref="Art.96(1)(c)",
                title="Practical Implementation of Art.50 Transparency Obligations",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Guidance on chatbot disclosure, AI-generated content marking, "
                        "emotion recognition notification, and machine-readable watermarking "
                        "obligations applicable to all AI systems regardless of risk class.",
            ),
            CommissionGuideline(
                article_ref="Art.96(2)",
                title="High-Risk Classification Examples List (Art.6(3))",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Concrete use case examples of AI systems that are high-risk and "
                        "that are not high-risk under Art.6(3) in conjunction with Annex III. "
                        "Primary tool for classification decisions and NCA documentation.",
            ),
            CommissionGuideline(
                article_ref="Art.96(3)",
                title="SME-Tailored Implementation Guidance Package",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Simplified documentation templates, self-assessment tools, "
                        "sandbox coordination guidance, and reduced-burden conformity "
                        "pathways specifically for SMEs and start-ups.",
            ),
        ]

    def classify_use_case(
        self,
        system_description: str,
        annex_iii_category: Optional[str],
        matches_commission_example: Optional[str] = None,
        art_6_3_applies: bool = False,
    ) -> UseCaseClassification:
        if matches_commission_example and "not high-risk" in matches_commission_example.lower():
            classification = RiskClassification.NOT_HIGH_RISK
        elif matches_commission_example and "high-risk" in matches_commission_example.lower():
            classification = RiskClassification.HIGH_RISK
        elif annex_iii_category and art_6_3_applies:
            classification = RiskClassification.NOT_HIGH_RISK
        elif annex_iii_category:
            classification = RiskClassification.HIGH_RISK
        else:
            classification = RiskClassification.REQUIRES_ANALYSIS

        entry = UseCaseClassification(
            system_description=system_description,
            annex_iii_category=annex_iii_category,
            commission_example_match=matches_commission_example,
            classification=classification,
            art_6_3_safe_harbour=art_6_3_applies,
            documentation_ref=f"art11-annex-iv-classification-{date.today().isoformat()}",
            review_date=date.today(),
        )
        self.use_case_classifications.append(entry)
        return entry

    def pending_guidelines(self) -> list[CommissionGuideline]:
        return [g for g in self.guidelines if g.status == GuidelineStatus.PENDING]

    def sme_compliance_score(self) -> dict:
        classified = len(self.use_case_classifications)
        unresolved = sum(
            1 for c in self.use_case_classifications
            if c.classification == RiskClassification.REQUIRES_ANALYSIS
        )
        pending = len(self.pending_guidelines())
        score = max(0, 100 - (unresolved * 20) - (pending * 5))
        return {
            "score": score,
            "classified_systems": classified,
            "unresolved_systems": unresolved,
            "pending_guidelines": pending,
            "is_sme": self.is_sme,
            "sme_simplified_path_available": self.is_sme,
        }

    def compliance_summary(self) -> str:
        score = self.sme_compliance_score()
        lines = [
            f"Art.96 Compliance Tracker — {self.company_name}",
            f"SME status: {'Yes — simplified path available' if self.is_sme else 'No'}",
            f"Systems classified: {score['classified_systems']}",
            f"Requiring further analysis: {score['unresolved_systems']}",
            f"Commission guidelines pending: {score['pending_guidelines']}",
            f"Overall readiness score: {score['score']}/100",
            "",
            "Pending guidelines to monitor:",
        ]
        for g in self.pending_guidelines():
            lines.append(f"  [{g.article_ref}] {g.title}")
        return "\n".join(lines)


# Example usage
if __name__ == "__main__":
    tracker = Art96GuidelineTracker(company_name="AcmeSoft GmbH", is_sme=True)

    # Classify a CV screening system
    cv_screening = tracker.classify_use_case(
        system_description="AI system that ranks and filters job applicants from CVs",
        annex_iii_category="Category 4 — Employment and workers management",
        matches_commission_example=None,
        art_6_3_applies=False,
    )
    print(f"CV screening: {cv_screening.classification.value}")

    # Classify a grammar checker
    grammar_check = tracker.classify_use_case(
        system_description="Grammar and style checking tool for professional documents",
        annex_iii_category=None,
        matches_commission_example="not high-risk — procedural assistance tool",
        art_6_3_applies=False,
    )
    print(f"Grammar checker: {grammar_check.classification.value}")

    print(tracker.compliance_summary())

25-Item SME Developer Checklist — Art.96 Implementation

Classification Foundation (Art.96(2))

  1. Map every AI system against the 8 Annex III categories

  2. Check each system against the Art.96(2) positive (high-risk) examples

  3. Check each system against the Art.96(2) negative (not-high-risk) examples

  4. Test the Art.6(3) derogation conditions for systems inside an Annex III category

  5. Document each classification with explicit references to Commission examples

Transparency Obligations (Art.96(1)(c) — applies to all AI systems)

  1. Disclose AI interaction to users of chatbots and conversational systems

  2. Mark AI-generated content in machine-readable form

  3. Notify individuals exposed to emotion recognition or biometric categorisation

  4. Label deep fakes as artificially generated or manipulated

  5. Monitor Art.96(1)(c) guidance for updated marking and watermarking standards

Prohibited Practices (Art.96(1)(b) — applies to all AI systems)

  1. Screen product features against the Art.5 prohibited practices list

  2. Verify no manipulative or deceptive techniques materially distort user behaviour

  3. Confirm no social scoring or exploitation of vulnerabilities

  4. Review any biometric or emotion recognition features against the Art.5 contexts

  5. Re-screen on each significant product change as the state of the art evolves

SME Compliance Pathways (Art.96(3))

  1. Confirm and document SME or start-up status

  2. Use the Art.11(2) simplified technical documentation regime where eligible

  3. Adopt official Art.96(3) templates and self-assessment tools when published

  4. Evaluate regulatory sandbox participation for priority SME access

  5. Monitor the Art.96(3) SME guidance package for publication

Documentation and Monitoring (Art.96(1)(a))

  1. Track the publication status of each pending Art.96 guideline

  2. Align risk management and data governance documentation with Art.8-15

  3. Maintain logging and human oversight records per Art.96(1)(a) guidance

  4. Review classifications whenever the guidelines or examples list are updated

  5. Store compliance artefacts on infrastructure whose jurisdiction you have assessed

See Also