EU AI Act Art.96: Commission Guidelines for AI Implementation — SME Compliance Pathways and High-Risk Classification Practical Examples (2026)
Art.95 establishes voluntary compliance for non-high-risk AI. Art.97 creates the delegated act mechanism for updating annexes. Art.96 sits between them: it is the Commission's obligation to issue practical guidance that makes the rest of the regulation navigable — particularly for the developers and SMEs who lack the legal team to interpret raw legislative text.
Article 96 is frequently underweighted in compliance analysis. Practitioners focus on what the regulation requires (Art.5–55) and overlook the interpretive infrastructure that determines how those requirements apply in practice. The Art.96 guidelines are not soft law you can ignore — they are the Commission's authoritative position on classification decisions, conformity pathways, and implementation choices. When a notified body, NCA, or AI Office inspector assesses your system, the Art.96 guidance documents frame the interpretive lens they apply.
For SMEs specifically, Art.96 represents the regulation's most direct acknowledgment that the compliance burden is disproportionate without official simplification. The SME-tailored guidance requirement in Art.96(3) is not a political afterthought — it is a legally mandated commitment that smaller developers receive genuinely simplified implementation tools.
What Article 96 Actually Says
Art.96 addresses three distinct obligations: the scope of Commission guidelines, the specific high-risk classification examples list, and the SME-tailored guidance requirement.
Article 96(1) — The Mandatory Guidance Scope:
The Commission shall issue guidelines on the practical implementation of this Regulation, and in particular on: (a) the application of the requirements and obligations referred to in Articles 8 to 15 and Article 25 in relation to the AI systems referred to in Annex III; (b) the prohibited AI practices referred to in Article 5, with due regard for the evolving state of the art in AI and the proportionality of measures; (c) the practical implementation of Article 50; (d) any other measure that may facilitate implementation of this Regulation.
The scope is explicit and comprehensive. The Commission must address:
- Art.8–15 implementation guidance: Risk management (Art.9), data governance (Art.10), technical documentation (Art.11), logging (Art.12), transparency (Art.13), human oversight (Art.14), accuracy and robustness (Art.15), plus supply chain obligations under Art.25
- Art.5 prohibited practices: How the prohibitions apply as AI capabilities evolve — what counts as subliminal manipulation beyond a person's awareness, what qualifies as "real-time" remote biometric identification, when social scoring crosses the threshold
- Art.50 transparency requirements: Disclosure obligations for AI-generated content, chatbot identification, emotion recognition, deep fake detection
- Facilitation measures: Anything else that reduces friction for legitimate compliance
The Art.96(1) mandate is not optional and not time-limited. The Commission must update guidelines as the state of the art evolves — creating a living interpretive framework alongside the static regulatory text.
Article 96(2) — The High-Risk Classification Examples List:
When establishing whether an AI system is high-risk in accordance with Article 6, the Commission shall issue guidelines on the practical implementation of Article 6(3) in conjunction with the list of use cases in Annex III, including a list of practical examples of uses of AI systems that are high-risk and that are not high-risk.
This is the provision with the highest immediate practical value. Art.6(3) allows an AI system within an Annex III category to escape high-risk classification if it does not pose "a significant risk of harm to the health, safety or fundamental rights of natural persons." The Commission must publish concrete examples — not abstract principles but specific use cases that are high-risk and specific use cases that are not.
The Art.96(2) examples list is the primary tool for:
- Classifying novel AI deployments against Annex III categories
- Documenting the classification rationale for Art.11 technical documentation
- Defending a non-high-risk determination to an NCA inspector
- Identifying when a product evolution crosses into high-risk territory
The examples list functions as an interpretive safe harbour in practice: a use case that matches the Commission's "not high-risk" examples is substantially easier to defend than one requiring independent legal analysis. Conversely, a use case matching a "high-risk" example makes the Art.6(3) escape route unavailable absent extraordinary documentation.
Article 96(3) — SME-Tailored Guidance:
When developing those guidelines, the Commission shall pay particular attention to the needs of SMEs including start-ups, and shall promote a Union-wide SME-tailored approach.
The SME mandate in Art.96(3) applies to the entire Art.96(1) guidance package. Every guideline document must address SME-specific implementation challenges. Combined with the Art.11(2) simplified documentation rule for SMEs, Art.29(5) deployer obligations scaled to SME size, and Art.68 regulatory sandbox priority for SMEs, Art.96(3) is part of a systematic attempt to make EU AI Act compliance proportionate for smaller developers.
In practice, SME-tailored guidelines should cover:
- Simplified conformity assessment pathways for SMEs below the third-party assessment threshold
- Template documentation meeting Annex IV requirements without requiring external legal counsel
- Self-assessment tools for Art.6(3) classification decisions
- Regulatory sandbox coordination with Art.68 national sandbox programs
- Reduced monitoring frequency for SME deployers under Art.29(5)
The Art.96(2) Classification Examples: Practical Impact
The Art.96(2) examples list is the most consequential output of Art.96 for developers making day-to-day product decisions. Understanding how to use the list correctly is critical.
Working with the Examples List
When the Commission publishes the Art.96(2) examples, the operational workflow is:
1. Identify potential Annex III categories for your AI system (the 8 high-risk areas: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/asylum, justice/democracy)
2. Check the positive examples (high-risk) — if your use case matches, you are in Annex III and must apply the full Chapter III requirements unless Art.6(3) specifically applies and you can document the distinction
3. Check the negative examples (not high-risk) — if your use case matches, you have Commission-level interpretive support for a non-high-risk classification, reducing your Art.11 documentation burden and conformity assessment obligations
4. Document your classification analysis, explicitly referencing the Commission examples, whether positive or negative, together with your system-specific reasoning
The key limitation: the examples list is not exhaustive. Novel AI applications will fall outside existing examples, requiring independent analysis. When outside the list, apply the Art.6(3) three-part test:
- Is the AI system within an Annex III category?
- Does it perform a narrow procedural task?
- Does it improve a previously manual human activity without replacing human judgment on substantive decisions?
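The three questions above amount to a simple screening flow when a use case falls outside the examples list. A minimal sketch, assuming hypothetical boolean inputs; the function name and return strings are illustrative, not Commission terminology, and the real determination always needs documented legal analysis:

```python
def art_6_3_screen(
    in_annex_iii_category: bool,
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
) -> str:
    """Rough classification signal from the three Art.6(3) screening questions."""
    if not in_annex_iii_category:
        return "outside Annex III: not high-risk via this route"
    if narrow_procedural_task or improves_prior_human_activity:
        # Derogation candidate: the conclusion still has to be documented
        # and remains rebuttable by an NCA.
        return "Art.6(3) derogation may apply; document the analysis"
    return "presumptively high-risk: apply Chapter III requirements"
```

For example, a CV-ranking system (`art_6_3_screen(True, False, False)`) lands on the presumptively high-risk branch, while a grammar checker fails the Annex III gate entirely.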
High-Risk vs. Not-High-Risk: Illustrative Distinctions
Based on Commission guidance signals and the regulatory structure, the Art.96(2) examples list is likely to draw distinctions such as:
| AI System | Classification | Rationale |
|---|---|---|
| Biometric categorisation by ethnicity for law enforcement profiling | High-risk (Annex III, Art.5 risk) | Direct fundamental rights impact |
| Facial recognition attendance tracking in schools | High-risk (Annex III category 3) | Education + biometrics combination |
| CV screening ranking candidates for jobs | High-risk (Annex III category 4) | Employment decision influence |
| Grammar-checking tool for CVs | Not high-risk | Procedural task, human retains decision |
| Credit scoring determining loan eligibility | High-risk (Annex III category 5) | Essential private services access |
| Spending categorisation in personal finance apps | Not high-risk | Analytical tool, no eligibility decision |
| Emotion recognition during police interrogation | High-risk + Art.5 risk | Law enforcement + biometric combination |
| Sentiment analysis of customer feedback | Not high-risk | No individual decision impact |
| AI-assisted criminal recidivism risk scoring | High-risk (Annex III category 6) | Law enforcement fundamental rights |
| Spam filter for email systems | Not high-risk | No individual human assessment |
The pattern the Commission is expected to follow: systems that directly influence a substantive decision affecting an individual's access to opportunities, services, or fundamental rights tend toward high-risk; systems that assist procedural tasks without influencing the substantive decision tend away from it.
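While the formal list is pending, the distinctions in the table can be treated as lookup data for internal triage. A sketch using a few of the document's own illustrative rows, not any official Commission list; `lookup_example` is a hypothetical helper:

```python
from typing import Optional

# Illustrative examples from the table above; not an official Commission list.
ILLUSTRATIVE_EXAMPLES = {
    "cv screening ranking candidates for jobs": "high_risk",
    "grammar-checking tool for cvs": "not_high_risk",
    "credit scoring determining loan eligibility": "high_risk",
    "spending categorisation in personal finance apps": "not_high_risk",
    "spam filter for email systems": "not_high_risk",
}


def lookup_example(description: str) -> Optional[str]:
    """Exact-match lookup; None means independent Art.6(3) analysis is needed."""
    return ILLUSTRATIVE_EXAMPLES.get(description.strip().lower())
```

A `None` result is the important signal: the use case is outside the list, so the classification rationale must stand on its own documented reasoning.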
The Art.95 → Art.96 Compliance Stack
Art.96 guidelines and Art.95 voluntary codes of conduct are complementary but structurally distinct.
Art.95 voluntary codes are industry-developed (with AI Office facilitation) commitments by non-high-risk providers to apply Chapter III requirements voluntarily. They are about who applies requirements and how they are governed.
Art.96 Commission guidelines are interpretive documents explaining what the requirements mean in practice. They are about how to understand and implement requirements that apply to you — whether mandatorily (high-risk) or voluntarily (non-high-risk via Art.95).
The operational interaction:
Your AI System
│
├─ High-risk (Annex III + Art.6(1)/(2))
│ │
│ └─ Art.96(1)(a) guidelines apply directly
│ (Art.8-15, Art.25 implementation guidance)
│
└─ Not high-risk (Art.6(3) safe harbour or outside Annex III)
│
├─ Art.95 voluntary code? → Apply Art.96 guidelines
│ to interpret voluntary requirements correctly
│
└─ No voluntary code? → Art.96(1)(b)/(c) may still apply
(Art.5 prohibitions + Art.50 transparency apply to ALL AI systems)
The critical insight: Art.5 prohibited practices and Art.50 transparency obligations apply to all AI systems regardless of risk classification. Art.96(1)(b) and (c) guidelines therefore apply to every AI provider operating in the EU market — not just high-risk system providers.
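The routing in the diagram can be reduced to a small function. A sketch under the same assumptions as the diagram; the function name and strand labels are illustrative:

```python
def applicable_guideline_strands(high_risk: bool, voluntary_code: bool = False) -> set[str]:
    """Which Art.96 guidance strands matter for a given AI system."""
    # Art.5 and Art.50 guidance applies to every AI system on the EU market,
    # regardless of risk classification.
    strands = {"Art.96(1)(b)", "Art.96(1)(c)"}
    if high_risk or voluntary_code:
        # Art.8-15 / Art.25 guidance: mandatory reading for high-risk providers,
        # interpretive support for Art.95 voluntary adopters.
        strands.add("Art.96(1)(a)")
    return strands
```

Note that even `applicable_guideline_strands(False)` is non-empty: no system escapes the Art.5 and Art.50 strands.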
Current Commission Guidance Status (2026)
As of 2026, the Commission guidance landscape includes:
Published AI Office Guidance:
- Guidance on GPAI Model Classification — Operational guidance on the 10^25 FLOP threshold and systemic risk determination under Art.51
- Code of Practice for GPAI Models — Facilitated under Art.56, first version adopted; addresses Art.53 technical documentation, Art.55 systemic risk obligations
- SME FAQ Series — AI Office preliminary guidance documents addressing common SME questions on classification and documentation
Pending Commission Guidelines (formal Art.96 obligation):
- Formal Art.96(1) guidelines on Art.8–15 implementation — scheduled for publication ahead of August 2026 full enforcement
- Art.96(2) high-risk classification examples list — most anticipated document; classification uncertainty is the primary compliance friction point
- Art.96(3) SME-tailored guidance package — simplified documentation templates and self-assessment tools
Implementation Note: The formal Art.96 guidelines are expected to consolidate and replace the interim AI Office guidance documents. Until formal Art.96 publication, interim AI Office guidance documents represent the Commission's current interpretive position and should be treated as the working standard for compliance decisions.
SME Practical Implementation Under Art.96
For an SME developer, Art.96 has three operational implications:
1. Classification Safety Check
Before investing in full Chapter III compliance infrastructure, an SME should:
- Monitor for publication of the Art.96(2) examples list
- Check whether your specific use case appears as "not high-risk"
- If it does, cite the matching example in your Annex IV technical documentation
- If it does not, proceed on the full high-risk compliance track
2. Documentation Using Commission Templates
When the Commission publishes its Art.96(3) SME guidance, it is expected to include documentation templates. SMEs should:
- Adopt Commission templates rather than developing custom documentation — a template-based file is the most defensible route to demonstrating Annex IV compliance
- Track Art.96 guidance updates, since template revisions require corresponding documentation updates
3. Art.50 Transparency: Universal Obligation
Even if your system is not high-risk, Art.50 transparency obligations apply. Art.96(1)(c) guidance on Art.50 will be directly relevant. Key Art.50 requirements for all AI systems:
- Art.50(1): Providers of AI systems that interact directly with people must ensure users know they are interacting with AI (unless obvious from context)
- Art.50(2): Providers of AI generating synthetic audio, image, video or text must mark outputs as artificially generated in machine-readable form where technically feasible
- Art.50(3): Deployers of emotion recognition or biometric categorisation systems must inform the people exposed to them
- Art.50(4): Deployers of deep fakes must disclose that the content has been artificially generated or manipulated
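A hedged sketch of scoping these duties against product features; the feature flags and duty strings are illustrative simplifications of the obligations above, not a substitute for legal review:

```python
def art_50_duty_scan(
    interacts_with_users: bool,
    generates_synthetic_content: bool,
    uses_emotion_or_biometric_categorisation: bool,
) -> list[str]:
    """List the Art.50-style transparency duties a feature set plausibly triggers."""
    duties = []
    if interacts_with_users:
        duties.append("disclose AI interaction unless obvious from context")
    if generates_synthetic_content:
        duties.append("mark outputs as machine-generated")
        duties.append("embed machine-readable watermarks where technically feasible")
    if uses_emotion_or_biometric_categorisation:
        duties.append("inform exposed persons")
    return duties
```

An empty result does not mean "no obligations"; it only means none of the modelled feature flags fired.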
CLOUD Act and Infrastructure Positioning
Art.96 guidelines create a documentation-heavy compliance environment. Every Art.96(1)(a) guideline — from risk management system requirements to technical documentation structure — generates artefacts that must be stored, retained, and potentially produced to NCAs or the AI Office.
For AI providers storing compliance documentation on non-EU infrastructure, this creates a jurisdiction exposure point the Art.96 guidelines themselves are unlikely to address directly but that every compliance team should evaluate:
The CLOUD Act exposure chain for compliance documentation:
Art.96 guideline → compliance artefact requirement
│
├─ Stored on US-cloud infrastructure
│ │
│ └─ CLOUD Act (18 U.S.C. § 2713):
│ US government can compel disclosure
│ without EU authority notification
│
└─ Stored on EU-incorporated infrastructure
│
└─ EU law governs disclosure
GDPR Art.48 blocks extraterritorial
access without mutual assistance treaty
Compliance documentation is not the same as personal data, but it can contain sensitive business information, proprietary model architecture details, training data descriptions, and vulnerability assessments. For companies where this information is commercially sensitive or where jurisdiction over disclosure matters, the infrastructure choice is not neutral.
sota.io is incorporated as a German GmbH, operates infrastructure in Frankfurt, and provides managed deployment without US-parent jurisdiction exposure — a directly relevant differentiation for AI compliance infrastructure where the Art.96 documentation chain runs deep.
Python Implementation: Art96GuidelineTracker
```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class GuidelineStatus(Enum):
    PUBLISHED = "published"
    PENDING = "pending"
    DRAFT_CONSULTATION = "draft_consultation"
    SUPERSEDED = "superseded"


class RiskClassification(Enum):
    HIGH_RISK = "high_risk"
    NOT_HIGH_RISK = "not_high_risk"
    PROHIBITED = "prohibited"
    REQUIRES_ANALYSIS = "requires_analysis"


@dataclass
class CommissionGuideline:
    """One Commission guidance document and its publication status."""

    article_ref: str
    title: str
    status: GuidelineStatus
    published_date: Optional[date]
    applies_to_sme: bool
    summary: str


@dataclass
class UseCaseClassification:
    """A recorded classification decision for one AI system."""

    system_description: str
    annex_iii_category: Optional[str]
    commission_example_match: Optional[str]
    classification: RiskClassification
    art_6_3_safe_harbour: bool
    documentation_ref: str
    review_date: date


class Art96GuidelineTracker:
    def __init__(self, company_name: str, is_sme: bool = True):
        self.company_name = company_name
        self.is_sme = is_sme
        self.guidelines: list[CommissionGuideline] = []
        self.use_case_classifications: list[UseCaseClassification] = []
        self._load_known_guidelines()

    def _load_known_guidelines(self) -> None:
        self.guidelines = [
            CommissionGuideline(
                article_ref="Art.96(1)(a)",
                title="Practical Implementation Guidelines Art.8-15 and Art.25",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Guidelines on risk management, data governance, technical documentation, "
                "logging, transparency, human oversight, accuracy, and supply chain "
                "obligations for Annex III high-risk AI systems.",
            ),
            CommissionGuideline(
                article_ref="Art.96(1)(b)",
                title="Guidelines on Prohibited AI Practices (Art.5)",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Interpretive guidance on the Art.5 prohibitions: manipulation, "
                "social scoring, real-time biometric identification, emotion recognition "
                "in sensitive contexts, and AI-assisted vulnerability exploitation.",
            ),
            CommissionGuideline(
                article_ref="Art.96(1)(c)",
                title="Practical Implementation of Art.50 Transparency Obligations",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Guidance on chatbot disclosure, AI-generated content marking, "
                "emotion recognition notification, and machine-readable watermarking "
                "obligations applicable to all AI systems regardless of risk class.",
            ),
            CommissionGuideline(
                article_ref="Art.96(2)",
                title="High-Risk Classification Examples List (Art.6(3))",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Concrete use case examples of AI systems that are high-risk and "
                "that are not high-risk under Art.6(3) in conjunction with Annex III. "
                "Primary tool for classification decisions and NCA documentation.",
            ),
            CommissionGuideline(
                article_ref="Art.96(3)",
                title="SME-Tailored Implementation Guidance Package",
                status=GuidelineStatus.PENDING,
                published_date=None,
                applies_to_sme=True,
                summary="Simplified documentation templates, self-assessment tools, "
                "sandbox coordination guidance, and reduced-burden conformity "
                "pathways specifically for SMEs and start-ups.",
            ),
        ]

    def classify_use_case(
        self,
        system_description: str,
        annex_iii_category: Optional[str],
        matches_commission_example: Optional[str] = None,
        art_6_3_applies: bool = False,
    ) -> UseCaseClassification:
        # A Commission example match takes precedence over independent analysis.
        # Check "not high-risk" before "high-risk": the former contains the
        # latter as a substring.
        if matches_commission_example and "not high-risk" in matches_commission_example.lower():
            classification = RiskClassification.NOT_HIGH_RISK
        elif matches_commission_example and "high-risk" in matches_commission_example.lower():
            classification = RiskClassification.HIGH_RISK
        elif annex_iii_category and art_6_3_applies:
            classification = RiskClassification.NOT_HIGH_RISK
        elif annex_iii_category:
            classification = RiskClassification.HIGH_RISK
        else:
            classification = RiskClassification.REQUIRES_ANALYSIS

        entry = UseCaseClassification(
            system_description=system_description,
            annex_iii_category=annex_iii_category,
            commission_example_match=matches_commission_example,
            classification=classification,
            art_6_3_safe_harbour=art_6_3_applies,
            documentation_ref=f"art11-annex-iv-classification-{date.today().isoformat()}",
            review_date=date.today(),
        )
        self.use_case_classifications.append(entry)
        return entry

    def pending_guidelines(self) -> list[CommissionGuideline]:
        return [g for g in self.guidelines if g.status == GuidelineStatus.PENDING]

    def sme_compliance_score(self) -> dict:
        classified = len(self.use_case_classifications)
        unresolved = sum(
            1 for c in self.use_case_classifications
            if c.classification == RiskClassification.REQUIRES_ANALYSIS
        )
        pending = len(self.pending_guidelines())
        # Heuristic readiness score: unresolved classifications weigh more
        # heavily than guidelines still awaiting publication.
        score = max(0, 100 - (unresolved * 20) - (pending * 5))
        return {
            "score": score,
            "classified_systems": classified,
            "unresolved_systems": unresolved,
            "pending_guidelines": pending,
            "is_sme": self.is_sme,
            "sme_simplified_path_available": self.is_sme,
        }

    def compliance_summary(self) -> str:
        score = self.sme_compliance_score()
        lines = [
            f"Art.96 Compliance Tracker — {self.company_name}",
            f"SME status: {'Yes — simplified path available' if self.is_sme else 'No'}",
            f"Systems classified: {score['classified_systems']}",
            f"Requiring further analysis: {score['unresolved_systems']}",
            f"Commission guidelines pending: {score['pending_guidelines']}",
            f"Overall readiness score: {score['score']}/100",
            "",
            "Pending guidelines to monitor:",
        ]
        for g in self.pending_guidelines():
            lines.append(f"  [{g.article_ref}] {g.title}")
        return "\n".join(lines)


# Example usage
if __name__ == "__main__":
    tracker = Art96GuidelineTracker(company_name="AcmeSoft GmbH", is_sme=True)

    # Classify a CV screening system
    cv_screening = tracker.classify_use_case(
        system_description="AI system that ranks and filters job applicants from CVs",
        annex_iii_category="Category 4 — Employment and workers management",
        matches_commission_example=None,
        art_6_3_applies=False,
    )
    print(f"CV screening: {cv_screening.classification.value}")

    # Classify a grammar checker
    grammar_check = tracker.classify_use_case(
        system_description="Grammar and style checking tool for professional documents",
        annex_iii_category=None,
        matches_commission_example="not high-risk — procedural assistance tool",
        art_6_3_applies=False,
    )
    print(f"Grammar checker: {grammar_check.classification.value}")

    print(tracker.compliance_summary())
```
25-Item SME Developer Checklist — Art.96 Implementation
Classification Foundation (Art.96(2))
- 1. Reviewed current AI Office interim classification guidance for your use cases
- 2. Identified all potential Annex III categories that could apply to each AI system
- 3. Applied Art.6(3) three-part test to any Annex III system you believe is not high-risk
- 4. Documented classification rationale in technical documentation referencing Commission guidance
- 5. Set calendar reminder to review Art.96(2) examples list when formally published
Transparency Obligations (Art.96(1)(c) — applies to all AI systems)
- 6. Audited all customer-facing AI features for Art.50 disclosure requirements
- 7. Implemented chatbot disclosure ("you are interacting with an AI") where applicable
- 8. Assessed AI-generated content marking requirements for your outputs
- 9. Reviewed emotion recognition or biometric features for Art.50 notification obligations
- 10. Evaluated technical feasibility of machine-readable watermarking for synthetic content
Prohibited Practices (Art.96(1)(b) — applies to all AI systems)
- 11. Confirmed no Art.5(1)(a) subliminal manipulation techniques in product
- 12. Confirmed no exploitation of vulnerabilities due to age, disability, or a specific social or economic situation
- 13. Confirmed no social scoring of natural persons based on social behaviour or personal characteristics (credit scoring within lawful financial services is a separate, high-risk Annex III category)
- 14. Confirmed no real-time remote biometric identification in public spaces (unless law enforcement exception applies)
- 15. Documented Art.5 compliance review with sign-off date in technical documentation
SME Compliance Pathways (Art.96(3))
- 16. Verified SME classification (micro: <10 employees/<€2M; small: <50/<€10M; medium: <250/<€50M)
- 17. Identified applicable SME simplifications: Art.11(2) documentation, Art.29(5) deployer obligations, Art.68 sandbox priority
- 18. Subscribed to AI Office newsletter/RSS for Art.96 SME guidance publication alerts
- 19. Considered Art.68 regulatory sandbox participation if operating in novel high-risk territory
- 20. Evaluated Art.95 voluntary code participation as a pre-classification compliance signal
Documentation and Monitoring (Art.96(1)(a))
- 21. For high-risk systems: confirmed Art.9 risk management system is documented and iterative
- 22. For high-risk systems: Art.11 technical documentation covers all Annex IV sections
- 23. For high-risk systems: Art.13 user transparency disclosure is implemented and documented
- 24. For high-risk systems: Art.14 human oversight mechanisms are technically implemented
- 25. Set quarterly review cadence for Commission guideline updates — Art.96 guidelines update as AI state of the art evolves
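A lightweight way to track coverage of the checklist above: record completed item numbers and report per-section progress. The section ranges mirror the checklist's own grouping; the names and structure are this sketch's assumptions:

```python
CHECKLIST_SECTIONS = {
    "classification_foundation": range(1, 6),
    "transparency_obligations": range(6, 11),
    "prohibited_practices": range(11, 16),
    "sme_pathways": range(16, 21),
    "documentation_monitoring": range(21, 26),
}


def section_progress(completed: set[int]) -> dict[str, str]:
    """Per-section 'done/total' summary for the 25-item checklist."""
    return {
        name: f"{len(completed & set(items))}/{len(items)}"
        for name, items in CHECKLIST_SECTIONS.items()
    }
```

Feeding it the set of ticked item numbers makes gaps visible per section rather than as one opaque total.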
See Also
- EU AI Act Art.97: Committee Procedure (Comitology) — Implementing Acts and Regulatory Change Timelines — The Art.97 committee mechanism that converts Art.96 guidance into binding implementing acts
- EU AI Act Art.95: Voluntary Codes of Conduct for Non-High-Risk AI — The Art.95 voluntary compliance lane that feeds into the Art.96 guidelines framework
- EU AI Act Art.6: High-Risk AI Classification Rules — The Art.6(3) escape route that Art.96(2) examples operationalise
- EU AI Act Art.11: Technical Documentation Lifecycle — The Annex IV documentation system that Art.96(1)(a) guidelines interpret
- EU AI Act Art.50: Transparency Obligations for AI Systems — The universal transparency rules that Art.96(1)(c) guidelines address
- EU AI Act Art.68: Regulatory Sandboxes — The SME sandbox program that Art.96(3) SME guidance coordinates with