2026-04-12·13 min read·sota.io team

EU AI Act Art.5: Prohibited Practices — What Developers Cannot Build — Developer Guide (2026)

Article 5 of the EU AI Act is different from everything else in the regulation. Every other provision gives you a compliance pathway: document your system, run a conformity assessment, register in the EU database, get CE-marked. Article 5 gives you nothing. The eight practices listed in Art.5 are absolutely prohibited — and the penalty tier is the highest in the regulation: up to €35 million or 7% of global annual turnover, whichever is higher.

Most developer coverage of the EU AI Act focuses on the systems listed in Annex III and the high-risk obligations they trigger. That coverage misses the more fundamental question: are there things you simply cannot build, regardless of how well-documented or technically safe they are? Article 5 is the answer: yes, eight categories of AI systems fall outside the compliance framework entirely.

This guide walks through all eight prohibitions with the precision needed to make real technical decisions — not the surface-level summaries that appear in most legal overviews.

Why Article 5 Has No Compliance Pathway

The EU AI Act is risk-based. High-risk AI systems (Annex III) must comply with Chapter III requirements. Limited-risk systems (Art.50) have transparency obligations. Minimal-risk systems have no mandatory requirements.

Article 5 sits outside this risk ladder. The legislative rationale is that the practices listed present risks to fundamental rights so severe that no amount of technical mitigation can make them acceptable. The European Parliament added several prohibitions during trilogue — including emotion recognition and biometric categorization — over Commission objections, based on the position that these capabilities are inherently incompatible with human dignity and nondiscrimination rights.

The practical consequence: there is no CE mark for an Art.5 system, no conformity assessment body that can certify it, no regulatory sandbox that applies. If your system falls within one of the eight prohibitions, the only compliant path is redesign.

The Eight Prohibited Practices

1. Subliminal Manipulation — Art.5(1)(a)

What it prohibits: AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behavior in a way that causes or is likely to cause significant harm.

The three-part test: For this prohibition to apply, the system must (i) operate below the threshold of conscious awareness or use deceptive techniques, (ii) materially distort behavior (not merely influence it), and (iii) cause or be likely to cause significant harm.
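
The test is conjunctive: failing any one limb takes a system outside the prohibition. A minimal sketch of that structure, with parameter names that are illustrative rather than statutory language:

def art5_1a_triggered(
    subliminal_or_deceptive_technique: bool,
    materially_distorts_behavior: bool,
    significant_harm_likely: bool,
) -> bool:
    # Art.5(1)(a) is conjunctive: all three limbs must hold.
    return (
        subliminal_or_deceptive_technique
        and materially_distorts_behavior
        and significant_harm_likely
    )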

Developer risk zones: The "subliminal" threshold is where the technical difficulty lies. Psychological research on dark patterns, behavioral nudging, and attention-capture mechanisms runs directly into this provision. Recommendation algorithms optimized for engagement metrics (time-on-platform, compulsive scroll behavior) are candidates if they exploit cognitive biases below the conscious threshold of the targeted user.

The "significant harm" requirement provides some grounding. A recommendation that keeps you watching videos is not obviously covered. A system that exploits gambling addiction patterns to maximize revenue from problem gamblers, or that uses micro-targeting to push users toward self-harm content, is more clearly within scope.

Gray zone: Personalization algorithms that A/B test emotional triggers, pricing systems that exploit loss aversion, or chatbots designed to exploit attachment psychology without user awareness all raise Art.5(1)(a) questions. The key is whether the technique operates below conscious awareness and the harm is significant.

Design implication: Explicitly audit whether your personalization or recommendation logic targets cognitive biases rather than expressed preferences. Learning preferences from what users choose is distinct from behavioral manipulation that targets unconscious patterns.

2. Exploitation of Vulnerabilities — Art.5(1)(b)

What it prohibits: AI systems that exploit vulnerabilities of specific persons or groups, due to age, disability, or a specific social or economic situation, to materially distort behavior in a way that causes or is likely to cause significant harm.

Overlap with Art.5(1)(a): Where (a) focuses on technique (subliminal/deceptive), (b) focuses on target group. A non-subliminal technique that still exploits the cognitive vulnerabilities specific to children, elderly users with cognitive decline, or persons in financial distress can be prohibited under (b) without triggering (a).

Developer risk zones:

Age-related systems: Systems designed for children require particular attention. The combination of Art.5(1)(b) and existing GDPR/ePrivacy protections for minors creates a high bar. Any AI that exploits the developmental characteristics of children (lower impulse control, peer status sensitivity, inability to fully understand algorithmic influence) faces this prohibition.

Design implication: If your system targets or disproportionately serves vulnerable groups, the question is whether the design exploits their vulnerability or serves their interests. The former is prohibited, the latter is not.
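
The structural difference from Art.5(1)(a) can be made explicit in the same style. A hedged sketch with illustrative parameter names: no subliminal or deceptive technique is required once the system exploits a vulnerability specific to the target group.

def art5_1b_triggered(
    targets_group_by_age_disability_or_situation: bool,
    exploits_that_vulnerability: bool,
    materially_distorts_behavior: bool,
    significant_harm_likely: bool,
) -> bool:
    # Unlike Art.5(1)(a), the technique need not be subliminal or deceptive;
    # the trigger is exploitation of the group-specific vulnerability.
    return (
        targets_group_by_age_disability_or_situation
        and exploits_that_vulnerability
        and materially_distorts_behavior
        and significant_harm_likely
    )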

3. Social Scoring — Art.5(1)(c)

What it prohibits: AI systems that evaluate or classify natural persons or groups over a period of time based on their social behavior or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental or unfavorable treatment in social contexts unrelated to those in which the data was originally generated or collected, or treatment that is unjustified or disproportionate to the behavior or its gravity.

The China parallel: This provision was explicitly motivated by Chinese-style social credit systems, but the legislative text is broader. The key elements are: (i) evaluation or classification based on social behavior or personal characteristics over time, (ii) a resulting social score, (iii) detrimental or unfavorable treatment, (iv) in contexts unrelated to the original data collection, or disproportionate to the scored behavior.

The public/private scope: The Commission's 2021 proposal limited this prohibition to public authorities and those acting on their behalf, and older commentary still repeats that framing. The final text dropped the limitation: social scoring that meets the elements above is prohibited whether the operator is a government agency, a contractor scoring citizens on its behalf (for benefit eligibility, immigration processing, or judicial risk assessment), or a purely private company.

Developer risk zones:

Private sector parallel: Single-context private scoring (a credit score used for the credit decision it was built for) generally falls outside Art.5(1)(c), because the treatment occurs in the context in which the data was collected. Such systems are instead typically high-risk AI under Annex III categories for employment, credit, or benefits assessment, which triggers Chapter III obligations rather than outright prohibition. A private score reused to disadvantage people in unrelated contexts, however, can cross into the prohibition.

Design implication: Whether your customer is public or private sector, audit whether your output creates a behavioral score that affects individuals in contexts unrelated to the original evaluation purpose, or disproportionately to the scored behavior. That structure is the core of the Art.5(1)(c) prohibition.
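
Those elements reduce to a screening predicate. A sketch under the reading above; the flag names are illustrative, and the hard cases belong to legal review, not this function:

def art5_1c_triggered(
    scores_social_behavior_or_traits: bool,
    leads_to_detrimental_treatment: bool,
    context_unrelated_to_data_origin: bool,
    treatment_unjustified_or_disproportionate: bool,
) -> bool:
    # Final-text structure: a social score plus detrimental treatment that is
    # either context-unrelated or unjustified/disproportionate.
    return (
        scores_social_behavior_or_traits
        and leads_to_detrimental_treatment
        and (context_unrelated_to_data_origin
             or treatment_unjustified_or_disproportionate)
    )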

4. Predictive Policing Based on Personal Characteristics — Art.5(1)(d)

What it prohibits: AI systems that make risk assessments of natural persons in order to assess or predict the risk of a person committing a criminal offence, based solely on profiling or on assessing their personality traits and characteristics.

The "solely" qualifier: This prohibition is narrower than it appears. It prohibits prediction "solely" based on profiling or personality characteristics. A system that uses case history, forensic evidence, and behavioral data, where personal characteristics are one factor among many, is not clearly covered. A system whose prediction rests on demographic or socioeconomic profiling alone is.

What remains lawful under Art.5(1)(d): The Act expressly carves out systems that support a human assessment already based on objective and verifiable facts directly linked to a criminal activity. Criminal risk assessment tools that use evidence-based, individual-level factors (prior convictions, case-specific behavior) are therefore not covered by this prohibition. They may be high-risk AI under Annex III (point 6: law enforcement uses, including individual risk assessment), which means Chapter III obligations apply, but they are not prohibited.

Developer risk zones:

The hot-spot policing distinction: Geospatial crime prediction (hot-spot policing) that targets areas rather than individuals is likely outside Art.5(1)(d). Individual-level profiling that targets persons based on their characteristics is within scope.
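
The "solely" and individual-targeting limbs can be screened the same way. A sketch with illustrative names:

def art5_1d_triggered(
    predicts_individual_offending_risk: bool,
    targets_individuals_not_areas: bool,
    rests_solely_on_profiling_or_traits: bool,
) -> bool:
    # Area-level hot-spot prediction fails the second limb; systems grounded
    # in objective, case-linked evidence fail the third ("solely") limb.
    return (
        predicts_individual_offending_risk
        and targets_individuals_not_areas
        and rests_solely_on_profiling_or_traits
    )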

5. Untargeted Facial Recognition Scraping — Art.5(1)(e)

What it prohibits: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

This prohibition is absolute: Unlike some Art.5 provisions that have narrow exceptions, Art.5(1)(e) has none. You cannot scrape the internet for facial images to build a biometric database, regardless of stated purpose.

Developer risk zones:

The Clearview AI model: This prohibition is a direct response to the Clearview AI business model, which scraped billions of facial images from the internet to build a law enforcement facial recognition database. That specific business model is prohibited under Art.5(1)(e).

What remains permitted: Facial recognition databases built from consented enrollment (users upload their own images), verification against ID documents where the individual presents themselves, or law enforcement databases built through lawful collection processes are not covered by this provision.

Design implication: If your product involves facial recognition, the database provenance question is critical. How were your training images collected? How is your production database populated? Web scraping or CCTV harvesting without consent is absolutely prohibited.
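
One way to operationalize the provenance question is a source audit over every pipeline that populates the database. A sketch; the source labels are hypothetical categories for illustration, not Act terminology:

PROHIBITED_SOURCES = {"untargeted_web_scraping", "cctv_harvesting"}
PERMITTED_SOURCES = {
    "consented_enrollment",
    "id_document_verification",
    "lawful_law_enforcement_collection",
}


def facial_db_provenance_compliant(image_sources: set[str]) -> bool:
    # Art.5(1)(e) has no purpose-based exception: any untargeted scraping
    # source taints the database regardless of intended use.
    return not (image_sources & PROHIBITED_SOURCES)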

6. Emotion Recognition in Workplaces and Education — Art.5(1)(f)

What it prohibits: AI systems used to infer emotions of natural persons in workplaces and educational institutions, with exceptions only for medical or safety reasons.

What counts as emotion recognition: The Act ties the definition to inference of emotions or intentions from biometric data (Art.3(39)). Voice stress analysis, facial expression interpretation, gaze tracking correlated with engagement, and physiological signal analysis (heart rate variability, skin conductance) are all covered when used to infer emotional state. Sentiment analysis of plain text, which processes no biometric data, falls outside the definition, though combining text signals with biometric inputs brings a system back into scope.

Scope of prohibition:

The safety exception: Medical reasons (clinical settings, therapeutic applications) and safety reasons (detecting operator fatigue in high-risk environments like aviation or heavy machinery) are excepted. The exception is narrow — it applies to safety-critical monitoring where the alternative is physical harm, not to general productivity improvement.

Developer risk zones:

Design implication: If your system operates in workplace or educational contexts, audit every feature that touches emotional state inference — even as a secondary feature or "engagement metric." The prohibition covers inference of emotion, not just explicit emotion classification.

7. Biometric Categorization for Protected Attributes — Art.5(1)(g)

What it prohibits: AI systems that categorize natural persons individually based on their biometric data to infer or deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The provision carves out the labelling or filtering of lawfully acquired biometric datasets and the categorization of biometric data in the area of law enforcement.

The Article 9 GDPR parallel: These are exactly the special categories of personal data under Art.9 GDPR, but Art.5(1)(g) prohibits using biometric AI to infer them — even if you do not explicitly store the inferred attribute as personal data.

What counts as biometric data here: The EU AI Act defines biometric data as personal data resulting from specific technical processing of physical, physiological, or behavioral characteristics: facial images, voice recordings, gait, physiological signals. Unlike GDPR's definition, it does not require that the data allow or confirm unique identification, so data processed by AI merely to categorize a person still counts.

Developer risk zones:

The research exception question: Article 2(6) excludes AI systems developed and put into service solely for scientific research and development from the Act's scope, so purely academic work may escape Art.5(1)(g) on that basis. The exclusion is fragile, though: the moment a model trained to classify individuals by protected attributes from biometric data leaves the research context and is placed on the market or put into service, the prohibition applies. This remains one of the most contested boundaries, because it constrains a class of applied computer vision and social science research.

Design implication: If your model architecture includes biometric input features and outputs that could correlate with protected attributes, Art.5(1)(g) applies regardless of whether you label the output as a "protected attribute prediction." The functional capability is what matters.

8. Real-Time Remote Biometric Identification in Public Spaces — Art.5(1)(h)

What it prohibits: AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes. This is subject to narrow exceptions.

The three components: "Real-time" (live or near-live matching), "remote" (at a distance, without individual awareness or cooperation), "biometric identification" (matching to a known identity, not just detection or classification).

Scope — law enforcement only: This prohibition applies specifically to law enforcement use. Private security use of real-time biometric identification in public spaces may still be covered by GDPR biometric data processing restrictions, but Art.5(1)(h) specifically targets the law enforcement context.

The narrow exceptions — Art.5(2) and (3): Real-time remote biometric identification by law enforcement is permitted only for:

(i) the targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation, and the search for missing persons;

(ii) the prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack;

(iii) the localization or identification of a person suspected of a criminal offence referred to in Annex II, punishable by a custodial sentence or detention order of at least four years.

Each exception requires prior judicial authorization or independent administrative authorization (with urgency provisions for immediate threat). Use is time-limited, geographically bounded, and subject to oversight.

Developer risk zones: Building real-time biometric identification capability for law enforcement customers in EU public spaces without this authorization framework is the primary exposure. SaaS products that offer general-purpose real-time facial recognition to law enforcement without built-in authorization gating are building toward an Art.5(1)(h) violation.
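
For vendors serving law enforcement under the exception regime, a defensible pattern is deny-by-default authorization gating at the API layer. This is a sketch only: the AuthorizationRecord fields are hypothetical, and the actual procedure runs through national judicial or independent administrative authorities under Art.5(3).

from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class AuthorizationRecord:  # hypothetical structure, not defined by the Act
    issued_by_judicial_or_independent_authority: bool
    valid_until: datetime
    geographic_zone_id: str


def rbi_request_permitted(
    auth: Optional[AuthorizationRecord],
    request_zone_id: str,
    now: datetime,
) -> bool:
    # Deny by default: use must be pre-authorized, time-limited, and
    # geographically bounded.
    if auth is None:
        return False
    return (
        auth.issued_by_judicial_or_independent_authority
        and now <= auth.valid_until
        and request_zone_id == auth.geographic_zone_id
    )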

Art.5 Penalty Structure

Article 5 violations are subject to the highest fine tier in the EU AI Act: €35 million or 7% of total worldwide annual turnover, whichever is higher.

Compare this to the other tiers in Art.99: violations of most other obligations (including the Chapter III high-risk requirements) carry fines of up to €15 million or 3% of worldwide annual turnover; supplying incorrect, incomplete, or misleading information to notified bodies or national authorities carries up to €7.5 million or 1%.

The 7% rate exceeds GDPR's 4% maximum and the Digital Services Act's 6%. For a company with €500 million global turnover, 7% works out to exactly the €35 million floor, so an Art.5 violation exposes it to €35 million in fines either way.
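
A trivial calculator of the Art.99(3) ceiling makes the "whichever is higher" crossover explicit:

def art5_fine_ceiling_eur(global_annual_turnover_eur: int) -> int:
    # Art.99(3): up to EUR 35 million or 7% of total worldwide annual
    # turnover for the preceding financial year, whichever is higher.
    return max(35_000_000, global_annual_turnover_eur * 7 // 100)


print(art5_fine_ceiling_eur(500_000_000))    # 35000000 (the crossover point)
print(art5_fine_ceiling_eur(2_000_000_000))  # 140000000 (percentage governs)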

The "No Compliance Pathway" Consequence

This distinction matters architecturally. For Annex III high-risk AI, a developer has options: document the system, run a conformity assessment, implement the Chapter III technical requirements. Compliance is achievable; it is a process question.

For Art.5 systems, there is no process that makes the system compliant. The only compliant state is one where the system does not exist or has been redesigned to fall outside the prohibition.

This has a significant implication for product development: Art.5 risk needs to be identified at the design stage, not the deployment stage. A system that reaches production as an Art.5-prohibited practice requires complete redesign, not documentation updates.

Python Tooling: Art.5 Pre-Build Risk Assessment

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Art5Provision(Enum):
    SUBLIMINAL_MANIPULATION = "Art.5(1)(a)"
    VULNERABILITY_EXPLOITATION = "Art.5(1)(b)"
    SOCIAL_SCORING = "Art.5(1)(c)"
    PREDICTIVE_POLICING = "Art.5(1)(d)"
    FACIAL_SCRAPING = "Art.5(1)(e)"
    EMOTION_RECOGNITION = "Art.5(1)(f)"
    BIOMETRIC_CATEGORIZATION = "Art.5(1)(g)"
    REALTIME_BIOMETRIC_ID = "Art.5(1)(h)"


@dataclass
class Art5RiskFinding:
    provision: Art5Provision
    triggered: bool
    risk_level: str  # "CLEAR", "GRAY_ZONE", "NO_RISK"
    rationale: str
    redesign_required: bool
    notes: str = ""


@dataclass
class Art5Assessment:
    system_name: str
    findings: list[Art5RiskFinding] = field(default_factory=list)
    
    @property
    def prohibited(self) -> list[Art5RiskFinding]:
        return [f for f in self.findings if f.risk_level == "CLEAR" and f.triggered]
    
    @property
    def gray_zone(self) -> list[Art5RiskFinding]:
        return [f for f in self.findings if f.risk_level == "GRAY_ZONE"]
    
    @property
    def is_deployable(self) -> bool:
        return len(self.prohibited) == 0
    
    def summary(self) -> str:
        if self.prohibited:
            provisions = [f.provision.value for f in self.prohibited]
            return f"PROHIBITED: System triggers {', '.join(provisions)}. Redesign required."
        if self.gray_zone:
            provisions = [f.provision.value for f in self.gray_zone]
            return f"GRAY ZONE: Legal review required for {', '.join(provisions)}."
        return "LOW RISK: No clear Art.5 triggers identified. Standard compliance pathway applies."


def assess_subliminal_manipulation(
    uses_behavioral_targeting: bool,
    targets_unconscious_bias: bool,
    can_cause_significant_harm: bool,
    technique_is_deceptive: bool,
) -> Art5RiskFinding:
    """
    Art.5(1)(a): Subliminal manipulation assessment.
    Three-part test: (1) subliminal/deceptive technique, (2) material behavior distortion,
    (3) significant harm.
    """
    if targets_unconscious_bias and can_cause_significant_harm:
        return Art5RiskFinding(
            provision=Art5Provision.SUBLIMINAL_MANIPULATION,
            triggered=True,
            risk_level="CLEAR",
            rationale="System exploits unconscious cognitive patterns with significant harm potential.",
            redesign_required=True,
        )
    if uses_behavioral_targeting and technique_is_deceptive:
        return Art5RiskFinding(
            provision=Art5Provision.SUBLIMINAL_MANIPULATION,
            triggered=True,
            risk_level="GRAY_ZONE",
            rationale="Behavioral targeting with deceptive elements — legal review required.",
            redesign_required=False,
            notes="Review whether harm threshold is met and whether technique is truly subliminal.",
        )
    return Art5RiskFinding(
        provision=Art5Provision.SUBLIMINAL_MANIPULATION,
        triggered=False,
        risk_level="NO_RISK",
        rationale="No subliminal or deceptive manipulation identified.",
        redesign_required=False,
    )


def assess_emotion_recognition(
    context: str,  # "workplace", "education", "medical", "safety_critical", "other"
    infers_emotional_state: bool,
) -> Art5RiskFinding:
    """
    Art.5(1)(f): Emotion recognition in workplace/education contexts.
    Exception: medical reasons or safety in safety-critical environments.
    Note: Art.3(39) ties emotion recognition to inference from biometric
    data (face, voice, physiological signals), so a fuller assessment
    would also check the input modality; plain-text sentiment analysis
    falls outside the definition.
    """
    prohibited_contexts = {"workplace", "education"}
    excepted_contexts = {"medical", "safety_critical"}
    
    if infers_emotional_state and context in prohibited_contexts:
        return Art5RiskFinding(
            provision=Art5Provision.EMOTION_RECOGNITION,
            triggered=True,
            risk_level="CLEAR",
            rationale=f"Emotion inference in {context} context is prohibited under Art.5(1)(f).",
            redesign_required=True,
            notes="Remove emotion inference features. Non-emotion engagement metrics (click, time-on-task) may be permissible.",
        )
    if infers_emotional_state and context not in excepted_contexts | prohibited_contexts:
        return Art5RiskFinding(
            provision=Art5Provision.EMOTION_RECOGNITION,
            triggered=True,
            risk_level="GRAY_ZONE",
            rationale=f"Context '{context}' — emotion inference not clearly prohibited but review recommended.",
            redesign_required=False,
        )
    return Art5RiskFinding(
        provision=Art5Provision.EMOTION_RECOGNITION,
        triggered=False,
        risk_level="NO_RISK",
        rationale="No prohibited emotion recognition pattern identified.",
        redesign_required=False,
    )


def assess_biometric_categorization(
    uses_biometric_input: bool,
    can_infer_protected_attributes: bool,
    protected_attributes: Optional[list[str]] = None,
) -> Art5RiskFinding:
    """
    Art.5(1)(g): Biometric categorization for protected attributes.
    Covers: race, political opinions, union membership, religion,
    philosophical beliefs, sex life, sexual orientation.
    """
    protected = protected_attributes or []
    covered_attributes = {
        "race", "ethnicity", "political_opinion", "union_membership",
        "religion", "philosophical_belief", "sex_life", "sexual_orientation",
    }
    
    # Only attributes enumerated in Art.5(1)(g) trigger the prohibition;
    # an empty intersection (e.g. only non-listed attributes) does not.
    inferred = set(protected) if protected else covered_attributes
    triggered_attributes = covered_attributes & inferred
    if uses_biometric_input and can_infer_protected_attributes and triggered_attributes:
        return Art5RiskFinding(
            provision=Art5Provision.BIOMETRIC_CATEGORIZATION,
            triggered=True,
            risk_level="CLEAR",
            rationale=f"Biometric input used to infer protected attributes: {triggered_attributes}.",
            redesign_required=True,
            notes="Remove protected attribute inference from model outputs regardless of labeling.",
        )
    return Art5RiskFinding(
        provision=Art5Provision.BIOMETRIC_CATEGORIZATION,
        triggered=False,
        risk_level="NO_RISK",
        rationale="No biometric categorization for protected attributes identified.",
        redesign_required=False,
    )


# Example usage
def run_art5_assessment_example():
    assessment = Art5Assessment(system_name="Employee Engagement Platform")
    
    assessment.findings.append(assess_emotion_recognition(
        context="workplace",
        infers_emotional_state=True,
    ))
    assessment.findings.append(assess_subliminal_manipulation(
        uses_behavioral_targeting=True,
        targets_unconscious_bias=False,
        can_cause_significant_harm=False,
        technique_is_deceptive=False,
    ))
    assessment.findings.append(assess_biometric_categorization(
        uses_biometric_input=False,
        can_infer_protected_attributes=False,
    ))
    
    print(assessment.summary())
    for finding in assessment.prohibited:
        print(f"  PROHIBITED: {finding.provision.value} — {finding.rationale}")
    
    return assessment

The sota.io Infrastructure Angle

Art.5 is one area where infrastructure jurisdiction does not change your compliance obligation — but it does change your enforcement exposure. A non-EU SaaS provider offering prohibited AI capabilities to EU users is still violating Art.5 (the EU AI Act applies to providers placing AI systems on the EU market, regardless of where the provider is established).

However, a non-EU provider on US cloud infrastructure has a CLOUD Act exposure that compounds the risk: US law enforcement can subpoena the records of the AI system's operation, potentially including inputs and outputs from prohibited systems, without EU notification requirements. Running a prohibited system on EU-native infrastructure at least eliminates the cross-jurisdictional disclosure risk — though it does not make the prohibited practice lawful.

For compliant developers using EU-native infrastructure: Art.5 compliance is a product design question before it is an infrastructure question. Eliminate the prohibited capability at the design stage; infrastructure choices handle the residual data sovereignty concerns.

30-Item Art.5 Pre-Build Checklist

Subliminal Manipulation (Art.5(1)(a))

Vulnerability Exploitation (Art.5(1)(b))

Social Scoring (Art.5(1)(c))

Predictive Policing (Art.5(1)(d))

Facial Scraping (Art.5(1)(e))

Emotion Recognition (Art.5(1)(f))

Biometric Categorization (Art.5(1)(g))

Real-Time Biometric Identification (Art.5(1)(h))

Summary

Article 5 is the boundary of the EU AI Act. Eight practices are prohibited with no compliance pathway: subliminal manipulation, exploitation of vulnerabilities, social scoring, predictive policing based solely on profiling, untargeted facial scraping, emotion recognition in workplaces and education, biometric categorization for protected attributes, and real-time remote biometric identification by law enforcement in public spaces (except under strict authorization).

The penalty exposure, €35 million or 7% of global turnover, reflects the legislative intent: these prohibitions are not default rules with compliance pathways; they are absolute limits on what AI systems in the EU market can do.

For developers, the implication is design-stage: Art.5 risk analysis should happen before architecture decisions, not after production deployment. The 30-item checklist and Python tooling above are starting points for that pre-build review.


Related: EU AI Act Art.9: Risk Management Systems · EU AI Act Art.10: Training Data Requirements · EU AI Act Art.99: Penalties and Fines