2026-04-22 · 13 min read

EU AI Act Art.5: Prohibited AI Practices — Social Scoring, Biometric Surveillance and Subliminal Manipulation (2026)

Article 5 of Regulation (EU) 2024/1689 is the most absolute provision in the entire AI Act. Where other articles set requirements, impose documentation burdens, or mandate conformity assessments, Art.5 simply prohibits. Eight categories of AI systems may not be placed on the market, put into service, or used in the EU — with no conformity assessment pathway, no CE marking procedure, and no commercial justification that overrides the ban.

These prohibitions have applied since 2 February 2025 — six months after the Regulation entered into force. Any organisation that deployed, integrated, or operated these systems before that date and has not discontinued them is already in violation.

This guide covers:

- why Art.5 has applied since February 2025, with no grace period;
- each of the eight prohibited practices and its key elements;
- the penalty regime for violations;
- a practical compliance assessment workflow for developers;
- what Art.5 does not prohibit, and how it interacts with other regulations.


Why Art.5 Matters Now

Most EU AI Act coverage focuses on high-risk systems under Annex III, which face the most demanding conformity requirements with a grace period running to August 2026 for many categories. Art.5 has no such grace period.

The prohibitions cover practices that the European legislature determined were incompatible with fundamental rights regardless of technical capability: the issue is not whether the technology works, but that it must not be used for these purposes at all.

There is no risk-based escape hatch. A system that falls under Art.5 cannot be made compliant through redesign, risk mitigation, or added safeguards. The only response is discontinuation.


The Eight Prohibitions

Art.5(1)(a) — Subliminal Manipulation

Prohibition: AI systems that deploy subliminal techniques beyond a person's consciousness or manipulative techniques that exploit psychological weaknesses or biases, with the objective or effect of materially distorting behaviour in a way that causes or is likely to cause significant harm.

Key elements:

- a subliminal technique operating beyond a person's conscious awareness, or a purposefully manipulative or deceptive technique;
- the objective or the effect of materially distorting a person's behaviour;
- the distortion causes, or is reasonably likely to cause, significant harm.

Developer relevance: This prohibition targets systems specifically designed to manipulate rather than inform. A recommendation engine that optimises for engagement is not automatically prohibited — but one engineered to exploit documented psychological vulnerabilities (scarcity triggers, loss aversion, social proof manipulation) with demonstrable harm potential enters prohibited territory. The effect-based standard means intent is not a defence.

Key distinction from legitimate persuasion: Advertising, A/B testing, personalisation, and recommendation systems are not prohibited unless they operate subliminally or exploit cognitive vulnerabilities with material harmful effect. The GDPR Recital 47 analogy is instructive: ordinary commercial communication is permitted; dark patterns causing significant harm are not.


Art.5(1)(b) — Exploitation of Vulnerabilities of Specific Groups

Prohibition: AI systems that exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability, or specific social or economic situation, with the objective or effect of materially distorting behaviour in a way that causes or is likely to cause significant harm.

Key elements:

- exploitation of vulnerabilities arising from age, disability, or a specific social or economic situation;
- the objective or the effect of materially distorting the behaviour of that person or a member of that group;
- the distortion causes, or is reasonably likely to cause, significant harm.

Developer relevance: Systems targeting children with aggressive monetisation mechanics that exploit developmental psychology are the clearest case. Systems targeting people in financial distress with high-cost credit decisions that exploit decision-making under stress are a second clear case. The prohibition requires a causal link between the specific vulnerability and the manipulative mechanism — a system that happens to be used by children but does not exploit child-specific vulnerabilities does not automatically fall here.
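The tests in Art.5(1)(a) and (b) are cumulative: a technique alone does not trigger the ban without material distortion and significant harm, and (b) additionally requires a targeted vulnerability. A minimal screening sketch of that structure, with illustrative field names of our own:

from dataclasses import dataclass

@dataclass
class ManipulationReview:
    # Illustrative review record; the legal test is qualitative, not boolean.
    uses_subliminal_technique: bool        # operates beyond conscious awareness
    exploits_cognitive_bias: bool          # purposefully manipulative or deceptive
    targets_protected_vulnerability: bool  # age, disability, social/economic situation
    materially_distorts_behaviour: bool
    significant_harm_likely: bool

def manipulation_flags(review: ManipulationReview) -> list[str]:
    """Return the Art.5(1) points plausibly engaged; all elements are cumulative."""
    distortion_and_harm = (review.materially_distorts_behaviour
                           and review.significant_harm_likely)
    flags = []
    if (review.uses_subliminal_technique or review.exploits_cognitive_bias) and distortion_and_harm:
        flags.append("5(1)(a)")
    if review.targets_protected_vulnerability and distortion_and_harm:
        flags.append("5(1)(b)")
    return flags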


Art.5(1)(c) — Social Scoring

Prohibition: AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, where the social score leads to detrimental or unfavourable treatment in social contexts unrelated to those in which the data was originally generated or collected, or to treatment that is unjustified or disproportionate to the social behaviour or its gravity.

Key elements:

- evaluation or classification of persons over a certain period of time based on social behaviour or known, inferred or predicted personal or personality characteristics;
- a social score leading to detrimental treatment in contexts unrelated to those in which the data was generated or collected, or treatment unjustified or disproportionate to the behaviour or its gravity;
- scope: the final text applies to public and private actors alike; the public-authority limitation of the Commission's original proposal was dropped.

Developer relevance: Vendors building citizen-facing government systems must audit whether their products could enable social scoring as a secondary use case. A risk assessment tool used by a municipality for housing allocation becomes prohibited if it systematically penalises citizens for unrelated prior government interactions. The same analysis applies to commercial scoring products with cross-context effects. Cloud providers hosting such AI workloads are not automatically liable; the prohibition targets the scoring practice, not the infrastructure.
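The "unrelated context" element lends itself to a data lineage check: record the context in which each input signal originated and compare it with the context of the decision. A hedged sketch, with context labels invented for illustration:

def cross_context_signals(decision_context: str,
                          input_signals: dict[str, str]) -> list[str]:
    """Return input signals whose origin context differs from the decision context.

    input_signals maps a signal name to the context in which the underlying data
    was generated. Any non-empty result warrants Art.5(1)(c) review: detrimental
    treatment driven by such signals may amount to cross-context social scoring.
    """
    return [name for name, origin in input_signals.items()
            if origin != decision_context]

# A housing-allocation model consuming signals from unrelated public services:
cross_context_signals("housing", {
    "rent_arrears": "housing",               # in-context, not flagged
    "missed_clinic_appointments": "health",  # cross-context, flagged
    "transit_fare_evasion": "transit",       # cross-context, flagged
})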


Art.5(1)(d) — Individual Predictive Policing

Prohibition: AI systems for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of that person or on assessing their personality traits and characteristics. The prohibition does not apply to AI systems used to support a human assessment of a person's involvement in criminal activity that is already based on objective and verifiable facts directly linked to that activity.

Key elements:

- a risk assessment of a natural person to assess or predict the risk of that person committing a criminal offence;
- based solely on profiling or on assessing personality traits and characteristics;
- carve-out: systems supporting a human assessment already grounded in objective, verifiable facts directly linked to a criminal activity are not caught.

Developer relevance: Predictive policing products marketed to EU law enforcement agencies must be reviewed against Art.5(1)(d). Systems that generate individual-level risk scores based on demographic, behavioural, or historical data fall squarely within the prohibition. Systems that produce geographic hot-spot mapping without individual profiling operate outside Art.5(1)(d) — though they may be regulated as high-risk systems under Annex III.


Art.5(1)(e) — Untargeted Facial Image Scraping

Prohibition: AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Key elements:

- creation or expansion of a facial recognition database;
- untargeted scraping, i.e. indiscriminate collection not directed at specific individuals;
- sources covered: facial images from the internet or CCTV footage.

Developer relevance: Computer vision startups that trained models using web-scraped face datasets and are now offering those models in the EU face significant exposure. The prohibition covers the database creation — a downstream model built on prohibited training data carries legal uncertainty even if the model itself is used for permitted purposes. Legal teams should audit training dataset provenance for facial recognition models marketed in the EU.
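One practical starting point is a triage pass over whatever acquisition metadata the training pipeline retained. A minimal sketch, assuming each dataset carries a free-text source description; the keyword list is ours and is a heuristic, not a legal determination:

SCRAPING_INDICATORS = ("crawl", "scrape", "cctv", "webcam", "search engine")

def flag_scraped_face_datasets(datasets: dict[str, str]) -> list[str]:
    """datasets maps a dataset name to a free-text acquisition description.
    Returns names whose description suggests untargeted scraping and which
    therefore need manual Art.5(1)(e) provenance review."""
    return [name for name, source in datasets.items()
            if any(indicator in source.lower() for indicator in SCRAPING_INDICATORS)]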


Art.5(1)(f) — Emotion Recognition in Workplace and Educational Institutions

Prohibition: AI systems used in the workplace and in education institutions to infer the emotions of natural persons, except where the AI system is intended to be put in place or placed on the market for medical or safety reasons.

Key elements:

- inference of the emotions of natural persons;
- deployment in the workplace or in education institutions;
- narrow exception for systems intended for medical or safety reasons.

Developer relevance: HR analytics platforms that incorporate emotional state inference from video calls, facial expressions, or speech patterns for performance management or productivity monitoring are prohibited. Educational technology platforms that use facial expression analysis to detect student engagement or attention fall within the prohibition. The medical/safety exception is narrow — a "wellbeing monitoring" product does not qualify unless it has a clear clinical or safety justification.

Contrast with permitted uses: Emotion recognition in entertainment, consumer product testing with explicit consent, or clinical mental health applications outside the workplace/education context is not prohibited by Art.5(1)(f) — though it may be subject to other regulatory requirements.
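Two questions decide Art.5(1)(f) exposure: the deployment context, and whether a documented medical or safety purpose exists. A sketch under those assumptions:

PROHIBITED_CONTEXTS = {"workplace", "education"}

def emotion_inference_permitted(context: str, medical_or_safety_purpose: bool) -> bool:
    """True if emotion inference in this context falls outside Art.5(1)(f).
    'Wellbeing' branding does not count: the exception requires a documented
    clinical or safety justification, such as driver fatigue detection."""
    if context not in PROHIBITED_CONTEXTS:
        return True  # e.g. entertainment or consented consumer product testing
    return medical_or_safety_purpose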


Art.5(1)(g) — Biometric Categorisation for Sensitive Attributes

Prohibition: AI systems that categorise natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, except where the AI system is used for labelling or filtering of lawfully acquired biometric datasets in the context of law enforcement in accordance with Union law.

Key elements:

- biometric categorisation of individual natural persons;
- deduction or inference of race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;
- carve-out: labelling or filtering of lawfully acquired biometric datasets in the law enforcement context.

Developer relevance: Any system that uses facial features, voice patterns, gait analysis, or other biometric data to infer protected characteristics is prohibited. This covers both explicit systems ("predict political affiliation from face") and implicit systems where the prohibited inference is a documented secondary output. Marketing technology platforms that attempt to infer religious or political characteristics from biometric data for targeting purposes are clearly prohibited.


Art.5(1)(h) — Real-Time Remote Biometric Identification (RRBI) in Public Spaces

Prohibition: AI systems used for real-time remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement.

This prohibition has the most complex exception structure of any Art.5 provision, so the text requires careful reading.

What is prohibited: RRBI systems operated by law enforcement in publicly accessible spaces, unless the use is strictly necessary for one of the enumerated objectives below and covered by prior judicial or administrative authorisation.

What is permitted under Art.5(2)-(6): National law may authorise RRBI for law enforcement strictly for:

- the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation, and the search for missing persons;
- the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuinely foreseeable threat of a terrorist attack;
- the localisation or identification of a person suspected of a criminal offence listed in Annex II and punishable by a custodial sentence of a maximum of at least four years.

Exception conditions:

- an enabling national law: the Member State must have legislated for the possibility of such use;
- prior authorisation by a judicial authority or an independent administrative authority whose decision is binding (in duly justified urgency, use may begin without it, provided authorisation is requested without undue delay, at the latest within 24 hours);
- each use limited in time, geography, and the persons targeted, and necessary and proportionate to the situation;
- a fundamental rights impact assessment and registration of the system before deployment;
- notification of each use to the relevant market surveillance authority and the national data protection authority.

Developer relevance: Commercial RRBI systems may not be deployed by EU law enforcement without these authorisation structures. Vendors marketing real-time biometric systems to EU law enforcement buyers must ensure their contracts and deployment configurations comply with the Art.5(2)-(6) exception framework. Private sector RRBI in publicly accessible spaces (retail, hospitality) is not covered by Art.5(1)(h) — but may be restricted by GDPR and national biometric data protection laws.
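Vendors can encode the Art.5(2)-(6) preconditions as a deployment gate, so a system cannot be activated without the required records. A sketch with illustrative field names, not a statutory checklist:

from dataclasses import dataclass

@dataclass
class RRBIAuthorisation:
    national_law_basis: str       # Member State law permitting this use
    objective: str                # one of the three permitted objectives
    authorising_body: str         # judicial or independent administrative authority
    authorisation_granted: bool
    time_limited: bool            # use bounded in time
    geographically_limited: bool  # use bounded in place
    targets_defined: bool         # use bounded to identified persons

def deployment_gate(auth: RRBIAuthorisation) -> bool:
    """Refuse activation unless every recorded precondition is satisfied."""
    return all([auth.national_law_basis, auth.objective, auth.authorising_body,
                auth.authorisation_granted, auth.time_limited,
                auth.geographically_limited, auth.targets_defined])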


Penalties

Art.5 violations carry the highest penalties in the EU AI Act:

Violation                           Maximum fine   Per-turnover cap
Art.5 prohibited practices          €35,000,000    7% of global annual turnover
High-risk system non-compliance     €15,000,000    3% of global annual turnover
Provision of incorrect information  €7,500,000     1% of global annual turnover
SME/startup adjustment              capped at the lower of the applicable amount or percentage (Art.99(6))

The 7% cap is higher than GDPR's 4% for the most severe violations — placing prohibited AI practices in the same legislative severity tier as the most serious data protection violations.
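In numbers, the cap for an undertaking is the higher of the fixed amount and the turnover percentage, while for SMEs Art.99(6) takes the lower of the two. A quick sketch:

def max_art5_fine(global_annual_turnover: float, is_sme: bool = False) -> float:
    """Art.99(3): up to €35m or 7% of worldwide annual turnover, whichever is
    higher; for SMEs and startups, whichever is lower (Art.99(6))."""
    fixed, percentage = 35_000_000.0, 0.07 * global_annual_turnover
    return min(fixed, percentage) if is_sme else max(fixed, percentage)

max_art5_fine(2_000_000_000)            # 140,000,000.0: 7% of €2bn exceeds €35m
max_art5_fine(20_000_000, is_sme=True)  # 1,400,000.0: 7% of €20m is the lower cap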


Compliance Assessment for Developers

Step 1: System Inventory

from dataclasses import dataclass
from enum import Enum

class Art5Prohibition(Enum):
    SUBLIMINAL_MANIPULATION = "5(1)(a)"
    VULNERABILITY_EXPLOITATION = "5(1)(b)"
    SOCIAL_SCORING = "5(1)(c)"
    PREDICTIVE_POLICING = "5(1)(d)"
    FACIAL_SCRAPING = "5(1)(e)"
    EMOTION_RECOGNITION_WORKPLACE = "5(1)(f)"
    BIOMETRIC_CATEGORISATION = "5(1)(g)"
    RRBI_PUBLIC_SPACES = "5(1)(h)"

@dataclass
class SystemAssessment:
    system_name: str
    description: str
    deployment_context: str  # 'law_enforcement', 'workplace', 'education', 'public', 'commercial'
    data_inputs: list[str]
    outputs: list[str]
    identified_prohibitions: list[Art5Prohibition]
    discontinuation_required: bool
    discontinuation_date: str | None
    legal_exception_applicable: bool
    exception_basis: str | None
    reviewed_by: str
    review_date: str

def assess_art5_exposure(system: SystemAssessment) -> dict:
    """
    Return structured exposure assessment for Art.5 review.
    """
    exposure = {
        "system": system.system_name,
        "prohibitions_identified": [p.value for p in system.identified_prohibitions],
        "action_required": "DISCONTINUE" if system.discontinuation_required and not system.legal_exception_applicable else "MONITOR",
        "exception_available": system.legal_exception_applicable,
        "exception_basis": system.exception_basis,
        "deadline": system.discontinuation_date or "2025-02-02 (already past)"
    }
    return exposure
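Usage might look like this; the system details are invented for illustration:

assessment = SystemAssessment(
    system_name="engagement-optimizer-v2",
    description="Recommendation engine running scarcity-trigger experiments",
    deployment_context="commercial",
    data_inputs=["clickstream", "session timing"],
    outputs=["content ranking"],
    identified_prohibitions=[Art5Prohibition.SUBLIMINAL_MANIPULATION],
    discontinuation_required=True,
    discontinuation_date=None,
    legal_exception_applicable=False,
    exception_basis=None,
    reviewed_by="legal@example.com",
    review_date="2026-04-22",
)
assess_art5_exposure(assessment)  # action_required: "DISCONTINUE"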

Step 2: Discontinuation Protocol

For systems that fall within Art.5 prohibitions without applicable exceptions:

from datetime import date

def generate_discontinuation_plan(assessment: SystemAssessment) -> dict:
    """
    Generate actionable discontinuation plan for prohibited systems.
    """
    if not assessment.discontinuation_required:
        return {"status": "no_action_required"}
    
    today = date.today()
    deadline = date(2025, 2, 2)  # Art.5 applicable since this date
    already_past_deadline = today > deadline
    
    steps = [
        "1. Immediately suspend new deployments of the system",
        "2. Notify all internal stakeholders of prohibition status",
        "3. Identify all active integrations and downstream dependencies",
        "4. Establish decommissioning timeline (target: immediate for past-deadline systems)",
        "5. Delete training datasets created through prohibited methods (Art.5(1)(e))",
        "6. Document discontinuation for regulatory record-keeping",
        "7. Assess downstream model exposure if trained on prohibited data",
    ]
    
    return {
        "system": assessment.system_name,
        "prohibition": [p.value for p in assessment.identified_prohibitions],
        "already_in_violation": already_past_deadline,
        "recommended_action": "immediate_discontinuation",
        "steps": steps,
        "regulatory_notification": "Consider voluntary disclosure to market surveillance authority if already deployed",
        "data_deletion": "facial databases (Art.5(1)(e)) and biometric categorisation outputs (Art.5(1)(g)) must be deleted"
    }
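Feeding the same assessment through the planner:

plan = generate_discontinuation_plan(assessment)
plan["already_in_violation"]  # True on any date after 2025-02-02
plan["recommended_action"]    # "immediate_discontinuation"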

Step 3: Ongoing Monitoring

Art.5 prohibitions are technology-neutral — what is prohibited today remains prohibited regardless of how the technical implementation evolves. New system designs should include an Art.5 screen before development begins:

ART5_SCREENING_QUESTIONS = [
    ("5(1)(a)", "Does the system deploy techniques designed to influence behaviour below the user's conscious awareness?"),
    ("5(1)(a)", "Does the system exploit documented psychological biases to materially distort behaviour?"),
    ("5(1)(b)", "Does the system specifically target vulnerable groups (age, disability, economic situation) to exploit those vulnerabilities?"),
    ("5(1)(c)", "Is this system used by a public authority to classify citizens based on social behaviour with cross-context effects?"),
    ("5(1)(d)", "Does this system generate individual criminal risk scores for law enforcement use?"),
    ("5(1)(e)", "Does this system scrape facial images from the internet or CCTV to build or expand a biometric database?"),
    ("5(1)(f)", "Does this system infer emotional states of individuals in a workplace or educational setting?"),
    ("5(1)(g)", "Does this system use biometric data to infer race, religion, political opinion, union membership, or sexual orientation?"),
    ("5(1)(h)", "Is this system used for real-time facial identification of individuals in public spaces for law enforcement without authorisation?"),
]

def run_art5_screen(system_name: str, answers: dict[str, bool]) -> dict:
    """
    answers: dict mapping Art5Prohibition value to True/False
    Returns: screening result with flags
    """
    flagged = [q for prohibition, q in ART5_SCREENING_QUESTIONS if answers.get(prohibition, False)]
    
    return {
        "system": system_name,
        "art5_flags": len(flagged),
        "flagged_questions": flagged,
        "verdict": "PROHIBITED — do not proceed" if flagged else "CLEAR — no Art.5 issues identified",
        "note": "This screen is not a legal opinion. Engage legal counsel for borderline cases."
    }
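A screening run supplies one boolean per prohibition, keyed by the Art5Prohibition values defined in Step 1:

answers = {p.value: False for p in Art5Prohibition} | {"5(1)(f)": True}
run_art5_screen("classroom-attention-tracker", answers)
# -> art5_flags: 1, flagging the 5(1)(f) emotion-inference question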

Key Distinctions: What Art.5 Does Not Prohibit

Art.5 is frequently misread as a general prohibition on "surveillance AI" or "biometric AI." It is more precise:

Permitted                                              Prohibited
Recommendation systems optimising engagement           Subliminal manipulation techniques
Targeting ads to demographics                          Exploiting disability/age vulnerabilities for harmful manipulation
Credit scoring with proportionate, in-context effects  Cross-context or disproportionate social scoring
Geographic crime pattern analysis                      Individual-level predictive policing
Using existing face datasets for lawful purposes       Untargeted scraping to create biometric databases
Emotion detection for safety (fatigue detection)       Emotion inference for workplace performance monitoring
Biometric authentication (identity verification)       Biometric inference of race, religion, political views
Post-event forensic biometric analysis                 Real-time public RRBI without judicial authorisation

Interaction with Other Regulations

Art.5 prohibitions operate alongside, not instead of, other applicable law:

- GDPR: biometric data is a special category under Art.9; even AI uses that survive Art.5 still need a lawful basis, and prohibited practices will typically also breach data protection law.
- Law Enforcement Directive (EU) 2016/680: governs police processing of personal data, including biometric identification falling outside the Art.5 prohibitions.
- Digital Services Act: its ban on manipulative interface design ("dark patterns") on online platforms overlaps with the Art.5(1)(a) analysis.
- National law: Member States may maintain stricter rules on biometrics and surveillance than the AI Act's baseline.


Timeline: Art.5 Applicability

August 2024    EU AI Act enters into force (Regulation (EU) 2024/1689)
February 2025  Art.5 prohibitions and AI literacy obligations (Art.4) apply — 6-month transition period ends
August 2025    General-purpose AI model obligations, governance and penalty provisions apply
August 2026    High-risk system requirements (Annex III) apply for most categories
August 2027    Remaining provisions apply, including Annex I product-embedded high-risk systems

Any system falling under Art.5 that continued operating after 2 February 2025 is in ongoing violation.


See Also