EU AI Act Art.5: Prohibited AI Practices — Social Scoring, Biometric Surveillance and Subliminal Manipulation (2026)
Article 5 of Regulation (EU) 2024/1689 is the most absolute provision in the entire AI Act. Where other articles set requirements, impose documentation burdens, or mandate conformity assessments, Art.5 simply prohibits. Eight categories of AI systems may not be placed on the market, put into service, or used in the EU — with no conformity assessment pathway, no CE marking procedure, and no commercial justification that overrides the ban.
These prohibitions have applied since 2 February 2025 — six months after the Regulation entered into force. Any organisation that deployed, integrated, or operated these systems before that date and has not discontinued them is already in violation.
This guide covers:
- The statutory text of each Art.5 prohibition with analysis
- The narrow law-enforcement exception for real-time biometric identification
- What "manipulation" and "exploitation of vulnerabilities" mean for product teams
- Compliance assessment and discontinuation strategies
- Maximum penalties
Why Art.5 Matters Now
Most EU AI Act coverage focuses on high-risk systems under Annex III, which face the most demanding conformity requirements with a grace period running to August 2026 for many categories. Art.5 has no such grace period.
The prohibitions cover practices that the European legislature determined were incompatible with fundamental rights regardless of technical capability — not because the technology does not work, but because, in the legislature's judgment, it should not be used for these purposes at all.
There is no risk-based escape hatch. A system that falls under Art.5 cannot be redesigned to reduce risk or add safeguards to become compliant. The only response is discontinuation.
The Eight Prohibitions
Art.5(1)(a) — Subliminal Manipulation
Prohibition: AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behaviour by appreciably impairing the person's ability to make an informed decision, in a way that causes or is likely to cause significant harm.
Key elements:
- Covers subliminal techniques (below conscious perception) AND purposefully manipulative or deceptive techniques
- The distortion must be material — not trivial influence
- Must be likely to cause significant harm — economic, physical, psychological, or societal
- Triggered by either objective (intent to distort) or effect (actual distortion, regardless of intent)
Developer relevance: This prohibition targets systems specifically designed to manipulate rather than inform. A recommendation engine that optimises for engagement is not automatically prohibited — but one engineered to exploit documented psychological vulnerabilities (scarcity triggers, loss aversion, social proof manipulation) with demonstrable harm potential enters prohibited territory. The effect-based standard means intent is not a defence.
Key distinction from legitimate persuasion: Advertising, A/B testing, personalisation, and recommendation systems are not prohibited unless they operate subliminally or exploit cognitive vulnerabilities with material harmful effect. The GDPR Recital 47 analogy is instructive: ordinary commercial communication is permitted; dark patterns causing significant harm are not.
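The cumulative structure of Art.5(1)(a) (a qualifying technique, plus material distortion, plus likely significant harm) lends itself to a screening predicate. A minimal sketch, with illustrative rather than statutory field names:

```python
from dataclasses import dataclass


@dataclass
class ManipulationScreen:
    """Illustrative flags for the three cumulative Art.5(1)(a) conditions."""
    uses_subliminal_or_manipulative_technique: bool  # below conscious perception OR purposefully manipulative
    materially_distorts_behaviour: bool              # more than trivial influence
    significant_harm_likely: bool                    # economic, physical, psychological, or societal


def art5_1a_triggered(s: ManipulationScreen) -> bool:
    # All three conditions must hold; intent is irrelevant (effect-based standard).
    return (s.uses_subliminal_or_manipulative_technique
            and s.materially_distorts_behaviour
            and s.significant_harm_likely)
```

Because every condition must hold, a conventional A/B-tested checkout flow that fails the first condition is not caught, however persuasive it is.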
Art.5(1)(b) — Exploitation of Vulnerabilities of Specific Groups
Prohibition: AI systems that exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability, or specific social or economic situation, with the objective or effect of materially distorting behaviour in a way that causes or is likely to cause significant harm.
Key elements:
- Requires a specific vulnerability linked to age (children, elderly), disability, or social/economic situation
- The AI must exploit that vulnerability — not merely target or reach the group
- Same harm threshold as Art.5(1)(a): material distortion + significant harm
Developer relevance: Systems targeting children with aggressive monetisation mechanics that exploit developmental psychology are the clearest case. Systems targeting people in financial distress with high-cost credit decisions that exploit decision-making under stress are a second clear case. The prohibition requires a causal link between the specific vulnerability and the manipulative mechanism — a system that happens to be used by children but does not exploit child-specific vulnerabilities does not automatically fall here.
Art.5(1)(c) — Social Scoring
Prohibition: AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time on the basis of their social behaviour or known, inferred or predicted personal or personality characteristics, where the social score leads to detrimental or unfavourable treatment in social contexts unrelated to those in which the data was originally generated or collected, or treatment that is unjustified or disproportionate to the social behaviour or its gravity.
Key elements:
- Applies to public and private actors alike: the Commission's draft limited the ban to public authorities, but the final text of Regulation (EU) 2024/1689 dropped that limitation
- The system must evaluate or classify based on social behaviour or personal/personality characteristics over a period of time
- Harm pathway: detrimental treatment in unrelated contexts OR unjustified/disproportionate treatment
- Scoring that stays within its original context with proportionate effects (e.g. credit scoring on financial data for credit decisions) is not caught
Developer relevance: Vendors building citizen- or customer-facing evaluation systems must audit whether their products could enable social scoring as a secondary use case. A risk assessment tool used by a municipality for housing allocation becomes prohibited if it systematically penalises citizens for unrelated prior government interactions. Cloud providers hosting such AI workloads are not automatically liable — the prohibition targets deployment decisions, not infrastructure.
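The two harm pathways reduce to a simple question: does the score travel to a context unrelated to the one in which the data was generated, or is the resulting treatment disproportionate to the behaviour itself? A sketch, with illustrative context labels rather than statutory categories:

```python
def social_scoring_flag(score_context: str,
                        treatment_contexts: set[str],
                        proportionate: bool) -> bool:
    """Flag the two Art.5(1)(c) harm pathways:
    (i) detrimental treatment in contexts unrelated to where the data was generated;
    (ii) treatment disproportionate to the social behaviour or its gravity.
    Context labels are illustrative strings, not legal categories."""
    cross_context = any(ctx != score_context for ctx in treatment_contexts)
    return cross_context or not proportionate
```

A score derived from tax-compliance data that drives housing allocation trips the cross-context pathway; a credit score used only for credit decisions, with proportionate effects, does not.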
Art.5(1)(d) — Individual Predictive Policing
Prohibition: AI systems used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or to predict the occurrence of an actual or potential criminal offence based solely on the profiling of a natural person or on assessing their personality traits and characteristics.
Key elements:
- Specific to law enforcement deployment context
- Targets individual risk assessment — not general crime pattern analysis
- The prohibited basis is: profiling of the person OR personality traits/characteristics assessment
- "Solely" — the prohibition is narrow: area-level crime prediction without individual profiling is not covered, and systems that merely support human assessment already grounded in objective, verifiable facts directly linked to criminal activity are expressly carved out
Developer relevance: Predictive policing products marketed to EU law enforcement agencies must be reviewed against Art.5(1)(d). Systems that generate individual-level risk scores based on demographic, behavioural, or historical data fall squarely within the prohibition. Systems that produce geographic hot-spot mapping without individual profiling operate outside Art.5(1)(d) — though they may be regulated as high-risk systems under Annex III.
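The cumulative conditions of Art.5(1)(d) can be expressed as a gate; a geographic hot-spot product fails the individual-assessment condition and passes through. A sketch, not a legal test:

```python
def art5_1d_triggered(law_enforcement_use: bool,
                      individual_risk_scores: bool,
                      based_solely_on_profiling: bool) -> bool:
    """Art.5(1)(d) requires all three: a law-enforcement deployment,
    individual-level risk assessment, and a basis solely in profiling
    or personality traits/characteristics. Hot-spot mapping fails the
    second condition; evidence-led human assessment fails the third."""
    return (law_enforcement_use
            and individual_risk_scores
            and based_solely_on_profiling)
```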
Art.5(1)(e) — Untargeted Facial Image Scraping
Prohibition: AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
Key elements:
- Untargeted scraping — mass collection without a specific, defined target
- Sources: internet or CCTV footage
- The prohibition is in the creation or expansion of databases — using a pre-existing database for legitimate purposes is governed by other provisions
Developer relevance: Computer vision startups that trained models using web-scraped face datasets and are now offering those models in the EU face significant exposure. The prohibition covers the database creation — a downstream model built on prohibited training data carries legal uncertainty even if the model itself is used for permitted purposes. Legal teams should audit training dataset provenance for facial recognition models marketed in the EU.
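A provenance audit of the kind suggested above can start from simple dataset metadata. A sketch, with illustrative source labels that would need mapping to your own data catalogue:

```python
from dataclasses import dataclass


@dataclass
class DatasetRecord:
    name: str
    source: str                # e.g. 'licensed_vendor', 'web_scrape', 'cctv', 'consented_collection'
    contains_faces: bool
    targeted_collection: bool  # collection aimed at specific, identified individuals


def flag_art5_1e_exposure(records: list[DatasetRecord]) -> list[str]:
    """Return names of datasets whose provenance suggests untargeted
    scraping of facial images from the internet or CCTV (Art.5(1)(e))."""
    risky_sources = {"web_scrape", "cctv"}
    return [r.name for r in records
            if r.contains_faces
            and r.source in risky_sources
            and not r.targeted_collection]
```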
Art.5(1)(f) — Emotion Recognition in Workplace and Educational Institutions
Prohibition: AI systems used in the workplace and in education institutions to infer the emotions of natural persons, except where the AI system is intended to be put in place or placed on the market for medical or safety reasons.
Key elements:
- Scope limited to: workplace AND educational institutions
- Prohibition is on inferring emotions — not simply detecting physiological signals for safety
- Exception: systems intended for medical or safety reasons (e.g., detecting drowsiness in vehicle operators for safety, not productivity monitoring)
Developer relevance: HR analytics platforms that incorporate emotional state inference from video calls, facial expressions, or speech patterns for performance management or productivity monitoring are prohibited. Educational technology platforms that use facial expression analysis to detect student engagement or attention fall within the prohibition. The medical/safety exception is narrow — a "wellbeing monitoring" product does not qualify unless it has a clear clinical or safety justification.
Contrast with permitted uses: Emotion recognition in entertainment, consumer product testing with explicit consent, or clinical mental health applications outside the workplace/education context is not prohibited by Art.5(1)(f) — though it may be subject to other regulatory requirements.
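The context-plus-purpose logic of Art.5(1)(f) can be enforced as a deployment gate in product configuration. A sketch, with illustrative context and purpose labels:

```python
PROHIBITED_CONTEXTS = {"workplace", "education"}
PERMITTED_PURPOSES = {"medical", "safety"}  # the narrow Art.5(1)(f) exception


def emotion_inference_allowed(deployment_context: str, purpose: str) -> bool:
    """Gate emotion-inference features by deployment context.
    Outside workplace/education, Art.5(1)(f) does not apply (other
    rules may); inside, only a medical or safety purpose survives."""
    if deployment_context not in PROHIBITED_CONTEXTS:
        return True
    return purpose in PERMITTED_PURPOSES
```

Under this gate, driver-drowsiness detection (safety) passes; the same inference wired into a productivity dashboard does not.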
Art.5(1)(g) — Biometric Categorisation for Sensitive Attributes
Prohibition: AI systems that categorise natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except where the AI system is used for labelling or filtering of lawfully acquired biometric datasets in the context of law enforcement in accordance with Union law.
Key elements:
- Covers biometric categorisation — using biometric data to infer protected characteristics
- The prohibited characteristics track the GDPR special category data (Art.9 GDPR), minus genetic, biometric, and health data
- Law enforcement exception: limited to labelling/filtering of lawfully acquired datasets
Developer relevance: Any system that uses facial features, voice patterns, gait analysis, or other biometric data to infer protected characteristics is prohibited. This covers both explicit systems ("predict political affiliation from face") and implicit systems where the prohibited inference is a documented secondary output. Marketing technology platforms that attempt to infer religious or political characteristics from biometric data for targeting purposes are clearly prohibited.
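One practical control is to check a model's documented output schema against the protected attributes before release. A sketch; the label strings are illustrative and must be mapped to your own schema:

```python
PROTECTED_INFERENCES = {
    "race", "political_opinions", "trade_union_membership",
    "religious_beliefs", "philosophical_beliefs",
    "sex_life", "sexual_orientation",
}


def prohibited_outputs(model_output_labels: set[str]) -> set[str]:
    """Return any documented model outputs that would constitute
    biometric categorisation of protected attributes (Art.5(1)(g)),
    assuming the model's inputs are biometric data."""
    return model_output_labels & PROTECTED_INFERENCES
```

This catches documented secondary outputs as well as a model's headline purpose, which matches the article's coverage of implicit systems.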
Art.5(1)(h) — Real-Time Remote Biometric Identification (RRBI) in Public Spaces
Prohibition: AI systems used for real-time remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement.
This prohibition has the most complex exception structure of any Art.5 provision, so the text requires careful reading.
What is prohibited: RRBI operated by law enforcement in publicly accessible spaces, unless the use falls within one of the enumerated objectives below and is authorised under national law.
What is permitted under Art.5(2)-(6): National law may authorise real-time RRBI for law enforcement strictly for:
- Targeted search for victims of abduction, trafficking in human beings, or sexual exploitation, and the search for missing persons
- Prevention of specific, substantial, and imminent terrorist threats
- Localisation or identification of persons suspected of one of the serious offences listed in Annex II of the AI Act, punishable by a custodial sentence with a maximum of at least four years
Exception conditions:
- Prior authorisation from a judicial authority or independent administrative body (except urgent cases where post-deployment authorisation within 24h is required)
- Geographically and temporally limited deployment
- Notified to relevant market surveillance authority
- Fundamental rights impact assessment conducted
Developer relevance: Commercial RRBI systems may not be deployed by EU law enforcement without these authorisation structures. Vendors marketing real-time biometric systems to EU law enforcement buyers must ensure their contracts and deployment configurations comply with the Art.5(2)-(6) exception framework. Private sector RRBI in publicly accessible spaces (retail, hospitality) is not covered by Art.5(1)(h) — but may be restricted by GDPR and national biometric data protection laws.
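The cumulative exception conditions listed above can be captured in a pre-deployment checklist. A sketch only: national implementing law adds further requirements, and passing this check is necessary, not sufficient.

```python
from dataclasses import dataclass


@dataclass
class RRBIDeployment:
    prior_authorisation: bool        # judicial authority or independent administrative body
    urgent_post_hoc_within_24h: bool # urgency route: authorisation sought within 24 hours
    geographically_limited: bool
    temporally_limited: bool
    authority_notified: bool         # relevant market surveillance authority
    fria_completed: bool             # fundamental rights impact assessment


def exception_conditions_met(d: RRBIDeployment) -> bool:
    """All conditions are cumulative; authorisation may be prior or,
    in urgent cases, post-deployment within 24 hours."""
    authorised = d.prior_authorisation or d.urgent_post_hoc_within_24h
    return (authorised
            and d.geographically_limited and d.temporally_limited
            and d.authority_notified and d.fria_completed)
```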
Penalties
Art.5 violations carry the highest penalties in the EU AI Act:
| Violation | Maximum Fine | Per-Turnover Cap |
|---|---|---|
| Art.5 prohibited practices | €35,000,000 | 7% of global annual turnover |
| High-risk system non-compliance | €15,000,000 | 3% of turnover |
| Provision of incorrect information | €7,500,000 | 1% of turnover |
| SME/startup adjustment | Same tiers, capped at whichever of the fixed amount or percentage is lower (Art.99(6)) | |
The 7% cap is higher than GDPR's 4% for the most severe violations — placing prohibited AI practices in the same legislative severity tier as the most serious data protection violations.
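For non-SMEs, the Art.99(3) cap is the higher of the fixed amount and the turnover percentage, which is easy to misread from the table. A one-line illustration:

```python
def art5_fine_cap(global_turnover_eur: float) -> float:
    """Maximum administrative fine for an Art.5 violation (Art.99(3)):
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)
```

A provider with EUR 2 billion in turnover faces a cap of EUR 140 million, four times the fixed amount.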
Compliance Assessment for Developers
Step 1: System Inventory
```python
from dataclasses import dataclass
from enum import Enum


class Art5Prohibition(Enum):
    SUBLIMINAL_MANIPULATION = "5(1)(a)"
    VULNERABILITY_EXPLOITATION = "5(1)(b)"
    SOCIAL_SCORING = "5(1)(c)"
    PREDICTIVE_POLICING = "5(1)(d)"
    FACIAL_SCRAPING = "5(1)(e)"
    EMOTION_RECOGNITION_WORKPLACE = "5(1)(f)"
    BIOMETRIC_CATEGORISATION = "5(1)(g)"
    RRBI_PUBLIC_SPACES = "5(1)(h)"


@dataclass
class SystemAssessment:
    system_name: str
    description: str
    deployment_context: str  # 'law_enforcement', 'workplace', 'education', 'public', 'commercial'
    data_inputs: list[str]
    outputs: list[str]
    identified_prohibitions: list[Art5Prohibition]
    discontinuation_required: bool
    discontinuation_date: str | None
    legal_exception_applicable: bool
    exception_basis: str | None
    reviewed_by: str
    review_date: str


def assess_art5_exposure(system: SystemAssessment) -> dict:
    """Return a structured exposure assessment for Art.5 review."""
    action = ("DISCONTINUE"
              if system.discontinuation_required and not system.legal_exception_applicable
              else "MONITOR")
    return {
        "system": system.system_name,
        "prohibitions_identified": [p.value for p in system.identified_prohibitions],
        "action_required": action,
        "exception_available": system.legal_exception_applicable,
        "exception_basis": system.exception_basis,
        "deadline": system.discontinuation_date or "2025-02-02 (already past)",
    }
```
Step 2: Discontinuation Protocol
For systems that fall within Art.5 prohibitions without applicable exceptions:
```python
from datetime import date


def generate_discontinuation_plan(assessment: SystemAssessment) -> dict:
    """Generate an actionable discontinuation plan for prohibited systems."""
    if not assessment.discontinuation_required:
        return {"status": "no_action_required"}

    deadline = date(2025, 2, 2)  # Art.5 applicable since this date
    already_past_deadline = date.today() > deadline

    steps = [
        "1. Immediately suspend new deployments of the system",
        "2. Notify all internal stakeholders of prohibition status",
        "3. Identify all active integrations and downstream dependencies",
        "4. Establish decommissioning timeline (target: immediate for past-deadline systems)",
        "5. Delete training datasets created through prohibited methods (Art.5(1)(e))",
        "6. Document discontinuation for regulatory record-keeping",
        "7. Assess downstream model exposure if trained on prohibited data",
    ]

    return {
        "system": assessment.system_name,
        "prohibition": [p.value for p in assessment.identified_prohibitions],
        "already_in_violation": already_past_deadline,
        "recommended_action": "immediate_discontinuation",
        "steps": steps,
        "regulatory_notification": ("Consider voluntary disclosure to the market "
                                    "surveillance authority if already deployed"),
        "data_deletion": ("Facial databases (Art.5(1)(e)) and biometric categorisation "
                          "outputs (Art.5(1)(g)) must be deleted"),
    }
```
Step 3: Ongoing Monitoring
Art.5 prohibitions are technology-neutral — what is prohibited today remains prohibited regardless of how the technical implementation evolves. New system designs should include an Art.5 screen before development begins:
```python
ART5_SCREENING_QUESTIONS = [
    ("5(1)(a)", "Does the system deploy techniques designed to influence behaviour below the user's conscious awareness?"),
    ("5(1)(a)", "Does the system exploit documented psychological biases to materially distort behaviour?"),
    ("5(1)(b)", "Does the system specifically target vulnerable groups (age, disability, economic situation) to exploit those vulnerabilities?"),
    ("5(1)(c)", "Does this system classify people based on social behaviour or personal characteristics, with detrimental cross-context or disproportionate effects?"),
    ("5(1)(d)", "Does this system generate individual criminal risk scores for law enforcement use?"),
    ("5(1)(e)", "Does this system scrape facial images from the internet or CCTV to build or expand a biometric database?"),
    ("5(1)(f)", "Does this system infer emotional states of individuals in a workplace or educational setting?"),
    ("5(1)(g)", "Does this system use biometric data to infer race, religion, political opinion, union membership, or sexual orientation?"),
    ("5(1)(h)", "Is this system used for real-time remote biometric identification of individuals in public spaces for law enforcement without authorisation?"),
]


def run_art5_screen(system_name: str, answers: dict[str, bool]) -> dict:
    """answers maps an Art.5 prohibition value (e.g. "5(1)(a)") to True/False.

    Returns a screening result with flagged questions.
    """
    flagged = [q for prohibition, q in ART5_SCREENING_QUESTIONS
               if answers.get(prohibition, False)]
    return {
        "system": system_name,
        "art5_flags": len(flagged),
        "flagged_questions": flagged,
        "verdict": ("PROHIBITED — do not proceed" if flagged
                    else "CLEAR — no Art.5 issues identified"),
        "note": "This screen is not a legal opinion. Engage legal counsel for borderline cases.",
    }
```
Key Distinctions: What Art.5 Does Not Prohibit
Art.5 is frequently misread as a general prohibition on "surveillance AI" or "biometric AI." It is more precise:
| Permitted | Prohibited |
|---|---|
| Recommendation systems optimising engagement | Subliminal manipulation techniques |
| Targeting ads to demographics | Exploiting disability/age vulnerabilities for harmful manipulation |
| Domain-specific scoring with proportionate effects (e.g. credit scoring) | Cross-context or disproportionate social scoring |
| Geographic crime pattern analysis | Individual-level predictive policing |
| Using existing face datasets for lawful purposes | Untargeted scraping to create biometric databases |
| Emotion detection for safety (fatigue detection) | Emotion inference for workplace performance monitoring |
| Biometric authentication (identity verification) | Biometric inference of race, religion, political views |
| Post-event forensic biometric analysis | Real-time public RRBI without judicial authorisation |
Interaction with Other Regulations
Art.5 prohibitions operate alongside, not instead of, other applicable law:
- GDPR Art.9: Biometric data is special category data. Systems that process biometric data for permitted purposes still require an Art.9 legal basis.
- GDPR Art.22: Automated decision-making restrictions apply to many AI systems outside Art.5's scope.
- NIS2/DORA: If prohibited AI systems were part of critical infrastructure or financial sector ICT risk, discontinuation obligations interact with incident reporting requirements.
- National law: Several EU member states have enacted stricter biometric surveillance restrictions (France, Germany) that apply independently of the AI Act.
Timeline: Art.5 Applicability
| Date | Milestone |
|---|---|
| 1 August 2024 | EU AI Act enters into force (Regulation (EU) 2024/1689) |
| 2 February 2025 | Art.5 prohibitions and AI literacy obligations (Art.4) apply — 6-month transition ends |
| 2 August 2025 | Governance provisions and general-purpose AI model obligations apply |
| 2 August 2026 | High-risk system requirements (Annex III) apply for most categories |
| 2 August 2027 | Remaining provisions phased in, including Annex I high-risk products |
Any system falling under Art.5 that continued operating after 2 February 2025 is in ongoing violation.
See Also
- EU AI Act Art.4: AI Literacy Obligations for Providers and Deployers
- EU AI Act Art.3(4)-(12): Provider vs. Deployer Classification Guide
- EU AI Act Art.3(1): AI System Definition and Commission Guidelines (April 2026)
- GDPR Art.9: Special Categories of Personal Data — Prohibition and Exceptions
- NIS2 Art.21: Cybersecurity Risk Management Measures