2026-04-14·16 min read·sota.io team

EU AI Act Art.5: Prohibited AI Practices — Complete Developer Guide (2026)

Article 5 of the EU AI Act is the hardest line in European AI regulation. While most of the regulation works through risk classification, conformity assessments, and compliance documentation, Art.5 operates differently: the eight categories of AI practices it defines are simply prohibited. No conformity assessment can clear them. No CE marking is available. No risk management process can make them compliant. If your AI system falls within Art.5, you cannot deploy it in the EU.

More importantly for developers planning compliance timelines: Art.5 is the first provision of the EU AI Act to apply. Under Art.113, prohibited practices became enforceable on February 2, 2025 — six months after the regulation entered into force on August 1, 2024. The two-year general application date of August 2, 2026 does not apply here. Art.5 has been live for over a year.

The penalty for violation is among the highest in EU regulatory law: up to EUR 35,000,000 or 7% of total worldwide annual turnover, whichever is higher, under Art.99(3).

This guide covers each prohibition in detail, the limited exceptions, how to determine if your system is in scope, and what to document if you operate near the boundary.


Why Art.5 Exists: The Policy Logic

The EU AI Act's risk-based framework is designed to be proportionate — most AI systems face no regulation at all, and only high-risk systems face substantial compliance obligations. Art.5 breaks from that logic because the practices it prohibits are considered incompatible with fundamental rights regardless of safeguards.

The Commission's impact assessment identified these practices as posing unacceptable risks to human dignity, autonomy, non-discrimination, and the rule of law. The key characteristic they share: they systematically undermine the capacity of individuals to make free, informed decisions, or they enable surveillance and control mechanisms incompatible with democratic society.

Art.5 is also the provision where the EU AI Act intersects most directly with constitutional rights law. The prohibitions on social scoring, mass biometric surveillance, and emotion recognition in institutional contexts map directly onto rights under the EU Charter of Fundamental Rights — dignity (Art.1), privacy (Art.7), data protection (Art.8), non-discrimination (Art.21), and freedom of thought (Art.10).


The Eight Prohibited Practices

1. Subliminal Manipulation — Art.5(1)(a)

The prohibition: Placing on the market, putting into service, or using an AI system that deploys subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behaviour by appreciably impairing the ability to make an informed decision, causing the person to take a decision they would not otherwise have taken, in a manner that causes or is reasonably likely to cause significant harm.

What this covers: AI systems designed to influence decisions by bypassing conscious processing. The prohibition has two branches:

  1. Subliminal techniques operating beyond a person's consciousness — stimuli or signals the person cannot consciously perceive or resist
  2. Purposefully manipulative or deceptive techniques — operating within awareness but designed to deceive the person or exploit their decision-making

Key threshold: The harm element is significant. The prohibition requires that the distortion "causes or is reasonably likely to cause significant harm." Not every nudge or persuasion technique is covered — only those that (a) operate below consciousness or through deliberate deception, (b) materially impair decision-making, and (c) cause or likely cause significant harm.
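The three conditions are cumulative, which can be sketched as a simple predicate (a hypothetical helper for illustration, not official tooling — absent any one condition, Art.5(1)(a) is not met):

```python
def meets_art5_1a_threshold(
    subliminal_or_deceptive: bool,       # (a) below consciousness, or deliberately deceptive
    materially_impairs_decision: bool,   # (b) appreciably impairs informed decision-making
    significant_harm_likely: bool,       # (c) causes or is reasonably likely to cause significant harm
) -> bool:
    """All three Art.5(1)(a) conditions must hold; ordinary persuasion fails at least one."""
    return subliminal_or_deceptive and materially_impairs_decision and significant_harm_likely

# A transparent recommender the user consciously engages with fails condition (a):
assert meets_art5_1a_threshold(False, True, True) is False
# A manipulative pattern that causes no significant harm fails condition (c):
assert meets_art5_1a_threshold(True, True, False) is False
```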

For developers: Personalisation and recommendation systems that operate transparently and within conscious awareness are not covered. What is covered: systems calibrated to exploit unconscious biases, timing decisions to catch users in moments of cognitive depletion, or using AI-generated synthetic media to deceive users about facts material to their decisions.

Documentation practice: If your system uses personalisation, maintain documentation of what signals it uses and ensure those signals do not include below-threshold manipulation. A transparency assessment should confirm users can understand the basis for recommendations.


2. Exploitation of Vulnerabilities — Art.5(1)(b)

The prohibition: AI systems that exploit vulnerabilities of a natural person or specific group due to their age, disability, or social or economic situation, with the objective or effect of materially distorting behaviour in a manner that causes or is reasonably likely to cause significant harm.

What distinguishes this from Art.5(1)(a): Subliminal manipulation operates below consciousness; vulnerability exploitation operates through targeted application of legitimate persuasion techniques to persons whose capacity for resistance is specifically reduced. The system must be calibrated to exploit the specific vulnerability — not merely deployed in a context where vulnerable persons may be present.

Examples:

  1. A lending app that detects signals of financial distress and uses them to push high-cost credit at the moment of maximum desperation
  2. A game that identifies child users and calibrates monetisation prompts to exploit their impulsivity
  3. A companion chatbot that targets socially isolated users with escalating paid-engagement mechanics

Scope of "vulnerabilities": Age and disability are listed, but the provision also covers social and economic situation — meaning AI that specifically targets people in financial crisis, housing insecurity, or social isolation for purposes that exploit rather than support that situation.

For developers: Consumer-facing AI systems that adapt behaviour based on user vulnerability signals need careful review. The key question is purpose and calibration: is the adaptation designed to support the vulnerable user, or to exploit their vulnerability for commercial advantage?


3. Social Scoring — Art.5(1)(c)

The prohibition: AI systems used for the evaluation or classification of natural persons or groups over a given period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the social score leads to either or both of the two detrimental-treatment triggers described below.

Who is covered: Both public and private operators. The Commission's 2021 proposal limited this prohibition to public authorities, but the final text dropped that limitation — Art.5(1)(c) applies regardless of who deploys the system. That said, lawful, purpose-specific evaluation practices such as private credit scoring or insurance underwriting conducted in accordance with Union and national law are not automatically prohibited (though they may be regulated as high-risk AI under Annex III). The prohibition targets general-purpose social scoring that meets one of the two triggers.

The two triggers:

  1. Cross-context use: Scoring someone based on their behaviour in one domain (e.g., traffic violations) and using that score to affect treatment in an unrelated domain (e.g., access to public housing, social services, or educational opportunities).
  2. Disproportionality: Treatment that is unjustified or disproportionate relative to the underlying behaviour — punishing minor infractions with major life consequences.
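The trigger logic can be expressed as a small predicate (a hypothetical helper; once a system scores social behaviour over time, either trigger alone is sufficient):

```python
def art5_1c_triggered(
    scores_social_behaviour_over_time: bool,
    cross_context_detriment: bool,      # trigger 1: treatment in an unrelated domain
    disproportionate_detriment: bool,   # trigger 2: unjustified or disproportionate treatment
) -> bool:
    """Art.5(1)(c) requires the scoring element plus at least one of the two triggers."""
    return scores_social_behaviour_over_time and (
        cross_context_detriment or disproportionate_detriment
    )

# Traffic-violation score used to gate public housing access — cross-context trigger:
assert art5_1c_triggered(True, True, False) is True
# Domain-specific, proportionate scoring trips neither trigger:
assert art5_1c_triggered(True, False, False) is False
```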

Practical EU context: This provision was drafted primarily in response to the Chinese social credit system model. No EU member state operates such a system, but the prohibition closes the door on any future deployment and on pilot programmes that might approach the boundary.

For developers building for public sector clients: Any AI system that aggregates behavioural data across multiple public service interactions to produce a composite score affecting benefit access, service quality, or enforcement priority needs immediate Art.5(1)(c) review.


4. Predictive Policing of Individuals — Art.5(1)(d)

The prohibition: AI systems used for risk assessments of natural persons to predict the risk of committing a criminal offence, based solely on profiling or on assessing personality traits and characteristics.

The exception: This prohibition does not apply to AI systems used to support human assessment of involvement in criminal activity that is already based on objective and verifiable facts directly linked to a criminal activity.

The critical distinction: The prohibited practice is prediction based on who someone is (personality, demographics, behavioural profile). The permitted practice is risk assessment based on what someone has done — concrete facts already established and directly linked to criminal activity.
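That distinction reduces to three questions, sketched here as a hypothetical predicate (illustrative names, not a legal test in itself):

```python
def art5_1d_prohibited(
    predicts_individual_criminal_risk: bool,
    based_solely_on_profiling_or_traits: bool,   # prediction rests on who someone is
    grounded_in_objective_verifiable_facts: bool # vs. concrete facts about specific acts
) -> bool:
    """Prohibited only when an individual risk prediction has no factual grounding."""
    return (
        predicts_individual_criminal_risk
        and based_solely_on_profiling_or_traits
        and not grounded_in_objective_verifiable_facts
    )

# Personality-profile-only prediction is prohibited:
assert art5_1d_prohibited(True, True, False) is True
# The same prediction anchored in verifiable facts falls under the exception:
assert art5_1d_prohibited(True, True, True) is False
```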

Examples of what is prohibited:

  1. Flagging individuals as likely future offenders based on personality assessments, demographic profiles, or neighbourhood statistics
  2. Risk scores derived from social associations or behavioural patterns with no link to any specific act

Examples of what is permitted (with human oversight):

  1. Assessing flight risk for a suspect already charged, based on documented prior conduct
  2. Prioritising investigative leads where concrete, verifiable evidence already links a person to a specific offence

For developers: Law enforcement AI is a high-risk category under Annex III regardless of Art.5. If you build for law enforcement, Art.5(1)(d) requires that your system not make individual risk predictions based on personality or profiling alone — any risk flag must be grounded in concrete, verifiable facts about specific acts.


5. Facial Recognition Database Scraping — Art.5(1)(e)

The prohibition: AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

What this covers: Any system that collects facial images at scale without targeting specific individuals, for the purpose of building or expanding biometric databases. This prohibition is absolute — Art.5(1)(e) contains no law enforcement exceptions. (Activities falling within national security are outside the AI Act's scope altogether under Art.2(3), but for everything within scope there is no carve-out.)

The "untargeted" qualifier: Targeted collection — searching for images of a specific named suspect as part of an active criminal investigation — is not covered by this prohibition (though it remains subject to other provisions). The prohibition is specifically aimed at mass, indiscriminate collection that builds comprehensive biometric databases.

Real-world context: This provision directly targets services like Clearview AI, which built a database of billions of facial images scraped from social media and public web sources. Several EU data protection authorities had already found Clearview's practices to violate GDPR before the AI Act; Art.5(1)(e) adds a categorical AI Act prohibition on top.

For developers: If your AI system processes images from public sources and extracts or stores facial biometric data, it requires immediate legal review. The prohibition applies regardless of the stated purpose — building the database is prohibited even if the use of the database would otherwise be lawful.


6. Emotion Recognition in Workplaces and Education — Art.5(1)(f)

The prohibition: AI systems that infer the emotions of natural persons in the areas of the workplace and educational institutions.

The exceptions: Medical reasons or safety reasons. AI systems used for medical diagnosis of conditions with emotional components, or safety-critical monitoring (e.g., fatigue detection for vehicle operators where the inference is about safety state, not emotional state) are excluded.

Scope of "emotion recognition": The prohibition covers inference of emotional states — joy, fear, frustration, engagement, distraction — from facial expressions, voice patterns, body language, physiological signals, or other inputs. It does not cover systems that detect physical states relevant to safety (e.g., detecting that an operator is asleep or medically impaired) as long as they do not characterise emotional states.
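The safety-state versus emotional-state distinction can be made operational as a label audit over a model's output classes. The label sets below are illustrative assumptions, not an official taxonomy:

```python
# Hypothetical label sets for illustration — real taxonomies need legal review.
EMOTIONAL_STATE_LABELS = {"joy", "fear", "frustration", "engagement", "distraction"}
SAFETY_STATE_LABELS = {"asleep", "microsleep", "medically_impaired", "eyes_off_road"}

def flagged_emotion_labels(model_output_labels: set[str]) -> set[str]:
    """Return the subset of output labels that characterise emotional state.

    In workplace or educational deployments, a non-empty result indicates
    Art.5(1)(f) exposure unless the medical/safety exception applies.
    """
    return model_output_labels & EMOTIONAL_STATE_LABELS

# A fatigue monitor that reports only safety states produces no flagged labels:
assert flagged_emotion_labels({"asleep", "eyes_off_road"}) == set()
# Adding an "engagement" output class changes the legal position:
assert flagged_emotion_labels({"asleep", "engagement"}) == {"engagement"}
```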

Why workplaces and education specifically: These are contexts where the power imbalance makes emotion recognition particularly coercive. Employees and students cannot meaningfully consent to emotion monitoring when compliance is a condition of employment or enrolment. The AI Act treats this as an unacceptable dignity and autonomy violation.

For developers:

  1. Audit every model output label: safety-state detection (asleep, medically impaired) can be permitted; emotional characterisation (frustrated, disengaged) in workplace or educational contexts is not
  2. Deployment context determines legality — the same emotion inference model may be lawful in consumer research and prohibited inside an HR or proctoring tool
  3. If claiming the medical or safety exception, document why each inferred state is a medical or safety state rather than an emotional one


7. Biometric Categorisation for Sensitive Attributes — Art.5(1)(g)

The prohibition: AI systems for biometric categorisation that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

The exception: The prohibition does not apply to labelling or filtering of lawfully acquired biometric datasets in the context of law enforcement in accordance with Union law.

What this covers: Using facial recognition, gait analysis, voice biometrics, or other biometric data to infer sensitive category characteristics. This prohibition targets the inference step — drawing conclusions about protected attributes from biometric signals — not simply having biometric data.

Why this is prohibited absolutely: The sensitive categories listed (race, political opinions, trade union membership, religious beliefs, sex life, sexual orientation) correspond to the special categories of personal data under GDPR Art.9, which are subject to the highest level of data protection. Inferring these characteristics from biometric data at scale creates the infrastructure for discrimination and persecution.

Examples:

  1. Inferring sexual orientation from facial images
  2. Classifying political affiliation from voice or gait patterns
  3. Deriving religious belief from appearance features in CCTV footage

For developers: If your biometric processing pipeline includes any inference about GDPR Art.9 special categories, the pipeline is prohibited under Art.5(1)(g) regardless of purpose. Even if the output is labelled as "demographic segmentation" rather than "racial identification," if the underlying inference derives from biometric data to a protected characteristic, it is covered.
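A minimal sketch of that inference-step check, using hypothetical output field names mapped to the GDPR Art.9 categories listed above:

```python
# Protected-attribute inference targets (GDPR Art.9 special categories named in Art.5(1)(g)).
GDPR_ART9_INFERENCES = {
    "race", "political_opinions", "trade_union_membership",
    "religious_beliefs", "philosophical_beliefs", "sex_life", "sexual_orientation",
}

def pipeline_within_art5_1g(biometric_input: bool, inferred_fields: set[str]) -> bool:
    """True if the pipeline infers any protected attribute from biometric data.

    The field names are hypothetical; what matters is the semantics of each
    output, not its label — a "segment" field derived from race inference
    would still be in scope.
    """
    return biometric_input and bool(inferred_fields & GDPR_ART9_INFERENCES)

# Age estimation alone is not an Art.5(1)(g) inference:
assert pipeline_within_art5_1g(True, {"age_band"}) is False
# Adding orientation inference puts the pipeline in scope:
assert pipeline_within_art5_1g(True, {"age_band", "sexual_orientation"}) is True
```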


8. Real-Time Remote Biometric Identification in Public Spaces — Art.5(1)(h)

The prohibition: Use of real-time remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes.

The exceptions — Art.5(2): Real-time RBI is permitted by law enforcement only where strictly necessary for one of three objectives:

  1. Targeted search for specific victims of abduction, trafficking, sexual exploitation, or missing persons
  2. Prevention of specific, substantial and imminent threat to life or physical safety, or a genuine and foreseeable threat of a terrorist attack
  3. Localisation or identification of a criminal suspect for investigation, prosecution, or execution of a criminal penalty for offences listed in Annex II (serious crimes punishable by at least 4 years' custodial sentence in the relevant member state)

Procedural requirements — Art.5(3)-(8): Even where an exception applies:

  1. Prior authorisation by a judicial authority or an independent administrative authority whose decision is binding (in duly justified urgency, use may begin before authorisation, which must then be requested without undue delay and at the latest within 24 hours)
  2. A fundamental rights impact assessment (FRIA) under Art.27 completed before deployment
  3. Registration of the system in the EU database
  4. Notification of each use to the relevant market surveillance authority and national data protection authority
  5. Express authorisation of the use in the national law of the member state concerned
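Assuming hypothetical field names, the Art.5(2) objective plus the procedural gates can be expressed as a single clearance predicate — every gate must hold:

```python
# Hypothetical objective codes mirroring the three Art.5(2) exceptions.
VALID_OBJECTIVES = {"victim_search", "imminent_threat", "annex_ii_offence"}

def rbi_use_cleared(
    objective: str,
    prior_authorisation: bool,   # judicial or independent administrative authority
    fria_completed: bool,        # fundamental rights impact assessment (Art.27)
    database_registered: bool,   # EU database registration
    notification_made: bool,     # notification of the use to the competent authorities
) -> bool:
    """A valid Art.5(2) objective is necessary but never sufficient on its own."""
    return (
        objective in VALID_OBJECTIVES
        and prior_authorisation
        and fria_completed
        and database_registered
        and notification_made
    )

# General crowd monitoring is not a valid objective, whatever the paperwork:
assert rbi_use_cleared("crowd_monitoring", True, True, True, True) is False
# A valid objective without prior authorisation still fails:
assert rbi_use_cleared("imminent_threat", False, True, True, True) is False
```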

"Real-time" qualifier: Post-deployment analysis of CCTV footage is not "real-time" and falls under different rules. The prohibition specifically targets live identification — simultaneous capture and identification — not forensic analysis of historical footage (which is still regulated as high-risk AI but not prohibited).

"Publicly accessible spaces": Includes any space that is open to the public regardless of ownership — shopping centres, transport hubs, public streets, stadiums. Private spaces not accessible to the public (secure facilities, private property) are not covered by this prohibition.

For developers building law enforcement AI: Real-time biometric identification systems require the most careful legal scoping. If your system can perform live identification in publicly accessible spaces, it cannot be deployed for law enforcement purposes outside the three exceptions, and even within exceptions, the procedural requirements are strict.


Penalties Under Art.99(3)

Art.5 violations attract the highest penalty tier in the EU AI Act: administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

For context: a company with EUR 500M annual revenue faces a maximum fine of EUR 35M. A company with EUR 1B annual revenue faces EUR 70M. For major tech platforms, the 7% threshold is the binding constraint.
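The "whichever is higher" arithmetic above can be checked in a few lines (a hypothetical helper using integer euros):

```python
def max_art5_fine(worldwide_annual_turnover_eur: int) -> int:
    """Maximum Art.5 fine: EUR 35M or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, worldwide_annual_turnover_eur * 7 // 100)

# At EUR 500M turnover, 7% equals the fixed floor exactly:
assert max_art5_fine(500_000_000) == 35_000_000
# Above that, the 7% prong is the binding constraint:
assert max_art5_fine(1_000_000_000) == 70_000_000
```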


Art.5 and the Prohibited Practice Boundary Problem

The most practical challenge Art.5 creates for developers is the boundary problem: systems that do not fall within a prohibition but operate near the line. Several categories are worth highlighting:

Near the subliminal manipulation boundary:

  1. Engagement-optimised recommender systems whose personalisation exploits attention patterns users are not aware of
  2. Dynamic pricing or offer timing tuned to moments of reduced user deliberation

Near the social scoring boundary:

  1. Private-sector composite scores that aggregate behaviour across unrelated services and affect access or pricing
  2. Public-sector fraud detection that combines behavioural signals across multiple benefit programmes

Near the biometric categorisation boundary:

  1. Demographic estimation (e.g., age bands) from facial images — not itself prohibited, but any drift into GDPR Art.9 categories crosses the line
  2. Voice or image analytics producing "segment" labels that correlate strongly with ethnicity or religion


Python Tooling for Art.5 Screening

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Art5Risk(Enum):
    PROHIBITED = "PROHIBITED"
    LIKELY_PROHIBITED = "LIKELY_PROHIBITED"
    REVIEW_REQUIRED = "REVIEW_REQUIRED"
    PERMITTED = "PERMITTED"

@dataclass
class Art5Assessment:
    provision: str
    risk: Art5Risk
    reasoning: str
    exception_applies: bool = False
    exception_note: Optional[str] = None

class Art5ProhibitedPracticeScreener:
    """
    Art.5 EU AI Act prohibited practice screening tool.
    Applicable since February 2, 2025 (Art.113 first application date).
    """
    
    def screen_subliminal_manipulation(
        self,
        uses_below_threshold_signals: bool,
        uses_deceptive_framing: bool,
        can_cause_significant_harm: bool
    ) -> Art5Assessment:
        """Art.5(1)(a) — Subliminal manipulation."""
        if uses_below_threshold_signals and can_cause_significant_harm:
            return Art5Assessment(
                provision="Art.5(1)(a)",
                risk=Art5Risk.PROHIBITED,
                reasoning="Below-threshold signals + significant harm = prohibited subliminal manipulation"
            )
        if uses_deceptive_framing and can_cause_significant_harm:
            return Art5Assessment(
                provision="Art.5(1)(a)",
                risk=Art5Risk.LIKELY_PROHIBITED,
                reasoning="Deceptive techniques + significant harm likely meets Art.5(1)(a) threshold"
            )
        return Art5Assessment(
            provision="Art.5(1)(a)",
            risk=Art5Risk.PERMITTED,
            reasoning="No subliminal or deceptive manipulation detected"
        )
    
    def screen_vulnerability_exploitation(
        self,
        targets_age_disability_economic: bool,
        calibrated_to_exploit: bool,
        can_cause_significant_harm: bool
    ) -> Art5Assessment:
        """Art.5(1)(b) — Vulnerability exploitation."""
        if targets_age_disability_economic and calibrated_to_exploit and can_cause_significant_harm:
            return Art5Assessment(
                provision="Art.5(1)(b)",
                risk=Art5Risk.PROHIBITED,
                reasoning="Targeted vulnerability exploitation causing significant harm — prohibited"
            )
        if targets_age_disability_economic and can_cause_significant_harm:
            return Art5Assessment(
                provision="Art.5(1)(b)",
                risk=Art5Risk.REVIEW_REQUIRED,
                reasoning="Vulnerable group targeting with harm potential — requires legal review"
            )
        return Art5Assessment(
            provision="Art.5(1)(b)",
            risk=Art5Risk.PERMITTED,
            reasoning="No targeted vulnerability exploitation identified"
        )
    
    def screen_social_scoring(
        self,
        operated_by_public_authority: bool,  # retained for compatibility; the final text does not exempt private operators
        scores_social_behaviour_over_time: bool,
        cross_context_effect: bool,
        disproportionate_effect: bool
    ) -> Art5Assessment:
        """Art.5(1)(c) — Social scoring (public and private operators)."""
        if scores_social_behaviour_over_time and (cross_context_effect or disproportionate_effect):
            return Art5Assessment(
                provision="Art.5(1)(c)",
                risk=Art5Risk.PROHIBITED,
                reasoning="Social scoring with cross-context or disproportionate detrimental treatment — prohibited"
            )
        if scores_social_behaviour_over_time:
            return Art5Assessment(
                provision="Art.5(1)(c)",
                risk=Art5Risk.REVIEW_REQUIRED,
                reasoning="Scoring of social behaviour over time — review against the two Art.5(1)(c) triggers"
            )
        return Art5Assessment(
            provision="Art.5(1)(c)",
            risk=Art5Risk.PERMITTED,
            reasoning="No social scoring within Art.5(1)(c) scope detected"
        )
    
    def screen_predictive_policing(
        self,
        predicts_criminal_risk: bool,
        based_solely_on_profiling: bool,
        based_on_objective_facts: bool
    ) -> Art5Assessment:
        """Art.5(1)(d) — Predictive policing."""
        if predicts_criminal_risk and based_solely_on_profiling and not based_on_objective_facts:
            return Art5Assessment(
                provision="Art.5(1)(d)",
                risk=Art5Risk.PROHIBITED,
                reasoning="Criminal risk prediction based solely on profiling — prohibited"
            )
        if predicts_criminal_risk and based_on_objective_facts:
            return Art5Assessment(
                provision="Art.5(1)(d)",
                risk=Art5Risk.PERMITTED,
                reasoning="Criminal assessment based on objective verifiable facts — exception applies",
                exception_applies=True,
                exception_note="Must be based on objective, verifiable facts directly linked to criminal activity"
            )
        return Art5Assessment(
            provision="Art.5(1)(d)",
            risk=Art5Risk.PERMITTED,
            reasoning="No criminal risk prediction based on profiling detected"
        )
    
    def screen_facial_recognition_scraping(
        self,
        scrapes_facial_images: bool,
        is_untargeted: bool,
        builds_biometric_database: bool
    ) -> Art5Assessment:
        """Art.5(1)(e) — Facial recognition database scraping."""
        if scrapes_facial_images and is_untargeted and builds_biometric_database:
            return Art5Assessment(
                provision="Art.5(1)(e)",
                risk=Art5Risk.PROHIBITED,
                reasoning="Untargeted facial image scraping to build biometric database — absolutely prohibited"
            )
        return Art5Assessment(
            provision="Art.5(1)(e)",
            risk=Art5Risk.PERMITTED,
            reasoning="No untargeted biometric database scraping detected"
        )
    
    def screen_emotion_recognition(
        self,
        infers_emotions: bool,
        deployment_context: str,  # "workplace", "education", "medical", "safety", "other"
    ) -> Art5Assessment:
        """Art.5(1)(f) — Emotion recognition in workplace/education."""
        if not infers_emotions:
            return Art5Assessment(
                provision="Art.5(1)(f)",
                risk=Art5Risk.PERMITTED,
                reasoning="No emotion inference"
            )
        if deployment_context in ("medical", "safety"):
            return Art5Assessment(
                provision="Art.5(1)(f)",
                risk=Art5Risk.PERMITTED,
                reasoning=f"Medical/safety exception applies for context: {deployment_context}",
                exception_applies=True,
                exception_note="Verify that inference is limited to safety/medical state, not emotional characterisation"
            )
        if deployment_context in ("workplace", "education"):
            return Art5Assessment(
                provision="Art.5(1)(f)",
                risk=Art5Risk.PROHIBITED,
                reasoning=f"Emotion recognition in {deployment_context} — prohibited without medical/safety justification"
            )
        return Art5Assessment(
            provision="Art.5(1)(f)",
            risk=Art5Risk.REVIEW_REQUIRED,
            reasoning=f"Emotion inference in context '{deployment_context}' — review required"
        )
    
    def screen_biometric_categorisation(
        self,
        uses_biometric_data: bool,
        infers_sensitive_attributes: bool,  # race, politics, religion, sex life, sexual orientation
        law_enforcement_labelling: bool = False
    ) -> Art5Assessment:
        """Art.5(1)(g) — Biometric categorisation for sensitive attributes."""
        if uses_biometric_data and infers_sensitive_attributes:
            if law_enforcement_labelling:
                return Art5Assessment(
                    provision="Art.5(1)(g)",
                    risk=Art5Risk.PERMITTED,
                    reasoning="Law enforcement labelling exception — lawfully acquired dataset, Union law compliance required",
                    exception_applies=True,
                    exception_note="Must be lawfully acquired biometric dataset in accordance with Union law"
                )
            return Art5Assessment(
                provision="Art.5(1)(g)",
                risk=Art5Risk.PROHIBITED,
                reasoning="Biometric categorisation inferring sensitive attributes (race/politics/religion/sex life/orientation) — prohibited"
            )
        return Art5Assessment(
            provision="Art.5(1)(g)",
            risk=Art5Risk.PERMITTED,
            reasoning="No biometric categorisation for sensitive attributes detected"
        )
    
    def screen_realtime_rbi(
        self,
        is_realtime_biometric_identification: bool,
        deployment_context: str,  # "law_enforcement_public", "private", "other"
        exception_objective: Optional[str] = None  # "missing_persons", "imminent_threat", "annex_ii_crime"
    ) -> Art5Assessment:
        """Art.5(1)(h) — Real-time remote biometric identification in public spaces."""
        if not is_realtime_biometric_identification:
            return Art5Assessment(
                provision="Art.5(1)(h)",
                risk=Art5Risk.PERMITTED,
                reasoning="Not real-time RBI — post-deployment analysis or non-biometric"
            )
        if deployment_context != "law_enforcement_public":
            return Art5Assessment(
                provision="Art.5(1)(h)",
                risk=Art5Risk.REVIEW_REQUIRED,
                reasoning="Real-time RBI outside law enforcement public space — review scope and other obligations"
            )
        # Law enforcement in public spaces
        if exception_objective in ("missing_persons", "imminent_threat", "annex_ii_crime"):
            return Art5Assessment(
                provision="Art.5(1)(h)",
                risk=Art5Risk.PERMITTED,
                reasoning=f"Art.5(2) exception applies: {exception_objective}",
                exception_applies=True,
                exception_note="Art.5(3)-(8) procedural requirements mandatory: prior judicial authorisation, Commission notification, FRIA, EU database registration"
            )
        return Art5Assessment(
            provision="Art.5(1)(h)",
            risk=Art5Risk.PROHIBITED,
            reasoning="Real-time RBI by law enforcement in public spaces without applicable Art.5(2) exception — prohibited"
        )
    
    def full_screen(self, system_profile: dict) -> list[Art5Assessment]:
        """Run full Art.5 screen based on system profile dict."""
        results = []
        results.append(self.screen_subliminal_manipulation(
            system_profile.get("subliminal_signals", False),
            system_profile.get("deceptive_framing", False),
            system_profile.get("significant_harm_potential", False)
        ))
        results.append(self.screen_vulnerability_exploitation(
            system_profile.get("targets_vulnerable_groups", False),
            system_profile.get("calibrated_to_exploit", False),
            system_profile.get("significant_harm_potential", False)
        ))
        results.append(self.screen_social_scoring(
            system_profile.get("public_authority", False),
            system_profile.get("scores_social_behaviour", False),
            system_profile.get("cross_context_scoring", False),
            system_profile.get("disproportionate_scoring", False)
        ))
        results.append(self.screen_predictive_policing(
            system_profile.get("predicts_criminal_risk", False),
            system_profile.get("based_on_profiling", False),
            system_profile.get("based_on_objective_facts", False)
        ))
        results.append(self.screen_facial_recognition_scraping(
            system_profile.get("scrapes_facial_images", False),
            system_profile.get("untargeted_scraping", False),
            system_profile.get("builds_biometric_db", False)
        ))
        results.append(self.screen_emotion_recognition(
            system_profile.get("infers_emotions", False),
            system_profile.get("deployment_context", "other")
        ))
        results.append(self.screen_biometric_categorisation(
            system_profile.get("uses_biometrics", False),
            system_profile.get("infers_sensitive_attributes", False),
            system_profile.get("law_enforcement_labelling", False)
        ))
        results.append(self.screen_realtime_rbi(
            system_profile.get("realtime_rbi", False),
            system_profile.get("rbi_context", "other"),
            system_profile.get("rbi_exception", None)
        ))
        return results

def assess_art5_compliance(system_profile: dict) -> dict:
    """
    Full Art.5 compliance assessment.
    Returns summary with prohibited/review/permitted counts and findings.
    """
    screener = Art5ProhibitedPracticeScreener()
    results = screener.full_screen(system_profile)
    
    prohibited = [r for r in results if r.risk == Art5Risk.PROHIBITED]
    likely = [r for r in results if r.risk == Art5Risk.LIKELY_PROHIBITED]
    review = [r for r in results if r.risk == Art5Risk.REVIEW_REQUIRED]
    permitted = [r for r in results if r.risk == Art5Risk.PERMITTED]
    
    return {
        "overall_status": "PROHIBITED" if prohibited or likely else ("REVIEW_REQUIRED" if review else "PERMITTED"),
        "prohibited_count": len(prohibited),
        "likely_prohibited_count": len(likely),
        "review_required_count": len(review),
        "permitted_count": len(permitted),
        "prohibited_provisions": [r.provision for r in prohibited + likely],
        "review_provisions": [r.provision for r in review],
        "findings": results,
        "applicable_since": "2025-02-02",
        "penalty_max": "EUR 35,000,000 or 7% global annual turnover (Art.99(3))"
    }

30-Item Art.5 Compliance Checklist

Application Date (2 items)

  1. Confirmed that Art.5 has applied since February 2, 2025 — no grace period, no transition
  2. Reviewed all AI systems in portfolio against Art.5 before or immediately after that date

Subliminal Manipulation — Art.5(1)(a) (4 items)

  1. Confirmed no AI output operates below the threshold of human consciousness
  2. Confirmed no AI personalisation is calibrated to exploit unconscious cognitive biases
  3. Documented the basis for recommendations/decisions so users can understand and contest them
  4. Assessed whether AI timing, framing, or content design could constitute deceptive manipulation causing significant harm

Vulnerability Exploitation — Art.5(1)(b) (4 items)

  1. Identified whether the system collects or infers vulnerability signals (age, disability, economic situation)
  2. Confirmed that any vulnerability-aware adaptation supports rather than exploits the user
  3. Conducted harm assessment for use cases where vulnerable groups are target audience
  4. Documented evidence that the system is not calibrated to extract disproportionate value from vulnerable users

Social Scoring — Art.5(1)(c) (3 items)

  1. Identified whether any system scores natural persons over time based on social behaviour or personality characteristics (the prohibition covers private as well as public operators)
  2. Confirmed no score produces detrimental treatment in contexts unrelated to where the underlying data was generated or collected
  3. Confirmed any score-based treatment is proportionate and directly related to the domain of assessment

Predictive Policing — Art.5(1)(d) (4 items)

  1. Identified whether the system makes individual criminal risk predictions
  2. Confirmed that any risk assessment is based on objective, verifiable facts about specific acts — not personality or demographics alone
  3. Documented the evidence basis for any criminal risk output
  4. Ensured human oversight layer cannot be bypassed for decisions affecting individuals' criminal risk classification

Facial Recognition Scraping — Art.5(1)(e) (3 items)

  1. Confirmed the system does not scrape facial images from internet or CCTV without individual targeting
  2. Confirmed no biometric database is expanded through untargeted image collection
  3. Reviewed data pipeline for any automated facial image collection components

Emotion Recognition — Art.5(1)(f) (4 items)

  1. Identified whether the system infers emotional states from any signal type
  2. Confirmed that any emotion inference in workplace context is limited to medical or safety applications
  3. Confirmed that any emotion inference in educational context is limited to medical or safety applications
  4. Documented the distinction between safety state detection (permitted) and emotional characterisation (prohibited)

Biometric Categorisation — Art.5(1)(g) (3 items)

  1. Confirmed the system does not infer race, political opinions, trade union membership, religion, sex life, or sexual orientation from biometric data
  2. Reviewed all biometric processing outputs for any proxies or correlates of sensitive attributes
  3. If law enforcement labelling exception claimed: confirmed dataset is lawfully acquired and Union law compliant

Real-Time RBI — Art.5(1)(h) (3 items)

  1. Confirmed whether system performs real-time remote biometric identification in publicly accessible spaces
  2. If law enforcement deployment: confirmed applicable Art.5(2) exception and complied with Art.5(3)-(8) procedural requirements (prior authorisation, Commission notification, FRIA, EU database registration)
  3. Documented distinction between real-time identification (restricted) and post-deployment forensic analysis (high-risk but not prohibited)

Relationship to Other EU AI Act Provisions

Art.5 sits at the top of the EU AI Act's risk pyramid, but it does not operate in isolation:

Art.5 → Art.71 (EU Database): AI systems used for real-time RBI under the Art.5(2) exception must be registered in the EU database.

Art.5 → Art.99(3) (Penalties): The highest penalty tier — EUR 35M or 7% — applies exclusively to Art.5 violations.

Art.5 → Annex III (High-Risk): Systems that approach Art.5 boundaries often qualify as high-risk AI under Annex III even if they do not cross the prohibited practice line. Law enforcement AI (Category 6), emotion recognition systems, biometric categorisation, and predictive scoring systems are all listed in Annex III.

Art.5 → GDPR: Biometric data is special category data under GDPR Art.9. Most Art.5(1)(e) and Art.5(1)(g) violations will simultaneously constitute GDPR Art.9 violations.

Art.5 → AI liability: The proposed AI Liability Directive (ALD) would have created a rebuttable presumption of a causal link where an AI Act obligation was breached and harm of the protected type occurred, but the Commission announced its withdrawal in its 2025 work programme and it has not been adopted. Harm caused by Art.5 violations is therefore addressed through national liability rules and, for products, the revised Product Liability Directive (EU) 2024/2853.


Conclusion

Article 5 defines the absolute boundary of acceptable AI in the EU. It has been in force since February 2, 2025 — not August 2026. Developers and deployers who have not screened their systems against Art.5 are operating with unquantified legal exposure under the highest penalty tier in the regulation.

The practical compliance approach is a two-stage screen: first, use the structured screening tool above to identify any system that clearly falls within a prohibition. Second, conduct a legal review for any system that operates near the boundary — near-subliminal personalisation, vulnerability-aware consumer AI, law enforcement risk tools, biometric processing pipelines. The boundary cases require legal opinion, not just engineering assessment.

Art.5 compliance documentation should be maintained alongside your Annex IV technical documentation. If you are ever required to demonstrate that a system does not fall within a prohibited practice, contemporaneous documentation of the design decisions that keep it outside scope is far more persuasive than post-hoc explanations.

See Also