EU AI Act Digital Omnibus Art.5(1)(k): Prohibition of AI-Based Predictive Policing Through Profiling (2026)
The EU AI Act Digital Omnibus introduces Art.5(1)(k): an absolute prohibition on placing on the market, putting into service, or using AI systems that assess or predict the risk of a natural person committing a criminal offence, based solely on profiling of that person or on assessing their personality traits and characteristics. This is the Digital Omnibus answer to the "Minority Report" problem — AI systems that generate individual criminal risk scores from demographic data, behavioral patterns, and social network analysis without corroborating physical evidence.
The Digital Omnibus Art.5 series now covers four new prohibited practices: Art.5(1)(i) democratic disinformation, Art.5(1)(j) emotion inference in workplace and education settings, Art.5(1)(k) predictive policing through profiling alone, and Art.5(1)(l) NCII generation. Together they extend the EU AI Act's absolute prohibition list to cover AI-specific harms that were not fully anticipated in the original 2024 legislative text.
Enforcement timeline: Art.5(1)(k) applies under the Digital Omnibus extended timeline of December 2027. Law enforcement vendors, criminal justice analytics SaaS providers, and risk scoring platform operators should begin product audits immediately — systems that compute individual-level criminal risk scores from demographic or behavioral profiles will require fundamental architectural changes.
Penalty tier: Art.5(1)(k) violations fall under Art.99(1) — the highest EU AI Act penalty tier: fines of up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher. This is the same tier as subliminal manipulation (Art.5(1)(a)) and NCII generation (Art.5(1)(l)).
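For orientation, the Art.99(1) ceiling is simply the higher of the fixed amount and the turnover percentage. A minimal illustration (the function name and the idea of computing a ceiling are ours; actual fines are set case by case by market surveillance authorities):

```python
def art99_1_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an Art.99(1) fine: EUR 35M or 7% of turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A vendor with EUR 1bn turnover faces a EUR 70M ceiling, not EUR 35M
assert art99_1_ceiling(1_000_000_000) == 70_000_000
# Below EUR 500M turnover, the fixed EUR 35M floor dominates
assert art99_1_ceiling(200_000_000) == 35_000_000
```

The crossover point is EUR 500M turnover; above it, the percentage term governs.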
What Art.5(1)(k) Actually Prohibits
The Digital Omnibus amendment text for Art.5(1)(k) prohibits placing on the market, putting into service, or using AI systems that:
assess or predict the risk of a natural person committing a criminal offence based solely on the profiling of that natural person or on the assessment of their personality traits and characteristics.
The prohibition rests on two defining elements:
- Criminal risk assessment or prediction at the individual level — the AI system generates a score, flag, probability, or risk category for a specific natural person regarding their likelihood of committing a criminal offence
- Solely on profiling or personality trait assessment — the individual-level criminal risk is derived exclusively from statistical profiles, demographic correlations, behavioral pattern matching, or personality/character assessment, without corroborating physical evidence or human intelligence
Both elements must be present. Art.5(1)(k) does not prohibit all AI use in law enforcement — it prohibits a specific methodology: pure profiling as the sole basis for individual criminal risk prediction.
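The two-element test reduces to a conjunction: the prohibition applies only when both conditions hold. A minimal sketch with illustrative flag names (not terms from the regulation):

```python
def within_art5_1_k(generates_individual_criminal_risk: bool,
                    uses_profiling_inputs: bool,
                    uses_evidence_inputs: bool) -> bool:
    """Prohibited only when individual criminal risk prediction rests solely on profiling."""
    solely_profiling = uses_profiling_inputs and not uses_evidence_inputs
    return generates_individual_criminal_risk and solely_profiling

assert within_art5_1_k(True, True, False) is True    # profile-only scoring: prohibited
assert within_art5_1_k(True, True, True) is False    # evidence-combined: not within (k)
assert within_art5_1_k(False, True, False) is False  # no individual output: out of scope
```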
The "Solely on Profiling" Test
The word "solely" is the critical qualifier in Art.5(1)(k). It distinguishes the prohibited class of systems from lawful AI tools in law enforcement and criminal justice. Applying the solely-on-profiling test requires asking: what inputs does the AI system use to generate the individual-level criminal risk assessment?
Prohibited: Profiling-Only Inputs
The following inputs, when used as the sole or exclusive basis for individual criminal risk assessment, place a system within Art.5(1)(k)'s prohibition:
Demographic profiling — Age, gender, nationality, ethnicity, country of birth, place of residence (postcode, neighbourhood), socioeconomic indicators. AI systems that assign criminal risk scores based on demographic cluster membership are prohibited regardless of how sophisticated the statistical model is.
Behavioral pattern profiling — Movement patterns, communication frequency patterns, social network topology, purchasing patterns, online activity patterns — when used as the sole input for individual criminal risk prediction without corroborating evidence of specific criminal activity.
Personality trait and character assessment — Psychological profiling tools that score individuals on "criminogenic personality" traits (antisocial tendency, low empathy, impulsivity, risk appetite) derived from behavioral indicators, social media analysis, or psychometric testing, when used as the sole basis for criminal risk prediction.
Prior contact profiling — Criminal history records, arrest records (including non-conviction records), police contact logs, social services involvement — when these historical records are used alone, without current evidence, to predict future criminal behaviour.
Social network and association profiling — Guilt-by-association analysis: AI systems that elevate an individual's criminal risk score based primarily on who they associate with, rather than on evidence of their own conduct.
Permitted: Evidence-Combined Inputs
Art.5(1)(k) does not prohibit AI tools where profiling data is one input among multiple inputs that include corroborating physical evidence or specific intelligence:
- Forensic evidence + profiling context — AI systems that analyse physical forensic evidence (DNA, fingerprints, CCTV footage) and use profile data as supplementary context for investigative leads, not as the primary risk determination
- Specific intelligence + pattern correlation — AI systems that assess behavioral patterns in the context of a specific, evidence-supported investigation
- Post-hoc recidivism assessment — Structured recidivism risk tools (like COMPAS or equivalent) used in post-conviction sentencing contexts, subject to Annex III high-risk obligations and human oversight requirements, where profiling is not the sole input
The distinction is architecturally meaningful: a prohibited system asks "is this person likely to commit a crime based on their profile?" A potentially-permitted system asks "given that we have this evidence about a specific incident, does this person's profile provide relevant context?"
Who Is Affected
Law Enforcement Technology Vendors
Predictive policing platform providers — Companies that sell or licence AI-powered crime prediction platforms to police forces. If these platforms generate individual-level risk scores or "person of interest" flags based primarily on demographic, behavioral, or social network profiles, they fall within Art.5(1)(k). This includes platforms marketed as "next generation" or "intelligence-led policing" tools if their individual-level outputs rely solely on profile data.
Criminal justice analytics SaaS — SaaS platforms that provide risk scoring dashboards to prosecutors, judges, probation officers, or law enforcement, where the risk scores are derived primarily from profiling inputs. "Pre-trial flight risk" or "recidivism likelihood" tools that rely solely on demographic and history data without corroborating current-situation assessment are within scope.
Social media monitoring and link analysis vendors — AI platforms that monitor social media activity, map social networks, and generate individual threat or risk assessments based on network topology and behavioral patterns. The risk assessment function — not the monitoring function — is what triggers Art.5(1)(k).
Behavioral analytics and anomaly detection platforms — Security analytics tools that generate individual risk flags based on behavioral deviation from peer group norms, when those flags are used for criminal risk assessment purposes rather than access control or fraud detection.
Deployers — Law Enforcement and Justice Sector
Police forces and public safety agencies — Law enforcement agencies that deploy predictive policing tools face direct deployer liability under Art.5(1)(k) when they configure or use AI systems in ways that make individual-level criminal risk predictions from profile data alone. The prohibition applies to both the tool vendor (provider) and the agency deploying it (deployer).
Courts and sentencing authorities — Courts that rely on AI-generated risk scores as a primary input for sentencing, bail, or probation decisions face Art.5(1)(k) liability where those scores are based solely on profiling. This intersects with fundamental rights obligations under EU Charter Art.47 (effective judicial protection) and Art.48 (presumption of innocence).
Prison and probation services — AI risk classification systems used for prisoner categorization, parole assessment, or probation supervision levels fall within Art.5(1)(k) when they generate individual-level criminal risk predictions solely from profile data.
Importers and Distributors
EU-based distributors or resellers of non-EU predictive policing technology face importer liability. This is particularly relevant for US-developed predictive policing platforms — tools widely deployed in US law enforcement that may be marketed for EU adoption require full Art.5(1)(k) compliance review before EU distribution.
What Art.5(1)(k) Does NOT Prohibit
Geographic hotspot analysis — Area-level crime prediction that identifies geographic zones with elevated crime risk based on historical incident patterns, environmental factors, and time-of-day analysis. Hotspot mapping generates predictions about locations, not individuals. Art.5(1)(k) targets individual-level profiling; hotspot analysis remains outside scope provided it does not produce individual-level risk flags as an output.
Pattern-of-life analysis for existing investigations — AI tools that analyse communication patterns, movement data, or transaction records in the context of an active, evidence-supported investigation into specific criminal activity. The key distinction: the criminal activity is already evidenced; the AI provides analytical support, not a standalone risk determination.
Post-conviction structured risk assessment — Validated risk assessment instruments used in post-conviction contexts (sentencing, parole, probation classification) where the tool is one input in a structured professional judgment process with human oversight, rather than the sole determination. These tools are likely Annex III high-risk (not prohibited), subject to conformity assessment, logging, and human oversight requirements.
Fraud detection and financial crime monitoring — AI systems that detect unusual financial transaction patterns and flag individual accounts for review. The criminal risk assessment here is grounded in specific transactional evidence (anomalous behaviour relative to baseline), not on demographic profile alone.
Victim risk assessment tools — AI tools that assess the risk of a person becoming a victim of crime (domestic violence risk assessment, vulnerable person identification) are outside Art.5(1)(k) scope. The prohibition targets prediction of criminal offence commission by the person, not victimisation risk.
Cyber threat intelligence — AI systems that correlate threat actor behaviour patterns to identify cybercriminals or state-sponsored attackers in a specific investigation context, where the attribution is based on technical indicators of compromise, not on demographic profiling.
Art.5(1)(k) vs. Art.5(1)(d): Social Scoring — Key Distinctions
Art.5(1)(k) is frequently confused with the Art.5(1)(d) social scoring prohibition. Both target AI systems that generate individual negative assessments from profile data — but they differ in scope, actor coverage, and context.
| Dimension | Art.5(1)(d) Social Scoring | Art.5(1)(k) Predictive Policing |
|---|---|---|
| Output type | Social trustworthiness / character scores affecting access to services | Criminal offence risk scores affecting law enforcement action |
| Context | Any social context where public authority treats person detrimentally across unrelated domains | Criminal justice / law enforcement context |
| Actor coverage | Public authorities acting in public interest, entities acting similarly | Any provider or deployer (including private vendors, law enforcement) |
| Temporal scope | Continuous social behaviour evaluation over time | Future criminal risk prediction |
| Prohibition trigger | Detrimental treatment in unrelated domains OR disproportionate treatment | Solely profiling-based individual criminal risk determination |
| Exemption pathway | None — absolute prohibition | None — absolute prohibition (but evidence-combined tools may fall outside scope) |
The practical difference: China's Social Credit System is the Art.5(1)(d) reference case. A US-style predictive policing algorithm generating individual arrest probability scores from demographic clusters is the Art.5(1)(k) reference case. A system could violate both if it generates criminal risk scores from social behaviour patterns for detrimental treatment purposes.
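Because the two triggers are independent, a single system can violate both articles at once. A small sketch of the overlap check, using hypothetical boolean flags:

```python
def overlapping_prohibitions(criminal_risk_solely_from_profiling: bool,
                             social_score_detrimental_unrelated_treatment: bool) -> list:
    """Return which Art.5 prohibitions a system triggers, per the table above."""
    hits = []
    if social_score_detrimental_unrelated_treatment:
        hits.append("Art.5(1)(d)")
    if criminal_risk_solely_from_profiling:
        hits.append("Art.5(1)(k)")
    return hits

# A profiling-based criminal risk score reused for unrelated detrimental treatment hits both
assert overlapping_prohibitions(True, True) == ["Art.5(1)(d)", "Art.5(1)(k)"]
assert overlapping_prohibitions(False, False) == []
```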
Art.5(1)(k) vs. Annex III High-Risk: The Compliance Pathway Difference
This distinction is critical for law enforcement technology vendors. Not every AI system used in criminal justice is prohibited under Art.5(1)(k) — many are regulated as Annex III high-risk systems with a compliance pathway rather than an absolute prohibition.
Annex III, Point 6 (Law Enforcement High-Risk AI) covers AI systems used for:
- Polygraphs and similar tools to assess reliability of natural persons
- AI to evaluate criminal recidivism risk of individual natural persons
- AI for criminal risk assessment to be used to profile individuals
- AI for individual crime-risk assessment used as a priority tool for law enforcement
- AI for detecting deep fakes in the context of law enforcement
These are high-risk, not prohibited — they require conformity assessment, technical documentation, human oversight, and logging. They are on a compliance pathway.
Art.5(1)(k) triggers the absolute prohibition when the system relies solely on profiling for individual criminal risk prediction. A system that combines profile data with forensic evidence inputs may be Annex III high-risk (compliance pathway) rather than Art.5(1)(k) prohibited. The architecture of the system — specifically, what inputs feed the individual-level risk output — determines which category applies.
This creates a meaningful design decision for law enforcement AI vendors: systems designed with mandatory evidence-input requirements before generating individual risk assessments may avoid Art.5(1)(k) prohibition while remaining subject to Annex III high-risk compliance obligations.
Intersection with the EU Charter of Fundamental Rights
Art.5(1)(k) is anchored in multiple Charter rights that are particularly strong in the criminal justice context:
Art.48 — Presumption of Innocence: AI systems that generate individual criminal risk scores from profiling data alone directly conflict with the presumption of innocence. Flagging a person as "high risk" of future criminal conduct, based on their demographic profile, treats them as a prospective criminal before any criminal act occurs or is evidenced. Art.5(1)(k) gives legislative expression to this Charter right in the AI context.
Art.6 — Right to Liberty and Security: Predictive policing outputs that influence detention, stop-and-search decisions, or enhanced surveillance directly implicate the right to liberty. AI-generated risk scores that function as algorithmic justifications for liberty restrictions without evidential basis violate Art.6.
Art.21 — Non-Discrimination: Profiling-based predictive policing is inherently at risk of encoding and amplifying historical discrimination. Training data that reflects historical over-policing of specific communities produces models that perpetuate that over-policing. The prohibition addresses this structural risk — not by regulating the training data, but by prohibiting the use of solely-profiling-based individual risk assessment outputs in law enforcement decisions.
Art.8 — Data Protection / GDPR Criminal Offence Data: Criminal record data and data revealing criminal convictions or offences fall under GDPR Art.10, which subjects them to strict processing limitations. Using such data as profiling inputs for predictive criminal risk assessment requires an explicit legal basis, independent of Art.5(1)(k) compliance.
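One way to operationalise the GDPR Art.10 constraint is an ingestion gate that rejects criminal-offence data lacking a documented legal basis. A sketch under assumed record fields (`gdpr_art10_criminal_offence_data` and `documented_legal_basis` are hypothetical names, not GDPR terms):

```python
class MissingLegalBasisError(Exception):
    """Raised when GDPR Art.10 data arrives without a documented legal basis."""

def ingest_profiling_input(record: dict) -> dict:
    """Reject criminal conviction/offence data (GDPR Art.10) lacking a legal basis."""
    if record.get("gdpr_art10_criminal_offence_data") and not record.get("documented_legal_basis"):
        raise MissingLegalBasisError(
            f"Field '{record.get('field')}' is Art.10 data and requires an explicit legal basis"
        )
    return record

# Criminal-record data without a legal basis is rejected at ingestion
try:
    ingest_profiling_input({"field": "arrest_history", "gdpr_art10_criminal_offence_data": True})
except MissingLegalBasisError:
    pass  # expected: processing refused
```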
AI Liability Directive Exposure
Art.5(1)(k) violations create AI Liability Directive (ALD) exposure beyond the EU AI Act penalties:
ALD Art.4 — Fault Presumption: When an Art.5(1)(k) violation causes damage, the ALD creates a rebuttable presumption of fault by the provider. A person who suffers harm — wrongful arrest, discrimination, damage to reputation, loss of employment — as a result of a prohibited predictive policing system can invoke this presumption in civil claims without having to prove technical fault.
ALD Art.3 — Disclosure of Evidence: Defendants in civil claims can request disclosure of the AI system's profiling inputs, model architecture, and individual risk score generation logic. This disclosure obligation applies to black-box predictive policing systems — the prohibition on the system itself is the EU AI Act enforcement path; the ALD is the civil damages path for affected individuals.
Dual exposure: An operator of a prohibited predictive policing system faces simultaneous EU AI Act administrative penalties (up to €35M / 7% turnover) and civil liability to affected individuals under the ALD. Law enforcement agencies deploying prohibited systems on behalf of the state face both institutional liability and potential individual officer liability under national law implementing the ALD.
Technical Implementation: Compliant Law Enforcement AI Architecture
For law enforcement AI vendors seeking to position products outside Art.5(1)(k) scope while serving legitimate policing needs, the architecture must enforce mandatory evidence integration before generating any individual-level criminal risk assessment.
Prohibited Architecture Pattern
```python
# PROHIBITED: Individual criminal risk from profile alone
def assess_criminal_risk(person_id: str) -> float:
    profile = get_demographic_profile(person_id)        # age, location, history
    network = get_social_network_score(person_id)       # association-based risk
    behavior = get_behavioral_pattern_score(person_id)  # movement, activity
    # Risk score derived solely from profile inputs → Art.5(1)(k) prohibited
    risk_score = model.predict([profile, network, behavior])
    return risk_score
```
Compliant Architecture Pattern
```python
# COMPLIANT: Profile is supplementary context, not sole basis
def investigate_incident(incident_evidence: EvidencePackage) -> InvestigativeAnalysis:
    # Physical evidence is mandatory prerequisite
    if not incident_evidence.has_physical_evidence:
        raise InsufficientEvidenceError("Physical evidence required before profile analysis")
    # Profile data supplements evidence analysis, does not replace it
    forensic_analysis = analyse_forensic_evidence(incident_evidence.physical_evidence)
    profile_context = get_supplementary_profile_context(incident_evidence.suspect_id)
    # Output is investigative context for human analyst — not a standalone risk score
    return InvestigativeAnalysis(
        evidence_summary=forensic_analysis,
        supplementary_context=profile_context,
        requires_human_review=True,
        no_automated_risk_score=True,  # Critical: no solo-profiling risk output
    )
```
The architecture principle: evidence gates must be enforced before profile data is incorporated, and outputs must be framed as investigative context for human analysts rather than standalone individual risk scores.
Python PredictivePolicingChecker
```python
from enum import Enum
from dataclasses import dataclass, field
from typing import List


class InputCategory(Enum):
    DEMOGRAPHIC = "demographic"                      # age, gender, location, ethnicity
    BEHAVIORAL_PATTERN = "behavioral_pattern"        # movement, communication, purchase patterns
    SOCIAL_NETWORK = "social_network"                # association analysis, network topology
    PERSONALITY_TRAIT = "personality_trait"          # psychological scoring, character assessment
    PRIOR_CONTACT = "prior_contact"                  # arrest records, police contacts, history
    PHYSICAL_EVIDENCE = "physical_evidence"          # forensic, CCTV, fingerprints, DNA
    SPECIFIC_INTELLIGENCE = "specific_intelligence"  # case-specific intelligence reports
    TRANSACTIONAL_ANOMALY = "transactional_anomaly"  # financial anomaly (not pure profiling)


class ComplianceResult(Enum):
    PROHIBITED = "prohibited"                    # Art.5(1)(k) — absolute prohibition
    HIGH_RISK_ANNEX_III = "high_risk_annex_iii"  # Evidence-combined — Annex III pathway
    OUTSIDE_SCOPE = "outside_scope"              # Geographic/area-level — no prohibition
    REQUIRES_REVIEW = "requires_review"          # Ambiguous — legal review needed


PROFILING_INPUTS = {
    InputCategory.DEMOGRAPHIC,
    InputCategory.BEHAVIORAL_PATTERN,
    InputCategory.SOCIAL_NETWORK,
    InputCategory.PERSONALITY_TRAIT,
    InputCategory.PRIOR_CONTACT,
}

EVIDENCE_INPUTS = {
    InputCategory.PHYSICAL_EVIDENCE,
    InputCategory.SPECIFIC_INTELLIGENCE,
    InputCategory.TRANSACTIONAL_ANOMALY,
}


@dataclass
class RiskAssessmentSystem:
    name: str
    generates_individual_risk_score: bool
    output_is_geographic_only: bool
    input_categories: List[InputCategory]
    human_oversight_required: bool
    evidence_gate_enforced: bool  # Technical gate before profile use
    used_in_criminal_justice: bool


@dataclass
class Art5kComplianceResult:
    system_name: str
    result: ComplianceResult
    profiling_inputs_used: List[InputCategory]
    evidence_inputs_used: List[InputCategory]
    solely_profiling: bool
    prohibited_features: List[str] = field(default_factory=list)
    required_actions: List[str] = field(default_factory=list)
    annex_iii_requirements: List[str] = field(default_factory=list)


class PredictivePolicingChecker:
    """
    Checks whether an AI risk assessment system falls within the
    Art.5(1)(k) Digital Omnibus prohibition on predictive policing
    based solely on profiling of natural persons.

    Enforcement: December 2027
    Penalty: Art.99(1) — €35M or 7% global annual turnover
    """

    def check(self, system: RiskAssessmentSystem) -> Art5kComplianceResult:
        profiling_inputs = [i for i in system.input_categories if i in PROFILING_INPUTS]
        evidence_inputs = [i for i in system.input_categories if i in EVIDENCE_INPUTS]

        # Geographic-only outputs are outside Art.5(1)(k) scope
        if system.output_is_geographic_only and not system.generates_individual_risk_score:
            return Art5kComplianceResult(
                system_name=system.name,
                result=ComplianceResult.OUTSIDE_SCOPE,
                profiling_inputs_used=profiling_inputs,
                evidence_inputs_used=evidence_inputs,
                solely_profiling=False,
            )

        # Not in criminal justice context → Art.5(1)(k) does not apply
        if not system.used_in_criminal_justice:
            return Art5kComplianceResult(
                system_name=system.name,
                result=ComplianceResult.OUTSIDE_SCOPE,
                profiling_inputs_used=profiling_inputs,
                evidence_inputs_used=evidence_inputs,
                solely_profiling=False,
            )

        # Individual-level risk scoring with no evidence inputs → PROHIBITED
        solely_profiling = (
            system.generates_individual_risk_score
            and len(profiling_inputs) > 0
            and len(evidence_inputs) == 0
        )
        if solely_profiling:
            prohibited_features = [
                f"Individual risk score from {i.value}" for i in profiling_inputs
            ]
            return Art5kComplianceResult(
                system_name=system.name,
                result=ComplianceResult.PROHIBITED,
                profiling_inputs_used=profiling_inputs,
                evidence_inputs_used=evidence_inputs,
                solely_profiling=True,
                prohibited_features=prohibited_features,
                required_actions=[
                    "Immediately cease placing system on EU market",
                    "Audit all active deployments with EU law enforcement agencies",
                    "Remove individual-level risk scoring functionality, or implement a mandatory evidence gate before any profile analysis",
                    "Conduct legal review for Art.99(1) penalty exposure",
                    "Assess ALD Art.4 civil liability exposure for past outputs",
                ],
            )

        # Evidence inputs present → Annex III high-risk, not prohibited
        if system.generates_individual_risk_score and len(evidence_inputs) > 0:
            annex_iii_reqs = [
                "Conformity assessment (Annex VI or notified body)",
                "Technical documentation (Art.11 and Annex IV)",
                "Risk management system (Art.9)",
                "Human oversight mechanism (Art.14)",
                "Logging and audit trail (Art.12)",
                "EU database registration (Art.49)",
                "Fundamental rights impact assessment (Art.27)",
            ]
            if not system.human_oversight_required:
                annex_iii_reqs.append(
                    "CRITICAL: human oversight currently not enforced — required for Annex III"
                )
            if not system.evidence_gate_enforced:
                annex_iii_reqs.append(
                    "WARNING: evidence gate not technically enforced — implement to maintain Art.5(1)(k) boundary"
                )
            return Art5kComplianceResult(
                system_name=system.name,
                result=ComplianceResult.HIGH_RISK_ANNEX_III,
                profiling_inputs_used=profiling_inputs,
                evidence_inputs_used=evidence_inputs,
                solely_profiling=False,
                annex_iii_requirements=annex_iii_reqs,
            )

        return Art5kComplianceResult(
            system_name=system.name,
            result=ComplianceResult.REQUIRES_REVIEW,
            profiling_inputs_used=profiling_inputs,
            evidence_inputs_used=evidence_inputs,
            solely_profiling=False,
        )


# --- Example usage ---
checker = PredictivePolicingChecker()

# PROHIBITED: Demographic + behavioral profiling → individual risk score, no evidence gate
prohibited_system = RiskAssessmentSystem(
    name="CrimePredictPro v2",
    generates_individual_risk_score=True,
    output_is_geographic_only=False,
    input_categories=[
        InputCategory.DEMOGRAPHIC,
        InputCategory.BEHAVIORAL_PATTERN,
        InputCategory.PRIOR_CONTACT,
        InputCategory.SOCIAL_NETWORK,
    ],
    human_oversight_required=False,
    evidence_gate_enforced=False,
    used_in_criminal_justice=True,
)

result = checker.check(prohibited_system)
assert result.result == ComplianceResult.PROHIBITED
# result.prohibited_features → ['Individual risk score from demographic', ...]

# OUTSIDE SCOPE: Geographic hotspot analysis only
hotspot_system = RiskAssessmentSystem(
    name="CrimeHotspotMapper",
    generates_individual_risk_score=False,
    output_is_geographic_only=True,
    input_categories=[InputCategory.PRIOR_CONTACT],  # historical incidents by location
    human_oversight_required=True,
    evidence_gate_enforced=True,
    used_in_criminal_justice=True,
)

result = checker.check(hotspot_system)
assert result.result == ComplianceResult.OUTSIDE_SCOPE

# ANNEX III: Evidence-combined investigative tool with human oversight
annex_iii_system = RiskAssessmentSystem(
    name="ForensicInvestigativeAI",
    generates_individual_risk_score=True,
    output_is_geographic_only=False,
    input_categories=[
        InputCategory.PHYSICAL_EVIDENCE,
        InputCategory.SPECIFIC_INTELLIGENCE,
        InputCategory.BEHAVIORAL_PATTERN,  # supplementary context
    ],
    human_oversight_required=True,
    evidence_gate_enforced=True,
    used_in_criminal_justice=True,
)

result = checker.check(annex_iii_system)
assert result.result == ComplianceResult.HIGH_RISK_ANNEX_III
```
20-Item Art.5(1)(k) Compliance Checklist
System Classification (Items 1–6)
- Does the AI system generate individual-level criminal risk scores, flags, or probability assessments for natural persons?
- Are those individual-level outputs based solely on demographic, behavioral, network, or personality profile data — without corroborating physical evidence or specific intelligence?
- Does the system have a technical evidence-gate that requires physical evidence or specific intelligence before profile analysis is incorporated into individual risk outputs?
- Is the system's primary output geographic (area-level) or individual-level? Geographic-only systems are outside Art.5(1)(k) scope.
- Is the system used in criminal justice, law enforcement, probation, sentencing, or pre-trial decision contexts?
- Does the system assess or score personality traits, character, or "criminogenic" psychological attributes as inputs to criminal risk prediction?
Evidence Gate Architecture (Items 7–11)
- Are mandatory physical evidence or specific intelligence inputs technically enforced before any individual risk score is generated?
- Is there a code-level gate that rejects individual risk score requests that lack evidence inputs?
- Is the evidence gate auditable — do logs capture what evidence inputs were present at the time of any individual risk output?
- Is profiling data labelled as supplementary context in system outputs, not as the primary risk basis?
- Are outputs explicitly framed as "investigative context for human analyst review" rather than "risk determination"?
Annex III High-Risk Requirements (Items 12–15)
- If the system combines evidence with profiling for individual outputs (Art.5(1)(k) boundary maintained), is a conformity assessment underway under Annex VI?
- Is technical documentation prepared per Art.11 and Annex IV covering input data, model architecture, and validation methodology?
- Is a risk management system (Art.9) in place covering foreseeable risks including discriminatory output patterns?
- Is the system registered in the EU AI database (Art.49) if it is a high-risk system deployed by public authorities?
Fundamental Rights and Governance (Items 16–18)
- Has a Fundamental Rights Impact Assessment (Art.27) been conducted for this system, specifically addressing Art.6 (liberty), Art.21 (non-discrimination), and Art.48 (presumption of innocence)?
- Are training data sources audited for historical policing bias that could encode discriminatory profiling patterns into the model?
- Is there a documented human oversight protocol that ensures no law enforcement decision based on system output is taken without human review and approval?
Legal and ALD Readiness (Items 19–20)
- Has external legal counsel reviewed the system against Art.5(1)(k) and assessed whether the "solely on profiling" test is satisfied or avoided?
- Is ALD Art.3 disclosure readiness in place — can the organisation disclose individual risk score generation logic, profiling inputs, and model outputs to courts or civil claimants within required timelines?
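Teams working through the checklist can track it as structured audit data. A minimal sketch (item texts abbreviated; the structure is ours, not mandated by the Act):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChecklistItem:
    number: int
    section: str
    question: str
    passed: Optional[bool] = None  # None = not yet assessed

items = [
    ChecklistItem(1, "Classification", "Generates individual-level criminal risk scores?"),
    ChecklistItem(7, "Evidence Gate", "Evidence inputs technically enforced before scoring?"),
    ChecklistItem(19, "Legal/ALD", "External counsel reviewed the 'solely on profiling' test?"),
]

def open_items(checklist: List[ChecklistItem]) -> List[int]:
    """Item numbers still unassessed or failing — the remaining audit workload."""
    return [i.number for i in checklist if i.passed is not True]

assert open_items(items) == [1, 7, 19]  # nothing assessed yet
```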
The December 2027 Deadline Is a Design Deadline
Law enforcement AI vendors face a structural reality: Art.5(1)(k) compliance cannot be achieved by toggling a configuration setting or adding a consent screen. The prohibition targets the architecture of how individual criminal risk scores are generated. Systems whose core function is to predict individual criminal risk from demographic, behavioral, or personality profiles must either be redesigned — with evidence gates added and outputs reframed as investigative context — or withdrawn from the individual-level criminal risk scoring market entirely before December 2027.
The Annex III high-risk pathway remains open for vendors who design evidence-combined tools. These tools face conformity assessment, documentation, logging, and human oversight requirements — but they are not prohibited. The compliance investment is substantial, but the market for compliant law enforcement AI in the EU is significant. The market for prohibited profiling-based predictive policing tools in the EU ends in December 2027.
This analysis covers Art.5(1)(k) as introduced in the EU AI Act Digital Omnibus proposal. The regulatory text is subject to change during the legislative process. Enforcement deadline: December 2027. Penalty: Art.99(1) — €35,000,000 or 7% of total worldwide annual turnover. For infrastructure that keeps your compliance documentation in EU jurisdiction — outside Cloud Act reach — sota.io provides EU-native deployment with no US parent company data access.