If you build or deploy AI systems used by law enforcement to assess individual crime risk, predict recidivism, detect emotional states during questioning, evaluate the reliability of criminal evidence, profile suspects, or perform biometric identification in public spaces under the Art.5(1)(d) exceptions, EU AI Act Annex III Point 6 classifies your system as high-risk and triggers the full set of high-risk obligations under Chapter III before the August 2026 general application deadline. Law enforcement AI is the most constitutionally sensitive category in Annex III: it operates at the intersection of fundamental rights under the EU Charter (Art.6 right to liberty, Art.7 privacy, Art.21 non-discrimination, Art.48 presumption of innocence), the Law Enforcement Directive 2016/680 (LED) as a parallel legal regime, and the EU AI Act's own absolute prohibitions in Art.5. The most acute compliance problem is the sovereignty gap: two of the largest AI vendors operating across EU law enforcement agencies, Palantir (Gotham) and Axon (body-worn camera AI), are US entities with full CLOUD Act exposure on EU criminal intelligence data, creating a direct conflict with the LED's third-country transfer restrictions and the EU AI Act's Art.9 risk management obligations that no EU law enforcement procurement framework has fully addressed.
What Annex III Point 6 Actually Covers
Annex III Point 6 applies to AI systems intended to be used by or on behalf of competent authorities of Member States, or Union institutions, bodies, offices, or agencies, for law enforcement purposes — subject to applicable Union and national law. Six categories are covered:
(a) Individual risk assessment for victimisation: AI systems that assess the risk that a natural person will become the victim of criminal offences. This covers victim vulnerability scoring tools used by police for domestic violence risk assessment (DASH/SARA tools), stalking risk assessment instruments, and any AI that scores a specific individual's likelihood of experiencing crime.
(b) Polygraphs and emotional state detection: AI systems intended to be used as polygraphs and similar tools or to detect the emotional state of a natural person. This covers traditional computerised polygraph systems, voice stress analysis tools, thermal imaging systems used to infer deceptiveness, and AI-based lie detection tools — all classified as high-risk regardless of their scientific validity. The EU AI Act does not prohibit these tools in law enforcement contexts (unlike some EU Member State laws), but requires full Annex III conformity assessment.
(c) Evidence reliability evaluation: AI systems intended to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences. This is the category with the most surprising scope: AI tools used by prosecutors or investigators to assess whether video footage is genuine, whether audio recordings have been manipulated, whether documents are authentic, or whether digital evidence chains are intact — all qualify as high-risk law enforcement AI. Deepfake detection tools used in criminal proceedings are explicitly within scope.
(d) Crime profiling and criminal investigation support: AI systems intended to assist competent authorities in the profiling of natural persons in the course of detection, investigation, or prosecution of criminal offences. This covers suspect profiling tools, link analysis systems (identifying connections between individuals across criminal databases), social network analysis for criminal investigations, and AI-assisted investigation platforms that generate profiles of natural persons from combined data sources.
(e) Individual recidivism and offence prediction: AI systems intended to be used for the assessment of a risk of a natural person offending or reoffending, or for assessing personality traits and characteristics or past criminal behaviour of natural persons or groups. This is the predictive policing category in its individual form — any system that scores a specific person's likelihood of committing or repeating a crime is high-risk. This covers parole risk assessment tools, pre-trial detention scoring systems (analogous to COMPAS in the US context), and individual-level crime prediction.
(f) Remote biometric identification in law enforcement: AI systems intended for real-time and post-remote biometric identification of natural persons used for the purposes of law enforcement. This is governed by the interaction with Art.5 prohibitions and their narrow exceptions — systems that fall within the Art.5(1)(d) permitted use cases are classified as high-risk under Annex III Point 6, while systems outside the exceptions are prohibited outright under Art.5.
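The six sub-points above can be captured in a simple triage lookup. This is a sketch for internal scoping work, and the short labels paraphrase the categories; they are not official Act text:

```python
# Annex III Point 6 sub-points mapped to short triage labels.
# The labels paraphrase the categories above; they are not official Act wording.
ANNEX_III_POINT_6 = {
    "6(a)": "individual victim-risk assessment",
    "6(b)": "polygraphs and emotional-state detection",
    "6(c)": "evidence reliability evaluation",
    "6(d)": "crime profiling of natural persons",
    "6(e)": "individual recidivism/offence prediction",
    "6(f)": "remote biometric identification (real-time or post)",
}

def point6_label(sub_point: str) -> str:
    """Return the triage label for a Point 6 sub-point key like '6(e)'."""
    return ANNEX_III_POINT_6.get(sub_point, "not an Annex III Point 6 category")

print(point6_label("6(e)"))  # individual recidivism/offence prediction
```

A lookup like this is only the first scoping step; the Art.5 prohibition analysis in the next section still has to run before any high-risk conclusion is final.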
The Art.5(1)(d) Exception Structure: Prohibition With a High-Risk Carve-Out
The EU AI Act prohibits real-time remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes under Art.5(1)(d), subject to three narrow exceptions. Understanding where the prohibition ends and high-risk classification begins is the most critical distinction for law enforcement technology vendors.
Prohibited (Art.5(1)(d) default): Any real-time RBI system deployed in public spaces for law enforcement without meeting one of the three exceptions. This prohibition applies regardless of accuracy or proportionality claims. Clearview AI-style mass facial recognition without specific targeting = prohibited under Art.5(1)(d).
Permitted (and HIGH-RISK under Annex III Point 6): Real-time RBI in public spaces by law enforcement is permitted only for:
- Targeted search for specific victims of trafficking, sexual exploitation, or missing children
- Prevention of a specific, substantial, and imminent threat to the life or safety of natural persons, or response to a present and foreseeable terrorist threat
- Detection, localisation, identification, or prosecution of a perpetrator or suspect of a serious criminal offence: one of the 32 offence categories listed in Article 2(2) of Framework Decision 2002/584/JHA (the European Arrest Warrant Framework Decision), punishable by a custodial sentence of at least three years
All three exception categories require prior authorisation by a judicial authority or an independent administrative authority, except in cases of duly justified urgency. The authorisation must be geographically and temporally limited. Post-deployment review is mandatory.
Post-remote biometric identification (non-real-time): Always HIGH-RISK under Annex III Point 6(f). No prohibition, but full conformity assessment required. Analysis of stored footage, retrospective face matching against criminal databases, and AI-assisted photo identification from CCTV archives all fall here.
| RBI System Type | Art.5 Status | Annex III Classification |
|---|---|---|
| Real-time public-space RBI without exception | PROHIBITED | N/A (prohibited) |
| Real-time RBI for victim search (trafficking/missing) | Permitted with authorisation | HIGH-RISK Point 6(f) |
| Real-time RBI for imminent terror threat | Permitted with authorisation | HIGH-RISK Point 6(f) |
| Real-time RBI for serious crime perpetrator | Permitted with authorisation | HIGH-RISK Point 6(f) |
| Post-remote biometric ID (CCTV retrospective) | Not prohibited | HIGH-RISK Point 6(f) |
| Clearview AI-style untargeted scraping for FR database | PROHIBITED Art.5(1)(e) | N/A (prohibited) |
| Body camera FR analysis (real-time) | PROHIBITED unless exception | HIGH-RISK if exception applies |
| Body camera FR analysis (post-incident review) | Not prohibited | HIGH-RISK Point 6(f) |
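The decision logic in the table above reduces to a three-step test. The following is a simplified sketch (the enum names and function signature are ours, not from the Act), collapsing "real-time in a publicly accessible space" into a single flag:

```python
from enum import Enum

class RBIStatus(Enum):
    PROHIBITED = "prohibited_art5_1_d"
    HIGH_RISK_REALTIME = "high_risk_point_6f_realtime"
    HIGH_RISK_POST = "high_risk_point_6f_post"

def rbi_status(realtime_in_public_space: bool,
               within_exception: bool,
               prior_authorisation: bool) -> RBIStatus:
    """Three-step test from the table above (simplified sketch).

    Post-remote (non-real-time) identification is never prohibited but is
    always high-risk under Point 6(f); real-time use in publicly accessible
    spaces needs both an exception ground and prior authorisation to
    escape the Art.5(1)(d) prohibition.
    """
    if not realtime_in_public_space:
        return RBIStatus.HIGH_RISK_POST       # Point 6(f), conformity assessment
    if within_exception and prior_authorisation:
        return RBIStatus.HIGH_RISK_REALTIME   # Point 6(f) plus authorisation regime
    return RBIStatus.PROHIBITED               # Art.5(1)(d) default

# Untargeted real-time street facial recognition: prohibited.
print(rbi_status(True, False, False).value)   # prohibited_art5_1_d
# Retrospective CCTV matching: not prohibited, but high-risk.
print(rbi_status(False, False, False).value)  # high_risk_point_6f_post
```

Note that an exception ground without the authorisation in place still lands in the prohibited branch, which mirrors the table's "PROHIBITED unless exception" rows.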
Clearview AI: The Prohibition/High-Risk Boundary Case Study
Clearview AI is the most instructive case study for understanding where EU AI Act Art.5 prohibition ends and Annex III Point 6 high-risk classification begins.
What happened: Clearview AI scraped billions of facial images from the public internet without consent, built a facial recognition database, and sold search access to law enforcement. Between 2021 and 2023, EU data protection authorities responded with enforcement actions: the Italian Garante (€20M fine, 2022), the French CNIL (€20M fine, 2022), the Greek DPA (€20M fine, 2022), the Swedish IMY (SEK 2.5M fine imposed on the Swedish Police Authority for its use of Clearview, 2021), and the UK ICO (£7.5M fine, 2022, later overturned on jurisdictional grounds by the First-tier Tribunal in 2023).
EU AI Act position: Clearview AI's core product — untargeted scraping of internet images to build a facial recognition database — violates Art.5(1)(e) of the EU AI Act, which prohibits "the compilation of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage." This is a categorical prohibition, not a high-risk classification.
However: A targeted law enforcement facial recognition tool — operated by or for competent authorities, searching against a specific individual's biometrics, within the Art.5(1)(d) exception structure, without mass untargeted scraping — is NOT prohibited. It is HIGH-RISK under Annex III Point 6(f). The key distinctions:
| Feature | Clearview AI (as operated) | Compliant Law Enforcement FR |
|---|---|---|
| Database construction | Untargeted scraping — PROHIBITED Art.5(1)(e) | Lawfully obtained biometrics from existing criminal databases |
| Real-time use | In public spaces without exception authorisation — PROHIBITED Art.5(1)(d) | Within exception + prior judicial authorisation |
| Post-remote use | Not addressed (mass use problem is database construction) | Permitted — HIGH-RISK Point 6(f) |
| Annex III classification | Prohibited — no high-risk path | HIGH-RISK requiring full conformity assessment |
Several EU law enforcement agencies used Clearview AI before the enforcement actions, creating retroactive compliance exposure. LED violation + EU AI Act Art.5 violation (once the Act applies) = compounded liability for both the procuring law enforcement authority and any residual Clearview AI operations.
Predictive Policing: Geographic Hotspot vs Individual Risk Assessment
The most important technical distinction for predictive policing compliance is between geographic crime prediction (place-based) and individual risk scoring (person-based). Only the latter triggers Annex III Point 6.
Geographic hotspot prediction (NOT necessarily high-risk): AI systems that predict which geographic areas are likely to experience increased criminal activity based on historical crime data, demographic patterns, weather, events, and similar variables — without generating scores for specific named individuals — may fall outside Annex III Point 6. PredPol (now Geolitica) and similar "place-based" predictive policing tools operate primarily at the location level, not the individual level. Place-based systems may still trigger GDPR (aggregated data can be personal in context) and ethical oversight frameworks, but do not automatically qualify as Annex III high-risk AI systems.
Individual crime risk assessment (HIGH-RISK): Any AI system that generates a risk score for a specific individual — their likelihood of committing an offence, their recidivism probability, their threat level — triggers Annex III Point 6(e). This covers COMPAS-analogues in EU courts (pre-trial detention, parole decisions), individual threat assessment tools used by police intelligence units, and any system that generates a person-level crime risk output.
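The place-based vs person-based distinction turns on a single question: what is the unit of the system's output? A minimal sketch of that test (the function name and return strings are ours):

```python
def point_6e_triggered(scores_named_individuals: bool) -> str:
    """Place-based vs person-based test for predictive policing (sketch).

    The decisive question is the unit of output: a risk score attached to
    a specific natural person triggers Annex III Point 6(e); a score
    attached to a map grid cell or a statistical population does not, by
    itself, although GDPR/LED may still apply.
    """
    if scores_named_individuals:
        return "HIGH-RISK Annex III Point 6(e)"
    return "Not Point 6(e), but check GDPR/LED and other Annex III categories"

print(point_6e_triggered(False))  # PredPol-style hotspot map
print(point_6e_triggered(True))   # COMPAS-analogue parole scorer
```

In practice the hard cases are hybrids: a "place-based" tool that ingests individual offender records and surfaces named persons near a hotspot has crossed into person-based output.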
The EU context for US predictive policing tools:
| Tool | Origin | EU Deployment | Annex III Trigger |
|---|---|---|---|
| PredPol/Geolitica | US (Santa Cruz, CA) | Pilot use in the Netherlands and in Switzerland (not an EU member) | Geographic only — Point 6(e) probably not triggered |
| ShotSpotter (SoundThinking) | US (Newark, CA) | No confirmed EU law enforcement use | Gunshot detection + geographic — probably not Point 6 |
| Palantir Gotham | US (Denver, CO) | BKA Germany (pilot), Bavaria LKA, Netherlands KLPD, Netherlands AIVD | Individual profiling + link analysis — HIGH-RISK Point 6(d) |
| IBM i2 Analyst's Notebook | US (Armonk, NY) | Widespread EU use for link analysis | Link analysis without individual scoring — borderline |
| HunchLab (acquired by ShotSpotter, 2018) | US (Philadelphia, PA) | Limited EU deployment | Place-based primarily — probably not Point 6(e) |
Palantir Gotham in EU Law Enforcement: CLOUD Act + EU AI Act Double Exposure
Palantir Technologies (NYSE: PLTR, headquartered Denver, Colorado) operates its Gotham platform for law enforcement intelligence in multiple EU Member States. Palantir is a US entity subject to the CLOUD Act (Clarifying Lawful Overseas Use of Data Act, 2018), meaning US authorities can compel Palantir to disclose data held on its platform to US law enforcement, potentially including EU criminal intelligence, suspect profiles, and investigation records.
German BKA + Palantir History:
- Bundeskriminalamt (BKA) evaluated Palantir Gotham in a pilot programme
- Hessen and Bayern LKA operated "HessenData" and "VeRA" — Palantir-based analysis systems
- Bundesverfassungsgericht (BVerfG) reviewed aspects of police data analysis laws
- Hessen Datenschutzbeauftragte raised GDPR LED compatibility questions
- Under EU AI Act: Palantir Gotham as deployed for criminal profiling = Annex III Point 6(d) HIGH-RISK system
CLOUD Act + EU AI Act compliance matrix:
| Obligation | CLOUD Act (US side) | EU AI Act + GDPR LED (EU side) | Conflict |
|---|---|---|---|
| US law enforcement access request | Palantir must comply | LED Chapter V (Art.35 et seq.) restricts transfers to third countries | YES — direct conflict |
| Data sovereignty | Data accessible to US DOJ | Competent authority data stays within EU LEA jurisdiction | STRUCTURAL |
| EU AI Act Art.9 risk management | N/A | Must document CLOUD Act exposure as systemic risk | Point 6 conformity gap |
| EU AI Act Art.11 technical docs | N/A | Must include data governance for criminal intelligence | No public documentation |
No EU law enforcement agency that uses Palantir Gotham has published a conformity assessment under EU AI Act Annex III Point 6 that addresses the CLOUD Act risk. This is a structural compliance gap affecting multiple EU Member State intelligence systems.
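One way a deployer could record CLOUD Act exposure inside its Art.9 risk management documentation is as a structured risk-register entry. The following is a sketch; the field names and values are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Art9RiskEntry:
    """Illustrative Art.9 risk-register entry; field names are ours."""
    risk_id: str
    description: str
    affected_rights: list[str]
    likelihood: str                      # e.g. "low" / "medium" / "high"
    severity: str
    mitigations: list[str] = field(default_factory=list)
    residual_risk_accepted: bool = False

# Hypothetical entry documenting the structural conflict described above.
cloud_act_risk = Art9RiskEntry(
    risk_id="RISK-CLOUDACT-001",
    description=("US vendor may be compelled under the CLOUD Act to disclose "
                 "EU criminal intelligence data to US authorities"),
    affected_rights=["Charter Art.7 privacy", "Charter Art.8 data protection"],
    likelihood="medium",
    severity="high",
    mitigations=["EU-resident deployment with customer-held encryption keys",
                 "contractual disclosure-notification clauses"],
)
print(asdict(cloud_act_risk)["risk_id"])  # RISK-CLOUDACT-001
```

Recording the risk does not resolve the conflict, but an Art.9 file that omits it entirely is the conformity gap the paragraph above describes.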
German Law Enforcement AI: §81b StPO, BKA-Gesetz, and EU AI Act Obligations
Germany provides the most developed European legal framework for law enforcement AI, creating a three-layer compliance challenge: §81b StPO (Code of Criminal Procedure), BKA-Gesetz (Federal Criminal Police Office Act), and EU AI Act.
§81b StPO — Biometric Measures in Criminal Proceedings: Section 81b of the German Code of Criminal Procedure authorises biometric data collection (fingerprints, photographs, body measurements) from accused persons for criminal proceedings and identification purposes. Systems that process this data using AI — matching fingerprints against criminal databases (AFIS), identifying suspects from photographs — are in scope for Annex III Point 6(d/f).
BKA-Gesetz — Preventive Police Powers: The Federal Criminal Police Office Act authorises the BKA to use automated data analysis for counter-terrorism and serious crime prevention. Sections 24-34 BKA-Gesetz govern automated data analysis and profiling. Systems deployed under BKA-Gesetz powers = Annex III Point 6(d/e) high-risk. The BKA has not published EU AI Act conformity assessments for its automated analysis systems.
Verfassungsschutz boundary: Domestic intelligence services (Verfassungsschutz) are not "law enforcement" in the EU AI Act sense — they exercise administrative surveillance powers, not law enforcement powers. AI systems used exclusively by Verfassungsschutz for surveillance (without criminal prosecution function) may fall outside Annex III Point 6 but within the broader EU AI Act framework for general AI systems or other high-risk categories.
State (Länder) level exposure: Each German federal state (Land) has its own Landeskriminalamt (LKA) with independent AI procurement. Bayern LKA (VeRA/Palantir), Hessen LKA (HessenData), and NRW LKA have used AI analysis platforms. Under EU AI Act, each LKA deployment is a separate deployer obligation — there is no federal EU AI Act compliance that covers LKA use.
Europol AI: Dual Compliance Under Europol Regulation and EU AI Act
Europol is an EU institution and therefore subject to EU AI Act obligations as both a deployer (using external AI tools) and potentially as a provider (if it develops AI tools for distribution to Member State law enforcement). The Europol Regulation 2016/794 (as amended by 2022/991) governs Europol's data processing powers.
Key Europol AI uses:
- SIENA: Europol's secure information exchange network with AI-assisted link analysis
- QUEST (Query Europol Systems): AI-assisted search across criminal databases
- Image analysis: facial recognition and object recognition in investigation support
- Dark web monitoring: AI-assisted OSINT and dark web crawling for criminal intelligence
Europol Regulation Art.26(10): Europol's Executive Director must maintain a register of all processing activities of personal data, including automated processing. EU AI Act adds conformity assessment obligations on top.
Europol as provider vs deployer: If Europol develops AI tools that are then made available to Member State law enforcement (as analysis support services), Europol may qualify as an AI provider under EU AI Act Art.3(3), with provider obligations including technical documentation (Art.11), conformity assessment (Art.43), and EU AI Act database registration (Art.71). No public Europol EU AI Act compliance statement has been issued.
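The provider/deployer question is not either/or. A simplified sketch of the role test (our function, loosely following the Art.3(3) provider and deployer definitions; not a legal determination):

```python
def ai_act_roles(develops_the_system: bool,
                 makes_it_available_to_others: bool,
                 uses_it_in_own_operations: bool) -> set[str]:
    """Simplified provider/deployer role test (sketch, not legal advice).

    The two roles are not exclusive: an institution that develops a tool,
    makes it available to Member State authorities, and also runs it in
    its own investigations holds both sets of obligations simultaneously.
    """
    roles = set()
    if develops_the_system and makes_it_available_to_others:
        roles.add("provider")   # Art.11 docs, Art.43 conformity, Art.71 registration
    if uses_it_in_own_operations:
        roles.add("deployer")   # deployer obligations, incl. human oversight in use
    return roles

# A Europol-style dual position:
print(sorted(ai_act_roles(True, True, True)))  # ['deployer', 'provider']
```

The practical consequence for a Europol-style institution is that conformity assessment and EU database registration cannot be delegated to the Member State agencies receiving the tool.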
Body Camera AI: The Real-Time vs Post-Incident Distinction
Body-worn camera (BWC) AI is one of the most practically significant law enforcement AI categories for the EU because of the rapid deployment of BWC programmes across EU Member States (France: mandatory for CRS since 2020, Germany: multiple Länder pilots, Netherlands: national programme).
The compliance distinction:
Real-time AI analysis of body camera footage (during deployment): If a body camera feeds live video to an AI system that performs real-time facial recognition to identify individuals in the environment — this is real-time remote biometric identification in publicly accessible spaces. Art.5(1)(d) applies. Without meeting one of the three exceptions with prior judicial authorisation, this is PROHIBITED. Body camera facial recognition overlays (showing names/records in the officer's field of view) = Art.5(1)(d) prohibition unless exception authorisation exists.
Post-incident AI analysis of body camera footage: If BWC footage is stored and later analysed by AI to identify persons in the recording — this is post-remote biometric identification. NOT prohibited under Art.5. HIGH-RISK under Annex III Point 6(f). Police departments that use AI to retrospectively search BWC archives for a specific suspect = deployers of a high-risk Annex III Point 6(f) system.
Object recognition and behaviour analysis: AI on BWC that detects weapons, suspicious behaviour patterns, or crowd dynamics — without biometric identification — may fall outside Annex III Point 6(f) but could still trigger Point 6(d) (crime profiling) depending on whether individual risk scores are generated.
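The three BWC modes above map to three different legal outcomes. A minimal sketch (the mode names and return strings are ours):

```python
def bwc_ai_status(mode: str, exception_authorised: bool = False) -> str:
    """Sketch of the BWC distinctions above; the mode names are ours.

    'realtime_fr'  live facial recognition during deployment
    'post_fr'      retrospective FR on stored footage
    'object_only'  weapon/behaviour detection without biometric identification
    """
    if mode == "realtime_fr":
        if exception_authorised:
            return "HIGH-RISK Point 6(f) real-time; authorisation regime applies"
        return "PROHIBITED Art.5(1)(d)"
    if mode == "post_fr":
        return "HIGH-RISK Point 6(f) post-remote"
    if mode == "object_only":
        return "Outside Point 6(f); check Point 6(d) if individual scoring emerges"
    raise ValueError(f"unknown BWC mode: {mode}")

print(bwc_ai_status("realtime_fr"))  # PROHIBITED Art.5(1)(d)
print(bwc_ai_status("post_fr"))      # HIGH-RISK Point 6(f) post-remote
```

The same physical camera can therefore sit on either side of the prohibition line depending purely on when and how its footage is analysed.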
Key vendors with EU BWC AI deployments: Axon (formerly TASER, Scottsdale, Arizona) is the dominant EU BWC vendor through Axon Body series. Axon's AI transcription and evidence management (Axon Evidence, Draft One) operate in multiple EU jurisdictions. Axon is a US entity with CLOUD Act exposure on EU police BWC footage — including footage of individuals who have not been charged with any offence.
The Law Enforcement Directive (LED) 2016/680 Interaction
The EU AI Act explicitly does not replace or modify the Law Enforcement Directive 2016/680 (LED), which governs the processing of personal data by competent authorities for law enforcement purposes. All Annex III Point 6 systems that process personal data are subject to BOTH LED and EU AI Act obligations simultaneously.
Critical LED obligations that interact with EU AI Act:
| LED Article | Obligation | EU AI Act Interaction |
|---|---|---|
| Art.11 | Automated individual decision-making — no solely automated decisions with significant effects without suitable safeguards | EU AI Act Art.14 human oversight reinforces Art.11 LED — both require human review |
| Art.4(1)(a) | Lawfulness of processing — criminal data must have legal basis | EU AI Act Art.9 risk management must document legal basis as part of risk assessment |
| Art.4(1)(e) | Storage limitation | EU AI Act Art.12 logging obligations must comply with LED storage limitation |
| Art.10 | Special-category data (racial/ethnic origin, biometrics, health) | EU AI Act Art.10(5) special-category training data requirements parallel LED Art.10 |
| Art.14 | Right of access | EU AI Act Art.13 transparency obligations must be structured to enable LED Art.14 access |
| Art.16 | Right to rectification/erasure | EU AI Act Art.14 human oversight mechanism must enable LED Art.16 exercise |
The LED applies to "competent authorities" — not private entities. When a private AI vendor provides a tool to a law enforcement agency, the law enforcement agency is the LED controller, but the AI vendor (as EU AI Act provider) must ensure the system is technically capable of enabling the law enforcement agency to meet its LED obligations.
Worked Example: A Python Annex III Point 6 Classifier
from dataclasses import dataclass
from enum import Enum
class LawEnforcementAIPoint6Category(Enum):
POINT_6A = "point_6a_victim_risk_assessment"
POINT_6B = "point_6b_polygraph_emotional_detection"
POINT_6C = "point_6c_evidence_reliability"
POINT_6D = "point_6d_crime_profiling"
POINT_6E = "point_6e_recidivism_offence_prediction"
POINT_6F_REALTIME = "point_6f_realtime_rbi"
POINT_6F_POST = "point_6f_post_remote_rbi"
PROHIBITED_ART5 = "prohibited_art5"
NOT_IN_SCOPE = "not_in_scope"
@dataclass
class LawEnforcementAIAssessment:
system_name: str
category: LawEnforcementAIPoint6Category
is_high_risk: bool
is_prohibited: bool
requires_prior_authorisation: bool
cloud_act_exposure: bool
led_applies: bool
compliance_notes: list[str]
def classify_law_enforcement_ai(
system_name: str,
operator_is_competent_authority: bool,
function_type: str,
is_realtime_biometric_in_public_space: bool,
has_art5_exception_authorisation: bool,
involves_individual_scoring: bool,
vendor_is_us_entity: bool,
processes_personal_data: bool
) -> LawEnforcementAIAssessment:
"""
Classify a law enforcement AI system under EU AI Act Annex III Point 6.
function_type options:
- "victim_risk": Assesses risk of person becoming victim of crime
- "polygraph": Polygraph, lie detection, emotional state detection
- "evidence_eval": Evaluates reliability of evidence in criminal proceedings
- "crime_profiling": Profiles individuals in criminal investigation/prosecution
- "recidivism": Predicts individual likelihood of offending/reoffending
- "biometric_id": Remote biometric identification
- "geographic_hotspot": Geographic crime prediction without individual scoring
- "gunshot_detection": Audio gunshot detection (no individual identification)
- "object_recognition": Object detection without individual identification
"""
if not operator_is_competent_authority:
# Private sector use — different Annex III categories may apply
return LawEnforcementAIAssessment(
system_name=system_name,
category=LawEnforcementAIPoint6Category.NOT_IN_SCOPE,
is_high_risk=False,
is_prohibited=False,
requires_prior_authorisation=False,
cloud_act_exposure=vendor_is_us_entity,
led_applies=False,
compliance_notes=["Not operated by competent authority — check other Annex III categories"]
)
notes = []
cloud_act_note = "CLOUD Act exposure: US vendor may be compelled to disclose EU criminal data to US DOJ" if vendor_is_us_entity else ""
if cloud_act_note:
notes.append(cloud_act_note)
if processes_personal_data:
notes.append("LED 2016/680 applies — dual compliance with EU AI Act required")
if function_type == "biometric_id":
if is_realtime_biometric_in_public_space and not has_art5_exception_authorisation:
return LawEnforcementAIAssessment(
system_name=system_name,
category=LawEnforcementAIPoint6Category.PROHIBITED_ART5,
is_high_risk=False,
is_prohibited=True,
requires_prior_authorisation=False,
cloud_act_exposure=vendor_is_us_entity,
led_applies=processes_personal_data,
compliance_notes=notes + ["PROHIBITED under Art.5(1)(d) — real-time RBI in public space without exception authorisation"]
)
elif is_realtime_biometric_in_public_space and has_art5_exception_authorisation:
return LawEnforcementAIAssessment(
system_name=system_name,
category=LawEnforcementAIPoint6Category.POINT_6F_REALTIME,
is_high_risk=True,
is_prohibited=False,
requires_prior_authorisation=True,
cloud_act_exposure=vendor_is_us_entity,
led_applies=processes_personal_data,
compliance_notes=notes + ["HIGH-RISK Point 6(f) — real-time RBI within Art.5(1)(d) exception", "Prior judicial/independent administrative authorisation mandatory", "Full Annex III conformity assessment required before deployment"]
)
else:
return LawEnforcementAIAssessment(
system_name=system_name,
category=LawEnforcementAIPoint6Category.POINT_6F_POST,
is_high_risk=True,
is_prohibited=False,
requires_prior_authorisation=False,
cloud_act_exposure=vendor_is_us_entity,
led_applies=processes_personal_data,
compliance_notes=notes + ["HIGH-RISK Point 6(f) — post-remote biometric identification", "Full Annex III conformity assessment required"]
)
category_map = {
"victim_risk": (LawEnforcementAIPoint6Category.POINT_6A, "HIGH-RISK Point 6(a) — victim risk assessment"),
"polygraph": (LawEnforcementAIPoint6Category.POINT_6B, "HIGH-RISK Point 6(b) — polygraph/emotional state detection"),
"evidence_eval": (LawEnforcementAIPoint6Category.POINT_6C, "HIGH-RISK Point 6(c) — evidence reliability evaluation"),
"crime_profiling": (LawEnforcementAIPoint6Category.POINT_6D, "HIGH-RISK Point 6(d) — crime profiling of natural persons"),
"recidivism": (LawEnforcementAIPoint6Category.POINT_6E, "HIGH-RISK Point 6(e) — recidivism/offence prediction"),
}
if function_type in category_map:
cat, note = category_map[function_type]
return LawEnforcementAIAssessment(
system_name=system_name,
category=cat,
is_high_risk=True,
is_prohibited=False,
requires_prior_authorisation=False,
cloud_act_exposure=vendor_is_us_entity,
led_applies=processes_personal_data,
compliance_notes=notes + [note, "Full Annex III conformity assessment required before deployment"]
)
# Geographic hotspot or non-identifying detection — not high-risk under Point 6
if function_type in ("geographic_hotspot", "gunshot_detection", "object_recognition") and not involves_individual_scoring:
return LawEnforcementAIAssessment(
system_name=system_name,
category=LawEnforcementAIPoint6Category.NOT_IN_SCOPE,
is_high_risk=False,
is_prohibited=False,
requires_prior_authorisation=False,
cloud_act_exposure=vendor_is_us_entity,
led_applies=processes_personal_data,
compliance_notes=notes + ["Not high-risk under Annex III Point 6 — no individual scoring", "Assess general-purpose AI Act obligations and LED data minimisation"]
)
return LawEnforcementAIAssessment(
system_name=system_name,
category=LawEnforcementAIPoint6Category.NOT_IN_SCOPE,
is_high_risk=False,
is_prohibited=False,
requires_prior_authorisation=False,
cloud_act_exposure=vendor_is_us_entity,
led_applies=processes_personal_data,
compliance_notes=notes + ["Insufficient information to classify — review function type against Point 6(a)-(f)"]
)
# Example classifications
systems = [
classify_law_enforcement_ai("Palantir Gotham (BKA)", True, "crime_profiling", False, False, True, True, True),
classify_law_enforcement_ai("Clearview AI (real-time public)", True, "biometric_id", True, False, False, True, True),
classify_law_enforcement_ai("Post-CCTV FR (retrospective)", True, "biometric_id", False, False, False, False, True),
classify_law_enforcement_ai("COMPAS-analogue (parole AI)", True, "recidivism", False, False, True, True, True),
classify_law_enforcement_ai("PredPol geographic", True, "geographic_hotspot", False, False, False, True, False),
classify_law_enforcement_ai("Deepfake detector (prosecution)", True, "evidence_eval", False, False, False, False, True),
classify_law_enforcement_ai("BWC real-time FR (authorised)", True, "biometric_id", True, True, False, True, True),
classify_law_enforcement_ai("Axon body cam post-analysis", True, "biometric_id", False, False, False, True, True),
classify_law_enforcement_ai("Lie detector (questioning)", True, "polygraph", False, False, True, False, True),
classify_law_enforcement_ai("ShotSpotter (no ID)", True, "gunshot_detection", False, False, False, True, False),
]
for s in systems:
status = "PROHIBITED" if s.is_prohibited else ("HIGH-RISK" if s.is_high_risk else "NOT HIGH-RISK")
print(f"{s.system_name}: {status} ({s.category.value})")
for note in s.compliance_notes:
print(f" → {note}")
Output:
Palantir Gotham (BKA): HIGH-RISK (point_6d_crime_profiling)
→ CLOUD Act exposure: US vendor may be compelled to disclose EU criminal data to US DOJ
→ LED 2016/680 applies — dual compliance with EU AI Act required
→ HIGH-RISK Point 6(d) — crime profiling of natural persons
→ Full Annex III conformity assessment required before deployment
Clearview AI (real-time public): PROHIBITED (prohibited_art5)
→ CLOUD Act exposure: US vendor may be compelled to disclose EU criminal data to US DOJ
→ LED 2016/680 applies — dual compliance with EU AI Act required
→ PROHIBITED under Art.5(1)(d) — real-time RBI in public space without exception authorisation
Post-CCTV FR (retrospective): HIGH-RISK (point_6f_post_remote_rbi)
→ LED 2016/680 applies — dual compliance with EU AI Act required
→ HIGH-RISK Point 6(f) — post-remote biometric identification
→ Full Annex III conformity assessment required
COMPAS-analogue (parole AI): HIGH-RISK (point_6e_recidivism_offence_prediction)
→ CLOUD Act exposure: US vendor may be compelled to disclose EU criminal data to US DOJ
→ LED 2016/680 applies — dual compliance with EU AI Act required
→ HIGH-RISK Point 6(e) — recidivism/offence prediction
→ Full Annex III conformity assessment required before deployment
PredPol geographic: NOT HIGH-RISK (not_in_scope)
→ CLOUD Act exposure: US vendor may be compelled to disclose EU criminal data to US DOJ
→ Not high-risk under Annex III Point 6 — no individual scoring
→ Assess general-purpose AI Act obligations and LED data minimisation
Deepfake detector (prosecution): HIGH-RISK (point_6c_evidence_reliability)
→ LED 2016/680 applies — dual compliance with EU AI Act required
→ HIGH-RISK Point 6(c) — evidence reliability evaluation
→ Full Annex III conformity assessment required before deployment
BWC real-time FR (authorised): HIGH-RISK (point_6f_realtime_rbi)
→ CLOUD Act exposure: US vendor may be compelled to disclose EU criminal data to US DOJ
→ LED 2016/680 applies — dual compliance with EU AI Act required
→ HIGH-RISK Point 6(f) — real-time RBI within Art.5(1)(d) exception
→ Prior judicial/independent administrative authorisation mandatory
→ Full Annex III conformity assessment required before deployment
Axon body cam post-analysis: HIGH-RISK (point_6f_post_remote_rbi)
→ CLOUD Act exposure: US vendor may be compelled to disclose EU criminal data to US DOJ
→ LED 2016/680 applies — dual compliance with EU AI Act required
→ HIGH-RISK Point 6(f) — post-remote biometric identification
→ Full Annex III conformity assessment required
Lie detector (questioning): HIGH-RISK (point_6b_polygraph_emotional_detection)
→ LED 2016/680 applies — dual compliance with EU AI Act required
→ HIGH-RISK Point 6(b) — polygraph/emotional state detection
→ Full Annex III conformity assessment required before deployment
ShotSpotter (no ID): NOT HIGH-RISK (not_in_scope)
→ CLOUD Act exposure: US vendor may be compelled to disclose EU criminal data to US DOJ
→ Not high-risk under Annex III Point 6 — no individual scoring
→ Assess general-purpose AI Act obligations and LED data minimisation
25-Item EU AI Act Annex III Point 6 Compliance Checklist
Scope Determination (apply before any other step):
- Confirm whether the operator is a "competent authority" under LED Art.3(7) — national police, prosecution, court, or other entity with law enforcement powers. Only competent authority use triggers Point 6.
- Apply the geographic-vs-individual test for crime prediction: does the system generate outputs for specific named individuals (HIGH-RISK 6(e)) or only geographic areas/statistical populations (not necessarily high-risk)?
- For biometric identification systems: apply the Art.5(1)(d) three-step test — (a) is it real-time in public space? (b) does it fall within one of the three exceptions? (c) is prior authorisation in place? Navigate to prohibited/high-risk/permitted accordingly.
- Assess whether the system processes personal data — if yes, LED 2016/680 applies as a parallel compliance regime throughout all subsequent steps.
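The Art.5(1)(d) three-step test in the second bullet reduces to a short decision function. A sketch under simplifying assumptions (the boolean inputs collapse what is in practice a detailed legal analysis of the exception categories and authorisation conditions; the enum labels are this post's shorthand):

```python
from enum import Enum

class RBIStatus(Enum):
    PROHIBITED = "prohibited under Art.5(1)(d)"
    HIGH_RISK = "high-risk under Annex III Point 6(f)"
    NOT_REALTIME_SCOPE = "not real-time public-space RBI; assess post-remote 6(f) instead"

def art_5_1_d_test(real_time_public_space: bool,
                   within_exception: bool,
                   prior_authorisation: bool) -> RBIStatus:
    """Simplified three-step test for real-time remote biometric
    identification. Each boolean stands in for a full legal assessment;
    this sketch orders the steps, it does not replace legal review."""
    if not real_time_public_space:          # step (a)
        return RBIStatus.NOT_REALTIME_SCOPE
    if within_exception and prior_authorisation:  # steps (b) and (c)
        return RBIStatus.HIGH_RISK
    return RBIStatus.PROHIBITED
```

Note the asymmetry: failing step (a) routes to a different Annex III analysis, while failing (b) or (c) lands in the prohibited category, not merely a compliance gap.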
Provider Obligations (AI vendors selling to law enforcement):
- Register the system in the EU AI Act database (Art.71) before first supply to a law enforcement deployer
- Prepare Art.11 technical documentation: model architecture, training data sources, accuracy metrics disaggregated by demographic group, known failure modes
- Conduct Art.10 training data bias audit: test for disparate impact across race, ethnicity, sex, religion, national origin — note that law enforcement training data systematically overrepresents certain demographics (historical arrest data bias)
- Implement Art.9 risk management system specific to law enforcement failure modes: false positives causing wrongful detention, systematic bias against protected characteristics, performance degradation on minority populations
- Design Art.14 human oversight mechanisms: ensure law enforcement deployers cannot override or bypass human review before adverse individual decisions
- Assess CLOUD Act exposure: if your company is a US entity, document the CLOUD Act risk in Art.11 technical documentation and notify EU law enforcement deployers of the structural GDPR LED Art.9 conflict
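The Art.10 bias audit above can start with a disaggregated false positive rate comparison. A minimal sketch, assuming per-group confusion counts are available; the 1.25x ratio cut-off is purely illustrative, since the AI Act itself sets no numeric disparate-impact threshold:

```python
def false_positive_rates(outcomes):
    """outcomes: {group: (false_positives, true_negatives)} -> {group: FPR}."""
    return {g: fp / (fp + tn) for g, (fp, tn) in outcomes.items()}

def disparate_impact_flags(fprs, threshold=1.25):
    """Flag groups whose false positive rate exceeds the best-performing
    group's by more than `threshold`x. The cut-off is illustrative; the
    AI Act sets no numeric threshold, so document whatever you choose."""
    baseline = min(fprs.values()) or 1e-12  # guard: a group with zero FPs
    return {g: rate / baseline
            for g, rate in fprs.items()
            if rate / baseline > threshold}
```

The same disaggregation feeds directly into the Art.72 post-market monitoring item further down: the audit baseline you record here is what drift is later measured against.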
Deployer Obligations (law enforcement agencies using AI):
- Before deployment: conduct the Art.26(10) Fundamental Rights Impact Assessment (FRIA) — this is a specific obligation for law enforcement AI deployers that does not apply to all high-risk AI categories
- Register your use of the system in the EU AI Act database under the deployer registration requirement (applicable to law enforcement deployers under Art.71(4))
- Implement Art.14 human oversight: no solely automated adverse decisions — criminal investigations must maintain meaningful human review at each AI-influenced decision point
- Train staff on AI system limitations per Art.26(4): officers using predictive policing, profiling, or biometric identification tools must be trained to understand confidence scores, false positive rates, and known bias patterns
- Establish LED Art.11 safeguards for automated decisions: even where EU AI Act Art.14 is technically met, LED Art.11 independently requires safeguards including individual notification rights
Biometric Identification Specific:
- For any real-time RBI deployment: obtain prior authorisation from judicial authority or independent administrative authority specifying geographic scope, temporal limits, and specific exception category
- Maintain Art.12 access logs: every search query, every match generated, every case linkage — mandatory for post-deployment audit
- Implement post-deployment review: real-time RBI deployments must document accuracy of each match and outcomes (arrest, no action) for mandatory post-hoc review
- Conduct GDPR LED Art.9 special-category assessment: biometric data is special-category under LED Art.9 — additional safeguards required beyond standard processing
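Art.12 mandates automatic logging but prescribes no schema, so the access-log item above leaves the record format to the deployer. One possible shape, with entirely illustrative field names:

```python
import datetime
import json

def rbi_log_entry(operator_id, query_image_hash, matches, case_ref=None):
    """One append-only log record per biometric search query.
    Field names are illustrative, not mandated by Art.12."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "operator_id": operator_id,
        "query_image_hash": query_image_hash,  # never log the raw biometric
        "matches": matches,      # e.g. [{"candidate_id": ..., "score": ...}]
        "case_ref": case_ref,    # case linkage for post-deployment audit
    }, sort_keys=True)
```

Logging a hash of the probe image rather than the image itself keeps the audit trail itself out of LED Art.9 special-category territory while still allowing query deduplication.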
US Vendor CLOUD Act:
- Map all law enforcement AI vendors to US-entity status — identify CLOUD Act exposure
- Document the GDPR LED Art.9 (prohibition on unlawful transfers) + CLOUD Act structural conflict in your risk register
- Assess sovereign EU alternatives: are there EU-incorporated vendors with no US parent offering equivalent functionality?
- Include CLOUD Act risk as explicit line item in procurement due diligence and contract terms
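The vendor-mapping item above is an ownership-chain walk. A conservative sketch, assuming vendors are modelled as nested dicts and treating any US entity anywhere in the chain as a proxy for CLOUD Act reach (the real analysis also covers "possession, custody, or control" by US entities without formal ownership):

```python
def cloud_act_exposed(vendor):
    """True if the vendor or any entity in its ownership chain is
    US-incorporated. `vendor` is a dict like
    {"name": ..., "jurisdiction": "US"|"EU"|..., "parent": <vendor>|None}.
    A conservative proxy only: CLOUD Act reach can extend further."""
    while vendor is not None:
        if vendor["jurisdiction"] == "US":
            return True
        vendor = vendor.get("parent")
    return False
```

An EU-incorporated subsidiary of a US parent still flags as exposed, which is exactly the pattern the sovereignty gap in the introduction describes.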
Ongoing Monitoring:
- Implement Art.72 post-market monitoring: track accuracy disaggregated by demographic group, false positive/false negative rates, and any evidence of systematic bias — re-test when population demographics or crime patterns change significantly
- Establish Art.73 serious incident reporting: where a law enforcement AI system generates a match leading to wrongful arrest or materially influences an adverse criminal justice outcome, report the incident to the national market surveillance authority
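The re-test trigger in the Art.72 monitoring item can be expressed as a per-group drift check against the validated baseline. A sketch, assuming disaggregated false positive rates are tracked per demographic group; the 25% relative tolerance is illustrative and should be set in your own monitoring plan:

```python
def drift_retest_needed(baseline_fpr, current_fpr, tolerance=0.25):
    """Flag any demographic group whose false positive rate has moved
    more than `tolerance` (relative) from the validated baseline.
    The tolerance value is illustrative, not mandated by Art.72."""
    flagged = {}
    for group, base in baseline_fpr.items():
        cur = current_fpr.get(group)
        if cur is not None and base > 0 and abs(cur - base) / base > tolerance:
            flagged[group] = (base, cur)  # (validated, observed)
    return flagged
```

A non-empty result is the signal to re-run the Art.10-style bias audit, not proof of bias in itself: demographic shift, crime-pattern change, and model drift all produce the same symptom.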
See Also — EU AI Act Annex III Series
This post is part of the EU AI Act Annex III High-Risk Categories series:
- Annex III Point 1: Biometric Identification, Categorisation, and Emotion Recognition AI
- Annex III Point 2: Critical Infrastructure AI — Water, Gas, Electricity, and Transport
- Annex III Point 3: Education and Vocational Training AI
- Annex III Point 4: Employment and Recruitment AI
- Annex III Point 5: Essential Services AI — Credit Scoring, Insurance, and Public Benefits
- Annex III Point 6: Law Enforcement AI (this post)
- Annex III Point 7: Migration and Border Management AI — Frontex, Eurodac, Asylum Screening